Page: dataflow:general_dataflow — previous revision 2020/10/27 10:25 (birgit); current revision 2023/04/03 11:20 (pgrobe).
====== Dataflow for Preservation of Digital Information at LIB Biodiversity Data Center ======
  
===== Data pipeline of research data and corresponding metadata using LIB in-house-management systems (DWB, Morph·D·Base, easydb) =====
  
The [[https://www.gfbio.org/data-centers/LIB|LIB Biodiversity Data Center]] is one of the seven [[https://www.gfbio.org/data-centers|GFBio Collection Data Centers]] that form the backbone of the GFBio Submission, Repository and Archiving Infrastructure. Data archiving and publication at LIB relies on the management system [[https://diversityworkbench.net/Portal/Diversity_Workbench|Diversity Workbench]], the online platform [[https://www.morphdbase.de/|Morph·D·Base]], and the digital asset management system [[https://www.programmfabrik.de/|easydb]]. The management tools and archiving processes used at the data center are described under [[https://gfbio.biowikifarm.net/wiki/Technical_Documentations|Technical Documentations]]. This includes services for documentation, processing and archiving of the provided original data and metadata sets (source data; SIP).
Data producers are welcome to use the spreadsheet templates provided under [[https://gfbio.biowikifarm.net/wiki/Forms_and_Assessments|Templates for data submission]].

The workflow for submission, archiving and publication of data follows the standard for an __O__pen __A__rchival __I__nformation __S__ystem ([[https://www.iso.org/standard/57284.html|OAIS - Open archival information system]] and [[https://public.ccsds.org/pubs/650x0m2.pdf|Reference Model for an Open Archival Information System (pdf)]]). This ISO standard distinguishes between different information packages for submission (SIP), archiving (AIP), and dissemination (DIP). For an overview of ISO standards for digital archives see [[https://gfbio.biowikifarm.net/wiki/ISO_Standards_for_Digital_Archives|ISO Standards for Digital Archives]].

The different modules of Diversity Workbench for specimen occurrence data, literature, taxonomies, and others are used at LIB for data and metadata import, metadata enrichment and data quality control (see [[https://www.gfbio.org/data/tools|Tools & Workbenches for Data Management at GFBio]]).
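The OAIS package lifecycle described above can be sketched as a minimal data model. This is illustrative only: the class and field names are our own and not part of the ISO standard or of the LIB systems.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class InformationPackage:
    """Generic OAIS information package: content plus descriptive metadata."""
    content_files: List[str]
    metadata: dict


@dataclass
class SIP(InformationPackage):
    """Submission Information Package: what the data provider delivers."""
    submitted_on: date = field(default_factory=date.today)


@dataclass
class AIP(InformationPackage):
    """Archival Information Package: the curated package kept on tape."""
    archived_on: date = field(default_factory=date.today)


@dataclass
class DIP(InformationPackage):
    """Dissemination Information Package: the versioned, published snapshot."""
    version: str = "1.0"


def ingest(sip: SIP) -> AIP:
    """Turn a checked SIP into an AIP (curation steps omitted in this sketch)."""
    return AIP(content_files=list(sip.content_files), metadata=dict(sip.metadata))


def publish(aip: AIP, version: str) -> DIP:
    """Derive a versioned DIP from an AIP for dissemination."""
    return DIP(content_files=list(aip.content_files),
               metadata=dict(aip.metadata), version=version)
```

The point of the sketch is only the direction of flow: a SIP is checked and curated into an AIP, and published DIPs are derived from the archived AIP, never the other way around.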
  
The workflow with these central components is illustrated in figure 1 and described in the text below.
  
**Figure 1: The LIB Biodiversity Data Center Data-Flow.**

{{ :dataflow:workflow_zfmk_data_center_wiki.svg|Figure 1: The LIB Data Workflow.}}

  ; ABCD : Access to Biological Collections Data schema
  ; SIP : Submission Information Package
  ; AIP : Archival Information Package
  ; DIP : Dissemination Information Package
  ; VAT : Visualizing and Analysing Tool
  
  
==== Submission and Ingestion of Data ====
  
Data providers submit their original research data and corresponding metadata via the [[https://submissions.gfbio.org/|GFBio Submission System]] to our data center, or contact it directly via email: <datacenter@leibniz-lib.de>. Completeness of the data and metadata is checked and missing data are requested from the data provider. A Submission Information Package (SIP according to OAIS) is built in several steps, including corrections, follow-up queries, cleansing, and refinement of the original data. Changes to the data are tracked in the GitLab revision control system at LIB, following a standard procedure as documented in [[dataflow:raw_dataflow|Data flow for Original Data]]. Correspondence with data providers is stored and documented in our ticketing system. All relevant information is stored and archived on tape.
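The completeness check on a submission can be sketched as a simple validation step. The required field names below are purely illustrative; the actual checklist follows the GFBio consensus documents.

```python
# Hypothetical list of required metadata fields; the real checklist at the
# data center follows the GFBio consensus documents.
REQUIRED_FIELDS = ["title", "authors", "license", "contact_email"]


def missing_fields(metadata: dict) -> list:
    """Return the required fields that are absent or empty in a submission."""
    return [f for f in REQUIRED_FIELDS if not str(metadata.get(f, "")).strip()]


submission = {"title": "Beetle survey 2022", "authors": "A. Example", "license": ""}
print(missing_fields(submission))  # → ['license', 'contact_email']
```

Fields reported here would be requested from the data provider before the SIP is considered complete.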
  
For multimedia data, [[https://www.morphdbase.de/|Morph·D·Base]] is used: a user account is provided and users can transfer their data directly. All available metadata are stored for each record.
==== Curation of data and metadata ====
  
Different types of data require different types of management systems for curation. At LIB we use specialized software suites for the following data types:
  
  ; Occurrence data : All specimen-related data are integrated into the [[http://diversityworkbench.net/Portal/Diversity_Workbench|DiversityWorkbench]] (DWB) database suite via the integrated import wizard and can be actively curated and managed by domain experts and/or data providers (user account on request). The occurrence data (according to the [[https://gfbio.biowikifarm.net/wiki/Concepts_and_Standards|GFBio consensus documents]]) are stored at unit level in the DWB modules DiversityCollection, DiversityAgents, DiversityTaxonNames and DiversityReferences and linked with each other.
Metadata are catalogued in DiversityProjects. Elements that are mandatory or recommended in the GFBio consensus documents will be published.
  
  ; Morphological data : The online web-repository [[https://www.morphdbase.de/|Morph·D·Base]] is used to store, manage and publish structured morphological data and associated multimedia. Entries can be cross-linked to other entries in Morph·D·Base and linked to corresponding data entries in DiversityCollection.
  
  ; Multimedia : The digital asset management system [[https://media.leibniz-lib.de/|easyDB]] allows uploading, curating and publishing all sorts of multimedia data, e.g. images, sound files, and documents. Entries can be cross-linked to other entries in easyDB and linked to corresponding data entries in DiversityCollection.
  
  ; Metadata : Metadata describing data and associated multimedia are either stored together with the data entries (unit level) or handled in dedicated management modules of DiversityWorkbench, such as DiversityProjects or DiversityAgents. The latter provide information about a set of entries, i.e. the dataset as a whole.
      
**Sensitive data**: Each of the specialized systems listed above allows withholding or blurring data from publication. This can affect a complete entry or part of an entry, e.g. information about the exact sampling location of a specimen. All sensitive data are handled according to our [[:datapolicy|Data Policy: Data provision for upload]]. For personal data the GDPR applies, as described in the [[:privacypolicy|LIB Privacy Policy]].
  
  
=== Enrichment and Annotation of Data and Metadata ===
  
The data and metadata submitted to the LIB Biodiversity Data Center can be enriched and annotated within the management systems listed above. This is done manually by one of the LIB data curators in close cooperation with the data provider, or by domain experts with access to the management systems.
  
  
**Identifiers:** Identifiers are used to provide unambiguous identification of information, e.g. unique identifiers for person names such as ORCID, or to interlink pieces of information with one another. Identifiers can be added to the (meta-)data by using controlled classifications (i.e. whether the identifier refers to sequence information, a person, a literature reference via Crossref, etc.) and URLs.
  
**Licenses:** Different licenses can be applied to the submitted data. They are part of the metadata on unit or dataset level. All metadata stored and published by the data center receive the [[https://creativecommons.org/publicdomain/zero/1.0/deed.en|Creative Commons CC0 waiver]]. The most frequently used license for specimen-related data and multimedia is [[https://creativecommons.org/licenses/by-sa/4.0/|CC BY-SA 4.0]]. An overview of all available CC licenses is given [[https://creativecommons.org/about/cclicenses/|here]].
  
  
==== Publication of Data ====
  
All data uploaded, curated, and archived in the management systems of the LIB Biodiversity Data Center can be published. Publishing of a dataset is negotiated with the data provider. Aspects to consider are sensitive data to withhold (see above) and publishing restrictions imposed by third parties.
  
  
  
Datasets containing occurrence data are published by creating a snapshot of the data and metadata for one dataset in DiversityWorkbench. This is done with an external helper tool, available from [[https://datacenter.leibniz-lib.de/gitlab/BioCASe/biocase_media/releases|LIB GitLab: VCAT-Transfer]]. All data are mapped to the [[https://archive.bgbm.org/TDWG/CODATA/Schema/ABCD_2.1/ABCD_2.1.html|ABCD 2.1 Standard]] using the [[https://wiki.bgbm.org/bps|BioCASe Provider Software]]. A Dissemination Information Package (DIP according to OAIS) is created and stored as a zip archive in the digital asset management system [[https://media.leibniz-lib.de/biocase-archives|easydb at LIB]]. Each DIP is versioned; the version is identified by a date suffix and a version number consisting of a major and a minor version (e.g. 2.1). Major changes, such as the addition of further data, increment the major version. Minor changes, e.g. correction of typing errors or changes in the metadata, increment the minor version.
  
Datasets stored and curated in [[https://morphdbase.de|Morph·D·Base]] or [[https://media.leibniz-lib.de|easyDB]] are published from within the software.
  
  
For each published major version of an occurrence dataset a DOI is assigned. Datasets in Morph·D·Base or easyDB receive a DOI on demand.
  
The LIB is registered at [[https://www.zbmed.de/|ZB MED]] and can therefore create a DOI at [[https://doi.datacite.org/|DataCite DOI Fabrica]]. The DOI is added to the corresponding version of the information package and is also part of the citation of the data set (see below).
  
  
== Citation ==
  
Published datasets are citable using direct URLs to the DIP or via the DOIs. Based on the data provider's input, the citation of the dataset is prepared by the LIB data curator, who adjusts the input (submission metadata) to conform to the GFBio citation pattern. The citation is finalized in close collaboration with the data provider. For details see the General part of [[https://gfbio.biowikifarm.net/wiki/Data_Publishing/General_part:_GFBio_publication_of_type_1_data_via_BioCASe_data_pipelines|GFBio publication of type 1 data via BioCASe data pipelines]].
  
Example: ''ZFMK Coleoptera Working Group (2023). ZFMK Coleoptera Oberthuer collection. [Dataset]. Version: 2.0. Data Publisher: LIB Biodiversity Datacenter. https://doi.org/10.20363/ZFMK-Coll.Oberthuer-2023-02''
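The citation pattern of the example above can be sketched as a simple template function. The field names are our own; the actual curation workflow assembles the citation from the submission metadata.

```python
def build_citation(creator: str, year: int, title: str, version: str,
                   publisher: str, doi: str) -> str:
    """Assemble a dataset citation following the pattern of the example above."""
    return (f"{creator} ({year}). {title}. [Dataset]. Version: {version}. "
            f"Data Publisher: {publisher}. https://doi.org/{doi}")


print(build_citation(
    creator="ZFMK Coleoptera Working Group",
    year=2023,
    title="ZFMK Coleoptera Oberthuer collection",
    version="2.0",
    publisher="LIB Biodiversity Datacenter",
    doi="10.20363/ZFMK-Coll.Oberthuer-2023-02",
))
```

Run as-is, this reproduces the example citation verbatim.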
  
  
  
==== Archiving ====
  
Archival Information Packages (AIPs according to OAIS) are created from all data and metadata submitted and curated within the LIB in-house-management systems.
  
  ; GitLab : All submitted files are archived in GitLab as they were delivered. In addition, the import schemes used for DiversityWorkbench are archived here.
  ; DWB : Occurrence data stored in DiversityWorkbench are exported on a regular basis as tab-separated CSV files and archived in the intranet file system of the LIB.
  ; LIB Intranet File System : Backups stored in specific folders on the LIB intranet file system are transferred to tapes in the internal tape library on a regular basis.
  ; easyDB : Multimedia files and versioned ABCD packages are stored in easyDB, which has its own backup in the LIB tape library.
  ; LIB Tape Library : The generated AIPs are archived in the LIB tape library. The tapes are kept as two identical copies at two different LIB locations.
  ; Morph·D·Base : The data in MDB is regularly backed up. This backup is available as a redundant copy separate from the running production system. The backup is copied to a file server located in the LIB IT department, whereas the running system is housed within the data center of the University of Bonn.
  
For detailed information about backups and recovery see the [[:digital_preservation_plan|Preservation Plan]].
  
  
==== Access to data via different portals ====
  
Indexed and faceted data are available in public portals such as GBIF, Europeana and GFBio, which are operated by national or international consortia. Specialized web portals for access to the data are developed and provided by the LIB Data Center. These include the [[https://collections.leibniz-lib.de|LIB digital collection catalogue]], the portal of the [[https://bolgermany.de|German Barcode of Life project (GBOL)]], and interfaces to the data, which also provide APIs for machine-readable formats and access to the data using CETAF stable identifiers ([[https://id.zfmk.de|id.zfmk.de]] or [[https://id.zmh-coll.de|id.zmh-coll.de]]).
  
The published data are provided with a recommended citation, license and DOI (see above).
=== Access to published data (unit level) ===
  
  ; GFBio, VAT, and LAND : GFBio has developed a web portal that provides search functionality for biodiversity-related datasets and data. All uploaded data are annotated by GFBio's Terminology Server, thus providing a richer search experience. A Visualization and Annotation Tool (VAT) allows for analysis and modelling of geo-referenced data. See the General part of [[https://gfbio.biowikifarm.net/wiki/Data_Publishing/General_part:_GFBio_publication_of_type_1_data_via_BioCASe_data_pipelines|GFBio publication of type 1 data via BioCASe data pipelines]]. The "Lebendiger Atlas - Natur Deutschland" (LAND) provides an overview of biodiversity data from Germany: [[https://land.gbif.de/|land.gbif.de]]. German data made available via GFBio are findable here.
  
  ; Europeana : The multimedia data are accessible via [[https://www.europeana.eu/|Europeana]].
  
  ; Digital Collection Catalogue : All data based on physical vouchers within the natural history collections of the LIB are accessible via the [[https://collections.leibniz-lib.de/|LIB Digital Collection Catalogue]].
  
  ; Morph·D·Base : The online web-repository for morphological data provides public access to specimen, taxon, literature and multimedia data. All data are directly accessible in [[https://www.morphdbase.de/|Morph·D·Base]].
  
  ; easyDB : The digital asset management system at LIB provides access to the digital assets (i.e. multimedia, documents, zip archives) stored in easyDB. They are published from within the software via [[https://media.leibniz-lib.de/|media.leibniz-lib.de]]. An API to easyDB is available under https://media.leibniz-lib.de/eaurls/.
  
  ; id.zfmk.de / id.zmh-coll.de : All occurrence data are accessible by humans and machines via an API in HTML, JSON, or RDF format using [[https://id.zfmk.de/collection_zfmk/|id.zfmk.de/collection_zfmk/]] or [[https://id.zmh-coll.de|id.zmh-coll.de/collection_zmh]].
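Machine access to a record could look like the following sketch. We assume here that the format is selected via HTTP content negotiation (the ''Accept'' header); the actual mechanism of the service may differ (e.g. URL suffixes), and the record identifier is a placeholder.

```python
from urllib.request import Request

# Media types for the three formats named above.
MEDIA_TYPES = {
    "html": "text/html",
    "json": "application/json",
    "rdf": "application/rdf+xml",
}


def record_request(record_id: str, fmt: str = "json") -> Request:
    """Build (but do not send) a request for one occurrence record.

    Assumes content negotiation via the Accept header; 'record_id' is a
    hypothetical placeholder, not a real identifier.
    """
    url = f"https://id.zfmk.de/collection_zfmk/{record_id}"
    return Request(url, headers={"Accept": MEDIA_TYPES[fmt]})


req = record_request("example-id", "rdf")
print(req.full_url, req.get_header("Accept"))
```

Sending the request with ''urllib.request.urlopen(req)'' would then return the record in the negotiated format.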
  
  
=== Access to original and raw data (dataset level) ===
  
We provide landing pages and direct download links to the datasets from within search results of the [[https://www.gfbio.org/search?q=zfmk+zip|GFBio web portal]], our GitLab installation at gitlab.leibniz-lib.de (login required), the digital asset management system [[https://media.leibniz-lib.de/biocase-archives|easydb]] (see above), and the BioCASe Provider Software (BPS) with its [[https://biocase.zfmk.de/biocase/querytool/main.cgi|local query tool]] as operated at LIB.