Dataflow for Preservation of Digital Information at ZFMK Data Center

Data pipeline of research data and corresponding metadata using ZFMK in-house-management systems (DWB, Morph·D·Base, easydb)

The ZFMK Data Center is one of the seven GFBio Collection Data Centers that form the backbone of the GFBio Submission, Repository and Archiving Infrastructure. Data archiving and publication at ZFMK involve the management system suite Diversity Workbench, the online platform Morph·D·Base, and the digital asset management system easydb. The management tools and archiving processes used at the GFBio data center ZFMK are described under Technical Documentations. This includes services for documentation, processing and archiving of the provided original data and metadata sets (source data; SIP). Data producers are welcome to use the xls templates provided under Templates for data submission.

The workflow for submission, archiving and publication of data at the ZFMK Data Center follows the standard for an Open Archival Information System (OAIS, https://www.iso.org/standard/57284.html and https://public.ccsds.org/pubs/650x0m2.pdf). This ISO standard distinguishes between different information packages for submission (SIP), archiving (AIP), and dissemination (DIP). For an overview of ISO standards for digital archives see: https://gfbio.biowikifarm.net/wiki/ISO_Standards_for_Digital_Archives.
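To make the distinction between the three package types concrete, the following minimal Python sketch models them as plain data structures. The field names are illustrative assumptions for this page and are not part of the formal OAIS model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationPackage:
    """Generic OAIS information package: content plus descriptive metadata."""
    identifier: str
    content_files: List[str] = field(default_factory=list)  # paths to data files
    metadata: dict = field(default_factory=dict)             # descriptive metadata

@dataclass
class SIP(InformationPackage):
    """Submission Information Package: the data as delivered by the producer."""
    submitter: str = ""

@dataclass
class AIP(InformationPackage):
    """Archival Information Package: the version preserved long-term."""
    archived_on_tape: bool = False

@dataclass
class DIP(InformationPackage):
    """Dissemination Information Package: the version delivered to consumers."""
    version: str = "1.0"  # major.minor, see 'Provision of versioned Datasets' below
```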

The different modules from Diversity Workbench for specimen occurrence data, literature, taxonomies, and others are used at ZFMK for data and metadata import, metadata enrichment and data quality control (see https://www.gfbio.org/data/tools).

The workflow with these central components is illustrated in figure 1 and described in the text below.

Figure 1: The ZFMK Workflow, BioCASe data pipelines for GFBio Type 1 Data.

ABCD - Access to Biological Collections Data schema

SIP - Submission Information Package

AIP - Archival Information Package

DIP - Dissemination Information Package

VAT - Visualizing and Analysing Tool

Submission and Ingestion of Data

Data providers submit their original research data and corresponding metadata via the GFBio Submission System to the ZFMK Data Center. Completeness of the data and metadata is checked and missing data are requested from the data provider. A Submission Information Package (SIP according to OAIS) is built in several steps, including corrections, queries back to the data provider, cleansing, and refinement of the original data. Changes to the data are tracked in a GitLab revision control system at the ZFMK Data Center, following a standard procedure documented in Data flow for Original Data in the internal Wiki of the ZFMK Data Center. Correspondence with data providers is stored and documented in a ticketing system. All relevant information is stored and archived on tape.
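As an illustration of the completeness check, the sketch below verifies that a submission carries a minimal set of metadata fields before a SIP is built. The required field names are assumptions for the example; the actual checklist follows the GFBio consensus documents and internal ZFMK procedures.

```python
# Minimal sketch of a completeness check on submitted metadata.
# The required field names are illustrative assumptions.
REQUIRED_FIELDS = ["title", "authors", "license", "contact_email", "description"]

def missing_metadata(submission: dict) -> list:
    """Return the required metadata fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not submission.get(f)]

submission = {
    "title": "Ichthyology collection records",
    "authors": "ZFMK Ichthyology Working Group",
}
gaps = missing_metadata(submission)
if gaps:
    print("Missing data to request from the data provider:", ", ".join(gaps))
```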

For multimedia data, Morph·D·Base is used: a user account is provided and the user can transfer their data directly. All available metadata are stored for each record.

Each SIP is imported into the management systems and prepared for dissemination by transforming the original research data and corresponding metadata to meet domain-specific requirements as well as requirements for data exchange, such as the ABCD standard.
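The sketch below illustrates the kind of transformation involved: a flat occurrence record is mapped to a minimal ABCD-style XML unit. The element names follow ABCD conventions but the structure is simplified; at ZFMK the actual mapping is configured in the BioCASe Provider Software rather than hand-coded like this, and the example record is hypothetical.

```python
import xml.etree.ElementTree as ET

def record_to_abcd_unit(rec: dict) -> ET.Element:
    """Map a flat occurrence record to a minimal, simplified ABCD-style <Unit>."""
    unit = ET.Element("Unit")
    ET.SubElement(unit, "SourceInstitutionID").text = "ZFMK"
    ET.SubElement(unit, "UnitID").text = rec["catalogue_number"]
    identifications = ET.SubElement(unit, "Identifications")
    identification = ET.SubElement(identifications, "Identification")
    ET.SubElement(identification, "FullScientificNameString").text = rec["scientific_name"]
    gathering = ET.SubElement(unit, "Gathering")
    ET.SubElement(gathering, "LocalityText").text = rec.get("locality", "")
    return unit

rec = {
    "catalogue_number": "ZFMK-ICH-12345",       # hypothetical unit
    "scientific_name": "Barbus barbus",
    "locality": "Rhine near Bonn",
}
print(ET.tostring(record_to_abcd_unit(rec), encoding="unicode"))
```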

Curation of data and metadata

Different types of data require different types of management systems for curation. At ZFMK, specialized software suites are used to curate the following data types:

Occurrence data
All specimen related data are integrated into the DiversityWorkbench (DWB) database suite via the integrated import wizard and can be actively curated and managed by domain experts and/or data providers (user account on request). The occurrence data (according to GFBio consensus documents) are stored at unit level in the DWB modules DiversityCollection, DiversityAgents, DiversityTaxonNames and DiversityReferences and linked with each other. At dataset level, they are also stored in DiversityProjects (in the setting elements). Data elements that are mandatory or recommended according to the GFBio consensus documents will be published.
Morphological data
The online web-repository Morph·D·Base is used to store, manage and publish structured morphological data and associated multimedia. Entries can be cross-linked to other entries in Morph·D·Base and linked to corresponding data entries in DiversityCollection.
Multimedia
The Digital Asset Management System easyDB allows for uploading, curating and publishing all sorts of multimedia data, e.g. images, sound files, and documents. Entries can be cross-linked to other entries in easyDB and linked to corresponding data entries in DiversityCollection.
Metadata
Metadata describing data and associated multimedia are either stored together with the data entries (unit level) or handled in different management modules of DiversityWorkbench, such as DiversityProjects or DiversityAgents. The latter provide information about a set of entries, i.e. the “dataset level”.

Sensitive data: Each of the specialized systems listed above allows data to be withheld or blurred for publication. This can be the complete entry or part of an entry, e.g. information about the exact sampling location of a specimen. All sensitive data are handled according to our Data Policy: Data provision for upload. For personal data the GDPR as described in the ZFMK Privacy Policy applies.
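A simple way to blur an exact sampling location before publication is to round the coordinates to a coarser grid, as in the sketch below. The chosen precision and field names are arbitrary examples; the actual handling follows the ZFMK Data Policy.

```python
def blur_location(lat: float, lon: float, decimals: int = 1) -> tuple:
    """Coarsen coordinates before publication (1 decimal place is roughly an 11 km grid).
    The precision here is an arbitrary example."""
    return round(lat, decimals), round(lon, decimals)

def withhold(record: dict, fields=("latitude", "longitude", "locality")) -> dict:
    """Remove sensitive fields from a record entirely (field names are assumptions)."""
    return {k: v for k, v in record.items() if k not in fields}

print(blur_location(50.7212, 7.1134))  # -> (50.7, 7.1)
```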

Enrichment and Annotation of Data and Metadata

The data and metadata submitted to ZFMK can be enriched and annotated within the specialized management systems listed above. This is done manually by the ZFMK data curator in close cooperation with the data provider, or by domain experts with access to the management systems.

Elements that are part of the GFBio consensus documents will be published.

Identifiers: Identifiers are used to provide unambiguous identification of information, e.g. unique identifiers for person names such as ORCID, or to interlink pieces of information with one another. Identifiers can be added to the (meta-)data by using controlled classifications (i.e. whether the identifier refers to sequence information, a person, a Crossref reference for literature, etc.) and URLs.
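For person identifiers such as ORCID, a syntactic check can be applied before an identifier is attached to a record. The sketch below implements the published ISO 7064 MOD 11-2 check digit used by ORCID; it is an illustration, not part of the ZFMK tooling.

```python
import re

def is_valid_orcid(orcid: str) -> bool:
    """Check format and ISO 7064 MOD 11-2 check digit of an ORCID iD."""
    digits = orcid.replace("-", "").upper()
    if not re.fullmatch(r"\d{15}[\dX]", digits):
        return False
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    check = "X" if result == 10 else str(result)
    return digits[-1] == check

print(is_valid_orcid("0000-0002-1825-0097"))  # True (ORCID's well-known example iD)
```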

Licenses: Different licenses can be applied to the data submitted to ZFMK. They are part of the metadata at unit or dataset level. All metadata stored and published by ZFMK receive the Creative Commons CC0 waiver (https://creativecommons.org/publicdomain/zero/1.0/deed.en). Creative Commons licenses are recommended by GFBio. The most frequently used license at ZFMK for specimen related data and multimedia is CC BY-SA 4.0. An overview of all available CC licenses is available here.

Publication of Data

All data uploaded, curated, and archived in the management systems of the ZFMK Data Center can be published. Publishing of datasets is negotiated with the data provider. Aspects to consider are sensitive data to be withheld (see above) or publishing restrictions imposed by third parties.

Provision of versioned Datasets

Datasets containing occurrence data are published by creating a snapshot of the data and metadata in DiversityWorkbench for one dataset. This is done with an external helper tool, available from ZFMK GitLab: VCAT-Transfer. The tool transfers the data and metadata to a MySQL database. There, all data are mapped to the ABCD 2.1 standard using the BioCASe Provider Software. A Dissemination Information Package (DIP according to OAIS) is created and stored as a zip archive in the digital asset management system easydb at ZFMK. Each DIP is versioned; the version is identified by a date suffix and a version number consisting of a major and a minor version (e.g. 2.1). Major changes, such as the addition of further data, increment the major version. Minor changes, e.g. correction of typing errors or changes in the metadata, increment the minor version.
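The versioning rule can be summarized in a few lines of Python, as sketched below. The archive naming scheme with a date suffix is an illustrative assumption; only the major/minor increment rule is taken from the text above.

```python
from datetime import date

def next_version(current: str, major_change: bool) -> str:
    """Increment a 'major.minor' DIP version according to the rule above."""
    major, minor = (int(x) for x in current.split("."))
    return f"{major + 1}.0" if major_change else f"{major}.{minor + 1}"

def dip_archive_name(dataset: str, version: str) -> str:
    """Illustrative zip-archive name with date suffix and version (naming is an assumption)."""
    return f"{dataset}_{date.today().isoformat()}_v{version}.zip"

print(next_version("2.1", major_change=True))    # -> 3.0
print(next_version("2.1", major_change=False))   # -> 2.2
print(dip_archive_name("ZFMK-Coll.Ichthyology", "2.1"))
```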

Datasets stored and curated in Morph·D·Base or easyDB are published from within the software.

DOI assignment

For each published major version of an occurrence dataset a DOI is assigned. Datasets in Morph·D·Base or easyDB receive a DOI on demand.

The ZFMK is registered at ZB MED and can therefore create a DOI at DataCite DOI Fabrica. The DOI is added to the corresponding version of the information package and is also part of the citation of the data set (see below).
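DOIs can be created interactively in DOI Fabrica or programmatically via the DataCite REST API. The sketch below shows the programmatic route with placeholder credentials, prefix, and metadata; it is a generic illustration and not necessarily the procedure used at ZFMK.

```python
import requests

# Illustrative sketch of minting a DOI via the DataCite REST API.
# Credentials, DOI, and metadata values are placeholders.
payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "doi": "10.20363/example-dataset-2024",           # placeholder DOI
            "event": "publish",
            "creators": [{"name": "ZFMK Ichthyology Working Group"}],
            "titles": [{"title": "Example occurrence dataset"}],
            "publisher": "Zoological Research Museum Koenig",
            "publicationYear": 2024,
            "types": {"resourceTypeGeneral": "Dataset"},
            "url": "https://collections.zfmk.de/",            # landing page
        },
    }
}
resp = requests.post(
    "https://api.datacite.org/dois",
    json=payload,
    auth=("REPOSITORY_ID", "PASSWORD"),                       # placeholder credentials
    headers={"Content-Type": "application/vnd.api+json"},
)
print(resp.status_code, resp.json().get("data", {}).get("id"))
```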

Citation

Published datasets are citable using direct URLs to the DIP or via the DOIs. Based on the data provider's input, the citation of the dataset is prepared by the ZFMK data curator, who adjusts the input (submission metadata) to conform with the GFBio citation pattern. The citation is finalized in close collaboration with the data provider. For details see General part: GFBio publication of type 1 data via BioCASe data pipelines.

Example: ZFMK Ichthyology Working Group (2018). The Ichthyology collection at the Zoological Research Museum Alexander Koenig. [Dataset]. Version: 2.0. Data Publisher: Zoological Research Museum Koenig - Leibniz Institute for Animal Biodiversity. https://doi.org/10.20363/ZFMK-Coll.Ichthyology-2018-03.
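A citation string following the pattern of the example above can be assembled from the submission metadata as in the sketch below. The metadata field names are assumptions for the example.

```python
def build_citation(meta: dict) -> str:
    """Assemble a dataset citation following the pattern of the example above."""
    return (
        f"{meta['author']} ({meta['year']}). {meta['title']}. [Dataset]. "
        f"Version: {meta['version']}. Data Publisher: {meta['publisher']}. "
        f"https://doi.org/{meta['doi']}."
    )

print(build_citation({
    "author": "ZFMK Ichthyology Working Group",
    "year": 2018,
    "title": "The Ichthyology collection at the Zoological Research Museum Alexander Koenig",
    "version": "2.0",
    "publisher": "Zoological Research Museum Koenig - Leibniz Institute for Animal Biodiversity",
    "doi": "10.20363/ZFMK-Coll.Ichthyology-2018-03",
}))
```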

ZFMK archiving system

Archival Information Packages (AIPs according to OAIS) are created from all data and metadata submitted and curated within the ZFMK in-house management systems.

GitLab
All submitted files are archived in GitLab as they are. Furthermore, the import schemes used for DiversityWorkbench are archived here.
DWB
Occurrence data stored in DiversityWorkbench are exported on a regular basis as tab-separated csv files and archived in the intranet filesystem of ZFMK (see the sketch after this list).
ZFMK Intranet Filesystem
Backups stored within specific folders in the intranet filesystem of ZFMK are transferred to tapes in the ZFMK tape library on a regular basis.
easyDB
Multimedia files and versioned ABCD packages are stored in easyDB, which has its own backup in the ZFMK Tape Library.
ZFMK Tape Library
The generated AIPs are archived in the ZFMK Tape Library. These tapes are stored with two identical copies at two different locations in the ZFMK.
Morph·D·Base
The data in MDB is regularly backed up. This backup is available as a redundant copy separate from the running production system. The backup is copied to a file server located in the ZFMK IT department, whereas the running system is housed within the data center of the University of Bonn.
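The following sketch illustrates the regular DiversityWorkbench export mentioned above: occurrence records are written as a tab-separated file and a copy is placed in a backup folder destined for tape transfer. Paths, record structure, and the helper function are assumptions for the example.

```python
import csv
import shutil
from pathlib import Path

def export_occurrences(records: list, export_dir: Path, backup_dir: Path) -> Path:
    """Write records as a tab-separated file and copy it to a backup folder.
    Paths and record structure are illustrative assumptions."""
    export_dir.mkdir(parents=True, exist_ok=True)
    backup_dir.mkdir(parents=True, exist_ok=True)
    out_file = export_dir / "occurrences_export.tsv"
    with out_file.open("w", newline="", encoding="utf-8") as fh:
        writer = csv.DictWriter(fh, fieldnames=records[0].keys(), delimiter="\t")
        writer.writeheader()
        writer.writerows(records)
    shutil.copy2(out_file, backup_dir / out_file.name)  # backup copy for tape transfer
    return out_file

records = [{"unit_id": "ZFMK-ICH-12345", "scientific_name": "Barbus barbus", "locality": "Rhine near Bonn"}]
export_occurrences(records, Path("exports"), Path("backup"))
```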

For detailed information about backups and recovery see ZFMK Preservation Plan.

Access to data via different portals

Indexed and faceted data are available in public portals such as GBIF, Europeana and GFBio, which are operated by national or international consortia. Specialized web portals for access to the data are developed and provided by the ZFMK Data Center. These include the online collection catalogue, the portal of the German Barcode of Life project (GBOL), and interfaces to the data that also provide APIs for machine-readable formats and access to the data via CETAF stable identifiers.

The published data are provided with a recommended citation, license and DOI (see above).

Access to published data (Unit level)

GFBio and VAT
GFBio has developed a web portal that provides search functionalities for datasets and data. Data are annotated by GFBio's Terminology Server, thus providing a richer search experience. The Visualizing and Analysing Tool (VAT) allows for analysis and modelling of geo-referenced data. See General part: GFBio publication of type 1 data via BioCASe data pipelines.
Europeana
The multimedia data are accessible via https://www.europeana.eu/.
Digital Collection Catalogue
All data based on physical vouchers within the natural history collections of ZFMK are accessible via the Collection Catalogue https://collections.zfmk.de/
Morph·D·Base
The online web-repository for morphological data provides public access to specimen, taxon, literature and multimedia data. All data are directly accessible in Morph·D·Base.
easyDB
The Digital Asset Management System at ZFMK provides access to the digital assets (i.e. multimedia, documents, zip archives) stored in easyDB. They are published from within the software via media.zfmk.de. An API to easyDB is available at: https://media.zfmk.de/eaurls/
id.zfmk.de
The API to all occurrence data is accessible by humans and machines in html, json, or rdf format using https://id.zfmk.de/collection_ZFMK/.
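Machine access to a record typically works by requesting the same stable URL with different Accept headers (content negotiation), as sketched below. The unit identifier and the exact media types accepted by the service are assumptions for the example.

```python
import requests

# Illustrative content negotiation against a CETAF-style stable identifier.
base = "https://id.zfmk.de/collection_ZFMK/"
record = base + "ZFMK-ICH-12345"  # hypothetical unit identifier

for media_type in ("text/html", "application/json", "application/rdf+xml"):
    resp = requests.get(record, headers={"Accept": media_type}, timeout=30)
    print(media_type, resp.status_code, resp.headers.get("Content-Type"))
```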

Access to original and raw data (dataset level)

We provide landing pages and direct download links to the datasets from within search results of the GFBio web portal, our GitLab installation at gitlab.zfmk.de (login required), the digital asset management system easydb (see above), and the BioCASe Provider Software (BPS) and local query tool of BPS as operated at ZFMK.


For GFBio Wiki only:

BioCASe Local Query Tool, landing page: All ZFMK datasets are accessible using the query tool of the BioCASe Provider Software. A landing page for each data package is additionally available under ZFMK easydb. Additionally, dataset- or project-specific websites may be available as landing pages for the data.

The BioCASe Monitor service (BMS): See general part: GFBio publication of type 1 data via BioCASe data pipelines