====== Dataflow for Preservation of Digital Information at LIB Biodiversity Data Center ======

===== Data pipeline of research data and corresponding metadata using LIB in-house management systems (DWB, Morph·D·Base, easydb) =====

The [[https://www.gfbio.org/data-centers/LIB|LIB Biodiversity Data Center]] is one of the seven [[https://www.gfbio.org/data-centers|GFBio Collection Data Centers]] that form the backbone of the GFBio Submission, Repository and Archiving Infrastructure. Data archiving and publication at LIB rely on the management system [[https://diversityworkbench.net/Portal/Diversity_Workbench|Diversity Workbench]], the online platform [[https://www.morphdbase.de/|Morph·D·Base]], and the digital asset management system [[https://www.programmfabrik.de/|easydb]]. The management tools and archiving processes used at the data center are described under [[https://gfbio.biowikifarm.net/wiki/Technical_Documentations|Technical Documentations]]. This includes services for documentation, processing and archiving of the provided original data and metadata sets (source data; SIP). Data producers are welcome to use the spreadsheet templates provided under [[https://gfbio.biowikifarm.net/wiki/Forms_and_Assessments|Templates for data submission]].

The workflow for submission, archiving and publication of data follows the standard for an __O__pen __A__rchival __I__nformation __S__ystem ([[https://www.iso.org/standard/57284.html|OAIS - Open archival information system]] and [[https://public.ccsds.org/pubs/650x0m2.pdf|Reference Model for an Open Archival Information System (pdf)]]). This ISO standard distinguishes between information packages for submission (SIP), archiving (AIP), and dissemination (DIP). For an overview of ISO standards for digital archives see [[https://gfbio.biowikifarm.net/wiki/ISO_Standards_for_Digital_Archives|ISO Standards for Digital Archives]].

The different Diversity Workbench modules for specimen occurrence data, literature, taxonomies, and others are used at LIB for data and metadata import, metadata enrichment and data quality control (see [[https://www.gfbio.org/data/tools|Tools & Workbenches for Data Management at GFBio]]). The workflow with these central components is illustrated in figure 1 and described in the text below.

**Figure 1: The LIB Biodiversity Data Center Data Flow.**

{{ :dataflow:workflow_zfmk_data_center_wiki.svg|Figure 1: The LIB Data Workflow.}}

; ABCD : Access to Biological Collections Data schema
; SIP : Submission Information Package
; AIP : Archival Information Package
; DIP : Dissemination Information Package
; VAT : Visualizing and Analysing Tool

==== Submission and Ingestion of Data ====

Data providers submit their original research data and corresponding metadata via the [[https://submissions.gfbio.org/|GFBio Submission System]] to our data center, or contact it directly by email. Completeness of the data and metadata is checked and missing data are requested from the data provider. A Submission Information Package (SIP according to OAIS) is built in several steps, including corrections, queries back to the provider, cleansing, and refinement of the original data. Changes to the data are tracked in the GitLab revision control system at LIB, following a standard procedure as documented in [[dataflow:raw_dataflow|Data flow for Original Data]]. Correspondence with data providers is stored and documented in our ticketing system. All relevant information is stored and archived on tape.
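As an illustration of the kind of fixity information that can accompany a SIP before it is committed to GitLab and tape, the following minimal Python sketch records a SHA-256 checksum for every file of a submission folder. The folder name and manifest file name are hypothetical and the sketch is not part of the data center's documented procedure.

<code python>
import hashlib
from pathlib import Path

# Hypothetical folder holding one submission (SIP) and the manifest it will receive.
SIP_DIR = Path("submission_example")
MANIFEST = SIP_DIR / "manifest-sha256.txt"

def write_manifest(sip_dir: Path, manifest: Path) -> None:
    """Record a SHA-256 checksum for every file of the SIP (excluding the manifest itself)."""
    lines = []
    for path in sorted(p for p in sip_dir.rglob("*") if p.is_file() and p != manifest):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        lines.append(f"{digest}  {path.relative_to(sip_dir)}")
    manifest.write_text("\n".join(lines) + "\n", encoding="utf-8")

if __name__ == "__main__":
    write_manifest(SIP_DIR, MANIFEST)
</code>

Such a manifest can later be re-computed to verify that the archived copy on tape is still identical to the ingested SIP.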
For multimedia data, [[https://www.morphdbase.de/|Morph·D·Base]] is used: a user account is provided and users can transfer their data directly. All available metadata are stored for each record. Each SIP is imported into the management systems and prepared for dissemination by transforming the original research data and corresponding metadata to meet domain-specific requirements as well as requirements for data exchange, such as the [[https://abcd.tdwg.org/|ABCD]] standard.

==== Curation of data and metadata ====

Different types of data require different types of management systems for curation. At LIB we use specialized software suites for the curation of the following data types:

; Occurrence data : All specimen-related data are integrated into the [[http://diversityworkbench.net/Portal/Diversity_Workbench|DiversityWorkbench]] (DWB) database suite via the integrated import wizard and can be actively curated and managed by domain experts and/or data providers (user account on request). The occurrence data (according to the [[https://gfbio.biowikifarm.net/wiki/Concepts_and_Standards|GFBio consensus documents]]) are stored at unit level in the DWB modules DiversityCollection, DiversityAgents, DiversityTaxonNames and DiversityReferences and linked with each other. Metadata are cataloged in DiversityProjects. As far as they are mandatory or recommended by the GFBio consensus documents, they are published.
; Morphological data : The online web repository [[https://www.morphdbase.de/|Morph·D·Base]] is used to store, manage and publish structured morphological data and associated multimedia. Entries can be cross-linked to other entries in Morph·D·Base and linked to corresponding data entries in DiversityCollection.
; Multimedia : The digital asset management system [[https://media.LIB.de/|easyDB]] allows all sorts of multimedia data, e.g. images, sound files, and documents, to be uploaded, curated and published. Entries can be cross-linked to other entries in easyDB and linked to corresponding data entries in DiversityCollection.
; Metadata : Metadata describing data and associated multimedia are either stored together with the data entries (unit level) or handled in dedicated management modules of DiversityWorkbench, such as DiversityProjects or DiversityAgents. The latter provide information about a set of entries, i.e. dataset-level metadata.

**Sensitive data**: Each of the specialized systems listed above allows data to be withheld or blurred for publication. This can be the complete entry or part of an entry, e.g. information about the exact sampling location of a specimen. All sensitive data are handled according to our [[:datapolicy|Data Policy: Data provision for upload]]. For personal data, the GDPR applies, as described in the [[:privacypolicy|LIB Privacy Policy]].

=== Enrichment and Annotation of Data and Metadata ===

The data and metadata submitted to the LIB Biodiversity Data Center can be enriched and annotated within the management systems listed above. This is done manually by one of the LIB data curators in close cooperation with the data provider, or by domain experts with access to the management systems.

**Identifiers:** Identifiers are used to provide unambiguous identification of information, e.g. unique identifiers for person names such as ORCID, and to interlink pieces of information with one another. Identifiers can be added to the (meta-)data by using controlled classifications (i.e. whether the identifier refers to sequence information, a person, or a Crossref record for literature, etc.) and URLs.
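As a small illustration of how such identifiers can be checked during metadata enrichment, the sketch below validates the check digit of an ORCID iD using the ISO 7064 mod 11-2 algorithm published by ORCID. It is an illustrative helper and not part of the management systems above.

<code python>
def orcid_checksum_ok(orcid: str) -> bool:
    """Validate the final check character of an ORCID iD (ISO 7064 mod 11-2)."""
    chars = orcid.replace("-", "").upper()
    if len(chars) != 16 or not chars[:15].isdigit():
        return False
    total = 0
    for digit in chars[:15]:
        total = (total + int(digit)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return chars[15] == expected

# A correctly formed iD passes, a mistyped last digit fails.
print(orcid_checksum_ok("0000-0002-1825-0097"))  # True
print(orcid_checksum_ok("0000-0002-1825-0098"))  # False
</code>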
**Licenses:** Different licenses can be applied to the submitted data. They are part of the metadata at unit or dataset level. All metadata stored and published by the data center receive the [[https://creativecommons.org/publicdomain/zero/1.0/deed.en|Creative Commons CC0 waiver]]. The most frequently used license for specimen-related data and multimedia is [[https://creativecommons.org/licenses/by-sa/4.0/|CC BY-SA 4.0]]. An overview of all available CC licenses can be found [[https://creativecommons.org/about/cclicenses/|here]].

==== Publication of Data ====

All data uploaded, curated, and archived in the management systems of the LIB Biodiversity Data Center can be published. Publication of datasets is negotiated with the data provider. Aspects to consider are sensitive data to be withheld (see above) or publishing restrictions imposed by third parties.

== Provision of versioned Datasets ==

Datasets containing occurrence data are published by creating a snapshot of the data and metadata in DiversityWorkbench for one dataset. This is done with an external helper tool, available from [[https://datacenter.LIB.de/gitlab/BioCASe/biocase_media/releases|LIB GitLab: VCAT-Transfer]]. All data are mapped to the [[https://archive.bgbm.org/TDWG/CODATA/Schema/ABCD_2.1/ABCD_2.1.html|ABCD 2.1 standard]] using the [[https://wiki.bgbm.org/bps|BioCASe Provider Software]]. A Dissemination Information Package (DIP according to OAIS) is created and stored as a zip archive in the digital asset management system [[https://media.leibniz-lib.de/biocase-archives|easydb at LIB]]. Each DIP is versioned; a version is identified by a date suffix and a version number consisting of a major and a minor version (e.g. 2.1). Major changes, such as the addition of further data, increment the major version. Minor changes, e.g. the correction of typing errors or changes in the metadata, increment the minor version. Datasets stored and curated in [[https://morphdbase.de|Morph·D·Base]] or [[https://media.leibniz-lib.de|easyDB]] are published from within the respective software.

== DOI assignment ==

For each published major version of an occurrence dataset a DOI is assigned. Datasets in Morph·D·Base or easyDB receive a DOI on demand. The LIB is registered with [[https://www.zbmed.de/|ZB MED]] and can therefore create DOIs via [[https://doi.datacite.org/|DataCite DOI Fabrica]]. The DOI is added to the corresponding version of the information package and is also part of the citation of the dataset (see below).

== Citation ==

Published datasets are citable using direct URLs to the DIP or via their DOIs. Based on the data provider's input, the citation of the dataset is prepared by an LIB data curator, who adjusts the input (submission metadata) to conform to the GFBio citation pattern. The citation is finalized in close collaboration with the data provider. For details see the general part of [[https://gfbio.biowikifarm.net/wiki/Data_Publishing/General_part:_GFBio_publication_of_type_1_data_via_BioCASe_data_pipelines|GFBio publication of type 1 data via BioCASe data pipelines]].

Example: ''ZFMK Coleoptera Working Group (2023). ZFMK Coleoptera Oberthuer collection. [Dataset]. Version: 2.0. Data Publisher: LIB Biodiversity Datacenter. https://doi.org/10.20363/ZFMK-Coll.Oberthuer-2023-02''
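DOI registration is carried out interactively in DOI Fabrica. For illustration only, the same registration could also be scripted against the DataCite REST API; the sketch below is a hedged, minimal example in which the test endpoint, repository credentials, DOI suffix, landing-page URL and metadata values are placeholders and do not reflect the data center's actual workflow.

<code python>
import requests  # third-party HTTP client (pip install requests)

API_URL = "https://api.test.datacite.org/dois"  # DataCite test endpoint; production is api.datacite.org
REPO_ID = "XXXX.EXAMPLE"                        # placeholder repository account
PASSWORD = "secret"                             # placeholder password

# Minimal JSON:API payload; all attribute values below are illustrative placeholders.
payload = {
    "data": {
        "type": "dois",
        "attributes": {
            "event": "publish",                          # register the DOI and make it findable
            "doi": "10.20363/example-dataset-2023-01",   # prefix as in the example citation, suffix hypothetical
            "creators": [{"name": "ZFMK Coleoptera Working Group"}],
            "titles": [{"title": "Example occurrence dataset"}],
            "publisher": "LIB Biodiversity Datacenter",
            "publicationYear": 2023,
            "types": {"resourceTypeGeneral": "Dataset"},
            "url": "https://media.leibniz-lib.de/example-landing-page",  # placeholder landing page
            "schemaVersion": "http://datacite.org/schema/kernel-4",
        },
    }
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Content-Type": "application/vnd.api+json"},
    auth=(REPO_ID, PASSWORD),
    timeout=30,
)
response.raise_for_status()
print(response.json()["data"]["id"])  # the registered DOI
</code>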
==== Archiving ====

Archival Information Packages (AIPs according to OAIS) are created from all data and metadata submitted and curated within the LIB in-house management systems.

; GitLab : All submitted files are archived in GitLab as they are. In addition, the import schemes used for DiversityWorkbench are archived here.
; DWB : Occurrence data stored in DiversityWorkbench are exported on a regular basis as tab-separated CSV files and archived in the LIB intranet file system.
; LIB Intranet Filesystem : Backups stored in specific folders on the LIB intranet file system are transferred to tapes in the internal tape library on a regular basis.
; easyDB : Multimedia files and versioned ABCD packages are stored in easyDB, which has its own backup in the LIB tape library (a simple consistency check on such a package is sketched below this list).
; LIB Tape Library : The generated AIPs are archived in the LIB tape library. Two identical copies of these tapes are stored at two different locations within the LIB.
; Morph·D·Base : The data in Morph·D·Base are backed up regularly. This backup is available as a redundant copy separate from the running production system. The backup is copied to a file server located in the LIB IT department, whereas the running system is housed within the data center of the University of Bonn.

For detailed information about backups and recovery see the [[:digital_preservation_plan|Preservation Plan]].
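As referenced in the easyDB entry above, a simple consistency check can be run on an archived ABCD package. The sketch below opens a DIP zip archive and counts the ABCD ''<Unit>'' records it contains; the archive file name and internal layout are assumptions for illustration, and the check is not part of the documented backup procedure.

<code python>
import zipfile
import xml.etree.ElementTree as ET

ARCHIVE = "example-dip-2.0.zip"  # hypothetical DIP archive name

def count_units(archive_path: str) -> int:
    """Count ABCD <Unit> elements across all XML files inside a DIP zip archive."""
    total = 0
    with zipfile.ZipFile(archive_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".xml"):
                continue
            with zf.open(name) as fh:
                tree = ET.parse(fh)
            # Match by local name so the count works regardless of the ABCD namespace version.
            total += sum(1 for el in tree.iter() if el.tag.split("}")[-1] == "Unit")
    return total

if __name__ == "__main__":
    print(f"{ARCHIVE}: {count_units(ARCHIVE)} units")
</code>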
==== Access to data via different portals ====

Indexed and faceted data are available in public portals such as GBIF, Europeana and GFBio, which are operated by national or international consortia. Specialized web portals for access to the data are developed and provided by the LIB Data Center. These include the [[https://collections.leibniz-lib.de|LIB digital collection catalogue]], the portal of the [[https://bolgermany.de|German Barcode of Life project (GBOL)]], and interfaces to the data that also provide APIs for machine-readable formats and access to the data using CETAF stable identifiers ([[https://id.zfmk.de|id.zfmk.de]] or [[https://id.zmh-coll.de|id.zmh-coll.de]]). The published data are provided with a recommended citation, license and DOI (see above).

=== Access to published data (unit level) ===

; GFBio, VAT, and LAND : GFBio has developed a web portal that provides search functionalities for biodiversity-related datasets and data. All uploaded data are annotated by GFBio's terminology server, thus providing a richer search experience. The Visualizing and Analysing Tool (VAT) allows for analysis and modelling of geo-referenced data. See the general part of [[https://gfbio.biowikifarm.net/wiki/Data_Publishing/General_part:_GFBio_publication_of_type_1_data_via_BioCASe_data_pipelines|GFBio publication of type 1 data via BioCASe data pipelines]]. The "Lebendiger Atlas - Natur Deutschland (LAND)" provides an overview of biodiversity data from Germany: [[https://land.gbif.de/|land.gbif.de]]. Here, data from Germany that are made available through GFBio are made findable.
; Europeana : The multimedia data are accessible via [[https://www.europeana.eu/|Europeana]].
; Digital Collection Catalogue : All data based on physical vouchers within the natural history collections of LIB are accessible via the [[https://collections.leibniz-lib.de/|LIB Digital Collection Catalogue]].
; Morph·D·Base : The online web repository for morphological data provides public access to specimen, taxon, literature and multimedia data. All data are directly accessible in [[https://www.morphdbase.de/|Morph·D·Base]].
; easyDB : The digital asset management system at LIB provides access to the digital assets (i.e. multimedia, documents, zip archives) stored in easyDB. They are published from within the software via [[https://media.leibniz-lib.de/|media.leibniz-lib.de]]. An API to easyDB is available under: https://media.LIB.de/eaurls/
; id.LIB.de : The API to all occurrence data is accessible to humans and machines in HTML, JSON, or RDF format using [[https://id.zfmk.de/collection_zfmk/|id.zfmk.de/collection_zfmk/]] or [[https://id.zmh-coll.de|id.zmh-coll.de/collection_zmh]] (see the sketch at the end of this page).

=== Access to original and raw data (dataset level) ===

We provide landing pages and direct download links to the datasets from within search results of the [[https://www.gfbio.org/search?q=zfmk+zip|GFBio web portal]], our GitLab installation at gitlab.leibniz-lib.de (login required), the digital asset management system [[https://media.leibniz-lib.de/biocase-archives|easydb]] (see above), and the BioCASe Provider Software (BPS) together with the [[https://biocase.zfmk.de/biocase/querytool/main.cgi|local query tool of BPS]] as operated at LIB.
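As referenced in the id.LIB.de entry above, machine access to individual occurrence records works via the stable-identifier URLs. The sketch below is a minimal, hedged example of requesting a machine-readable representation through standard HTTP content negotiation; the specimen identifier is a placeholder, and the exact media types honoured by the service may differ.

<code python>
import requests  # third-party HTTP client (pip install requests)

# Placeholder identifier; substitute a real CETAF stable identifier from
# id.zfmk.de/collection_zfmk/ or id.zmh-coll.de/collection_zmh.
SPECIMEN_URI = "https://id.zfmk.de/collection_zfmk/<specimen-id>"

def fetch_representation(uri: str, mime_type: str) -> str:
    """Request a specific representation of a record via HTTP content negotiation."""
    response = requests.get(uri, headers={"Accept": mime_type}, timeout=30)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    # RDF and JSON representations of the same record (media types assumed).
    print(fetch_representation(SPECIMEN_URI, "application/rdf+xml"))
    print(fetch_representation(SPECIMEN_URI, "application/json"))
</code>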