Introduction
Technological capacity for data production is revolutionising biology 1, but this capacity is not necessarily matched by the ability to analyse and integrate data efficiently, or to enable long-term data sharing and reuse. There are selfish as well as altruistic benefits to making research data reusable 2: it allows one to find and reuse one’s own previously-generated data easily; it is associated with higher citation rates 3, 4; and it ensures eligibility for funding from, and publication in, venues that mandate data sharing, an increasingly common requirement (e.g. Final NIH statement on sharing research data, Wellcome Trust policy on data management and sharing, Bill & Melinda Gates Foundation open access policy). Currently we are losing data at a rapid rate, with up to 80% unavailable after 20 years 5. This undermines reproducibility - assessing the robustness of scientific conclusions by ensuring experiments and findings can be reproduced - which underpins the scientific method. Once access to the underlying data is lost, replicability, reproducibility and extensibility 6 are reduced.
At a broader societal level, the full value of research data may go beyond the initial use case in unforeseen ways 7, 8, so ensuring data quality and reusability is crucial to realising its potential value 9– 12. The recent publication of the FAIR principles 9, 13 identifies four key criteria for high-quality research data: the data should be Findable, Accessible, Interoperable and Reusable. Whereas a traditional view of data focuses only on collecting, processing, analysing data and publishing results, a life cycle view reveals the additional importance of finding, storing and sharing data 11. Throughout this article, we present a researcher-focused data life cycle framework that has commonalities with other published frameworks [e.g. the DataONE Data Life Cycle, the US Geological Survey Science Data Lifecycle Model and 11, 14– 15], but is aimed at life science researchers specifically ( Figure 1).
Figure 1.
The Data Life Cycle framework for bioscience, biomedical and bioinformatics data that is discussed throughout this article.
Black arrows indicate the ‘traditional’, linear view of research data; the green arrows show the steps necessary for data reusability. This framework is likely to be a simplified representation of any given research project, and in practice there would be numerous ‘feedback loops’ and revisiting of previous stages. In addition, the publishing stage can occur at several points in the data life cycle.
Learning how to find, store and share research data is not typically an explicit part of undergraduate or postgraduate training in the biological sciences 16– 18, though some subdomains (e.g. ecology) have a history of data management advice 8, 19. The scope, size and complexity of datasets in many fields have increased dramatically over the last 10–20 years, but the knowledge of how to manage this data is currently limited to specific cohorts of ‘information managers’ (e.g. research data managers, research librarians, database curators and IT professionals with expertise in databases and data schemas 18). In response to institutional and funding requirements around data availability, a number of tools and educational programs have been developed to help researchers create Data Management Plans that address elements of the data lifecycle 20; however, even when a plan is mandated, there is often a gap between the plan and the actions of the researcher 10.
This publication targets life science researchers wanting to improve their data management practice but will also be relevant to life science journals, funders, and research infrastructure bodies. It arose from a 2016 workshop series on the data lifecycle for life science researchers run by EMBL Australia Bioinformatics Resource 21, which provided opportunities to (i) map the current approaches to the data life cycle in biology and bioinformatics, and (ii) present and discuss best practice approaches and standards for key international projects with Australian life scientists and bioinformaticians. Throughout the article we highlight some specific data management challenges mentioned by participants. An earlier version of this article can be found on bioRxiv ( https://doi.org/10.1101/167619).
Finding data
In biology, research data is frequently published as supplementary material to articles, on personal or institutional websites, or in non-discipline-specific repositories like Figshare and Dryad 22. In such cases, data may sit behind a paywall; there is no guarantee it will remain available; and, unless one already knows that it exists and exactly where, it may never be discovered 23. It is only when a dataset is added to a public data repository, along with accompanying standardized descriptive metadata (see Collecting data), that it can be indexed and made publicly available 24. Data repositories also provide unique identifiers that increase findability by enabling persistent linking from other locations and permanent association between data and its metadata.
In the field of molecular biology, a number of bioinformatics-relevant organisations host public data repositories. National and international-level organisations of this kind include the European Bioinformatics Institute (EMBL-EBI) 25, the National Center for Biotechnology Information (NCBI) 26, the DNA Data Bank of Japan (DDBJ) 27, the Swiss Institute of Bioinformatics (SIB) 28, and the four data center members of the worldwide Protein Data Bank 29, which mirror their shared data with regular, frequent updates. This shared central infrastructure is hugely valuable to research and development. For example, EMBL-EBI resources have been valued at over £270 million per year and contribute to ~£1 billion in research efficiencies: a 20-fold return on investment 30.
Numerous repositories are available for biological data (see Table 1 for an overview), though repositories are still lacking for some data types and sub-domains 31. Due to privacy regulations, human data is generally not freely available, and these repositories typically require access requests on a per-dataset basis 32, 33. Tools like the dbGaP browser 34 and the Beacon Network 35 can assist in identifying relevant limited-access datasets and reduce the burden associated with requesting and downloading data.
Many specialised data repositories exist outside of the shared central infrastructure mentioned above, often run voluntarily or with minimal funding. Support for biocuration, hosting and maintenance of these smaller-scale but key resources is a pressing problem 36– 38. The variable quality of user-submitted data in public repositories 39, 40 means that public datasets often require extra curation before reuse. Unfortunately, uptake of the established methods for correcting such data (see the EMBL-EBI and NCBI third-party annotation policies; 41) is low 40, so the results of extra curation may not find their way back into the repositories. Repositories are often not easily searched by generic web search engines 31. Registries, which form a secondary layer linking multiple primary repositories, may offer a more convenient way to search across multiple repositories for data relevant to a researcher’s topics of interest 42.
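Many of these repositories and registries also expose programmatic interfaces, which makes dataset discovery scriptable and repeatable. The sketch below uses the NCBI E-utilities search endpoint to look for Sequence Read Archive records from Python; it assumes the `requests` package is installed, and the search term is purely illustrative.

```python
# A minimal sketch of querying a public repository programmatically via the
# NCBI E-utilities "esearch" endpoint. Consult the E-utilities documentation
# for full query syntax and usage policies before running large searches.
import requests

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_sra(term: str, max_records: int = 20) -> list:
    """Return SRA record identifiers matching a free-text search term."""
    params = {
        "db": "sra",          # target database within NCBI
        "term": term,         # free-text or fielded query
        "retmode": "json",    # ask for a JSON response
        "retmax": max_records,
    }
    response = requests.get(EUTILS_ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Example: find sequencing datasets mentioning a species of interest.
    for uid in search_sra("Drosophila melanogaster RNA-seq"):
        print(uid)
```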
Table 1.
Overview of some representative databases, registries and other tools to find life science data.
A more complete list can be found at FAIRsharing.
| Database/Registry | Name | Description | Datatypes | URL |
|---|---|---|---|---|
| Database | Gene Ontology | Repository of functional roles of gene products | Functional roles as determined experimentally or … | http://geneontology.org/ |
| Database | Kyoto Encyclopedia of Genes and Genomes (KEGG) | Repository for pathway relationships of … | Protein, gene, cell, and genome pathway … | http://www.genome.jp/kegg/ |
| Database | OrthoDB | Repository for gene ortholog information | Protein sequences and orthologous group … | http://www.orthodb.org/ |
| Database | eggNOG | Repository for gene ortholog information with … | Protein sequences, orthologous group … | http://eggnogdb.embl.de/ |
| Database | European Nucleotide Archive (ENA) | Repository for nucleotide sequence information | Raw next-generation sequencing data, genome … | http://www.ebi.ac.uk/ena |
| Database | Sequence Read Archive (SRA) | Repository for nucleotide sequence information | Raw high-throughput DNA sequencing and … | https://www.ncbi.nlm.nih.gov/sra/ |
| Database | GenBank | Repository for nucleotide sequence information | Annotated DNA sequences | https://www.ncbi.nlm.nih.gov/genbank/ |
| Database | ArrayExpress | Repository for genomic expression data | RNA-seq, microarray, ChIP-seq, Bisulfite-seq and … | https://www.ebi.ac.uk/arrayexpress/ |
| Database | Gene Expression Omnibus (GEO) | Repository for genetic/genomic expression data | RNA-seq, microarray, real-time PCR data on … | https://www.ncbi.nlm.nih.gov/geo/ |
| Database | PRIDE | Repository for proteomics data | Protein and peptide identifications, post-translational modifications … | https://www.ebi.ac.uk/pride/archive/ |
| Database | Protein Data Bank (PDB) | Repository for protein structure information | 3D structures of proteins, nucleic acids and … | https://www.wwpdb.org/ |
| Database | MetaboLights | Repository for metabolomics experiments and … | Metabolite structures, reference spectra and … | http://www.ebi.ac.uk/metabolights/ |
| Ontology/Database | ChEBI | Ontology and repository for chemical entities | Small molecule structures and chemical … | https://www.ebi.ac.uk/chebi/ |
| Database | Taxonomy | Repository of taxonomic classification information | Taxonomic classification and nomenclature data … | https://www.ncbi.nlm.nih.gov/taxonomy |
| Database | BioStudies | Repository for descriptions of biological studies … | Study descriptions and supplementary files | https://www.ebi.ac.uk/biostudies/ |
| Database | BioSamples | Repository for information about biological samples | Sample descriptions | https://www.ebi.ac.uk/biosamples/ |
| Database | IntAct | Repository for molecular interaction information | Molecular interactions and evidence type | http://www.ebi.ac.uk/intact/ |
| Database | UniProtKB | Repository for protein sequence and function information | Protein sequences, protein function and … | http://www.uniprot.org/ |
| Database | European Genome-phenome Archive (EGA) | Controlled-access repository for sequence and … | Raw, processed and/or analysed sequence and … | https://www.ebi.ac.uk/ega/ |
| Database | EBI Metagenomics | Repository and analysis service for … | Next-generation sequencing metagenomic … | https://www.ebi.ac.uk/metagenomics/ |
| Database | MG-RAST | Repository and analysis service for … | Next-generation sequencing metagenomic and … | http://metagenomics.anl.gov/ |
| Registry | OmicsDI | Registry for dataset discovery that currently … | Genomic, transcriptomic, proteomic and … | http://www.omicsdi.org |
| Registry | DataMed | Registry for biomedical dataset discovery that … | Genomic, transcriptomic, proteomic, … | https://datamed.org |
| Registry | BioSharing | Curated registry for biological databases, data … | Information on databases, standards and … | https://biosharing.org/ |
| Registry | re3data | Registry for research data repositories across … | Information on research data repositories, terms … | http://www.re3data.org |
Collecting data
The most useful data has associated information about its creation, its content and its context - called metadata. If metadata is well structured, uses consistent element names and contains element values with specific descriptions from agreed-upon vocabularies, it enables machine readability, aggregation, integration and tracking across datasets: allowing for Findability, Interoperability and Reusability 9, 31. One key approach in best-practice metadata collection is to use controlled vocabularies built from ontology terms. Biological ontologies are tools that provide machine-interpretable representations of some aspect of biological reality 31, 43. They are a way of organising and defining objects (i.e. physical entities or processes), and the relationships between them. Sourcing metadata element values from ontologies ensures that the terms used in metadata are consistent and clearly defined. There are several user-friendly tools available to assist researchers in accessing, using and contributing to ontologies ( Table 2).
Table 2.
Useful ontology tools to assist in metadata collection.
| Tool | Task | URL |
|---|---|---|
| Ontology Lookup Service | Discover different ontologies and their contents | http://www.ebi.ac.uk/ols/ |
| OBO Foundry | Table of open biomedical ontologies with information … | http://obofoundry.org/ |
| Zooma | Assign ontology terms using curated mapping | http://www.ebi.ac.uk/spot/zooma/ |
| Webulous | Create new ontology terms easily | https://www.ebi.ac.uk/efo/webulous/ |
| Ontobee | A linked data server that facilitates ontology data … | http://www.ontobee.org |
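As an illustration of the tools in Table 2, ontology services such as the Ontology Lookup Service can also be queried programmatically when many records need to be annotated. The sketch below assumes the `requests` Python package; the endpoint path and response field names reflect one version of the OLS REST API and should be checked against the current OLS documentation.

```python
# A minimal sketch of mapping a free-text label to a candidate controlled
# ontology term using the EMBL-EBI Ontology Lookup Service (OLS) search API.
import requests

OLS_SEARCH = "https://www.ebi.ac.uk/ols/api/search"

def suggest_terms(label: str, ontology: str = "efo", max_hits: int = 5):
    """Suggest ontology terms (label and identifier) for a free-text label."""
    params = {"q": label, "ontology": ontology, "rows": max_hits}
    response = requests.get(OLS_SEARCH, params=params, timeout=30)
    response.raise_for_status()
    docs = response.json().get("response", {}).get("docs", [])
    return [(doc.get("label"), doc.get("obo_id") or doc.get("iri")) for doc in docs]

if __name__ == "__main__":
    # Map an informally recorded tissue name to candidate ontology terms.
    for term_label, term_id in suggest_terms("liver", ontology="uberon"):
        print(term_label, term_id)
```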
Adopting standard data and metadata formats and syntax is critical for compliance with FAIR principles 9, 24, 31, 42, 44. Biological and biomedical research has been considered an especially challenging research field in this regard, as datatypes are extremely heterogeneous and not all have defined data standards 44, 45; many existing data standards are complex and therefore difficult to use 45, or only informally defined, and therefore subject to variation, misrepresentation, and divergence over time 44. Nevertheless, well-established standards exist for a variety of biological data types ( Table 3). FAIRsharing is a useful registry of data standards and policies that also indicates the current status of standards for different data types and those recommended by databases and research organisations 42.
Table 3.
Overview of common standard data formats for ‘omics data.
A more complete list can be found at FAIRsharing.
| Data type | Format name | Description | Reference or URL for format specification | URLs for repositories |
|---|---|---|---|---|
| Raw DNA/RNA sequence | FASTA | FASTA is a common text format to store DNA/RNA/Protein sequences | 74 | |
| Assembled sequence | FASTA | Assemblies without annotation are generally stored in FASTA format | 41 | http://www.ebi.ac.uk/ena |
| Aligned DNA sequence | SAM/BAM | Sequences aligned to a reference are represented in SAM/BAM format | https://samtools.github.io/hts-specs/ | https://www.ncbi.nlm.nih.gov/ |
| Gene model or annotation | GTF/GFF/GFF3 | General feature format or general transfer format are tab-delimited text formats for describing genomic features | https://github.com/The-Sequence-Ontology/ | http://www.ensembl.org/info/ |
| Gene functional annotation | GAF | A GAF file is a GO Annotation File containing annotations of gene products to GO terms | http://geneontology.org/page/go-annotation-file- | http://geneontology.org/page/ |
| Genetic/genomic variation | VCF | A tab-delimited text format to store meta-information as header lines, followed by variant records | https://samtools.github.io/hts-specs/VCFv4.2.pdf | http://www.ensembl.org/info/ |
| Interaction data | PSI-MI XML | Data formats developed to exchange molecular interaction data | http://psidev.info/groups/molecular-interactions | http://www.ebi.ac.uk/intact |
| Raw metabolite data | mzML | XML-based data formats that define mass spectrometry data | http://www.psidev.info/mzml | |
| Protein sequence | FASTA | A text-based format for representing nucleotide or protein sequences | 74 | www.uniprot.org |
| Raw proteome data | mzML | A formally defined XML format for representing mass spectrometry data | http://www.psidev.info/mzml | www.ebi.ac.uk/pride |
| Organisms and … | Darwin Core | The Darwin Core (DwC) standard facilitates the exchange of information about biological diversity | http://rs.tdwg.org/dwc/ | |
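Standard formats also mean that standard software libraries can read and write the data, reducing the need for ad hoc parsers. For example, the following sketch uses Biopython (an assumption about the local environment, not a requirement of the formats themselves) to read and re-write a hypothetical FASTA file of the kind listed in Table 3.

```python
# A minimal sketch of reading and writing FASTA, one of the standard formats
# in Table 3, with Biopython. "input.fasta" is a hypothetical file.
from Bio import SeqIO

records = list(SeqIO.parse("input.fasta", "fasta"))
print(f"Read {len(records)} sequences")
for record in records[:3]:
    print(record.id, len(record.seq))

# Writing records back out keeps them in a standard, reusable format.
SeqIO.write(records, "output.fasta", "fasta")
```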
Most public repositories for biological data (see Table 1 and the Storing data section) require that minimum metadata be submitted with each dataset ( Table 4). This minimum metadata specification typically has broad community input 46. However, minimum metadata standards may not include the crucial metadata fields that give the full context of a particular research project 46, so it is important to gather metadata early, understand how to extend a minimum metadata template to include additional fields in a structured way, and think carefully about all the relevant metadata that might be required for reuse.
Table 4.
Some community-designed minimum information criteria for metadata specifications in life sciences.
A more complete list can be found at FAIRsharing.
| Name | Description | Examples of projects/databases that use it | URL |
|---|---|---|---|
| MINSEQE | Minimum Information about a high-throughput SEQuencing Experiment | Developed by the Functional Genomics Data Society (FGED) | http://fged.org/site_media/pdf/MINSEQE_1.0.pdf |
| MIxS - MIGS/MIMS | Minimum Information about a Genome Sequence/Metagenome Sequence | Developed by the Genomic Standards Consortium | http://wiki.gensc.org/index.php?title=MIGS/MIMS |
| MIMARKS | Minimum Information about a MARKer gene Sequence | Developed by the Genomic Standards Consortium | http://wiki.gensc.org/index.php?title=MIMARKS |
| MIMIx | Minimum Information about a Molecular Interaction eXperiment | Developed by the Proteomics Standards Initiative | http://www.psidev.info/mimix |
| MIAPE | Minimum Information About a Proteomics Experiment | Developed by the Proteomics Standards Initiative | http://www.psidev.info/miape |
| Metabolomics … | Minimal reporting structures that … | Developed by the Metabolomics Standards Initiative | http://www.metabolomics-msi.org/ |
| MIRIAM | Minimal Information Required In the Annotation of Models | Initiated by the BioModels.net effort | http://co.mbine.org/standards/miriam |
| MIAPPE | Minimum Information About a Plant Phenotyping Experiment | Adopted by the Plant Phenomics and … | http://cropnet.pl/phenotypes/wp-content/uploads/2016/04/MIAPPE.pdf |
| MDM | Minimal Data for Mapping for … | Developed by the Global Microbial Identifier initiative | http://www.ebi.ac.uk/ena/submit/pathogen-data |
| FAANG sample metadata | Metadata specification for biological samples | Developed and used by the Functional Annotation of Animal Genomes (FAANG) project | https://github.com/FAANG/faang-metadata/blob/master/docs/faang_ |
| FAANG experimental metadata | Metadata specification for experiments | Developed and used by the Functional Annotation of Animal Genomes (FAANG) project | https://github.com/FAANG/faang-metadata/blob/master/docs/faang_ |
| FAANG analysis metadata | Metadata specification for analysis … | Developed and used by the Functional Annotation of Animal Genomes (FAANG) project | https://github.com/FAANG/faang-metadata/blob/master/docs/faang_ |
| SNOMED-CT | Medical terminology and … | Commercial but collaboratively-designed … | http://www.snomed.org/snomed-ct |
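One practical way to capture such extended metadata in a structured form is to record each sample as a set of key-value pairs, using controlled-vocabulary terms where possible. The sketch below is illustrative only: the field names are hypothetical and do not correspond to any particular checklist in Table 4, which should take precedence for actual submissions.

```python
# A minimal sketch of a structured sample metadata record, loosely inspired by
# minimum information checklists. Field names and values are illustrative.
import json

sample_metadata = {
    "sample_id": "SAMPLE-0001",               # hypothetical local identifier
    "collection_date": "2017-03-15",           # ISO 8601 dates aid interoperability
    "geographic_location": "Australia: Victoria",
    "organism": {
        "name": "Mus musculus",
        "ncbi_taxonomy_id": "10090",            # link to the NCBI Taxonomy database
    },
    "tissue": {
        "label": "liver",
        "ontology_term": "UBERON:0002107",      # controlled vocabulary term
    },
    # Project-specific extension fields, kept structured rather than free text.
    "extra": {
        "storage_temperature_celsius": -80,
        "collected_by": "A. Researcher",
    },
}

with open("SAMPLE-0001.metadata.json", "w") as handle:
    json.dump(sample_metadata, handle, indent=2)
```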
Integrating, processing and analysing data
Where existing and/or newly-collected datasets are to be used in the same experiment, they must first be integrated. This may involve initial processing of one or more datasets so that they share format and granularity, or so that relevant fields map correctly. The researcher also needs to ensure integration at ‘dependency’ level: for example, controlled vocabularies or genome assemblies used in data generation/processing must match or be easily converted. The plethora of autonomous data repositories has created problems with mapping data and annotations among repositories 47, 48. Current large-scale efforts aim to improve interoperability using Linked Data and other Semantic Web tools 48 as well as extensive ontology development (see Collecting data section). The Monarch Initiative is an example of a project that achieves new insights by integrating existing data from multiple sources: in this case, data from animal and human genetic, phenotypic and other repositories is brought together via a custom data flow to help identify unrecognised animal models for human disease 49. In smaller projects, the need for individual researchers to integrate data will often inform the way new data is collected, to ensure it matches existing datasets, creating a feedback loop in the data lifecycle that highlights the need for prior planning ( Figure 2). Seamless solutions are still some way off 50 for all but a handful of applications.
Figure 2.
Flowchart of the data life cycle stages applied to an example research project.
Bold text indicates new data, software or workflow objects created during the project. Solid thin arrows indicate movement of objects from creation to storage and sharing. Dashed thin arrows indicate where downstream entities should influence decisions made at a given step. (For example, the choice of format, granularity, metadata content and structure of new data collected may be influenced by existing software requirements, existing data characteristics and requirements of the archive where the data will be deposited). Purple stars indicate objects for which the FAIR principles 9 can provide further guidance. Dotted thin arrows indicate citation of an object using its unique persistent identifier. Brown stars indicate where FAIRsharing can help identify appropriate archives for storing and sharing.
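At the scale of an individual project, much of the integration work described above amounts to harmonising identifiers between datasets and checking which records fail to map, since unmapped records often reveal mismatched formats, granularity or annotation versions. The sketch below uses pandas with hypothetical file and column names.

```python
# A minimal sketch of integrating two tabular datasets by mapping shared
# identifiers with pandas. File and column names are hypothetical.
import pandas as pd

expression = pd.read_csv("expression_results.csv")                # e.g. gene_id, tpm
phenotypes = pd.read_csv("phenotype_annotations.tsv", sep="\t")   # e.g. GeneID, phenotype

# Harmonise the identifier columns before joining (same name, same case).
phenotypes = phenotypes.rename(columns={"GeneID": "gene_id"})
phenotypes["gene_id"] = phenotypes["gene_id"].str.upper()
expression["gene_id"] = expression["gene_id"].str.upper()

merged = expression.merge(phenotypes, on="gene_id", how="left", indicator=True)

# Records present in only one dataset flag granularity or versioning problems.
unmatched = merged[merged["_merge"] != "both"]
print(f"{len(unmatched)} of {len(merged)} records did not map to a phenotype")
```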
Recording and reporting how research data is processed and analysed computationally is crucial for reproducibility and assessment of research quality 1, 51. This can be aided by scientific workflow approaches that facilitate both recording and reproducing processing and analysis steps 1, though many experiments will require ‘one-off’ workflows that may not function with existing workflow management systems. Full reproducibility requires access to the software, software versions, workflow, dependencies and operating system used as well as the data and software code itself 1, 52. Therefore, although computational work is often seen as enabling reproducibility in the short term, in the long term it is fragile and reproducibility is limited (e.g. discussion by D. Katz, K. Hinsen and C.T. Brown). Best-practice approaches for preserving data processing and analysis code involve hosting source code in a repository where it receives a unique identifier and is under version control; where it is open, accessible, interoperable and reusable - broadly mapping to the FAIR principles for data. GitHub and Bitbucket, for example, fulfil these criteria, and Zenodo additionally generates Digital Object Identifiers (DOIs) for submissions and guarantees long-term archiving. Workflows can also be preserved in repositories along with relevant annotations (reviewed in 1). A complementary approach is containerised computing (e.g. Docker) which bundles operating system, software, code and potentially workflows and data together. Several recent publications have suggested ways to improve current practice in research software development to aid in reproducibility 15, 53– 55.
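A small, low-effort step towards this kind of reproducibility is to record the computational environment alongside the results. The sketch below assumes Python 3.8 or later and purely illustrative package names; it writes the interpreter version, platform and installed package versions to a JSON file that can be archived with the analysis.

```python
# A minimal sketch of capturing the software environment used for an analysis.
# Package names are examples; record whatever your analysis actually imports.
import json
import platform
import sys
from importlib import metadata

packages = {}
for name in ["numpy", "pandas", "requests"]:   # illustrative dependencies
    try:
        packages[name] = metadata.version(name)
    except metadata.PackageNotFoundError:
        packages[name] = "not installed"

environment = {
    "python_version": sys.version,
    "platform": platform.platform(),
    "packages": packages,
}

with open("environment.json", "w") as handle:
    json.dump(environment, handle, indent=2)
```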
The same points hold for wet-lab data production: for full reproducibility within and outside the lab, it is important to capture and enable access to specimen cell lines, tissue samples and/or DNA as well as reagents 56. Wet-lab methods can be captured in electronic laboratory notebooks and reported in the BioSamples database 57, protocols.io or OpenWetWare; specimens can be lodged in biobanks, culture or museum collections 58– 62; but the effort involved in enabling full reproducibility remains extensive. Electronic laboratory notebooks are frequently suggested as a sensible way to make this information openly available and archived 63. Some partial solutions exist (e.g. LabTrove, BlogMyData, Benchling and others 64), including tools for specific domains such as the Scratchpad Virtual Research Environment for natural history research 65. Other tools can act as or be combined to produce notebooks for small standalone code-based projects (see 66 and update), including Jupyter Notebook, R Markdown, and Docker. However, it remains a challenge to implement online laboratory notebooks to cover both field/lab work and computer-based work, especially when computer work is extensive, involved and non-modular 51. Currently, no best-practice guidelines or minimum information standards exist for use of electronic laboratory notebooks 6. We suggest that appropriate minimum information to be recorded for most computer-based tasks should include date, task name and brief description, aim, actual command(s) used, software names and versions used, input/output file names and locations, script names and locations, all in a simple text format.
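As a sketch of what such a record might look like in practice, the following snippet appends one entry to a plain-text (JSON Lines) notebook file; the task, commands, software versions and file paths shown are hypothetical, and the field names simply follow the minimum information listed above.

```python
# A minimal sketch of writing the suggested minimum information for a
# computer-based task as one plain-text notebook entry. Values are hypothetical.
import json
from datetime import date

entry = {
    "date": date.today().isoformat(),
    "task": "Read trimming",
    "description": "Trim adapters and low-quality bases from raw reads",
    "aim": "Prepare reads for alignment to the reference genome",
    "commands": ["trim_galore --paired sample_R1.fastq.gz sample_R2.fastq.gz"],
    "software": {"trim_galore": "0.6.7", "cutadapt": "4.1"},  # illustrative versions
    "inputs": ["raw/sample_R1.fastq.gz", "raw/sample_R2.fastq.gz"],
    "outputs": ["trimmed/sample_R1_val_1.fq.gz", "trimmed/sample_R2_val_2.fq.gz"],
    "scripts": ["scripts/run_trimming.sh"],
}

# One JSON object per line keeps the notebook greppable and easy to append to.
with open("lab_notebook.jsonl", "a") as handle:
    handle.write(json.dumps(entry) + "\n")
```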
In the authors’ experience, the data processing and analysis stage is one of the most challenging for openness. As reported elsewhere 16– 18, we have observed a gap between modern biological research as a field of data science, and biology as it is still mostly taught in undergraduate courses, with little or no focus on computational analysis, or project or data management. This gap has left researchers lacking key knowledge and skills required to implement best practices in dealing with the life cycle of their data.
Publishing data
Traditionally, scientific publications included raw research data, but in recent times datasets have grown beyond the scope of practical inclusion in a manuscript 11, 51. Selected data outputs are often included without sharing or publishing the underlying raw data 14. Journals increasingly recommend or require deposition of raw data in a public repository [e.g. 67], although exceptions have been made for publications containing commercially-relevant data 68. The current data-sharing mandate is somewhat field-dependent 5, 69 and also varies within fields 70. For example, in the field of bioinformatics, the UPSIDE principle 71 is referred to by some journals (e.g. Bioinformatics), while others have journal- or publisher-specific policies (e.g. BMC Bioinformatics).
The vast majority of scientific journals require inclusion of processing and analysis methods in ‘sufficient detail for reproduction’ (e.g. Public Library of Science submission and data availability guidelines; International Committee of Medical Journal Editors manuscript preparation guidelines; Science instructions for authors; Elsevier Cell Press STAR Methods; and 72), though journal requirements are diverse and complex 73, and the level of detail authors provide can vary greatly in practice 76, 77. More recently, many authors have highlighted that full reproducibility requires sharing data and resources at all stages of the scientific process, from raw data (including biological samples) to full methods and analysis workflows 1, 6, 61, 77. However, this remains a challenge 78, 79, as discussed in the Processing and analysing data section. To our knowledge, strategies for enabling computational reproducibility are currently not mandated by any scientific journal.
A recent development in the field of scientific publishing is the establishment of ‘data journals’: scientific journals that publish papers describing datasets. This gives authors a vehicle to accrue citations (still a dominant metric of academic impact) for data production alone, which can often be labour-intensive and expensive yet is typically not well recognised under the traditional publishing model. Examples of this article type include the Data Descriptor in Scientific Data and the Data Note in GigaScience, which do not include detailed new analysis but rather focus on describing and enabling reuse of datasets.
The movement towards sharing research publications themselves (‘Open Access Publishing’) has been discussed extensively elsewhere [e.g. 23, 80, 81]. Publications have associated metadata (creator, date, title etc.; see Dublin Core Metadata Initiative metadata terms) and unique identifiers (PubMed ID for biomedical and some life science journals, DOIs for the vast majority of journals; see Table 5). The ORCID system enables researchers to claim their own unique identifier, which can be linked to their publications. The use of unique identifiers within publications referring to repository records (e.g. genes, proteins, chemical entities) is not generally mandated by journals, although it would ensure a common vocabulary is used and so make scientific results more interoperable and reusable 82. Some efforts are underway to make this easier for researchers: for example, Genetics and other Genetics Society of America journals assist authors in linking gene names to model organism database entries.
Table 5.
Identifiers throughout the data life cycle.
| Name | Relevant stage of the data life cycle | Description | URL |
|---|---|---|---|
| Digital Object Identifier (DOI) | Publishing, Sharing, … | A unique identifier for a digital (or physical or abstract) object | https://www.doi.org/ |
| Open Researcher and Contributor ID (ORCID) | Publishing | An identifier for a specific researcher that … | https://orcid.org/ |
| Repository accession number | Finding, Processing/analysing, … | A unique identifier for a record within a given repository | For example, https://support.ncbi.nlm.nih.gov/link/portal/28045/28049/ |
| PubMed ID (PMID) | Publishing | An example of a repository-specific unique identifier | https://www.ncbi.nlm.nih.gov/pubmed/ |
| International Standard Serial Number (ISSN) | Publishing | A unique identifier for a journal, magazine or periodical | http://www.issn.org/ |
| International Standard Book Number (ISBN) | Publishing | A unique identifier for a book, specific to the edition | https://www.isbn-international.org |
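Persistent identifiers are also machine-actionable. For example, a DOI can be resolved to citation metadata using DOI content negotiation, as in the sketch below, which assumes the `requests` Python package and uses the bioRxiv DOI cited in the Introduction; the metadata formats supported depend on the DOI registration agency, so treat the Accept header as an assumption to verify.

```python
# A minimal sketch of resolving a DOI (Table 5) to machine-readable citation
# metadata via DOI content negotiation.
import requests

def doi_metadata(doi: str) -> dict:
    """Fetch citation metadata for a DOI as CSL JSON, where supported."""
    response = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/vnd.citationstyles.csl+json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    record = doi_metadata("10.1101/167619")   # bioRxiv preprint of this article
    print(record.get("title"), record.get("issued"))
```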
Storing data
While primary data archives are the best location for raw data and some downstream data outputs ( Table 1), researchers also need local data storage solutions during the processing and analysis stages. Data storage requirements vary among research domains, with major challenges often evident for groups working on taxa with large genomes (e.g. crop plants), which require large storage resources, or on human data, where privacy regulations may require local data storage, access controls (e.g. the GA4GH Security Technology Infrastructure document) and conversion to non-identifiable data if data is to be shared (see Sharing data section). For data where privacy is a concern, one approach is separating the data storage from the analysis location and limiting the analysis outputs to ‘nondisclosive’ results 83. An example is DataShield 83, which is mostly used for public health rather than ‘omics’ data. Subdomain-specific practice should be considered when choosing appropriate formats and linking metadata, as outlined in 84. In addition, long-term preservation of research data should consider threats such as storage failure, mistaken erasure, bit rot, outdated media, outdated formats, loss of context and organisational failure 85.
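A simple safeguard against several of these threats, particularly bit rot and incomplete transfers, is to record checksums for stored files and verify them periodically. The sketch below uses only Python's standard library; the data directory and file pattern are hypothetical.

```python
# A minimal sketch of recording file checksums so that silent corruption or
# incomplete transfers can be detected later by recomputing and comparing them.
import hashlib
from pathlib import Path

def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 checksum of a file without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Write a manifest that can be re-checked before reuse or after migration.
    with open("checksums.sha256", "w") as out:
        for data_file in sorted(Path("data").glob("*.fastq.gz")):
            out.write(f"{sha256sum(data_file)}  {data_file}\n")
```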
Sharing data
The best-practice approach to sharing biological data is to deposit it (with associated metadata) in a primary archive suitable for that datatype 8 and compliant with FAIR principles. As highlighted in the Storing data section, these archives have both data storage and public sharing as their core mission, making them the most reliable location for long-term data storage. Alternative data sharing venues (e.g. FigShare, Dryad) do not require or implement specific metadata or data standards. This means that while these venues have a low barrier to entry for submitters, the data is not FAIR unless submitters have independently decided to comply with more stringent criteria. If no suitable archive exists for a datatype, an institutional repository may be a good option.
Data with privacy concerns (for example, containing human-derived, commercially-important or sensitive environmental information) can require extensive planning and compliance with a range of institutional and regulatory requirements as well as relevant laws 86 (for the Australian context, see the Australian National Data Service Publishing and Sharing Sensitive Data Guide, the National Health and Medical Research Council statement on ethical conduct in human research, and the Australian National Medical Research Storage Facility discussion paper on legal, best practice and security frameworks). In particular, it is often necessary for users of the data to be correctly identified and authenticated via a mechanism such as OpenID, eduGAIN, or (in the Australian context) AAF, which places the onus of correctly identifying users on the institutions that issue their credentials. Knowing who the users are makes it possible to restrict access, require compliance with the conditions under which the data is provided, and track user activity as an audit trail. The Data Access Compliance Office of the International Cancer Genome Consortium is an example of how to manage requests for access to controlled data. Large-scale collaborations such as the Global Alliance for Genomics and Health (GA4GH) are leading the way in approaches to sharing sensitive data across institutions and jurisdictions ( 87; also see the GA4GH Privacy and Security Policy). Importantly, plans for data sharing should be made at the start of a research project and reviewed during the project, to ensure ethical approval is in place and that the resources and metadata needed for effective sharing are available at earlier stages of the data life cycle 3.
In our experience, the majority of life science researchers are familiar with at least some public primary data repositories, and many have submitted data to them previously. A common complaint concerns the usability of current data submission tools and a lack of transparency around metadata requirements and the rationale for them. Some researchers raise specific issues about the potential limitations of public data repositories where their data departs from the assumptions of the repository (e.g. unusual gene models supported by experimental evidence can be rejected by the automated NCBI curation system). In such cases, researchers can provide feedback to the repositories, but may not be aware that this is possible - it could be made clearer on the repository websites. Again, this points in part to existing limitations in the undergraduate and postgraduate training received by researchers, where the concepts discussed in this article are presented as afterthoughts, if at all. On the repository side, while there is a lot of useful information and training material available to guide researchers through the submission process (e.g. the EMBL-EBI Train Online webinars and online training modules), it is not always linked clearly from the database portals or submission pages themselves. Similarly, while there are specifications and standards available for many kinds of metadata [ Table 4; also see FAIRsharing], many do not have example templates available, which would assist researchers in implementing the standards in practice.
What can the research community do to encourage best-practice?
We believe that the biological/biomedical community and individual researchers have a responsibility to the public to help advance knowledge by making research data FAIR for reuse 9, especially if the data were generated using public funding. There are several steps that can assist in this mission:
1.
Researchers reusing any data should openly acknowledge this fact and fully cite the dataset, using unique identifiers 8, 10, 31.
2.
Researchers should endeavour to improve their own data management practices in line with best practice in their subdomain – even incremental improvement is better than none!
3.
Researchers should provide feedback to their institution, data repositories and bodies responsible for community resources (data standards, controlled vocabularies etc.) where they identify roadblocks to good data management.
4.
Senior scientists should lead by example and ensure all the data generated by their laboratories is well-managed, fully annotated with the appropriate metadata and made publicly available in an appropriate repository.
5.
The importance of data management and benefits of data reuse should be taught at the undergraduate and postgraduate levels 18. Computational biology and bioinformatics courses in particular should include material about data repositories, data and metadata standards, data discovery and access strategies. Material should be domain-specific enough for students to attain learning outcomes directly relevant to their research field.
6.
Funding bodies are already taking a lead role in this area by requiring the incorporation of a data management plan into grant applications. A next step would be for a formal check, at the end of the grant period, that this plan has been adhered to and data is available in an appropriate format for reuse 10.
7.
Funding bodies and research institutions should judge quality dataset generation as a valued metric when evaluating grant or promotion applications.
8.
Similarly, leadership and participation in community efforts in data and metadata standards, and open software and workflow development should be recognised as academic outputs.
9.
Data repositories should ensure that the data deposition and third-party annotation processes are as FAIR and painless as possible to the naive researcher, without the need for extensive bioinformatics support 40.
10.
Journals should require editors and reviewers to check manuscripts to ensure that all data, including research software code and samples where appropriate, have been made publicly available in an appropriate repository, and that methods have been described in enough detail to allow re-use and meaningful reanalysis 8.
Conclusions
While the concept of a life cycle for research data is appealing from an Open Science perspective, challenges remain for life science researchers to put this into practice. Among attendees of the workshop series that gave rise to this publication, we noted limited awareness of the resources available to researchers to assist in finding, collecting, processing, analysing, publishing, storing and sharing FAIR data. We believe this article provides a useful overview of the relevant concepts and an introduction to key organisations, resources and guidelines to help researchers improve their data management practices.
Furthermore, we note that data management in the era of biology as a data science is a complex and evolving topic and both best practices and challenges are highly domain-specific, even within the life sciences. This factor may not always be appreciated at the organisational level, but has major practical implications for the quality and interoperability of shared life science data. Finally, domain-specific education and training in data management would be of great value to the life science research workforce, and we note an existing gap at the undergraduate, postgraduate and short course level in this area.
Copyright: © 2018 Griffin PC et al. This work is published under the Creative Commons Attribution 4.0 License ( http://creativecommons.org/licenses/by/4.0/).
Abstract
Throughout history, the life sciences have been revolutionised by technological advances; in our era this is manifested by advances in instrumentation for data generation, and consequently researchers now routinely handle large amounts of heterogeneous data in digital formats. The simultaneous transitions towards biology as a data science and towards a ‘life cycle’ view of research data pose new challenges. Researchers face a bewildering landscape of data management requirements, recommendations and regulations, without necessarily being able to access data management training or possessing a clear understanding of practical approaches that can assist in data management in their particular research domain.
Here we provide an overview of best practice data life cycle approaches for researchers in the life sciences/bioinformatics space with a particular focus on ‘omics’ datasets and computer-based data processing and analysis. We discuss the different stages of the data life cycle and provide practical suggestions for useful tools and resources to improve data management practices.