Transparency, openness, and reproducibility are readily recognized as vital features of science. One might expect these valued features to be routine in daily practice, but growing evidence suggests that this is not the reality. A research culture that rewards innovation can undermine the practices that support verification. In the traditional view of scholarly communication, data are merely a by-product of research: a means to an article publication, forgotten once they have served that purpose.
Moreover, there are no universally accepted norms or guidelines that enable open practices in research. Most research institutions and labs follow their own guidelines, which litters the road to open science with obstacles.
Wilkinson et al. (2016) published an open-access article that coined the acronym FAIR: Findable, Accessible, Interoperable, and Reusable. Making data FAIR means keeping data as open as possible and as closed as necessary. The principles provide a set of guidelines for publishing digital resources, such as datasets, code, workflows, and research objects, in a manner that makes them FAIR.
With rising awareness of the FAIR guiding principles, it is safe to say that these guidelines are accepted across domains and institutions, even though no universally accepted set of metrics applies to all digital objects. There are two reasons for this. First, the FAIR guidelines are guiding principles, and are therefore somewhat subjective in nature. Second, stakeholders such as data stewards have their own interpretations of what a FAIR digital object is.
Making data FAIR benefits an organisation in numerous ways, including the following:
- Saves time on data collection
- Increases citations
- Creates more funding opportunities
- Prevents data loss
- Benefits stakeholders
- Enables data reuse for innovation in the private sector
What does FAIR mean to us?
Metadata are a crucial ingredient in this evaluation. Rich, descriptive, machine-actionable metadata are the key tool for answering questions about the data through a well-defined set of metrics. These measurable indicators give a clear picture of how much information is available and of the steps that can be taken to maximise the FAIRness of data.
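As a minimal sketch of what such metric-based evaluation can look like, the snippet below scores each FAIR facet as the fraction of its required metadata fields that are present. The facet-to-field mapping and field names here are illustrative assumptions, not any standard or Polly's actual metrics.

```python
# Illustrative FAIRness check: the required fields per facet are
# hypothetical examples, not an official FAIR metric set.
REQUIRED_FIELDS = {
    "findable": ["identifier", "title", "keywords"],
    "accessible": ["access_url", "license"],
    "interoperable": ["format", "vocabulary"],
    "reusable": ["license", "provenance"],
}

def fairness_report(metadata: dict) -> dict:
    """Score each facet as the fraction of its required fields present."""
    report = {}
    for facet, fields in REQUIRED_FIELDS.items():
        present = [f for f in fields if metadata.get(f)]
        report[facet] = len(present) / len(fields)
    return report

# Example: a dataset record missing provenance and a controlled vocabulary.
record = {
    "identifier": "doi:10.xxxx/example",
    "title": "RNA-seq of treated cell lines",
    "keywords": ["RNA-seq", "oncology"],
    "access_url": "https://example.org/data/1",
    "license": "CC-BY-4.0",
    "format": "text/csv",
}
print(fairness_report(record))
```

A report like this points directly at the next FAIRification step: here, the incomplete interoperable and reusable scores flag the missing vocabulary and provenance fields.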
At Elucidata, our cloud platform Polly uses proprietary ML-based curation technology for the FAIRification of publicly available molecular data. Polly makes attaining FAIR an opportunity, not a chore. In our previous blog, we described how FAIR principles can bolster reproducibility and research efforts in an organization.
Making data FAIR is a continuous process, especially in biomedical science, given the rapid advancement of data mining and curation techniques. It is of utmost importance to provide datasets that follow domain-relevant standards. In the era of big data, it has also become clear that making data FAIR means reducing human intervention as much as possible, in order to ensure robust data analysis. Timeliness of biomedical data sharing is a crucial factor, especially during public health emergencies, so that research communities can collaborate effectively and accelerate the speed of response and discovery. Hence, data, especially public data, should be made FAIR as early as possible.
- Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., … & Contestabile, M. (2015). Promoting an open research culture. Science, 348(6242), 1422-1425.
- Wilkinson, M. D., Dumontier, M., Aalbersberg, I. J., Appleton, G., Axton, M., Baak, A., … & Bouwman, J. (2016). The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 3(1), 1-9.
Want to FAIR-ify your data? Contact us at email@example.com or book a demo here.