Don't Let Your Precious Data Become Unusable

Ashwin Ramesh
April 13, 2022

According to a 2021 survey by Gartner[1], poor-quality data costs organizations $12.9 million annually on average. That number keeps climbing, given that the volume of data created, captured, copied, and consumed worldwide is estimated to have grown[2] from about 9 zettabytes in 2013 to about 97 zettabytes in 2022. If the average cost of data management is 3.5% of a company's revenue, and half of that information has no value, there is a material waste of capital. For companies in verticals such as life sciences, with a research and development function, data management costs are significantly higher because of the large volumes of data generated and accumulated while developing products. For instance, a single sequenced human genome is approximately 200 gigabytes[3]. How do you extract meaningful insights from that much data?
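To make the waste concrete, here is a minimal back-of-the-envelope sketch; the $100M revenue figure is a hypothetical example, not from the survey:

```python
# Back-of-the-envelope estimate of capital wasted on low-value data.
revenue = 100_000_000      # annual revenue in USD (hypothetical company)
data_mgmt_share = 0.035    # data management at ~3.5% of revenue
low_value_fraction = 0.5   # assume half the stored information has no value

data_mgmt_cost = revenue * data_mgmt_share
wasted = data_mgmt_cost * low_value_fraction
print(f"Data management spend: ${data_mgmt_cost:,.0f}")  # $3,500,000
print(f"Spend on low-value data: ${wasted:,.0f}")         # $1,750,000
```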

And more importantly, where does the rest of the data go? You guessed it: "somewhere" in the forbidden server room of your office. Now you face the double whammy of capturing and maintaining troves of unusable data in data dumps.


Why Should Life Sciences Organizations Care About Large-Scale Biomolecular Data Problems from the Start?


Managing large volumes of data is a challenge in any data-intensive field, from particle physics to satellite imaging, but life sciences companies that still rely on data dumps in 2022 face data and process problems that are unique in two respects. First, the data in these dumps is heterogeneous: data dumps accumulate data from research institutes, CROs, labs, and other institutions for inference and validation, and different sources represent the same type of data using different file formats, storage modes, and naming conventions. Second, bioinformatics data, which is already massive and growing in both dimension and quantity, is not fully transferable due to size, cost, and other factors. Consequently, the massive quantities of structured and unstructured data generated and accumulated in research projects cannot be contextualized and reused even when they are stored in cloud data dumps, because they are siloed and are not mapped to a common data model to which quality control metrics can be applied.
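As an illustration, here is a minimal sketch of what "mapping to a common data model" can look like in practice. The field names and synonym table are hypothetical examples, not a standard schema:

```python
# Minimal sketch: harmonizing heterogeneous sample metadata onto a common
# data model. Field names and the synonym map are hypothetical examples.
COMMON_FIELDS = ("sample_id", "organism", "tissue", "assay")

# Different sources name the same attribute differently.
SYNONYMS = {
    "sampleID": "sample_id", "Sample_Name": "sample_id",
    "species": "organism", "Organism": "organism",
    "tissue_type": "tissue", "source_tissue": "tissue",
    "platform": "assay", "Assay_Type": "assay",
}

def harmonize(record: dict) -> dict:
    """Map one source record onto the common data model."""
    mapped = {SYNONYMS.get(k, k): v for k, v in record.items()}
    out = {f: mapped.get(f) for f in COMMON_FIELDS}
    # A simple quality-control metric only becomes possible once every
    # source is expressed in the same model:
    out["qc_missing_fields"] = sorted(f for f, v in out.items() if v is None)
    return out

# Two records describing the same kind of data, from different sources:
print(harmonize({"sampleID": "S1", "species": "Homo sapiens", "platform": "RNA-seq"}))
print(harmonize({"Sample_Name": "S2", "Organism": "Mus musculus", "tissue_type": "liver"}))
```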



As your organization evolves, you will generate and consume data at a much higher volume, variety, and velocity than rudimentary data dumps and repositories can handle. At this stage, you will probably consider upgrading from typical office productivity software to scientific software tools, which can compound problems when they are not compatible with other systems. Not only have you spent time and funds on tools that complicate things, but, more importantly, you are missing out on valuable insights and critical decision-making data because of poor implementation.



Clearly, a lot of trouble could have been saved had you anticipated these problems during the initial stages.

How Can Organizations Mitigate Issues Related to Generating and Consuming Biomolecular Research Data?


There are multiple reasons why challenges associated with storing and managing genomic and other forms of biomolecular data can quickly spiral out of control. First, the primary data files in most life sciences disciplines are large and considered precious, since they represent a significant investment in computing and storage; an on-premise local storage array for genomic data can cost $300-500 per TB per year[4]. In addition, there are regulatory and scientific reasons to keep the data safeguarded for long periods.
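Putting the figures cited above together, a rough sketch of the storage arithmetic; the cohort size is a hypothetical example:

```python
# Rough storage-cost estimate for a sequencing project, using the figures
# cited above. The cohort size of 1,000 genomes is hypothetical.
genome_size_tb = 200 / 1000          # ~200 GB per sequenced genome [3]
cost_per_tb_year = (300, 500)        # on-premise cost range, USD/TB/year [4]
num_genomes = 1000                   # hypothetical cohort

total_tb = genome_size_tb * num_genomes
low, high = (total_tb * c for c in cost_per_tb_year)
print(f"{total_tb:.0f} TB of primary data: ${low:,.0f}-${high:,.0f} per year")
# -> 200 TB of primary data: $60,000-$100,000 per year
```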

One of the most effective ways to tackle big data challenges in bioinformatics is to implement a data management strategy and define a data quality framework in the early phases of a research project (a minimal sketch of such a framework follows the three phases below). In addition to minimizing 'bad data' problems, this keeps data safe, secure, and accessible at all times, and saves time and money. Implementing a data strategy has three main phases.

The first phase involves aligning the data strategy with the corporate strategy to solve specific problems. For example, a large pharmaceutical company with several siloed teams would require a more complex data strategy than an early-stage biotech startup.

The second phase involves setting principles and objectives to identify where the data is causing problems. A large organization that started collecting data years ago and built its own infrastructure to manage it will have different data problems than one that has implemented a cloud-native solution for a similar purpose.

Finally, implementing a data management framework involves performing coordinated actions, such as identifying the steps needed to solve data issues and the sequence in which they will be executed.
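As promised above, here is a minimal sketch of the kind of checks a data quality framework might codify. The rules and the batch data are hypothetical examples, not a prescribed standard:

```python
# Minimal data quality framework sketch: each rule returns (passed, message).
# The specific rules and fields are hypothetical examples.
def check_completeness(records, required=("sample_id", "organism", "assay")):
    missing = [r for r in records if any(not r.get(f) for f in required)]
    return not missing, f"{len(missing)} record(s) missing required fields"

def check_uniqueness(records, key="sample_id"):
    ids = [r.get(key) for r in records]
    ok = len(ids) == len(set(ids))
    return ok, "sample IDs unique" if ok else "duplicate sample IDs found"

def run_quality_gate(records):
    """Run all checks; accept the batch only if every rule passes."""
    results = [check_completeness(records), check_uniqueness(records)]
    for passed, msg in results:
        print(("PASS" if passed else "FAIL"), "-", msg)
    return all(passed for passed, _ in results)

batch = [
    {"sample_id": "S1", "organism": "Homo sapiens", "assay": "RNA-seq"},
    {"sample_id": "S1", "organism": "Homo sapiens", "assay": ""},  # dup + missing
]
print("Batch accepted:", run_quality_gate(batch))
```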

Will Switching to a Cloud Repository from Data Dumps Solve All My Data Problems?


In our previous panel discussion on the challenges and opportunities of setting up data infrastructure, our experts noted that there are hundreds, if not thousands, of publicly available datasets that are structured, curated, and analysis-ready in their own way. While it is straightforward to import two or more datasets into a data dump and use joins to answer a biological question, the process is neither scalable nor repeatable. Consequently, there is an increased focus on data lakes and lakehouses to tackle these big data challenges. With decoupled compute and storage, and better scalability and agility, these repositories provide better control over structured and unstructured data. A purpose-built repository such as OmixAtlas can streamline the process further with curated, harmonized, ML-ready data.
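For context, here is a minimal sketch of the kind of one-off join the panelists were describing; the datasets and column names are hypothetical stand-ins:

```python
import pandas as pd

# Hypothetical stand-ins for two public datasets with mismatched conventions.
expression = pd.DataFrame({
    "sample_id": ["S1", "S2"], "gene": ["TP53", "TP53"], "tpm": [12.4, 8.1]})
clinical = pd.DataFrame({
    "SampleID": ["S1", "S2"], "diagnosis": ["case", "control"]})

# Manual fix-ups like this rename are exactly what doesn't scale or repeat:
clinical = clinical.rename(columns={"SampleID": "sample_id"})
merged = expression.merge(clinical, on="sample_id", how="inner")
print(merged.groupby("diagnosis")["tpm"].mean())
```

The join answers one question once; when the next dataset arrives with yet another naming convention, the fix-ups start over from scratch.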

However, switching from data dumps to a cloud repository, such as a data lake or lakehouse, cannot solve all your data problems on its own. Organizations need a combination of tooling, processes, and people to overcome the challenges associated with the use of biomolecular data.

Polly's OmixAtlas is one way to implement a data-centric approach and unlock the true potential of biomolecular data.
