Enablers of Single-Cell Technologies

What’s Behind the Rapid Rise of Single-Cell Technologies Today?

If you work in cell biology or an adjacent field, you have probably noticed that single-cell technologies are becoming increasingly important. The appeal of single-cell analysis lies in its ability to reveal properties at the level of individual cells, something that bulk analysis often misses. For instance, in research on circulating tumor cells, only a few tumor cells may be present among a large number of non-tumor cells, and bulk analysis could easily overlook these rare but critical cells. Similarly, in highly heterogeneous cell populations, as are common in immune cell research, the average from a bulk measurement may not accurately represent any specific subgroup.
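To make this concrete, here is a minimal simulation (with entirely hypothetical expression values) of how a bulk average can misrepresent every subgroup in a mixed population:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expression of one marker gene in two cell subsets:
# 95% of cells barely express it, 5% express it strongly.
low  = rng.normal(loc=1.0,  scale=0.3, size=9500)   # majority subset
high = rng.normal(loc=20.0, scale=2.0, size=500)    # rare subset

population = np.concatenate([low, high])

# A bulk measurement reports only the population average ...
print(f"bulk average:    {population.mean():.2f}")   # ~1.95

# ... which matches neither subset measured at single-cell resolution.
print(f"majority subset: {low.mean():.2f}")          # ~1.0
print(f"rare subset:     {high.mean():.2f}")         # ~20.0
```

Here, the bulk average of about 2 describes neither the majority nor the rare subset; preserving that per-cell information is exactly what single-cell analysis offers.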

Given these clear benefits, why has single-cell analysis gained traction only recently, rather than 30 years ago? The answer isn't straightforward, as emerging single-cell technologies differ in many ways, but they typically share a few common enabling factors:

1. Cost of Computing 

Single-cell analysis does not necessarily involve only a few cells; in fact, many applications require analyzing large numbers of cells to obtain better statistics and to increase the likelihood of finding rare but significant cells within a population. Large numbers of cells, however, generate large amounts of data, whether sequencing reads or microscope images. As data collection grows, particularly when samples are analyzed repeatedly over the course of an experiment, the demand for computing resources escalates rapidly: storage capacity, processing power, and often the bandwidth to transfer or access the data online. These factors drive up costs significantly and were, in the past, prohibitively expensive for most applications.
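For a feel of the scale involved, here is a back-of-the-envelope sketch; the cell numbers, read depth, and read size are illustrative assumptions, not measured values:

```python
# Rough estimate of raw data volume for a single-cell RNA-seq run.
# All figures below are illustrative assumptions.
cells          = 100_000   # cells profiled in one experiment
reads_per_cell = 50_000    # assumed target sequencing depth per cell
bytes_per_read = 250       # ~100 bp read plus quality scores and IDs (FASTQ)

raw_bytes = cells * reads_per_cell * bytes_per_read
print(f"raw data per run: {raw_bytes / 1e12:.2f} TB")   # ~1.25 TB

# Repeating the measurement at several time points multiplies the total.
time_points = 8
print(f"whole experiment: {time_points * raw_bytes / 1e12:.1f} TB")  # ~10 TB
```

Storing, processing, and moving terabytes per experiment was simply out of reach for most labs until computing costs fell.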

Recent drastic reductions in computing costs have made these analyses more feasible.

[Graph: declining costs and rising speeds for key computing metrics since the 1980s. Source: John C. McCallum; Gordon Moore; The Linley Group; Nielsen Norman Group; The Economist]

The graph highlights the steep decline in costs and the improvement in speeds for key computing metrics over time. Since the mid-1980s, the cost per megabyte of storage has dropped sharply, making large-scale data storage highly cost-effective. Average transistor prices have also fallen substantially, reflecting ongoing advances in semiconductor technology. Internet connection speeds have improved as well, cutting download times from minutes to mere seconds per megabyte.

2. Cost of Sequencing 

In many single-cell technologies, sequencing of the genome or transcriptome plays a crucial role in understanding a cell's properties and behavior.

At the start of the 21st century, sequencing cost several thousand dollars per megabase (1 million base pairs). Because a reliable assembly requires reading the roughly 3,000-megabase human genome several times over, sequencing a full human genome cost close to $100 million. Such enormous costs made routine use of these techniques impractical. However, the Human Genome Project, the world's largest collaborative biological project, involving twenty research institutions and universities across six nations, spurred advances in sequencing techniques and set a steady, though still modest, price decline in motion.

The real breakthrough came around 2008 with the widespread adoption of next-generation sequencing (NGS), which accelerated the price decline dramatically. Within just a few years, sequencing costs plummeted to a fraction of what they had been. Today, the cost per megabase is mere cents, and sequencing a human genome costs less than $1,000, a staggering reduction of more than 99.999% in about 20 years.
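The arithmetic behind these genome-level figures is straightforward. The sketch below uses rounded numbers loosely based on the NHGRI sequencing-cost data; the six-fold coverage factor is an assumption for illustration:

```python
# Cost of sequencing one human genome from the per-megabase price.
# Rounded figures, loosely based on NHGRI sequencing-cost data.
GENOME_MB = 3_000   # human genome: ~3 billion base pairs
COVERAGE  = 6       # each base read ~6x for a reliable assembly (assumption)

def genome_cost(usd_per_mb: float) -> float:
    return usd_per_mb * GENOME_MB * COVERAGE

cost_2001  = genome_cost(5_000)   # ~$5,000 per megabase around 2001
cost_today = genome_cost(0.01)    # about a cent per megabase today

print(f"2001:  ${cost_2001:,.0f}")    # $90,000,000
print(f"today: ${cost_today:,.0f}")   # $180
print(f"reduction: {(1 - cost_today / cost_2001):.5%}")
```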

Impressive as this progress is on its own, it has made sequencing accessible to nearly every university research group and a routine tool for PhD students worldwide. It has also made technologies like droplet-based single-cell sequencing economically viable, enabling the screening of hundreds of thousands of cells and the study of their transcriptomes without breaking the bank.
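To illustrate why such experiments no longer break the bank, here is a rough cost estimate for a hypothetical droplet-based run; all parameters are assumptions for the sake of the example:

```python
# Back-of-the-envelope sequencing cost of a droplet-based scRNA-seq run.
# All parameters are illustrative assumptions.
cells          = 200_000
reads_per_cell = 50_000
read_length_bp = 100
usd_per_mb     = 0.01    # roughly a cent per megabase today

total_mb = cells * reads_per_cell * read_length_bp / 1e6
seq_cost = total_mb * usd_per_mb

print(f"total sequencing: {total_mb:,.0f} Mb -> ${seq_cost:,.0f}")  # ~$10,000
print(f"per cell:         ${seq_cost / cells:.3f}")                 # ~$0.05

# At 2001 prices (~$5,000/Mb), the same run would have cost ~$5 billion.
```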

3. Micromachining 

Handling large numbers of cells individually requires manipulating them with micrometer precision in equally small, controlled environments. This demands manufacturing techniques that can produce microstructures in materials such as plastic, glass, or silicon with high precision, at mass-production scale, and at an affordable cost.

Recent improvements have come particularly from high-precision injection molding, the adaptation of semiconductor-industry methods to the life sciences, and emerging technologies for micromachining glass. These innovations have significantly enhanced our ability to study cells at the microscale, further driving the adoption of single-cell technologies.

 

In summary, the convergence of these factors, declining costs of computing and sequencing together with advances in micromachining, has made single-cell technologies increasingly viable, driving their rapid adoption in modern research.

Dr. Robin-Alexander Krüger

Robin studied chemistry and biochemistry at Philipps University in Marburg, Germany, where he also conducted his doctoral research on fluorescent biomarkers and bacterial photoreceptors. After a postdoctoral stay at the University of Calgary in Canada, he joined LPKF in 2011, where he has held various development positions. Since 2020, Robin has led the ARRALYZE team.

Further Information

Blog: Machine Learning Models

Blog: Using Glass for Microfluidic Applications

Blog: NK Killing Assays