The History of Biochips
-
Chemical Sensors
-
Biochips must be able to sense the chemical properties of the samples they test. The development of chemical sensors that could be miniaturized was the first step toward the biochip. In 1922, W.S. Hughes invented the first such sensor, the glass pH electrode, which used ion exchange across a thin glass membrane to detect a substance's pH. Over the next several decades, chemical sensors were developed to detect levels of oxygen, glucose, and other substances.
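The signal such an electrode produces is a voltage that tracks pH. As a point of reference (a standard textbook relation, not drawn from Hughes's own work), the Nernst equation gives the electrode potential as

\[ E = E^{0} - \frac{2.303\,RT}{F}\,\mathrm{pH} \]

where \(E^{0}\) is a reference potential, \(R\) the gas constant, \(T\) the absolute temperature, and \(F\) the Faraday constant. At 25 °C this amounts to roughly 59 mV per pH unit, the small signal that any miniaturized sensor must be able to resolve.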
DNA
-
In 1953, Watson and Crick famously discovered the double-helix structure of DNA. By 1977, scientists had developed DNA sequencing techniques. In 1983, Kary Mullis invented the polymerase chain reaction (PCR), which amplifies DNA and allows scientists to detect genetic material present in very small quantities within a sample.
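PCR's power comes from exponential doubling. In an idealized reaction (every strand copied on every cycle, a simplification of real-world efficiency), the number of copies after \(n\) thermal cycles is

\[ N = N_{0} \cdot 2^{n} \]

where \(N_{0}\) is the starting copy number. Thirty cycles turn a single target molecule into roughly a billion copies (\(2^{30} \approx 10^{9}\)), which is why trace amounts of DNA become detectable.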
DNA Sensors
-
The subsequent development of DNA sensors incorporating these techniques made biochip technology a crucial part of mapping the complete human genome. Today, biochips are used not only to sequence individual DNA samples but also to rapidly sequence bacterial and viral DNA for vaccine development.
Semiconductor Microminiaturization
-
Biochips rely heavily on the miniaturization techniques developed by the semiconductor industry. A biochip is an array of linked chemical sensors that converts its readings into a computer-readable form. Technologies for miniaturizing the circuitry that both connects the sensors and converts their signals were developed throughout the 1980s and commercialized in the 1990s.
Implantable Biochips
-
In the first decade of the 21st century, scientists at Clemson University introduced the possibility of an implantable biochip. The chip, about the size of a grain of sand, is coated in a special gel that keeps the body's immune system from rejecting it and is designed to deliver instant data about injured soldiers' oxygen and glucose levels on the battlefield.