Thursday, April 30, 2015

Blogger's Desk #6: Spark of Ethical Debate on editing the embryonic genome


There are a lot of pages right now discussing the ethical debate sparked by the editing of a human embryo. Though I see responsible reporting of what the study is actually about in some standard reference journals and science websites, many pages and even some news articles hype the work or clearly misinterpret it. In this blog space I almost never touch upon topics such as ethics and human genetics, but in this case I would like to make an exception. Well, that's what the Blogger's Desk series is about...

With the growing set of genetic tools for editing genes for a variety of laboratory research purposes, there has been an increase in both fidelity and the ability to edit small genomes (such as those of viruses) and large ones (such as the mouse genome). Lab-created animals made via genetic techniques (such as mouse models) are commonplace. Studies have also been conducted to rectify human genes, especially in attempts to correct genetic disorders. For example, there has been reasonable success in treating SCID (severe combined immunodeficiency), caused by a single gene mutation, using gene therapy.

Fig 1: Method for mitochondrial gene therapy.
Mitochondrial genetic disease is another case. In a recent historic decision (Link), a mitochondrial gene therapy procedure was approved so as to avoid passing the mutations on to progeny. The procedure is controversial, since the genetic makeup of the resulting progeny is derived from three people instead of nature's standard two. In a recent paper published in Cell, mitochondria-targeted nucleases selectively reduced mtDNA haplotypes in the germline. This was successfully used to reduce levels of mutated human mtDNA responsible for Leber's hereditary optic neuropathy (LHON) and for neurogenic muscle weakness, ataxia, and retinitis pigmentosa (NARP), in mammalian oocytes under lab conditions. Bruce Whitelaw comments: "Conceptually this is an alternative to the 'three person embryo' strategy. Society needs to grapple with this. You could imagine every IVF clinic in the country being able to do this. But is the genome editor technology robust enough yet? I think that's an open question. I genuinely believe it will be in the near future, so we have to have the debate now: what applications are beneficial and which ones does society have concerns about?"

Editing genomes has never been easier. CRISPR/Cas technology (see my earlier post) has improved exponentially in the last three years, allowing editing of genomes in the laboratory at an unprecedented scale. By using some tricks in the design of the guide RNA, Gantz and Bier showed that they could create stable homozygous genetic changes in fruit fly models. The method was called the Mutagenic Chain Reaction.

I'm trying to impress on you that gene editing is not a technology that has suddenly emerged. Genetic modification is of two types: somatic and germline. Most genetic editing technology has focused on somatic cell changes. In the case of genetically modified mice and other laboratory animals, the mutations are in the germline; in other words, changes are made in the embryo and are therefore passed on through breeding. The human embryo, in contrast, is a special case. One of the first large-scale genetic editing technologies was the zinc finger nuclease technique (ZFN method; Link). With the advent of CRISPR, the technology has become powerful enough in cellular models to warrant further application. There has been a general unwritten agreement that heritable genetic changes will not be made in humans until the technology is sufficient to attempt it. The question is: how do we know whether we are ready?

Photo 1: 8-cell embryo, at 3 days.
The debate on embryonic gene editing was raised by a series of articles from multiple people sounding an alarm over gene editing in the human embryo. It was noted that researchers had already sought permission to attempt it on discarded embryos. The papers further urged scientists to adopt a temporary voluntary moratorium, to discuss and debate the issue before proceeding further. As the debate was heating up, a publication in Protein & Cell reporting that the procedure had actually been done accelerated it further.

A little bit about the study. Chinese scientists used the CRISPR gene-editing technique to modify non-viable human tripronuclear zygotes. The use of non-viable embryos obtained from fertility clinics was a deliberate attempt to avoid ethical problems: these embryos will never mature into a complete human, though they divide to produce a few cells. The team injected 86 embryos with CRISPR/Cas9 custom-designed to splice the endogenous β-globin gene and introduce new DNA, then waited 48 hours, by which time the embryos would have grown to about eight cells each. Of the 71 embryos that survived, 54 were genetically tested by exome sequencing, revealing that just 28 had been successfully spliced at the β-globin gene, and that only 4 of those contained the genetic material designed to repair the cuts. The efficiency of homologous-recombination-directed repair was low and the edited embryos were mosaic. The relative inefficiency was evident, and the research was halted at this point.
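For perspective, the success rates can be worked out directly from the counts quoted above. A quick back-of-the-envelope sketch (the counts come from the paragraph; the percentages are simple ratios):

```python
# Counts quoted from the Liang et al. study (injected, survived, tested,
# carrying the targeted cut, carrying the intended repair).
injected, survived, tested = 86, 71, 54
cut, repaired = 28, 4

cut_rate = cut / tested          # fraction of tested embryos with the cut
repair_rate = repaired / tested  # fraction with the intended repair

print(f"Cleavage: {cut_rate:.0%} of tested embryos")   # ~52%
print(f"Intended repair: {repair_rate:.0%} of tested embryos")  # ~7%
```

Roughly half the tested embryos were cut, but well under a tenth carried the intended repair, which is the figure the "not ready yet" argument rests on.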

I gathered from reliable sources that the paper was peer reviewed at Nature and Science, but was rejected because of the ethical concerns raised. However, bioethicist John Harris says, "It's no worse than what happens in IVF all the time, which is that non-viable embryos are discarded. I don't see any justification for a moratorium on research."

Fig 2: Off-target cleavage in human embryos. Source
In my view, the paper actually provides the data needed to discuss whether research on human embryos should go further, and whether it is time to take it to the clinic. The authors' idea was to answer the question, "Are we ready for embryonic gene therapy?" The paper also noted serious off-target effects. For example, there was an 8 bp mismatch between the G1 gRNA and the C1QC gene, yet the C1QC locus was still targeted. The follow-up sequencing covered only the exome, not the whole genome; as the authors point out, there is likely a lot more they have missed in the non-exome regions, and that is a concern.

The data from the paper basically say, "We are not yet ready for germline-level genetic editing in humans." But the debate has gone further, and the NIH has reiterated that it will not fund this type of experiment. However, David Baltimore remarks: "I am not in favor of the NIH policy and I believe that the Chinese paper shows a responsible way to move forward. But it is the will of Congress that there be no work with human embryos and I assume that means even ones that are structurally defective."

The basic ethical debate over the current study doesn't question whether the study was useful. There is almost universal agreement that we are indeed not there yet, and the paper has provided the data to show it. The debate is about the use of human embryos. Supporters of the study pointed out that there was no other ethical way to find out, and that the study was ethically and morally sound.
Thrasher AJ et al. (2014). A modified γ-retrovirus vector for X-linked severe combined immunodeficiency. The New England Journal of Medicine, 371 (15), 1407-17 PMID: 25295500

Ewen Callaway. Scientists cheer vote to allow three-person embryos. Nature. Link

Gantz VM, & Bier E (2015). Genome editing. The mutagenic chain reaction: a method for converting heterozygous to homozygous mutations. Science (New York, N.Y.), 348 (6233), 442-4 PMID: 25908821

Reddy P, Ocampo A, Suzuki K, Luo J, Bacman SR, Williams SL, Sugawara A, Okamura D, Tsunekawa Y, Wu J, Lam D, Xiong X, Montserrat N, Esteban CR, Liu GH, Sancho-Martinez I, Manau D, Civico S, Cardellach F, Del Mar O'Callaghan M, Campistol J, Zhao H, Campistol JM, Moraes CT, & Izpisua Belmonte JC (2015). Selective elimination of mitochondrial mutations in the germline by genome editing. Cell, 161 (3), 459-69 PMID: 25910206

Lanphier E, Urnov F, Haecker SE, Werner M, & Smolenski J (2015). Don't edit the human germ line. Nature, 519 (7544), 410-1 PMID: 25810189

Cyranoski, D. (2015). Scientists sound alarm over DNA editing of human embryos Nature DOI: 10.1038/nature.2015.17110

Cyranoski D (2015). Ethics of embryo editing divides scientists. Nature, 519 (7543) PMID: 25788074

Liang P, Xu Y, Zhang X, Ding C, Huang R, Zhang Z, Lv J, Xie X, Chen Y, Li Y, Sun Y, Bai Y, Songyang Z, Ma W, Zhou C, & Huang J (2015). CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes. Protein & cell PMID: 25894090

Reardon, S. (2015). NIH reiterates ban on editing human embryo DNA Nature DOI: 10.1038/nature.2015.17452

Baltimore D, Berg P, Botchan M, Carroll D, Charo RA, Church G, Corn JE, Daley GQ, Doudna JA, Fenner M, Greely HT, Jinek M, Martin GS, Penhoet E, Puck J, Sternberg SH, Weissman JS, & Yamamoto KR (2015). Biotechnology. A prudent path forward for genomic engineering and germline gene modification. Science (New York, N.Y.), 348 (6230), 36-8 PMID: 25791083

Wednesday, April 29, 2015

Retroviral protection


Which virus would be of most interest to you? The answers would be very diverse, depending on what you work on and what excites you. But if I asked you to name a viral taxon that is both an undefeated pathogen and something that has probably helped you live, you would clearly be stumped. In one of the very earliest posts of this blog (Link), I talked about retroviruses and how, over evolution, humans have come to owe their very existence to them. I was, of course, talking at that time about HERVs, or human endogenous retroviruses. So let us extend that discussion a little further.

Fig 1: Retroviral elements distribution in Humans.
Human endogenous retroviruses are mobile genetic elements, derived from retroviral genomes, that have become fixed in our genome. Most of these genes have decayed and are non-functional. Some are still active, and their function is made use of by our cells. It is not a bad idea to think that they have been conserved against decay over evolution because they are helpful. To summarize: a whopping 8% of the human genome is composed of frank retroviral DNA signatures. In contrast, only about 1.5% of the genome is protein-coding human genes. An estimated 45% of the human genome consists of transposable elements, often dismissed as junk; this is largely composed of Alus, LINEs, SINEs, etc. (interspersed nuclear elements). Fig 1 is an approximation of the distribution of retroviral content (based on data from Batzer et al).

I need to digress a little here. Though the emphasis is on endogenous retroviral elements, in-depth genetic analyses have shown that non-retroviral elements, such as bornavirus-like elements (Link), hepatitis C elements, and Ebola/Marburg elements (Link), also exist in mammalian lineages. But these are rare, random events. It is speculated that these elements were integrated into the genome by retroviral enzymes that happened to be active while the other virus was replicating. Retroviral integrations, on the other hand, are universal in humans. I am not aware, however, of how widespread endogenous non-retroviral elements (ENREs) are in the human population.

The role of some retrovirus-encoded proteins, such as syncytin-1 (encoded by HERV-W) and syncytin-2 (encoded by HERV-FRD), cannot be challenged and is accepted as the norm: they form the basis of placental membrane formation. The story took a turn with studies showing that placental tissue is highly resistant to infection. An article from Coyne et al showed that cultured primary human placental trophoblasts were highly resistant to infection by a number of viruses under laboratory conditions, mediated by exosome-mediated delivery of specific microRNAs. Work by others also showed that HERVs are more active in the placenta, correlating with immunity. This is probably mediated through placental microvesicles (PMVs), which are known to circulate in the maternal circulation with a role in immunity. Further, syncytin-1 is shed from the placenta in association with these microvesicles and has a direct role. In other words, HERV products are actively involved in placental immunity.

A recent paper in Nature looks a little deeper into what HERVs have to offer for human development. The paper examined two main aspects: when HERV expression is activated, and how it affects the immune system. The study showed that HERV-K, the most recent member to integrate into the human genome, is activated at the eight-cell stage and remains active until the emergence of epiblast cells. Virus-like particles (in this case, essentially Gag proteins) were demonstrable in blastocyst cells.

Fig 2: HERV expression pattern. Source
Here's the interesting part. HERV-K is known to produce a protein called Rec (analogous to the Rev protein of HIV), derived through alternative splicing of the env gene. Rec helps export viral products from the nucleus to the cytoplasm. The authors hypothesized that by bringing viral materials into the cytoplasm, an innate antiviral defense is triggered. As proof of this idea, transcription of the innate antiviral factor IFITM1 mRNA was seen to increase. IFITM1 (interferon-induced transmembrane protein 1), also known as FRAGILIS2, is an IFN-induced antiviral protein that inhibits the entry of viruses into the host cell cytoplasm: it permits endocytosis but prevents subsequent viral fusion and release of viral contents into the cytosol. Experiments also found that Rec alone was sufficient to account for the IFITM1 up-regulation. The relevance of this was tested by challenge with influenza H1N1, which showed that Rec-induced IFITM1 did indeed account, at least partially, for the resistance.

Earlier papers have suggested the involvement of miRNAs affected by HERV activation and subsequent immune modulation. It has also been suggested by some that an endogenous retrovirus can prevent similar competing retroviruses from establishing themselves (a phenomenon known as xenotropism). In this paper, innate immune stimulation produced immunity. Perhaps the net effect is a combination of multiple pathways and effects; some parts of the HERV genome are still kept active for some reason. As Villarreal puts it, "These viruses have the genetic tools to refashion the hosts' genes, influencing which are active and when, and with which other genes they interact. This means they have the ability to reshape the physical characteristics of their hosts. It's a massive dynamic pool of colonizing genomes."

Now a question arises: why is this activity seen only during the embryonic stages? If these mechanisms are so potent, doesn't it make sense to keep them active so that we can avoid viral infection throughout life? I don't have an answer to this question. But I do know that HERV activity in adults has been correlated with many conditions, such as schizophrenia. Probably HERV-mediated immunity, being non-specific, is silenced once development reaches a stage where other factors can take over. I think the literature on this question will pour in over the coming years.
Cordaux, R., & Batzer, M. (2009). The impact of retrotransposons on human genome evolution Nature Reviews Genetics, 10 (10), 691-703 DOI: 10.1038/nrg2640

Horie M, Honda T, Suzuki Y, Kobayashi Y, Daito T, Oshida T, Ikuta K, Jern P, Gojobori T, Coffin JM, & Tomonaga K (2010). Endogenous non-retroviral RNA virus elements in mammalian genomes. Nature, 463 (7277), 84-7 PMID: 20054395

Douville RN, & Nath A (2014). Human endogenous retroviruses and the nervous system. Handbook of clinical neurology, 123, 465-85 PMID: 25015500

Holder BS, Tower CL, Forbes K, Mulla MJ, Aplin JD, & Abrahams VM (2012). Immune cell activation by trophoblast-derived microvesicles is mediated by syncytin 1. Immunology, 136 (2), 184-91 PMID: 22348442

Delorme-Axford E, Donker RB, Mouillet JF, Chu T, Bayer A, Ouyang Y, Wang T, Stolz DB, Sarkar SN, Morelli AE, Sadovsky Y, & Coyne CB (2013). Human placental trophoblasts confer viral resistance to recipient cells. PNAS, 110 (29), 12048-53 PMID: 23818581

Grow EJ, Flynn RA, Chavez SL, Bayless NL, Wossidlo M, Wesche DJ, Martin L, Ware CB, Blish CA, Chang HY, Reijo Pera RA, & Wysocka J (2015). Intrinsic retroviral reactivation in human preimplantation embryos and pluripotent cells. Nature PMID: 25896322

Tuesday, April 28, 2015

Lab Series #3: Flow Cytometry


One set of instruments that has been under constant modification and upgrading is the cell analysers. Since Rayleigh's work on the stability of fluid jets (1880), cell counting and analysis have come a long way. Today most laboratories (especially research labs) depend almost exclusively on flow cytometers, or their improved versions, for most cell work-ups. Since about the 1980s, Becton Dickinson's flow cytometers have been common cell analyzers in the hematology laboratory.

Photo 1: Leonard Arthur Herzenberg.
With that note, let me talk about flow cytometry. From the initial classic flow cytometer with a single laser beam and two detectors (one forward scatter, one side scatter), modern FACS (fluorescence-activated cell sorting) machines have evolved into multicolor analyzers measuring more than 12 colors with multiple detectors (referred to as Hi-D FACS). That's a remarkable improvement. Herzenberg's group at Stanford University was the first to design and patent a FACS. In 1974, Becton Dickinson licensed the technology and introduced the first commercial flow cytometer, the FACS-1. But the real breakthrough for the method probably came with the development of the high-speed flow cytometer by Joe Gray's team, which eventually came to be used for sorting human chromosomes in the Human Genome Project. A modern high-speed flow cytometer can enumerate and distinguish cells in a mixed population at a rate of 500 to 5,000 cells per second.

Fig 1: Flow Cytometer components. Source
So what's the basic working mechanism of a flow cytometer? The simplest way I can put it: the analysis of cells, one at a time, by measuring their imparted chromatic properties in a constant flow. Let me elaborate. The whole setup consists of the following key components: laser, fluidics, optics, and electronics.

Laser stands for "light amplification by stimulated emission of radiation". Lasers have the advantage of precision, because their beams scatter very little even over long distances. Several types of laser are used in flow cytometers. The most common for biophotonic analysis are argon gas lasers, green and yellow diode-pumped solid-state (DPSS) lasers, femtosecond fiber lasers, supercontinuum lasers, etc. DPSS lasers come in several varieties; among them, Nd:YAG (neodymium-doped yttrium aluminum garnet), Nd:YLF (neodymium-doped yttrium lithium fluoride), and Nd:YVO4 (neodymium-doped yttrium orthovanadate) lasers can cover a range of fluorescence excitation wavelengths.

The second important component is the fluidics. The analysis requires examining each cell one at a time, which means clustered cells cannot be analyzed: the cells must move in single file, one after the other. This is achieved through fluidic design. The fluidics is supplied from a reservoir of liquid called sheath fluid (pressurized with room air) and carried towards the illumination point, the flow cell (or flow chamber). The outside of the flow chamber is usually made of quartz.

The sheath fluid should not interfere with cell integrity, so buffers are used; for most mammalian cell lines, phosphate-buffered saline is used as the sheath fluid. Since the sheath fluid is itself under high pressure, the sample to be analyzed is injected into it at a slightly higher pressure. This produces a stream with a core of cells moving linearly in single file, surrounded by the outer sheath fluid. The pressure can be controlled, changing the width of the core and thereby allowing size exclusion or inclusion; for example, increasing the sample pressure increases the flow rate by widening the sample core stream. This type of flow, with a pressure-driven core and sheath, is called coaxial flow, and the effect is known as hydrodynamic focusing.
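The pressure/core-width relationship can be sketched numerically. Below is a toy plug-flow approximation (all numbers are illustrative, not from any particular instrument): the core occupies the same fraction of the chamber cross-section as it carries of the total volumetric flow, so raising the sample flow widens the core.

```python
import math

def core_diameter_um(chamber_diameter_um, q_sample, q_sheath):
    """Approximate core-stream diameter under a simple plug-flow model:
    core area / chamber area = sample flow / total flow."""
    frac = q_sample / (q_sample + q_sheath)
    return chamber_diameter_um * math.sqrt(frac)

# Hypothetical numbers: a 200 µm chamber, sheath flow fixed at 10 mL/min.
for q_sample in (0.01, 0.1, 1.0):  # mL/min
    d = core_diameter_um(200, q_sample, 10.0)
    print(f"sample flow {q_sample:>5} mL/min -> core ~{d:.0f} µm")
```

The trend is the point: a tenfold increase in sample flow roughly triples the core width, which is why low sample pressure is used when a narrow core (tight single-file alignment) matters.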

Fig 2: Scatter of light. Source
So now you have a laser that delivers the light you want, and a fluidic system that brings the cells through in single file. The light hits a cell, producing scatter of two types: forward and side scatter. This is where the optics step in. The scattered light is picked up by detectors, one in line with the laser (to detect forward scatter, deviating up to about a 20-degree angle) and one perpendicular to it (to detect side scatter at 90 degrees). A barrier filter is also placed in front of the forward scatter channel (FSC) and the side scatter channel (SSC), to block unscattered light from entering directly.

By using a variety of dichroic mirrors and careful optical alignment, multiple different wavelengths can be detected from the single signal originating from the analyzed cell. The final detectors are basically photomultiplier tubes (PMTs), each detecting a specific band.

Fig 3: Filters and Detection paths. Source
In this case (Fig 3), the side-scattered light containing multiple wavelengths reaches filter 1, which splits off the blue light and focuses it onto the first detector (SS blue). The rest of the light passes through, and green is detected by the same mechanism at the second detector. The next wavelength band is split off at the third dichroic mirror and detected as orange, and the final remaining band is detected directly at PMT-red. By chaining multiple such "split and focus" detection stages, we can detect multiple signals at any given time. The practical details are not as simple as shown and often require optical tricks to get right.
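The cascade can be sketched as a chain of cutoff filters. The cutoff wavelengths and detector names below are purely illustrative, not taken from any real instrument:

```python
# Toy model of the "split and focus" cascade in Fig 3: each dichroic
# mirror diverts light below its cutoff wavelength to a detector and
# passes the rest down the chain; whatever survives hits the last PMT.
CASCADE = [(500, "PMT-blue"), (560, "PMT-green"), (610, "PMT-orange")]
FINAL = "PMT-red"

def route(wavelength_nm):
    for cutoff, detector in CASCADE:
        if wavelength_nm < cutoff:
            return detector
    return FINAL

for wl in (488, 530, 585, 670):
    print(wl, "nm ->", route(wl))
```

Adding another "scavenge and focus" stage is just one more entry in the cascade list, which is essentially how instruments scale up to many colors (ignoring the very real problems of spectral overlap and compensation).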

So far so good. But how did you get the cell to emit a signal in the first place? Simple: you can specifically color different cells using immunofluorescence. If you want to count, say, CD4 T cells, tag them with an anti-CD4 antibody carrying a fluorochrome. From the scatter plot generated from the detected signals, the number of cells can then be counted.
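A minimal sketch of such counting, assuming a single fluorescence channel and a made-up intensity threshold (real cytometry software uses two-dimensional gates and spectral compensation):

```python
# Counting "CD4-positive" events with a one-parameter gate on
# fluorescence intensity -- a bare-bones caricature of gating.
# The event intensities and the threshold are invented numbers.
events = [12, 840, 35, 920, 15, 760, 22, 880, 18, 905]  # a.u. per cell
threshold = 500  # events brighter than this count as CD4+

cd4_positive = [e for e in events if e > threshold]
print(f"{len(cd4_positive)} of {len(events)} events in the CD4+ gate")
```

The labelled cells cluster at high intensity and unlabelled cells at autofluorescence levels, so a simple threshold separates the two populations.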

Fig 4: Forward scatter for determining size.
If you noticed, the side scatter was used to find the color and the count. So what's the importance of forward scatter? As I mentioned already, forward scatter picks up light scattered at an angle of at most about 20 degrees; hence it is sometimes also called "low-angle forward scatter". This scatter depends on the size of the cell. Look at Fig 4 for an illustration.

In a nutshell, the flow cytometer gathers the cells you want to count and analyze into single file, uses the laser to obtain the count and other data about the cells, and transforms the data into a scatter plot. But there is more: we can go a step further. If you need to recover a specific set of cells for further analysis, you can do so by cell sorting, using methods such as electrostatic cell sorting (Link).

Further Reading:

1. Herzenberg et al. The History and Future of the Fluorescence Activated Cell Sorter and Flow Cytometry: A View from Stanford. Clinical Chemistry, October 2002, vol. 48, no. 10, 1819-1827. Link

2. M Nunez Portela et al. A single-frequency, diode-pumped Nd:YLF laser at 657 nm: a frequency and intensity noise comparison with an extended cavity diode laser. 2013 Laser Phys. 23 025801. Link

3. Tkaczyk ER, Tkaczyk AH. Multiphoton flow cytometry strategies and applications. Cytometry A. 2011 Oct;79(10):775-88. Link

Monday, April 20, 2015

Lab Series #2: DNA sequencing


One of the most important techniques relied upon in the life sciences is determining the sequence of a gene of interest. The technique for determining the sequence of bases in DNA is called DNA sequencing, and it is central to current genetic technologies. The classical methods available for sequencing DNA include
  1. Sanger’s sequencing method (dideoxynucleotide method)
  2. Direct PCR Pyrosequencing
  3. Maxam and Gilbert sequencing
Sanger’s method

Fig 1: Sanger sequencing.
The DNA is denatured by heat or, more traditionally, inserted and cloned into the vector M13 (which is naturally single stranded). The DNA is extracted and the reaction mixture is then divided into four aliquots. Tube A contains all four nucleotides plus 2',3'-dideoxyadenosine triphosphate (ddATP); similarly, tube T contains all the nucleotides plus ddTTP, and so on. A dideoxynucleotide lacks a 3'-hydroxyl group and so terminates synthesis, since the polymerase can add nucleotides only to the 3'-end.

The incorporation of a ddNTP is a random event, so the reaction produces molecules of various lengths, each terminating in the same ddNTP. The reaction products are then run by electrophoresis (commonly on a polyacrylamide gel). The positions of the various bands for each ddNTP indicate the sequence (see Fig 1). Under ideal conditions, sequences up to about 300 bases in length can be read from a single gel run.
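The band-reading logic can be sketched in a few lines: for a known toy template, each tube contributes bands at the positions of its base, and reading the gel from the shortest fragment upward recovers the sequence. The template here is hypothetical:

```python
# Toy reconstruction of a Sanger gel read. Each "tube" yields fragments
# ending wherever its ddNTP was incorporated; sorting all fragments by
# length (bottom of the gel first) recovers the synthesized sequence.
template = "GATTCCA"  # hypothetical strand being synthesized, 5'->3'

# Fragment lengths produced in each tube (ddNTP termination positions).
tubes = {base: [i + 1 for i, b in enumerate(template) if b == base]
         for base in "ACGT"}

# Read the gel: shortest fragment first, noting which tube each band is in.
bands = sorted((length, base) for base, lengths in tubes.items()
               for length in lengths)
read = "".join(base for _, base in bands)
print(read)  # GATTCCA -- matches the template
```

The real work in Sanger sequencing is, of course, the chemistry and the electrophoresis; the readout itself is exactly this sort of ladder-sorting exercise.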

Direct PCR Pyrosequencing

Fig 2: Pyrosequencing
This is a sequencing method where a PCR template is hybridized to an oligonucleotide and incubated with a DNA polymerase, ATP sulfurylase, luciferase, and apyrase. During the reaction, the first of the four dNTPs is added; if it is incorporated, pyrophosphate (PPi) is released. ATP sulfurylase converts the PPi to ATP, which luciferase then uses to convert luciferin to oxyluciferin, generating light. The overall reaction is

incorporated dNTP → PPi —(ATP sulfurylase)→ ATP —(luciferase, with luciferin)→ light

This is followed by another round of dNTP addition. The resulting pyrogram can be used for analysis of the sequence. The method is very fast, has good potential for automation, and provides highly precise and accurate analysis. It also avoids the problems of gel electrophoresis.
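The cycle can be sketched as a toy pyrogram simulation (the template and dispensation order are made up): each dispensation that matches the next template base emits light in proportion to the number of bases incorporated, which is why homopolymer runs give taller peaks:

```python
# Minimal pyrogram simulation: nucleotides are dispensed in a fixed
# cyclic order; a dispensation matching the next template base(s) gives
# a light peak proportional to the bases incorporated, while unmatched
# dispensations are degraded by apyrase and give no signal.
template = "TTAGC"            # bases still to be sequenced, in order
dispensation = "ATCGATCGATCG" # illustrative cyclic dispensation order

pos, pyrogram = 0, []
for nt in dispensation:
    run = 0
    while pos < len(template) and template[pos] == nt:
        run += 1   # each incorporation releases PPi -> ATP -> light
        pos += 1
    pyrogram.append((nt, run))  # peak height = bases incorporated

print([f"{nt}:{h}" for nt, h in pyrogram if h])
# the TT homopolymer shows up as a double-height T peak
```

Reading the peak heights in dispensation order directly reconstructs the sequence, with no gel involved.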

Maxam and Gilbert sequencing:

The DNA is radiolabelled with 32P at the 5' end of each strand, and the strands are denatured, separated, and purified to give a population of labeled strands for the sequencing reactions. The next step is chemical modification of specific bases in the DNA strand. The modified bases are then removed from their sugar groups, and the strands are cleaved at these positions using the chemical piperidine. This creates a set of fragments known as nested fragments, which are then analysed as in Sanger's method.

Automated fluorescent DNA sequencing:

The method is similar to Sanger's. Here, fluorescent dye terminators (dye-tagged ddNTPs, one color per base) are used, allowing the reaction to be run in a single vessel and on a single gel column. The sequence is read using a laser (light amplification by stimulated emission of radiation) to excite the dyes and detecting the fluorescence, which reports the base sequence. The method can be connected to various bioinformatics software packages that allow high-throughput data collection.

Needless to say, the first-generation "Sanger sequencing" method is still considered the gold standard for resolving issues such as SNPs (single nucleotide polymorphisms). With sequencing methods automated and better technology accessible, the human genome project, once a mammoth undertaking, is now far more feasible. But the method remains cost-prohibitive for the common laboratory, so science seeks newer methods that can bring the cost down to less than $1,000 per human genome.

Many new techniques that rely on the basic DNA replication machinery have now been introduced by companies. Below I will discuss a couple, since they have become quite common.

SMRT sequencer:

SMRT (pronounced "smart") stands for Single Molecule Real Time sequencing. The technology is a product of Pacific Biosciences. It uses the same biological process that a natural system uses.

Fig 3: Phi29 polymerase.
The first requirement is a DNA polymerase. The polymerase used here is the φ29 polymerase, derived from the bacteriophage Φ29, a natural attacker of B. subtilis. This polymerase has exceptional strand-displacement activity and an inherent 3'-5' proofreading exonuclease activity. These exceptional qualities have made it useful in genetic studies. The enzyme is obtained in industrial quantities by cloning the gene into E. coli.

The second component of this technology is the special fluorescent nucleotides. Nucleotides are the basic structural units of DNA and RNA. In this case, the four nucleotides A, T, C, and G are labelled with different fluorescent colors. The specialty is that they are γ-labelled dNTPs: the label sits on the terminal phosphate on purpose. In a normal replication process, DNA polymerase cleaves the α-β phosphoryl bond upon incorporating a nucleotide into DNA, releasing the pyrophosphate leaving group along with the attached fluorescent label. This means that once cleaved, the γ-labelled fluorescent molecule is free to diffuse away without affecting the growing strand.

Of note, the fluorophore is attached to the nucleotide using linkers. This attachment is chemically cleavable, so the dye can be detached from the DNA after it has been detected. This serves to remove noise in detection and thus enhances the assay. One more thing of note: extending the triphosphate moiety to four or five phosphates can increase incorporation efficiency (Reference). To the best of my understanding, however, this idea has not been used in the technology.

Fig 4: ZMW cells. Source
The third component of this system is the reaction chamber embedded in a chip. The chip is a glass cover slip with an approximately 100 nm-thick layer of aluminum deposited on top of it. In this layer is an array of cylindrical wells, each 70-100 nm in diameter. The aluminum is chemically treated so that polymerase molecules stick to the glass at the bottom of each well rather than to the sides. Each well is designed to hold a single polymerase molecule, and the cover glass at the bottom is designed for imaging.

These special reaction cells are otherwise referred to as ZMWs (zero-mode waveguides) and are a product of nanotechnology. Each reaction chamber holds no more than a few atto- or zeptoliters, i.e. in the range of 10⁻¹⁸ to 10⁻²¹ liters. They call it microfluidics, but I think I'd better call it zeptofluidics. This permits the use of extremely low sample volumes.

A problem often encountered in whole genome sequencing is the difficulty of sequencing very long stretches. It is simply not practical to sequence the whole set of chromosomes, or even a single chromosome, in one read due to technical constraints. The fastest way around the problem is the "shotgun" approach: break the whole genome into fragments, sequence each bit, and then realign the sequences with a computer program using unique overlaps.
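The reassembly step can be sketched with a greedy overlap merger, the simplest possible caricature of an assembler (real assemblers use overlap or de Bruijn graphs, quality scores, and must handle sequencing errors and repeats):

```python
# Toy "shotgun" reassembly: repeatedly merge the pair of fragments with
# the longest exact suffix/prefix overlap until one contig remains.
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(frags):
    frags = list(frags)
    while len(frags) > 1:
        # Pick the best-overlapping ordered pair and merge it.
        n, a, b = max((overlap(a, b), a, b)
                      for a in frags for b in frags if a != b)
        frags.remove(a); frags.remove(b)
        frags.append(a + b[n:])
    return frags[0]

reads = ["GGTCA", "TCATT", "ATTGC"]  # fragments of a made-up sequence
print(assemble(reads))  # GGTCATTGC
```

With error-free reads and unique overlaps, as here, the greedy strategy recovers the original sequence; repeats in real genomes are what make the problem genuinely hard.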

So, this is how the technology works. The first step is sample preparation: genomic DNA is obtained and broken into fragments, and each fragment is loaded into a reaction chamber. The reaction starts with the DNA polymerase, which unwinds the DNA and incorporates the correct nucleotide; the pyrophosphate, carrying the fluorescent dye, is released. Since the detection volume can hold only one dNTP molecule at a time, a binary signal (1 or 0) is generated in one of the four colors. This implies the plot will show square pulses rather than the usual triangular peaks we are used to.

By simultaneously running a very large set of reactions across a complete chip, the full genome is sequenced at very high speed. The method also avoids the requirement of running a gel, and the sequence is obtained in real time. Since the signal-generating label is cleaved and released after each incorporation step, background noise is reduced. Remember, the detection volume holds only one dNTP molecule at a time.

As a proof of concept, the company sequenced the E. coli O104:H4 strain with an accuracy of 99.9%, with some sources claiming 99.9999%. This level of accuracy is unheard of in any other 1st or 2nd generation sequencer.

Dr. Schadt comments "The ability to sequence the outbreak strain with reads averaging 2,900 base pairs and our longest reads at over 7,800 bases, combined with our circular consensus sequencing to achieve high single molecule accuracy with a mode accuracy distribution of 99.9%, enabled us to complete a PacBio-only assembly without having to construct specialized fosmid libraries, perform PCR off the ends of contigs, or other such techniques that are required to get to similar assemblies with second generation DNA sequencing technologies." And it took them less than 8 hrs on average to complete sequencing.

Ion Proton Sequencer

Next, let's discuss another sequencer: the Ion Proton Sequencer. The technology comes to the laboratory bench from Life Technologies and is loosely referred to as semiconductor sequencing. The system was unveiled to the world of science in January 2012 with the promise of a $1,000 full genome sequence in a single day. Oh, and by the way, Jonathan Rothberg is considered the inventor of this technology.

Photo 1: Ion Proton™ Sequencer
The technology is based on a semiconductor chip packed with millions of sensors. I don't have exact official figures for how many sensors an individual chip possesses, but I gather from reliable sources that the Proton I chip carries 165 million sensors and the Proton II 660 million. The sensors are built on complementary metal-oxide semiconductor (CMOS) technology.

CMOS is the technology used in manufacturing computer microchips, usually engineered from the semiconductor elements silicon and germanium. The same technology is used in digital cameras, but here, instead of sensing light, the sensor detects a change in pH. What's a pH meter got to do here? The method exploits a simple principle of natural DNA replication: each time a nucleotide is successfully incorporated into a growing DNA strand, a hydrogen ion is released. For people wondering where this proton comes from: nucleic acids are acidic because of their phosphate groups, which can act as hydrogen ion donors, and a proton is released during nucleotide incorporation.
Photo 2: Semi conductor chip

So, take a human genome, prepare a DNA library, and load it onto the sequencing chip, where each sensor acts as a reaction well. When the wells are flooded with nucleotides, the correct match is taken up and the resulting change in pH is recorded and converted to digital data. A powerful computer program integrates all the data and, bingo, you have the sequence.
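A minimal sketch of how per-flow pH signals could be turned into base calls, under the simplifying assumption that the signal is proportional to the number of bases incorporated in that flow (homopolymers give bigger signals). The flow order and threshold are illustrative, not the instrument's real parameters:

```python
# Sketch of semiconductor (pH-based) base calling. Nucleotides are flowed
# one species at a time in a fixed order; a pH change at a well means that
# base was incorporated, and a larger change means a homopolymer run.
from itertools import cycle

FLOW_ORDER = "TACG"  # illustrative flow order

def call_from_flows(signals, threshold=0.5):
    """signals: one pH-change value per nucleotide flow."""
    read = []
    for base, s in zip(cycle(FLOW_ORDER), signals):
        n = round(s)              # estimated incorporations this flow
        if s >= threshold:
            read.append(base * n)
    return "".join(read)

# Flows: T=1.1 (one T), A=0.1 (none), C=2.05 (CC), G=0.9 (one G)
print(call_from_flows([1.1, 0.1, 2.05, 0.9]))  # -> "TCCG"
```

Homopolymer length estimation from an analog signal is exactly where this chemistry accumulates errors in practice, which the toy `round` call makes easy to see.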

If I had to say anything more about this technology, I would say it works much the same way as SMRT does. The difference lies in the method of detection, which here is based on pH change.

The battle to make it to the top between the Ion sequencer and its rival Illumina was evident: just a few days before the Proton was announced, the HiSeq 2500 was released. There isn't enough conclusive evidence on which is actually better, but the scientific community leans toward the Ion Proton as it costs less ($150,000 vs about $740,000 for the HiSeq). At a smaller scale, a modified version known as the Ion Personal Genome Machine™ (PGM™) Sequencer competes with Illumina's mini version (MiSeq). Again, cost is an important factor ($50,000 vs $100,000 per machine). (Source)

"DNA sequencing is going to affect everything," says Rothberg, predicting it will become a $100 billion industry. "This is biology's century, just as physics was the foundation of the last century." (Source). They also argue it is better than other technologies such as nanopore sequencing (Link).

Illumina sequencer

Fig 5: dNTP used in SBS
Next, let's talk about the Illumina/Solexa sequencer. The sequencing method is called "sequencing by synthesis" (SBS). This technology is the brainchild of two Cambridge scientists, Shankar Balasubramanian and David Klenerman, who studied the movement of the polymerase enzyme using fluorescent dye-labelled nucleotides. Based on their experience in sequencing projects and their studies on the polymerase, they proposed massively parallel sequencing of short reads, using solid-phase chemistry with reversible terminators, as the basis of a new DNA sequencing approach; this came to be known as SBS. Over the years the technology was developed, and a successful Solexa prototype was launched as a commercial sequencing instrument. A detailed history can be found here.

So how does the technology work? It is much like the Sanger method; the difference lies in the use of modified dNTPs carrying terminators. 3′-O-fluorophore-labeled nucleotides were synthesized and used as reversible terminators of DNA polymerization. The reversible terminator ensures that only one nucleotide can be incorporated per step. After the template is flooded with nucleotides and the binding step is complete, the unincorporated reagents are washed away. The terminator carries a fluorescent tag that can be detected with dedicated cameras. Since only one fluorescent color is used, detecting the 4 nucleotides requires 4 separate tubes. The second step is to remove the terminator by a chemical reaction, which also removes the fluorescent tag, and the cycle is repeated.
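The cycle described above can be sketched as a short simulation. This is illustrative only, assuming an error-free template; the real instrument images millions of clonal clusters in parallel:

```python
# Sketch of sequencing by synthesis: in each cycle exactly one
# reversible-terminator nucleotide is incorporated, its fluorescent tag is
# imaged, then the terminator and tag are chemically cleaved so the next
# cycle can proceed. The complement table is standard base pairing; the
# rest is a toy simulation, not Illumina's real pipeline.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def sequence_by_synthesis(template, cycles):
    read = []
    for pos in range(min(cycles, len(template))):
        # 1. Flood with all four labelled dNTPs; only the complement of the
        #    next template base is incorporated (the terminator blocks more).
        incorporated = COMPLEMENT[template[pos]]
        # 2. Wash away unincorporated reagents, image the tag -> one call.
        read.append(incorporated)
        # 3. Cleave terminator + tag (implicit here), then repeat.
    return "".join(read)

print(sequence_by_synthesis("ATGC", cycles=4))  # -> "TACG"
```

Note the read is the complement of the template strand, exactly as in the real chemistry.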

The technology claims read lengths of nearly 50 bases for fragment libraries and 36 bases for mate-paired libraries, with a raw base-calling accuracy of 98.5% (the source is probably outdated; I couldn't find anything more recent).

Of course, more varieties are coming up in the field, and some new techniques are in test mode. But I believe this post has given you a basic idea of some of the sequencers that have become the talk of DNA. One technology now gaining popularity is "pore-fection", commonly known as nanopore sequencing; the technique is still under development. Please refer to my previous post on pore-fection for more information (Link). It is emerging as a potent, cost-effective technology. Probably in a few years we will have machines that sequence a human genome in less than an hour for less than $100. And that will be a true genetic age of lab science.

Further Reading:
  1. Chun-Xiao Song et al. Sensitive and specific single-molecule sequencing of 5-hydroxymethylcytosine. Nature Methods 9, 75–77. doi:10.1038/nmeth.1779. Link
  2. Paul Zhu and Harold G. Craighead. Zero-Mode Waveguides for Single-Molecule Analysis. Annual Review of Biophysics. June 2012; Vol. 41: 269–293. doi:10.1146/annurev-biophys-050511-102338. Link
  3. Christian Castro et al. Two proton transfers in the transition state for nucleotidyl transfer catalyzed by RNA- and DNA-dependent RNA and DNA polymerases. PNAS March 13, 2007; vol. 104, no. 11: 4267–4272. Link
  4. Democratizing DNA sequencing by reducing time, cost and informatics bottleneck. Link
  5. aek-Soo Kim et al. Novel 3′-O-Fluorescently Modified Nucleotides for Reversible Termination of DNA Synthesis. ChemBioChem; January 4, 2010. Volume 11, Issue 1, pages 75–78. Link
  6. Luo C, Tsementzi D, Kyrpides N, Read T, Konstantinidis KT (2012) Direct Comparisons of Illumina vs. Roche 454 Sequencing Technologies on the Same Microbial Community DNA Sample. PLoS ONE 7(2): e30087. doi:10.1371/journal.pone.0030087.

Sunday, April 19, 2015

Lab Series #1: Luminescence


Earlier, I ran a second blog focused on writing about basic laboratory topics. But that blog has been inactive for more than a year and a half, and I plan to close it since I cannot keep it up. Completely deleting it, though, would take down a great deal of information that I have put together; many posts had more than 3K views each. So I will migrate the posts one by one to this blog so the content stays available. All of them will be posted in the Lab Series, and in due course I'm planning to add some posts on new topics.

Let me start with "luminescence", one of the most common laboratory tools. In the simplest terms, luminescence is the emission of light by a substance. The best-known luminescence phenomena are fluorescence, phosphorescence, and bioluminescence.

Fig 1: Energy states in fluorescence and phosphorescence
Fluorescence is defined as the emission of light by a chemical substance that has absorbed electromagnetic radiation. Note that in fluorescence, high-energy, short-wavelength light directed onto a chemical causes emission of lower-energy, longer-wavelength light. Figure 1 on the right shows a diagrammatic representation: the chemical absorbs certain wavelengths, gets excited, and then returns to the ground state. So how does it differ from phosphorescence? In fluorescence, after excitation to a high second singlet state (half-life of approx 10⁻¹² s), the molecule loses a considerable amount of energy as heat and reaches the first excited singlet state (half-life of approx 10⁻⁹ s). If it drops from there back to the ground state, that is fluorescence. If it instead crosses into a metastable triplet state (half-life of approx 10⁻³ s) and then drops to the ground state, that is phosphorescence. If that explanation sounds too techie, a simpler version: fluorescence is almost instantaneous, whereas phosphorescence is delayed luminescence. Please note that fluorescence and phosphorescence are just different forms of luminescence.

Photo 1: Phosphorescence.
Photo 2: Fluorescence
As a note, let me add a point. Because of this significant difference in timescale, fluorescence lasts only as long as the sample is exposed to the source light, whereas phosphorescence persists for a significant time after the source light is removed. This has important applications in a wide range of fields.

Coming to the third important type: bioluminescence. The term refers to the emission of light by a living cell, which actually represents a complex set of reactions. Bioluminescence is of great importance to the organism and is seen in organisms such as Vibrio fischeri (this phenomenon paved the way for the discovery of quorum sensing; Link) and other deep-sea creatures living where sunlight doesn't reach (for an explanation go here). Why should this luminescence phenomenon from the worlds of chemistry and deep-sea biology interest us clinical technologists? It's their extreme sensitivity and high diagnostic significance that interests us.

Fluorescence Microscopy:

Let me start with an analogy. If you were to look into the daytime sky, no matter what sophisticated technology you use, the chance of seeing a star is very bleak, because your detection sensitivity is swamped by daylight. At night, doing the same experiment, you can see stars easily, because a shiny object against a dark background is easily detected. (In fact, dark-ground microscopy works on much the same principle.) So if you can coat specific substances with fluorescent dyes and then observe them against a dark background, you are much more likely to detect them. This is of great value in microbiology diagnostics, cell imaging, FISH analysis, etc., as the sensitivity is high. But there is a problem: everything in the sample may glow, making it hard to figure out what I want to study. I can increase the specificity by raising specific antibodies and then linking the antibody to a fluorescent dye.
Fig 2: Basic design of fluorescence microscope.

The basic design of a fluorescence microscope consists of 

1. Light source
2. Dichroic mirror
3. Set of filters and lenses.

Let's look at the individual parts. The light source is usually a mercury burner ranging from 50 to 200 watts or a xenon burner ranging from 75 to 150 watts. Of note, mercury and xenon arc lamps pose a danger of explosion (if mishandled) due to very high internal gas pressures and the extreme heat generated during use. Tungsten/halogen lamp versions are also sometimes used; however, the service life of these components is low. Modern designs use LED systems, which last much longer.

Excitation filters are optical components placed in the illumination path. Their function is to filter out all wavelengths of the light source except the excitation range of the fluorophore under detection. A special dual-band excitation filter is used in confocal microscopy (Read more).

The third important component is the dichroic mirror. People often use "beam splitter" as an interchangeable term; however, this is not accurate. A beamsplitter directs light (reflects and transmits) independent of wavelength, with a net efficiency of only 25%. Dichroic mirrors are wavelength-dependent and have higher efficiency; thus a different mirror is required for each wavelength range (Reference). The most common dichroic mirrors come in three designs: short-wave pass (SWP), long-wave pass (LWP), and bandpass (the notch filter is a variant of the bandpass). Newer high-end designs such as polarizing bandpass and ultrabroadband mirrors are modifications of these (See more). An SWP dichroic shows high transmission for a short-wavelength band, high damage thresholds, and high reflectivity for a longer band of wavelengths; an LWP transmits the longer wavelengths and reflects the shorter ones (Source). A bandpass design is an optical filter with a well-defined short-wavelength cut-on and long-wavelength cut-off; bandpass filters are denoted by their center wavelength and bandwidth (Reference) and are the most commonly used models. Notch filters are bandpass filters turned upside down. Dichroic filters are a subtype of interference filters. Other usable filters are classified as

1. Interference filters: Dichroic, Dielectric, reflective filters- They reflect the unwanted wavelengths
2. Absorptive filters: Color glass filters- They absorb the unwanted wavelengths
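The behaviour of these filter types can be modelled with a few toy predicate functions. The cut-off values below are illustrative, not any vendor's specifications:

```python
# Toy model of the filter types above: each filter decides whether a
# given wavelength (in nm) is transmitted.

def shortpass(cutoff):            # SWP: transmits short wavelengths
    return lambda wl: wl < cutoff

def longpass(cutoff):             # LWP: transmits long wavelengths
    return lambda wl: wl > cutoff

def bandpass(center, bandwidth):  # denoted by centre wavelength + bandwidth
    return lambda wl: abs(wl - center) <= bandwidth / 2

def notch(center, bandwidth):     # bandpass "upside down": blocks the band
    band = bandpass(center, bandwidth)
    return lambda wl: not band(wl)

emission_filter = bandpass(525, 50)   # e.g. a green emission filter
print(emission_filter(510))  # -> True  (within the 500-550 nm band)
print(emission_filter(470))  # -> False (excitation light blocked)
```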

Photo 3: Dichroic mirror.
The dichroic mirror is aligned at a 45-degree inclination. It reflects the source light (monochromatic) and directs it onto the specimen (see Fig 2). Under optimal conditions the specimen fluoresces, and the emitted light passes back through the same dichroic mirror. Note that the emission wavelength is sufficiently different from the excitation wavelength to be transmitted (the mirrors are designed to pass specific wavelengths only). The rest of the light, from simple reflection of the source or from non-fluorescent sources, is blocked. This produces a dark background on which only the fluorescence is seen.

The emission filter (barrier filter) is typically a bandpass filter that passes only the wavelengths emitted by the fluorophore and blocks all undesired light outside this band, especially the excitation light. This blocks autofluorescence and reduces noise and unwanted or dangerous signals such as UV. Most instruments incorporate a heat filter near the emission filters (it may also be placed in the lamp housing).

And of course, a set of lenses, essential for magnification and imaging of the cells, is incorporated into the design.

Direct fluorescence:

Photo 4: Direct Fluorescence.
Direct fluorescence is a method in which cells are nonspecifically labelled with a fluorochrome and examined by microscopy. This method is very sensitive but relatively nonspecific. Photo 4, shown on the right, illustrates direct fluorescence with auramine-rhodamine staining. The dye binds to nuclear material, so staining appears wherever nuclear material is present. In TB detection the method is made more specific by a 3% acid-alcohol treatment that washes the fluorochrome out of all cells except Mycobacterium tuberculosis.

The advantage of the method is higher sensitivity compared to routine staining methods, at the cost of specificity.


Photo 5: Immunofluorescence
In direct immunofluorescence (DIF), an antibody against antigens of the desired cell component is purified and linked to a fluorochrome via a homolinker or heterolinker (the linker depends on the type of fluorochrome; the linkers basically create an amine modification producing stickiness in the Fc region. Mind you, dyes should not be linked to the Fab portion, as that would hinder binding). In indirect immunofluorescence (IIF), an untagged antibody is allowed to bind to the antigen, and a fluorescent secondary antibody is directed against the Fc portion (which is essentially the same in almost all antibodies of a species). If the specific antigen is present, the primary antibody binds it, is in turn tagged by the dye-labelled secondary antibody, and is visualized by fluorescence microscopy.

The advantage of DIF is that the method can be directed against almost any antigen, so we can detect any type of cell by targeting a specific antigen. The advantage of IIF is that it can be used to detect antibody (usually evidence of infection), and the tagged secondary antibody can be made universal. The disadvantage of all fluorescence methods lies in their cost and specialized equipment.

Noise in fluorescence microscopy is nonspecific fluorescence that produces a blurred image, and it is often unavoidable in routine microscopes. In the ppt shown below, the upper images on page 2 represent what routine fluorescence microscopes produce. They may be improved by various measures, such as reducing the aperture that admits light, but confocal microscopy is the best answer. The signal-to-noise ratio is considered the best indicator of microscope performance. Another important hindrance in fluorescence microscopy is photobleaching: the photochemical destruction of a fluorophore due to prolonged exposure to the light source.

The last point I want to make in this post is the use of specific dyes. Dyes are available to track specific components of a cell without immunolabeling, presenting a great opportunity for live-cell imaging. Examples include Hoechst 33258 and DAPI for DNA, Cy5 dye for calcium, Rhodamine 123 for mitochondria, etc.

The second thing to talk about is Bioluminescence.

Photo 6: Glowing mushroom.
Bioluminescence, as I have already described, refers to the production of light by a living cell, in contrast to pure chemistry. The one thing you must note here is that there is no excitation light: a reaction inside the cell produces the light, converting chemical energy to light energy. In my last post I said that bioluminescence is usually seen in deep-sea creatures, but I forgot to mention that it can also be seen in land organisms; the photo of the glowing mushroom (Mycena lux-coeli) illustrates this fact.

So what exactly is this bioluminescence? First I would like to stress that no matter which cell is lighting up in the world, the basic chemistry is the same. (Fluorescent proteins such as GFP are, in my view, a different phenomenon, which I will discuss later.) The chemical machinery involves the following set of components (Source):
Fig 3: Basic chemistry of bioluminescence.
  1. Luciferase
  2. Adenosine triphosphate
  3. Luciferin
  4. Oxygen
The simplest formula for bioluminescence is expressed as follows

ATP (energy) + Luciferin (substrate) + Luciferase (enzyme) + O2 (oxidizer) ---> Light (photons)

Luciferase enzyme:

Luciferases are a class of enzymes that oxidize luciferin-group compounds to yield light. The structure of luciferase differs among species.


Luciferin:

Luciferins are a group of molecules that can be oxidized catalytically to produce light and oxyluciferin. Different organisms use different luciferins. The most common one encountered in the sea is coelenterazine, a type of imidazolopyrazine. By tweaking a few amino acids, many coloring patterns can be created; owing to the interesting color patterns produced, these are sometimes referred to as rainbow proteins.

The chemistry goes like this: luciferin combines with ATP to form a complex called luciferyl adenylate, which is fed into the luciferase active site along with oxygen. The reaction produces a cyclic peroxide that eventually becomes high-energy oxyluciferin. This high-energy state is unstable and returns to the ground state, emitting the energy as light. A great deal of debate persists in the scientific community as to what the trigger is; it is worth mentioning that there are three hypotheses. For people interested in learning more, I have given links:
  1. NOS model (Link)
  2. Osmotic control model (Link)
  3. Hydrogen peroxide model (Link)
Bioluminescence has gained great popularity in cellular research and in diagnostics. The basic idea goes like this: by genetic engineering, the luciferase gene is attached to a gene of interest (whose expression is to be studied). When conditions are right for transcription, luciferase is expressed, and with a suitable substrate the expression is detected. This method has been of extreme importance in studying various transcription factors. If you think this is only of research interest, let me tell you: the method can be used for rapid detection of Mycobacterium tuberculosis. Figure 4 below explains how it works.

Fig 4: Luciferase reporter phage assay
The method takes roughly 4 days to complete, which is far ahead of other TB detection methods. How do you detect the signal? Luminescence signals are detected with the help of a special type of colorimeter called a luminometer, which measures and quantifies the light: the higher the measurement, the greater the activity.
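The arithmetic usually applied to such luminometer readings is simple: subtract background, normalise to an uninduced control, and report fold activation. A minimal sketch with invented numbers:

```python
# Sketch of interpreting luciferase reporter readings from a luminometer.
# RLU = relative light units. All numbers below are made up for
# illustration; real assays also normalise to a co-transfected control.

def fold_activation(sample_rlu, control_rlu, background_rlu=0.0):
    """Background-corrected sample signal relative to the control."""
    sample = sample_rlu - background_rlu
    control = control_rlu - background_rlu
    return sample / control

# Hypothetical readings: induced promoter vs uninduced control
print(fold_activation(sample_rlu=84000, control_rlu=4200, background_rlu=200))
# -> 20.95 (about 21-fold activation of the reporter)
```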

Photo 7: Aequorea victoria.
A molecule of extreme interest to biologists is the fluorescent protein. This elegant phenomenon is seen in proteins expressed by a jellyfish (Aequorea victoria; photo on the left). The naturally occurring GFP (green fluorescent protein) is a 238-amino-acid protein. Its role is to transduce the blue chemiluminescence of the photoprotein aequorin into green fluorescent light by energy transfer: with an absorbance/excitation peak at 395 nm, it fluoresces in vivo upon receiving energy from the Ca2+-activated photoprotein aequorin (Source). The structure of GFP is cylindrical: eleven beta-strands make up a beta-barrel, and an alpha-helix runs through the center.

Fig 5: Structure of GFP.
The chromophore is the most important part of a fluorescent protein; Arg96 is proposed to play a crucial role in GFP's self-catalysis. By varying the chemistry of the chromophore, a battery of artificial fluorescent proteins has been created. One additional note: aequorin is a monomeric calcium-binding protein that emits light upon reacting with calcium. The protein has three calcium-binding sites, three cysteine residues, and a noncovalently bound chromophore consisting of coelenterazine and molecular oxygen. Light is emitted via an intramolecular reaction in which coelenterazine is oxidized by the bound oxygen (for the source and more information click here).

Understanding this leads me to one more common laboratory experiment: FRET. FRET stands for fluorescence resonance energy transfer, and the principle is less complex than the name. FRET is a non-radiative transfer of excited-state energy from one fluorophore to another. Let me simplify. The first fluorophore absorbs light from a source; but before that energy escapes as emission, it is taken up by a second fluorophore (whose absorption spectrum overlaps the emission range of the first), which then emits in its own emission spectrum. The requirement for this sort of energy transfer is distance: the two fluorophores should be no more than about 10 nm apart. This means only closely associated molecules, such as a receptor and its bound ligand, will show FRET.

Fig 6: FRET between CFP and YFP to
measure protein interactions.
In the above diagram, CFP (cyan fluorescent protein) is linked to compound Z, and YFP (yellow fluorescent protein) is linked to Y. We want to know whether Y and Z have affinity for each other. If they do, when I shine light in the CFP absorption spectrum, the detected light will be in the emission spectrum of YFP. If not, the detected emission will be in the CFP range (the molecules are too far apart to undergo FRET). Simple. A modification of this method is BRET (bioluminescence resonance energy transfer).
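The distance dependence that makes this work is captured by the Förster equation, E = R0^6 / (R0^6 + r^6), where R0 is the distance at which transfer is 50% efficient. A small sketch, assuming R0 = 4.9 nm, a commonly quoted value for the CFP/YFP pair:

```python
# Förster equation: FRET efficiency falls off with the sixth power of the
# donor-acceptor distance r. R0 (the Förster radius) is pair-specific;
# 4.9 nm is used here as an assumed value for CFP/YFP.

def fret_efficiency(r_nm, r0_nm=4.9):
    return r0_nm**6 / (r0_nm**6 + r_nm**6)

for r in (2.0, 4.9, 10.0):
    print(f"r = {r:4.1f} nm  ->  E = {fret_efficiency(r):.3f}")
# Close pairs transfer almost all their energy; beyond ~10 nm,
# essentially none, which is why FRET reports molecular proximity.
```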

So I was talking about luminescence. First I told you the concept of luminescence and the types we know about; then I talked about fluorescence and bioluminescence and described their applications in clinical research and basic laboratory science. The next part is "chemical luminescence", or chemiluminescence.

In general terms chemiluminescence is classified into 3 types (Source)
  1. Chemical reactions using synthetic compounds and usually involving a highly oxidized species such as a peroxide are commonly termed chemiluminescent reactions.
  2. Light-emitting reactions arising from a living organism, such as the firefly or jellyfish, are commonly termed bioluminescent reactions.
  3. Light-emitting reactions which take place by the use of electrical current are designated electro-chemiluminescent reactions.
Fig 7: Chemiluminescent reaction of luminol. Source
As I have already said, bioluminescence is otherwise considered distinct from chemiluminescence, but a simple common principle lies behind both: each involves reactions at the O-O bond of peroxide compounds, producing a high-energy state that collapses to the ground state, releasing the extra energy as light. The distinction from fluorescence and phosphorescence is that there is no source light. Fig 7 illustrates the chemiluminescence reaction of a common lab compound, luminol.


IUPAC Name: 5-amino-2,3-dihydro-1,4-phthalazinedione.
Molecular formula: C8H7N3O2

Fig 8: Structure of luminol.
When luminol reacts with a hydroxide salt, a dianion is formed. Oxygen from the hydrogen peroxide reacts with the luminol dianion to form an unstable organic peroxide, which stabilizes by losing nitrogen (N2) to give 3-aminophthalate in an excited state; relaxation to the ground state emits the extra energy as a photon of blue light.

Luminol is not the only compound used in chemiluminescence. A growing list of compounds is now available on the commercial market, including (but not limited to) the following:
  1. Isoluminol
  2. Aminoethyl isoluminol (AEI)
  3. Aminoethylethyl isoluminol (AEEI)
  4. Aminobutyl isoluminol (ABI)
  5. N- (4-Aminobutyl) N-ethyl isoluminol (ABEI)
  6. 6-isothiocyanatobenzophthalazine-1,4(2H,3H)-dione (IPO)
  7. 3-propyl-7,8-dihydropyridazino[4,5-g]quinoxaline-2,6,9(1H)-trione (PDIQ)
  8. 3-benzyl-7,8-dihydropyridazino[4,5-g]quinoxaline-2,6,9(1H)-trione (BDIQ)
Use of chemiluminescence in Forensics

Photo 8: The result of luminol and blood
mixed together.
Criminals are often organized and plan their actions well in advance (though not always). To cover up a crime after a lethal attack, they may try to clean the scene and wipe away the evidence, leaving the forensics experts with a task. But despite a thorough clean-up, traces of blood often remain on surfaces, and these can be detected using the chemiluminescence of luminol. The key element here is iron, found in conjugation with hemoglobin. When a solution of luminol reagent (luminol powder, hydrogen peroxide, and a hydroxide compound) is sprayed, the iron in hemoglobin acts as the catalyst, setting off the reaction and producing a glow lasting almost 30 seconds, long enough to photograph and get at least some clues. Photo 8 (shown on the right) shows the luminescence, as evidence of blood stains.

Chemiluminescence Immunoassay (CLIA)

CLIA is a simple modification of the ELISA technique; the difference is the substrate, which here is luminol. Let's consider one scenario: detection of an antibody. A microwell is coated with the specific antigen, and the sample in which the antibody is to be detected is added. The first step is formation of the antigen-antibody complex. The second step is to add a conjugate directed against the primary antibody. The conjugate contains an enzyme (usually HRP, horseradish peroxidase) that catalyzes the luminol reaction to produce light, which is measured in a special instrument called a luminometer. By varying this technique, just as in ELISA, we can detect antigen or antibody.

Fig 9: Chemiluminescence. Source
The basic advantage of CLIA over ELISA is sensitivity. ELISA is clearly sensitive, no question about that, with a detection range often in nanograms; but this method can reach the picogram level, and that is clearly an advantage. By simply changing the substrate we can extend this method to many other techniques, such as western blot detection. A subtype of chemiluminescence, ECL (electrochemiluminescence), is now used for various purposes.
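Quantification from a CLIA readout usually goes through a standard curve measured from calibrators of known concentration. A minimal sketch with invented data, assuming the signal is log-log linear over the working range (real assays often use 4-parameter logistic fits instead):

```python
# Sketch of quantifying an analyte from CLIA luminescence: fit a line
# through log10(signal) vs log10(concentration) for the calibrators, then
# invert it for an unknown sample. All data points are invented.
import math

def make_standard_curve(concs, signals):
    """Least-squares line through (log10 conc, log10 signal) pairs."""
    xs = [math.log10(c) for c in concs]
    ys = [math.log10(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx

    def conc_of(signal):
        return 10 ** ((math.log10(signal) - intercept) / slope)
    return conc_of

# Hypothetical calibrators (pg/mL) and their luminometer readings (RLU)
curve = make_standard_curve([1, 10, 100, 1000], [500, 5000, 50000, 500000])
print(round(curve(25000), 1))  # -> 50.0 pg/mL on this idealised curve
```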

As an ending note, I leave you with some common dyes used in the laboratory and their properties. (Source: Textbook of Biochemistry and Molecular Biology; Keith Wilson and John Walker.)

The source table lists the excitation and emission maxima (in nm) for each dye, grouped by target:
  1. General dyes: Tetramethyl rhodamine, Lissamine rhodamine, Texas Red
  2. Nuclear-specific dyes: Hoechst 33342, Acridine orange, Propidium iodide, Ethidium bromide, Ethidium homodimer
  3. Calcium indicators: Calcium green
  4. Mitochondrion-specific dyes: Rhodamine 123
  5. Reporter molecules: DsRed

Further Reading:
  1. Introduction to Fluorescent microscopy, Nikon Website
  2. Fluorescence microscopy, Olympus website
  3. FRET with Spectral Imaging and Linear Unmixing. Link
  4. Rajesh Babu Sekar and Ammasi Periasamy. Fluorescence resonance energy transfer (FRET) microscopy imaging of live cell protein localizations. JCB, March 3, 2003; vol. 160, no. 5: 629–633. doi:10.1083/jcb.200210140. Link
  5. Junichi Ishida, Maki Takada, Tomohiro Yakabe, Masatoshi Yamaguchi. Chemiluminescent properties of some luminol related compounds. Dyes and Pigments. Volume 27, Issue 1, 1995, pages 1–7. Link
  6. Cyanagen website Link
  7. Rhyne PW, Wong OT, Zhang YJ, Weiner RS. Electrochemiluminescence in bioanalysis. Bioanalysis. 2009 Aug;1(5):919-35. Link