Motivation: Recent technological advances in genomics have produced Next Generation DNA Sequencing (NGS) technologies. These technologies have generated considerable excitement among scientists because they enable cheaper and faster DNA sequencing than traditional methods. Data analysis, genome sequencing and alignment have all become easier with NGS. These platforms are gaining market share rapidly, and there is fierce competition among companies to capture the bioinformatics market. Although NGS platforms have their own error profiles, they have nonetheless revolutionised bioinformatics and changed how scientists approach research and genome sequencing.
Introduction
DNA sequencing has gained much popularity since 1977, when the Maxam-Gilbert and Sanger sequencing methods came to light (Hutchison III, 2007). However, the Sanger method was more widely adopted and has dominated the market for the past 20 years (Metzker, 2010). The Sanger technology, also known as the dideoxy method (Casals et al., 2011), played a crucial role in deciphering whole genome sequences, and according to Metzker (2010) it contributed to many major achievements, notably the Human Genome Project. Bateman & Quackenbush (2009) even argue that the major milestone of the Human Genome Project was not the finished sequence itself but the panoply of new technologies that emerged from sequencing the first reference genome and that enabled large-scale DNA sequencing.
The dideoxy method has been around for quite some time now, but its limitations, together with new technological advances, have allowed novel and more robust technologies, known as Next Generation Sequencing technologies, to see the light of day. Sanger sequencing is classified as a first generation technology, while the latest genome sequencing technologies fall into the category of Next Generation Technologies (Metzker, 2010).
The main advantage of Next Generation Sequencing technologies is that the genome can be sequenced in parallel, producing a far larger number of reads than the Sanger method in a much shorter time (Hutchison III, 2007). The high efficiency of the newer technologies results from the use of the latest instrumentation, such as high resolution imaging, together with more efficient algorithms. In general, Next Generation technologies use shorter reads to speed up sequencing (Hutchison III, 2007); however, this raises the question of whether the assembly of the short individual reads is accurate enough to produce the correct sequence. From a scientific point of view, a larger number of reads implies greater coverage across the genome, which accounts for the good accuracy of genome assemblies produced by the Next Generation technologies. Hutchison III (2007) agrees that this is one of the reasons behind the accuracy and speed of the newer technologies. Nevertheless, genome size is another parameter that has to be taken into consideration, since it plays an important role in determining coverage. Another benefit of the Next Generation technologies is the ability to sequence genomes at a lower cost: according to Mardis (n.d.), the new technologies are far cheaper.
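The coverage argument can be made concrete with the simple relation C = N × L / G, where N is the number of reads, L the read length and G the genome size. The short sketch below is illustrative only; the read counts and genome size are made-up values rather than specifications of any platform cited above.

```python
def expected_coverage(num_reads: int, read_length: int, genome_size: int) -> float:
    """Average depth of coverage estimate: C = N * L / G."""
    return num_reads * read_length / genome_size

# Hypothetical numbers for a 3 Gb human-sized genome (not vendor specifications).
genome_size = 3_000_000_000

long_read_run = expected_coverage(num_reads=500_000, read_length=800, genome_size=genome_size)
short_read_run = expected_coverage(num_reads=400_000_000, read_length=35, genome_size=genome_size)

print(f"Long-read run:  ~{long_read_run:.2f}x coverage")
print(f"Short-read run: ~{short_read_run:.2f}x coverage")
```

The sketch illustrates why short-read platforms must generate orders of magnitude more reads to reach useful coverage of a large genome.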
Despite being newcomers to the market, the Next Generation technologies have captured a fair share of the industry and are causing scientists to look at biological problems from a different perspective (Mardis, 2008).
In this review, the main commercial Next Generation technologies are discussed and compared. A biological application using the Illumina/Solexa Genome Analyzer is described, and the challenges that Next Generation technologies pose to conventional bioinformatics are also brought forward.
Next Generation DNA Sequencing Technologies
Recently, there has been a major boom in commercially available platforms for genome sequencing. The most prominent are the Roche 454, the Illumina/Solexa Genome Analyzer, the Applied Biosystems SOLiD™ System, the Helicos Heliscope™ and Pacific Biosciences SMRT (Mardis, 2008).
Roche 454/FLX Pyrosequencer
The Roche 454 DNA sequencer was released in 2004 (Mardis, 2008). The first step in sequencing the DNA is library preparation, in which the DNA sample is fragmented into smaller pieces of about 400 to 600 base pairs. A and B adaptors are then attached to the DNA fragments, which are subsequently separated into single strands, so that each individual strand carries the A and B adaptors. The DNA library fragments are captured on tiny agarose beads such that each bead carries only one DNA fragment (Mardis, 2008). PCR reagents and emulsion oil are added to the solution, which is shaken vigorously so that the polymerase chain reaction can be initiated. The beads are thereby isolated in individual water micelles, where the DNA fragments are replicated to produce about one million copies of each fragment per bead (454 Life Sciences, n.d.). The beads are then placed on a PicoTiterPlate containing small wells, one for each bead. Each well is also loaded with capture beads carrying an enzyme that supports the sequencing-by-synthesis approach that Roche uses (454 Life Sciences, n.d.).
Once this preparation has been done, the PicoTiterPlate is loaded into the Roche 454 machine. The four nucleotide solutions are then loaded into the machine and washed over the plate sequentially during a sequencing run. When a nucleotide is incorporated into a growing DNA strand, the enzyme on the bead detects the incorporation and light is released (Mardis, 2008). This light signal is detected by a CCD camera and recorded on a flowgram. The amount of light produced is proportional to the number of nucleotides incorporated (454 Life Sciences, n.d.). Finally, a set of flowgrams is obtained and analysed to generate DNA sequences, which are then mapped against a reference sequence for assembly.
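Because the light intensity in pyrosequencing is roughly proportional to the number of identical nucleotides incorporated in one flow, a flowgram can in principle be converted into a sequence by rounding each flow value to the nearest integer homopolymer length. The sketch below is a simplified, hypothetical illustration of that idea, not the actual Roche base-calling software; the flow order and signal values are made up for the example.

```python
# Simplified flowgram interpretation: each flow value is the (noisy) light signal for one
# nucleotide wash; its rounded value gives the homopolymer length incorporated.
FLOW_ORDER = "TACG"  # example fixed cyclic order in which nucleotides are washed over the plate

def flowgram_to_sequence(flow_values):
    """Convert a list of flow intensities into a base sequence (toy example)."""
    sequence = []
    for i, signal in enumerate(flow_values):
        base = FLOW_ORDER[i % len(FLOW_ORDER)]
        homopolymer_length = round(signal)   # light ~ number of identical bases incorporated
        sequence.append(base * homopolymer_length)
    return "".join(sequence)

# Hypothetical flowgram: 1.02 -> one T, 0.03 -> no A, 2.10 -> two C, and so on.
print(flowgram_to_sequence([1.02, 0.03, 2.10, 0.95, 0.10, 1.05, 0.97, 0.02]))  # -> "TCCGAC"
```

The rounding step also hints at why homopolymer runs are a known source of error for this chemistry: the longer the run, the harder it is to resolve the exact count from the signal.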
Illumina/Solexa Genome Analyzer
Illumina sequencing can be broken down into three steps. The first step is library preparation, in which the DNA sample is sheared into fragments of about 800 base pairs and two specific adapters are ligated to the ends of each fragment. The next phase, known as cluster generation, uses bridge amplification PCR to produce multiple copies of the DNA. Illumina uses an eight-channel flow cell with a huge number of primers bound to its surface. The single stranded DNA fragments bind at random to the surface of the flow cell channels to create copies (Staehling, 2008). A series of unlabelled nucleotides and enzymes is washed over the channels to start the bridge amplification process. The single stranded fragments become double stranded during the reaction and are then denatured to obtain single stranded molecules. This cycle is repeated numerous times, resulting in millions of DNA clusters in the channels of the flow cell (Staehling, 2008). Once cluster generation is complete, the clusters are ready for sequencing, which is the last stage. The flow cell is loaded into the Genome Analyzer, which sequences millions of clusters simultaneously. In the first cycle, fluorescently labelled nucleotides are added and all of them compete to bind to the template. Once incorporation takes place, the remaining nucleotides are washed away and the clusters are excited by a laser; an image of the flow cell is taken to detect the newly incorporated base. This process is repeated many times. Base calling is used to identify the nucleotides from the sequence of images, as shown in Figure 1. A reference genome is also used to facilitate sequencing and analysis (Staehling, 2008).
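Conceptually, base calling in sequencing-by-synthesis reduces to picking, for each cluster and each cycle, the channel (A, C, G or T) with the strongest fluorescence signal in the image. The following sketch is a deliberately simplified, hypothetical illustration of that per-cycle decision; real Illumina base callers also correct for channel cross-talk, phasing and other effects, and the intensity values here are invented.

```python
# Toy base caller: for each cycle, one intensity per channel is recorded for a cluster;
# the called base is simply the channel with the highest intensity in that cycle.
CHANNELS = ("A", "C", "G", "T")

def call_bases(cycle_intensities):
    """cycle_intensities: list of (I_A, I_C, I_G, I_T) tuples, one tuple per cycle."""
    read = []
    for intensities in cycle_intensities:
        best_channel = max(range(len(CHANNELS)), key=lambda i: intensities[i])
        read.append(CHANNELS[best_channel])
    return "".join(read)

# Hypothetical intensities for one cluster over four cycles.
print(call_bases([
    (0.9, 0.1, 0.1, 0.2),   # strongest in the A channel -> A
    (0.1, 0.1, 0.8, 0.1),   # -> G
    (0.2, 0.7, 0.1, 0.1),   # -> C
    (0.1, 0.1, 0.1, 0.9),   # -> T
]))  # prints "AGCT"
```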
Applied Biosystems SOLiD™ System
Applied Biosystems DNA sequencing is divided into five steps, namely sample preparation, emulsion PCR, ligation, imaging and data analysis. Two choices for sample preparation are available: a fragment library or a mate-pair library. In both cases the DNA is sheared and adapters are ligated to the fragments. A fragment library incorporates a single piece of DNA, while a mate-pair library joins two pieces of DNA that lie a known distance apart in the sample. The libraries contain numerous molecules, and each molecule undergoes clonal amplification by emulsion PCR. The sample is then enriched on magnetic beads, which are covalently bonded to a glass slide. Applied Biosystems provides the flexibility to analyse one, four or eight samples per slide. The template beads are then mixed with a universal sequencing primer, ligase and a large pool of di-base probes. The probes are fluorescently labelled with four dyes, and each dye represents four of the sixteen possible dinucleotides. A probe hybridises to the template sequence and is ligated. Once its fluorescence is measured, the dye is cleaved off, leaving a free phosphate group for further reaction. This process can be repeated n times to extend the read length, which is normally 35 base pairs (Mardis, 2008). The synthesised strand is then removed, a new primer offset by one base is annealed, and the ligation cycles are repeated. This primer reset process is repeated for five rounds. Bar encoding and a decoding matrix are then used to gather the sequenced data for analysis (Yutao et al., 2008).
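The two-base (colour-space) idea can be illustrated with a small sketch: in the commonly described SOLiD scheme each of the four dyes stands for four dinucleotides, and a read is decoded by walking from a known first base through the colour calls. The code below is an illustrative reconstruction of that scheme, not the vendor's decoding software.

```python
# Standard two-base colour code: each colour (0-3) represents four dinucleotides.
COLOR_OF = {
    "AA": 0, "CC": 0, "GG": 0, "TT": 0,
    "AC": 1, "CA": 1, "GT": 1, "TG": 1,
    "AG": 2, "GA": 2, "CT": 2, "TC": 2,
    "AT": 3, "TA": 3, "CG": 3, "GC": 3,
}

def encode(sequence: str) -> list:
    """Convert a base sequence into colour space (needs at least two bases)."""
    return [COLOR_OF[sequence[i:i + 2]] for i in range(len(sequence) - 1)]

def decode(first_base: str, colors: list) -> str:
    """Reconstruct the base sequence from a known first base plus the colour calls."""
    sequence = first_base
    for color in colors:
        prev = sequence[-1]
        # find the base X such that the dinucleotide (prev, X) maps to this colour
        next_base = next(b for b in "ACGT" if COLOR_OF[prev + b] == color)
        sequence += next_base
    return sequence

colors = encode("ATGGCA")     # -> [3, 1, 0, 3, 1]
print(colors)
print(decode("A", colors))    # -> "ATGGCA"
```

Because every base is interrogated by two overlapping probes, a single miscalled colour propagates through decoding, which is why colour-space reads are usually aligned in colour space rather than decoded first.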
Heliscope™
The Heliscope uses a single molecule sequencing approach. The DNA sample is cut into short lengths of about 100-200 base pairs (Wash & Image, 2008). A poly(A) universal priming sequence is added to the 3′ end of each DNA strand, and each strand is labelled with a fluorescent adenosine nucleotide. The strands are then transferred onto the Heliscope flow cell, whose surface is covered with many oligo(dT) capture sites. Each individual DNA template hybridises to the surface of the flow cell. The flow cell is loaded into the Heliscope™ instrument and a laser illuminates its surface, revealing the position of each fluorescently labelled template. A CCD camera generates a map of the templates by taking multiple images of the flow cell in an organised way. After imaging, the template label is cleaved and washed away. Sequencing takes place by adding DNA polymerase and one fluorescently labelled nucleotide at a time to the flow cell; the capture sites serve as sequencing primers in the tSMS (true single molecule sequencing) process (Wash & Image, 2008). The DNA polymerase catalyses the addition of the labelled nucleotide onto the primers according to the template. A wash step removes the DNA polymerase and any unincorporated nucleotides. The latest incorporation is then visualised by illuminating and imaging the flow cell surface. The fluorescent label is then cleaved off and removed, and the process is repeated in the same way until the desired read length is achieved. Sequencing data are gathered with each new base addition. With the tSMS process, every strand is distinct and is sequenced independently (Wash & Image, 2008).
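The independence of each strand can be illustrated with a small simulation: in every cycle a single nucleotide species is flowed in, and only those templates whose next unsequenced base matches it advance and are imaged, so different strands progress at different rates. The sketch below is a toy illustration of that asynchronous behaviour under simplifying assumptions (matching rather than complementary bases, an invented flow order), not the Helicos analysis software.

```python
# Toy simulation of asynchronous single-molecule sequencing: one nucleotide species is
# added per cycle; only templates whose next base matches that nucleotide advance.
NUCLEOTIDE_CYCLE = "CTAG"  # hypothetical order in which labelled nucleotides are flowed

def simulate_tsms(templates, num_cycles):
    reads = ["" for _ in templates]      # the growing read for each template
    positions = [0] * len(templates)     # how far each template has been sequenced
    for cycle in range(num_cycles):
        nucleotide = NUCLEOTIDE_CYCLE[cycle % 4]
        for i, template in enumerate(templates):
            if positions[i] < len(template) and template[positions[i]] == nucleotide:
                reads[i] += nucleotide   # incorporation detected by imaging this strand
                positions[i] += 1
    return reads

# Two hypothetical templates progress at different rates through the same cycles.
print(simulate_tsms(["CATG", "TTGA"], num_cycles=16))   # -> ['CATG', 'TTGA']
```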
Pacific Biosciences SMRT
Pacific Biosciences uses the single molecule approach in real time, hence SMRT (single molecule real time sequencing). Each of the four nucleotides is labelled with a different fluorescent colour, attached to the terminal phosphate rather than to the base of the nucleotide. This allows the DNA polymerase to cleave off the fluorescent label when a base is incorporated. The process emits light, which is captured within a nano-photonic chamber known as a Zero Mode Waveguide (ZMW) (Metzker, 2010). Nucleotides diffuse in and out of the ZMW chamber, and when the DNA polymerase incorporates a nucleotide, it is held in the detection volume long enough for its fluorescent label to be excited and the emitted light to be captured by a detector. After incorporation, the label is cleaved off and diffuses away. The whole process is repeated, and the different bursts of light correspond to different nucleotides, which are recorded and analysed by researchers (Metzker, 2010).
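Since each base carries a different terminal-phosphate dye, the raw output of a ZMW is essentially a time series of light pulses whose colours map back to bases. The following is a hypothetical sketch of that interpretation step, with an invented dye-to-base mapping and pulse record; real SMRT base calling also models pulse duration and inter-pulse spacing.

```python
# Toy interpretation of a ZMW recording: each detected pulse is (time_ms, dye_colour),
# and the dye colour identifies the incorporated nucleotide.
DYE_TO_BASE = {"red": "A", "green": "C", "blue": "G", "yellow": "T"}  # hypothetical mapping

def pulses_to_sequence(pulses):
    """Sort pulses by detection time and translate their dye colours into bases."""
    return "".join(DYE_TO_BASE[colour] for _, colour in sorted(pulses))

# Hypothetical pulse record from one ZMW chamber.
pulses = [(1.2, "red"), (3.8, "blue"), (2.5, "green"), (5.1, "yellow")]
print(pulses_to_sequence(pulses))   # -> "ACGT" once ordered by time
```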
Comparison of the platforms
In the comparison table (Table 1), references [1] and [2] refer to Gupta et al. (2010) and Metzker (2010) respectively. There are some discrepancies between the two papers concerning throughput, run time and read length. Metzker also lists the wide popularity of Illumina as one of its advantages, which is not a particularly strong technical point.
Biological Application
NGS technologies can be used to determine the positions of nucleosomes along the DNA, which helps in understanding their role in the regulation of transcription (Schones et al., 2008). Schones et al. (2008) describe the experimental procedure in several stages.
The first step involved the preparation of the nucleosome solution. In this phase, CD4+ T cells were incubated with anti-CD3 and anti-CD28 antibodies for 18 hours to activate the cells. The T cells were then treated with MNase (micrococcal nuclease) to generate mononucleosomes. DNA fragments of about 150 base pairs in length were recovered from an agarose gel and ligated to the Solexa flow cells, and these were then sequenced using the Illumina/Solexa Genome Sequencing machine.
The next phase involved the analysis of all the data generated by the sequencer. The Solexa pipeline analysis was carried out first: sequenced reads of 25 base pairs were mapped to the human genome (hg18), and only the reads that matched were kept while the others were discarded. The mapped reads then served as input to a scoring function that generated a nucleosome profile, using a sliding window of about 10 base pairs. The next step was to classify gene sets, which was achieved using microarray experiments. Polymerase II stalling analysis was carried out using an mRNA-level based approach to identify which genes contained stalled, elongating or no Polymerase II. Read counts were modelled with a Poisson distribution across the whole genome to identify sliding windows enriched for Polymerase II. Each gene set was then aligned at the transcription start site for analysis of the region around the genes. Finally, nucleosome levels at specific nucleosome positions were quantified using the aligned reads and the window values.
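A minimal sketch of the sliding-window idea is given below: mapped read start positions are counted in consecutive windows of roughly 10 bp to build a profile, and windows whose counts are improbably high under a genome-wide Poisson background are flagged as enriched. This is an illustrative reconstruction of the general approach, with made-up window size, threshold and data, not the scoring function actually used by Schones et al.

```python
from collections import Counter
from math import exp

def window_profile(read_starts, genome_length, window=10):
    """Count mapped read start positions in consecutive fixed-size windows."""
    counts = Counter(pos // window for pos in read_starts)
    num_windows = genome_length // window + 1
    return [counts.get(i, 0) for i in range(num_windows)]

def poisson_sf(k, lam):
    """P(X >= k) for a Poisson(lam) variable, via the complement of the lower tail."""
    term, cdf = exp(-lam), exp(-lam)
    for i in range(1, k):
        term *= lam / i
        cdf += term
    return 1.0 - cdf

def enriched_windows(profile, p_cutoff=0.001):
    """Flag windows whose read count is unlikely under the genome-wide mean rate."""
    lam = sum(profile) / len(profile)          # background rate per window
    return [i for i, c in enumerate(profile) if c > 0 and poisson_sf(c, lam) < p_cutoff]

# Hypothetical mapped read start positions on a 200 bp toy "genome".
reads = [12, 13, 15, 15, 16, 18, 90, 142, 143, 144, 145, 146, 147]
profile = window_profile(reads, genome_length=200)
print(enriched_windows(profile))   # flags the windows covering positions 10-19 and 140-149
```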
At the end of the experiments, the researchers found that nucleosome positions relative to the DNA correlated directly with transcriptional regulation involving RNA polymerase II binding. Some of the experimental results are depicted in Figures 2, 3 and 4.
CHALLENGES
Next generation sequencing technologies have indeed created a revolution in DNA sequencing and have opened the door to a field very different from that of traditional sequencing methods. There is fierce competition between companies to produce up-to-date, fast and reliable sequencing methods. However, despite all the advantages that NGS has brought, these technologies still pose several challenges to the field of bioinformatics.
Data Deluge
Next Generation Sequencing technologies aim to produce huge amounts of data at an ever lower price (Kircher & Kelso, 2010). In fact, it is even possible to contemplate sequencing the whole genome of an organism for only $1000 in the near future (Pareek et al., 2011). All this new sequencing data seems very appealing, but from another point of view it may become problematic in the long run. Reducing the cost of sequencing means that sequencing will become easily accessible, so that any laboratory, or even people at home, will be able to sequence genomes. Even today, information handling is tedious, with some databases holding only part of the available information and others none at all. New organisational schemes and protocols will have to be defined to ensure consistency among all the information that will come pouring into the databases. Optimised filters will be needed to differentiate junk data and duplicated data from adequate data, and new databases or data warehouses will have to be built to ensure that none of the information is wasted and that everything is kept in a standardised format.
Resources
The fact that NGS is moving at such a pace raises the question of whether current hardware and software will be able to handle the load of information it will generate.
The graph in Figure 4 shows the rate at which the amount of DNA sequencing obtainable per dollar is increasing compared with that of hard disk storage. It can be seen that NGS causes a huge shift in the amount of data per dollar, even surpassing the growth rate of disk storage. This is fundamental because it shows that disk space and the storage of high throughput data may become problematic in the near future. More processing power and RAM will also have to be allocated to NGS applications for them to run smoothly. Cloud computing can be a solution to this particular issue, but its suitability depends on the amount of information being generated; if cloud computing is brought into the picture, new algorithms and parallel computing approaches will have to be implemented to handle the load.
Huge variety, little consensus
Nowadays, there is a wide variety of commercially available NGS technologies. However, there is no consensus about the read length, throughput or runtime of the platforms, as Table 1 demonstrates. Choosing which platform is optimal for a sequencing project can therefore become very tedious, since the accuracy of each is neither definite nor standardised. Developing even newer technologies may create further confusion about accuracy, hence the need for standardisation first.
CONCLUSION
NGS technologies have greatly facilitated DNA sequencing for biologists. Compared with Sanger sequencing, NGS is much cheaper and faster. Nevertheless, Sanger sequencing remains one of the basic pillars of DNA sequencing, since its error rates are much lower than those of the NGS technologies (Kircher & Kelso, 2010). As long as the genome remains a mystery to scientists, new generations of sequencing technologies will continue to appear to help decipher the genetic code.