>DNA sequencing Videos.

>With IBM tossing its hat into the ring of “next-next-generation” sequencing, I’m starting to get lost as to which generation is which. For the moment, I’m lumping things together while I wait to see how the field plays out. In my mind, first generation is anything that requires chain termination, second generation is chemistry-based pyrosequencing, and third generation is single-molecule sequencing based on a nano-scale mechanical process. It’s a crude divide, but it seems to have some consistency.

At any rate, I decided I’d collect a few videos to illustrate each one. For Sanger, there are a LOT of videos, many of which are quite excellent, but I only wanted one. (Sorry if I didn’t pick yours.) For second- and third-generation DNA sequencing, the selection thins out considerably, and two of the videos come from corporate sites rather than YouTube – which otherwise seems to be the consensus repository for technology videos.

Personally, I find it interesting to see how each group is selling themselves. You’ll notice some videos lean heavily on the technology, while others focus on the workflow.

As an aside, I also find it interesting to look for places where the illustrations don’t make sense… there’s a lovely place in the 454 video where two strands of DNA split from each other on the bead, leaving the two full strands and a complete primer sequence… mysterious! (Yes, I do enjoy looking for inconsistencies when I go to the movies.)

Ok, get out your popcorn.

First Generation:
Sanger Entry: Link

Second Generation:
Pyrosequencing Entry: Link

Helicos Entry: Link

Illumina (Corporate site): Link

(Click to see the Flash animation)

454 Entry: Link

Third Generation:

Pacific Biosciences: Link

(Click to see the Flash Video)

Oxford Nanopore Entry: Link

IBM’s Entry: Link

Note: If I’ve missed something, please let me know. I’m happy to add to this post whenever something new comes up.

>Base quality by position

>A colleague of mine was working on a nifty tool to graph the base quality at each position in a read using Eland export files, which could then be incorporated into his pipeline. During a discussion about how long that analysis should take (his script was taking an hour, and I said it should take about a minute to analyze 8M Illumina reads…), I ended up saying I’d write my own version, just to show how quickly it could be done.

Well, I was wrong about it taking a minute. It turns out the file has more than double the originally quoted 8 million reads (QC, no-match and multi-match reads had not been filtered out), and the whole file was bzipped, which adds to the processing time.

Fortunately, I didn’t have to add bzip support to the reader, as tcezard (Tim) had already added a handy “PIPE” option for piping whatever data format I want into applications of the Vancouver Short Read Analysis Package, so I was able to do the following:

time bzcat /archive/solexa1_4/analysis2/HS1406/42E6FAAXX_7/42E6FAAXX_7_2_export.txt.bz2 | java6 src/projects/maq_utilities/QualityReport -input PIPE -output /projects/afejes/temp -aligner elandext

Quite a neat use of piping, really.
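Just for illustration, here’s a rough Python sketch of the same kind of per-position quality summary, reading an export file from a pipe on stdin. (The real QualityReport tool in the package is Java; the column index and quality-encoding offset below are assumptions that would need to match your pipeline version.)

```python
#!/usr/bin/env python
# Toy sketch: mean base quality per read position, reading tab-delimited
# export lines from stdin, e.g.:
#   bzcat export.txt.bz2 | python quality_by_position.py > qual_by_pos.tsv
# Assumptions (adjust to your pipeline): quality string in column 10
# (0-based index 9), ASCII-encoded with a +64 offset.
import sys

QUAL_COLUMN = 9     # assumed position of the quality string
PHRED_OFFSET = 64   # assumed encoding; use 33 for Sanger-style FASTQ

sums = []    # per-position sum of quality scores
counts = []  # per-position number of observations

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) <= QUAL_COLUMN:
        continue
    qual = fields[QUAL_COLUMN]
    while len(sums) < len(qual):   # grow to the longest read seen so far
        sums.append(0)
        counts.append(0)
    for i, ch in enumerate(qual):
        sums[i] += ord(ch) - PHRED_OFFSET
        counts[i] += 1

for position, (total, n) in enumerate(zip(sums, counts), start=1):
    print("%d\t%.2f" % (position, total / n))
```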

Anyhow, the fun part is that the library was a 100-mer Illumina run, and it makes a pretty picture. Slapping the output into OpenOffice yields the following graph:

I didn’t realize quality dropped so dramatically at 100bp – although I remember when qualities looked like that for 32bp reads…

Anyhow, I’ll include this tool in FindPeaks 4.0.8 in case anyone is interested in it. And for the record, this run took 10 minutes, of which about 4 were taken up by bzcat. Of the 16.7M reads in the file, only 1.5M were aligned, probably due to the poor quality beyond 60-70bp.

>Second Gen Sequencer Map and naming our generations properly

>Yes, this is old news, but I’ve found myself searching for the second-generation sequencing map that was started at SeqAnswers several times in the last few days. Just to make it even easier to find – and of course to give some positive publicity to a really cool project – here’s the Google Maps based list of facilities that run second-generation (next-generation) sequencing machines:

http://tinyurl.com/orm8cr

For the record, I understand a few places are still missing. I hear there’s some second-generation sequencing going on in Alberta, which clearly hasn’t appeared on the map yet.

And, as a footnote, now that I see other people have picked up on it and started calling “next-generation sequencing” by the more appropriate label “second-generation sequencing” (e.g. here at Nature), I’m going to drop next-gen as a label entirely. First-gen is Sanger/dideoxy/capillary/etc, second-gen is pyrosequencing-based, and third-gen is biotech-based (using cellular components such as DNA polymerases and the like). Let’s end the confusion and name our generations accordingly, shall we?

>Another day, another result…

>I had the urge to just sit down and type out a long rant, but then common sense kicked in and I realized that no one is really interested in yet another graduate student’s rant about their project not working. However, it only took a few minutes for me to figure out why it’s relevant to the general world – something that’s (unfortunately) missing from most grad student projects.

If you follow along with Daniel MacArthur’s blog, Genetic Future, you may have caught the announcement that Illumina is getting into the personal genome sequencing game. While I can’t say I was surprised by the news, I will admit that I’m somewhat skeptical about how it’s going to play out.

If your business is using arrays, then you’ll have an easy time sorting through the relevance of the known “useful” changes to the genome – there are only a couple hundred or thousand that are relevant at the moment, and several hundred thousand more that might be relevant in the near future. However, when you’re sequencing a whole genome, interpretation becomes a lot more difficult.

Since my graduate project is really the analysis of transcriptome sequencing (a subset of genome sequencing), I know firsthand the frustration involved. Indeed, my project was originally focused on identifying changes to the genome common to several cancer cell lines. Unfortunately, this is what brought on my need to rant: there is vastly more going on in the genome than small sequence changes.

We tend to believe blindly what we were taught as the “central dogma of molecular biology”: genes are copied to mRNA, mRNA is translated to proteins, and the proteins go off to do their work. However, cells are infinitely more complex than that. Genes can be inactivated by small changes; they can be chopped up and spliced together to become inactivated or even deregulated; interference can be run by distally modified sequences; gene splicing can be completely co-opted by inactivating genes we barely even understand yet; and desperately over-expressed proteins can be marked for deletion by over-activated garbage-collection systems, so that they never get where they were needed in the first place. And here we are, looking for single nucleotide variations, which make up a VERY small portion of the information in a cell.

I don’t have the solution yet, but whatever we do in the future, it’s not going to involve $48,000 genome re-sequencing alone. That information on its own is pretty useless – we’ll have to study expression (WTSS or RNA-Seq, so figure another $30,000), changes to epigenetics (of which there are many histone marks, so figure 30 x $10,000) and even DNA methylation (I don’t begin to know what that process costs).

So, yes, while I’m happy to see genome re-sequencing move beyond the confines of array based SNP testing, I’m pretty confident that this isn’t the big step forward it might seem. The early adopters might enjoy having a pretty piece of paper that tells them something unique about their DNA, and I don’t begrudge it. (In fact, I’d love to have my DNA sequenced, just for the sheer entertainment value.) Still, I don’t think we’re seeing a revolution in personal genomics – not quite yet. Various experiments have shown we’re on the cusp of a major change, but this isn’t the tipping point: we’re still going to have to wait for real insight into the use of this information.

When Illumina offers a nice toolkit that allows you to get all of the SNVs, changes in expression and full ChIP-Seq analysis – and maybe even a few mutant transcription factor ChIP-Seq experiments thrown in – and all for $48,000, then we’ll have a truly revolutionary system.

In the meantime, I think I’ll hold out on buying my genome sequence. $48,000 would buy me a couple more weeks in Tahiti, which would currently offer me a LOT more peace of mind. (=

And on that note, I’d better get back to doing the things I do…. new FindPeaks tag, anyone?

>Science Cartoons – 5 (RNA-Seq)

>This is the last science cartoon I did for my poster. I was pretty happy with the pictures, although if I were to do it over again, I’d use a few of the tricks I’ve learned since.

Anyhow, my favorite effect in this picture is “text to path”, where you can make any string follow any line – who knew graphic design could be so much fun? It makes for some interesting graphics, and I’d definitely use the effect in an RNA folding paper, if I ever got the chance to do another one. (-;

>Multi-match reads in ChIP-Seq

>I had an interesting comment left on my blog today, which is worth taking a few minutes to write a response to:

"Hi Anthony, I just discovered your blog and it looks very interesting to me!
Since this article on Eland is now more than one year old, I was wondering
if the description at point 3 about multi matching locations is still
applicable to the Eland program in the Illumina pipeline 1.3. More in general,
would you trust the multi matching locations extracted from the multi_eland
output files to perform a repeat enrichment analysis over an experiment of
ChIP-seq? If no, why? Thank you in advance for your attention."

The first question asks about multi-matching locations – specifically, whether the point in question (point 3) still applies to the Illumina pipeline 1.3. Since point 3 was just that the older pipeline didn’t provide the locations of multi-match reads, I suppose it no longer really applies: I understand the new version of Eland does provide multi-match alignment information, as do other aligners such as Bowtie. However, I should also mention that since I adopted Maq as my preferred aligner, I haven’t used Eland much – so it’s hard for me to give an informed opinion on the quality of the matches. I simply don’t know if they’re any good, and I won’t belabour that point. I have used Bowtie specifically because it was able to do multi-matches, but we didn’t use it for ChIP-Seq, and the multi-matches had other uses in that experiment.

So, the more interesting question is whether I’d use multi-match reads in a ChIP-Seq analysis. And, off hand, my answer has to be no. But let me explain my reasoning, and the conditions in which I would change that answer.

First, let’s assume we have single-end tags, so the multi-match information is not resolvable. That means any time we have a read that maps to more than one location, we either map it to its source – or we map it incorrectly. At best, with only two candidate locations, that’s a 50% chance of “getting it right,” and the greater the number of multi-match locations, the smaller the chance we’re actually finding the correct origin. So, at best we’ve got a 50-50 chance that we’re not adversely affecting the outcome of the experiment. That’s not great.

On the other hand, there are things we could do to make them usable. The most widely used method in FindPeaks is the weighted fragment distribution, so we could expand that principle and weight each fragment according to the number of locations it matches. That would be… bearable. But would it significantly add to the quality of the alignment?
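Concretely, the weighting I have in mind would look something like this toy sketch – my own illustration, not the FindPeaks implementation – where each read matching n locations contributes 1/n of a fragment to each of them:

```python
# Toy sketch of fractional weighting for multi-match reads (illustration only,
# not the FindPeaks implementation). Each read contributes 1/n of a fragment
# to each of its n candidate alignment locations.
from collections import defaultdict

def weighted_start_counts(reads, bin_size=50):
    """reads: one list of candidate (chrom, start) locations per read.
    Returns weighted counts of read starts per genomic bin."""
    counts = defaultdict(float)
    for locations in reads:
        weight = 1.0 / len(locations)   # split the read across its matches
        for chrom, start in locations:
            counts[(chrom, start // bin_size)] += weight
    return counts

# A unique read counts fully; a read with four matches adds 0.25 at each site.
example = [
    [("chr1", 1010)],                                                # unique
    [("chr1", 1020), ("chr1", 5000), ("chr2", 300), ("chr2", 900)],  # 4 matches
]
for (chrom, genomic_bin), weight in sorted(weighted_start_counts(example).items()):
    print(chrom, genomic_bin * 50, weight)
```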

I’m still going to say no. Fragments we see in ChIP-Seq experiments tend to fall within 200-300bp of the regions in which the transcription factor (or other protein) binds. Thus, even if we were concerned that a particular transcription factor binds primarily to similar motif regions at two sites, there should be more than enough (unique) sequence around each site (which is usually <30-40bp in length) to which you’ll still see fragments aligning. That should compensate for the loss of the multi-match fragments. Even more importantly, as read lengths increase, the amount of non-unique sequence decreases rapidly, making the shrinking number of multi-match reads less important.

The same argument can be extended to paired-end tags: just as longer reads reduce the number of multi-match sites, more of the multi-match reads will be resolved by pairing them with a second read, which is unlikely to fall within the same repeat region, thus reducing the number of reads that remain unresolvable multi-matches. Proportionally, one would then expect the discarded reads to become a smaller and smaller segment of the population, and we’d have to worry less and less about their contribution.

So, then, when would I want them? Well, on the odd chance that you’re working with very short reads, you can pull off the weighting properly, you have single-end tags, and the multi-match reads make up a significant proportion of your data, then it’s worth exploring. You’d need to start asking the tough questions: did the aligner simply find that a small k-mer of the read aligned to multiple locations (and was then unable to resolve the tie by extension, the way Eland-style aligners work)? Does the aligner use quality scores to identify mis-alignments? How reliable are the alignments (what’s their error rate)? What was your sample, and how divergent is it from the reference? (e.g., cancer samples have a high variation rate, and so encourage many false alignments, making the alignments less reliable.)

Overall, I really don’t see too many cases where you’re going to gain a lot by digging in the multi-match files. That’s not to say you won’t find anything good in there – you probably would, if you knew where to look – but the signal-to-noise ratio is going to be pretty poor, just by definition of the fact that they’re multi-match reads. You’ll just have to ask if it’s worth your time. For the moment, I don’t think my time (even at grad student wages) is worth it. It’s just not low-hanging fruit, when it comes to ChIP-Seq.

>Searching for SNPs… a disaster waiting to happen.

>Well, I’m postponing my planned article, because I just don’t feel in the mood to work on it tonight. Instead, I figured I’d touch on something a little more important to me this evening: WTSS SNP calls. Well, as my committee members would say, they’re not SNPs, they’re variations or putative mutations. Technically, that makes them Single Nucleotide Variations, or SNVs. (They’re only polymorphisms if they’re common to a portion of the population.)

In this case, they’re from cancer cell lines, so after I filter out all the real SNPs, what’s left are SNVs… and they’re bloody annoying. This is the second major project I’ve done where SNP calling has played a central role. The first was based on very early 454 data, where homopolymer errors were frequent, and thus finding SNVs was pretty easy: they were all over the place! After much work, it turned out that pretty much all of them were fake (false positives), and I learned to check for homopolymer runs – a simple trick, easily accomplished by visualizing the data.
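For what it’s worth, the check is also easy to automate. Here’s a quick sketch of the idea (the run-length and window thresholds are arbitrary placeholders, not the cutoffs we actually used):

```python
# Toy sketch: flag candidate SNVs sitting next to a homopolymer run in the
# reference. The run-length and window thresholds are arbitrary placeholders,
# not the cutoffs we actually used.
def near_homopolymer(reference, position, min_run=5, window=10):
    """reference: reference sequence string; position: 0-based SNV coordinate."""
    start = max(0, position - window)
    end = min(len(reference), position + window + 1)
    run_base, run_length = None, 0
    for base in reference[start:end]:
        if base == run_base:
            run_length += 1
        else:
            run_base, run_length = base, 1
        if run_length >= min_run:
            return True
    return False

ref = "ACGTAAAAAAGTCAGTCATTGCAGT"
print(near_homopolymer(ref, 12))  # True: a 6-base run of A's lies within 10bp
print(near_homopolymer(ref, 22))  # False: the A run is more than 10bp away
```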

We moved on to Illumina after that. Actually, it was still Solexa at the time. Yes, this is older data – nearly a year old. It wasn’t particularly reliable, and I’ve since used several different aligners and references, each time (I thought) improving the data. We came down to a couple of very intriguing variations, and decided to sequence them. After several rounds of primer design, we finally got one that worked… and lo and behold: 0/2. Neither of them is real. So now comes the post-mortem: why did we get false positives this time? Is it bias from the platform? Bad alignments? Or something even more suspicious… do we have evidence of edited RNA? Who knows. The game begins all over again, in the quest to answer the question “why?” Why do we get unexpected results?

Fortunately, I’m a scientist, so that question is really something I like. I don’t begrudge the last year’s worth of work – which apparently is now more or less down the toilet – but I hope that the why leads to something more interesting this time. (Thank goodness I have other projects on the go, as well!)

Ah, science. Good thing I’m hooked, otherwise I’d have tossed in the towel long ago.

>8 Postdoc positions

>I don’t want to spam anything, but since this is my own web page, I guess I can advertise as much as I’d like. I was just passed an email from a colleague in the Plant Science department at UBC, where they’re currently looking to fill eight postdoc positions: mainly with people who have, or are interested in gaining, Illumina sequence-processing experience.

I figured this is noteworthy for several reasons:

  1. There is a growing demand for next-gen trained bioinformaticians, which looks good for the future career prospects of anyone in the Next-gen Sequencing/Genomics field (though this is hardly a surprise),
  2. Genomics is beginning to expand out of the narrow {yeast | human | C.elegans | etc} model organism fields into areas such as plant science, where it will have a huge impact. (going mainstream is always a good thing for a field of science, in my humble opinion.)
  3. Some of the positions will put bioinformaticians into key positions where they become the cornerstone of research projects, which is a far cry from the “bioinformaticians as a service” role that’s been popular in many research settings.

Anyhow, I can highly recommend at least one of these positions, having worked with the professor before, so if anyone is interested in the email, I’d be happy to forward along the advertisements.

>Genomics Forum 2008

>You can probably guess what this post is about from the title – which means I still haven’t gotten around to writing an entry on thresholding for ChIP-Seq. Actually, it’s probably a good thing I haven’t, as we’ve been learning a lot about thresholding in the past week. It seems many things we took for granted aren’t really the case. Anyhow, I’m not going to say too much about that, as I plan to collect my thoughts and discuss it in a later entry.

Instead, I’d like to discuss the 2008 Genomics Forum, sponsored by Genome BC, which took place on Friday – though, in particular, I’m going to focus on one talk close to my own research. Dr. Barbara Wold from Caltech gave the first of the science talks, and focused heavily on ChIP-Seq and Whole Transcriptome Shotgun Sequencing (WTSS). But before I get to that, I wanted to mention a few other things.

The first is that Genome BC took a few minutes to announce a really neat funding competition that really impressed me: the Genome BC Science Opportunities Fund. (There’s nothing up on the web page yet, but if you Google for it, you’ll come across the agenda for Friday’s forum in which it’s mentioned – I’m sure more will appear soon.) Its whole premise revolves around the question: “Are there experiments that we need to be doing, that are of strategic importance to the BC life science community?” I take that to mean: are there projects that we can’t afford not to undertake, that we wouldn’t have the funding to do otherwise? I find that to be very flexible, and very non-academic in nature – but quite neat. I hope the funding competition goes well, and I’m looking forward to seeing what they think falls into the “must do” category.

The second was the surprising demand for Bioinformaticians. I’m aware of several jobs for bioinformaticians with experience in next-gen sequencing, but the surprise to me was the number of times (5) I heard people mention that they were actively recruiting. If anyone with next-gen experience is out there looking for a job (post-doc, full time or grad student), drop me a note, and I can probably point you in the right direction.

The third was one of the afternoon talks, on journalism in science from the perspective of traditional newspaper/TV journalists. It seems so foreign to me, yet the talk touched on several interesting points, including the fact that journalists are struggling to come to terms with “new media.” (…which doesn’t seem particularly new to those of us who have been using the net since the 90’s, but I digress.) It gave me several ideas about things I can do with my blog, to bring it out of the simple text format I use now. I guess even those of us who live/breathe/sleep internet don’t do a great job of harnessing its power for communicating effectively. Food for thought.

Ok… so on to the main topic of tonight’s blog: Dr. Wold’s talk.

Dr. Wold spoke at length on two topics, ChIP-Seq and Whole Transcriptome Shotgun Sequencing. Since these are the two subjects I’m actively working on, I was obviously very interested in hearing what she had to say, though I’ll comment more on the ChIP-Seq side of things.

One of the great open questions at the Genome Sciences Centre has been how to do an effective control for a ChIP-Seq experiment. It’s not something we’ve done much of in the past, but the Wold lab demonstrated why controls are necessary, and how to do them well. It seems that ChIP-Seq experiments tend to yield fragments in several genomic regions that have nothing to do with the antibody or the experiment itself. The educated guess is that these are caused by hypersensitive sites in the genome that tend to fragment in repeatable patterns, giving rise to peaks that appear in all samples. Indeed, I spent a good portion of this past week talking about observations of peaks exactly like that, and how to “filter” them out of the ChIP-Seq results. I wasn’t able to get a good idea of how the Wold lab does this, other than by eye (which isn’t very high-throughput), but knowing what needs to be done, it shouldn’t be particularly difficult to incorporate into our next release of the FindPeaks code.
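For the sake of illustration, the naive version of that filter is simple: drop (or flag) any sample peak that overlaps a peak called in the matched control. The sketch below is just my own toy version – not the Wold lab’s approach, and not necessarily what will end up in FindPeaks:

```python
# Toy sketch of control-based peak filtering (my own naive version; not the
# Wold lab's method and not the FindPeaks implementation): drop any sample
# peak that overlaps a peak called in the matched control library.
from collections import defaultdict

def overlaps(a, b):
    """Peaks are (chrom, start, end) tuples, with end exclusive."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def filter_against_control(sample_peaks, control_peaks):
    control_by_chrom = defaultdict(list)
    for peak in control_peaks:
        control_by_chrom[peak[0]].append(peak)
    return [peak for peak in sample_peaks
            if not any(overlaps(peak, c) for c in control_by_chrom[peak[0]])]

sample = [("chr1", 100, 400), ("chr1", 5000, 5300), ("chr2", 200, 500)]
control = [("chr1", 5100, 5250)]  # e.g. a hypersensitive site seen in every sample
print(filter_against_control(sample, control))
# -> [('chr1', 100, 400), ('chr2', 200, 500)]
```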

Another smart thing the Wold lab has done is to separate ChIP-Seq interactions into two different types: Type 1 and Type 2, where Type 1 refers to single molecule-DNA binding events, which give rise to sharp peaks and very clean profiles. These tend to be transcription factors like NRSF or STAT1, upon which the first generation of ChIP-Seq papers were published. Type 2 interactomes tend to be less clear, as they involve transcription factors that recruit other elements, or form complexes that bind the DNA at specific sites and require other proteins to bind to encourage transcription. My own interpretation is that the number of identifiable binding sites should indicate the type – and thus, if there were three identifiable transcription factor consensus sites lined up, it should be considered a Type 3 interactome – though that may be simplifying the case tremendously, as there are undoubtedly many other proteins that must be recruited before any transcription will take place.

In terms of applications, the members of the Wold lab have been using their identified peaks to locate novel binding site motifs. I think this is the first thing everyone thinks of when they hear about ChIP-Seq for the first time, but it’s pretty cool to see it in action. (We do it at the GSC too, I might add.) The neatest thing, however, was that they were able to identify a rather strange binding site, with two halves of a motif split by a variable distance. I haven’t quite figured out how that works in terms of DNA/protein structure, but it’s conceptually quite neat. They were able to show that the distance between the two halves of the structure varies by 10-20 bases, making it a challenge to identify for most traditional motif scanners. Nifty.
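Scanning for a split motif like that isn’t actually the hard part – the discovery is – but as an illustration, a regular expression with a bounded gap does the trick. (The half-site sequences below are made-up placeholders, not the real consensus.)

```python
# Toy sketch: scan for a two-part motif separated by a variable-length gap.
# The half-site sequences here are made-up placeholders, not a real consensus.
import re

LEFT_HALF = "TTCAGCACC"    # hypothetical left half-site
RIGHT_HALF = "GGACAG"      # hypothetical right half-site
MIN_GAP, MAX_GAP = 10, 20  # variable spacing between the two halves

pattern = re.compile(LEFT_HALF + "[ACGT]{%d,%d}" % (MIN_GAP, MAX_GAP) + RIGHT_HALF)

def find_split_motifs(sequence):
    return [(m.start(), m.end(), m.group()) for m in pattern.finditer(sequence)]

seq = "ACGT" * 5 + "TTCAGCACC" + "ACGTACGTACGTA" + "GGACAG" + "ACGT" * 5
print(find_split_motifs(seq))
# -> [(20, 48, 'TTCAGCACCACGTACGTACGTAGGACAG')]
```

A real scanner would also check the reverse complement and use a position weight matrix rather than exact strings, but the bounded variable gap is the part that trips up traditional single-block motif finders.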

Another neat thing – which I think everyone knows, but which was cool to hear has actually been shown – is that binding sites often line up with areas of high conservation across species. I use that as a test in my own work, but it was good to have it confirmed.

Finally, one of the things Dr. Wold mentioned was that they were interested in using the information in the directionality of reads in their analysis. Oddly enough, this was one of the first problems I worked on in ChIP-Seq, months ago, and I discovered several ways to handle it. I enjoyed knowing that there’s at least one thing my own ChIP-Seq code does that is unique, and possibly better than the competition. (-;
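Without getting into what my own code does, one common way to use directionality is worth sketching: forward-strand reads pile up upstream of the binding site and reverse-strand reads downstream, so the offset between the two distributions estimates the fragment length and refines the peak position. A toy version:

```python
# Toy sketch of one common use of read directionality (not necessarily what my
# own code does): forward-strand reads pile up upstream of the binding site and
# reverse-strand reads downstream, so the offset between the two modes
# estimates the fragment length and the midpoint refines the site position.
from collections import Counter

def directional_summary(reads):
    """reads: list of (five_prime_start, strand) with strand '+' or '-'."""
    fwd = Counter(pos for pos, strand in reads if strand == "+")
    rev = Counter(pos for pos, strand in reads if strand == "-")
    if not fwd or not rev:
        return None
    fwd_mode = fwd.most_common(1)[0][0]
    rev_mode = rev.most_common(1)[0][0]
    shift = rev_mode - fwd_mode
    return {"estimated_fragment_length": shift,
            "refined_site": fwd_mode + shift // 2}

reads = [(100, "+")] * 8 + [(102, "+")] * 3 + [(280, "-")] * 7 + [(285, "-")] * 2
print(directional_summary(reads))
# -> {'estimated_fragment_length': 180, 'refined_site': 190}
```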

As for the transcriptome work, there were only a couple of things worth mentioning. The Wold lab seems to be using Maq and a list of splice junctions assembled from annotated exons to map the transcriptome sequences. I’ve heard that before, actually, from someone at the GSC who is doing exactly the same thing. It’s a small world. I’m not really a fan of the technique, however. Yes, you’ll get a lot of the exon junction reads, but you’ll only find the ones you’re looking for – which is exactly the criticism all the next-gen people throw at microarrays. There has got to be a better solution… but I don’t yet know what it is. (We thought it was Exonerate, but we can’t seem to get it to work well, due to several bugs in the software. It’s clearly a work in progress.)
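Just to illustrate what goes into that kind of junction reference (a toy sketch, not the Wold lab’s or the GSC’s actual pipeline): for each pair of adjacent annotated exons, you stitch the last read_length−1 bases of the upstream exon to the first read_length−1 bases of the downstream one, so that any read spanning the junction fits entirely within the junction sequence.

```python
# Toy sketch of building a splice-junction reference from annotated exons
# (illustration only; not the Wold lab's or the GSC's actual pipeline). Handles
# consecutive exons of a single transcript on the forward strand only.
def junction_sequences(chrom_seq, exons, read_length=36):
    """exons: ordered list of (start, end) tuples, 0-based, end exclusive."""
    flank = read_length - 1   # enough that a junction-spanning read fits inside
    junctions = []
    for (s1, e1), (s2, e2) in zip(exons, exons[1:]):
        left = chrom_seq[max(s1, e1 - flank):e1]
        right = chrom_seq[s2:min(e2, s2 + flank)]
        junctions.append(("junc_%d_%d" % (e1, s2), left + right))
    return junctions

# Tiny fake chromosome with two short exons, and an unrealistically short read.
chrom = "A" * 50 + "CCCCGGGG" + "T" * 40 + "AAAATTTT" + "G" * 50
exons = [(50, 58), (98, 106)]
for name, seq in junction_sequences(chrom, exons, read_length=6):
    print(name, seq)
# -> junc_58_98 CGGGGAAAAT
```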

Anyhow, I think I’m going to stop here. I’ll just sum it all up by saying it was a pretty good talk, and it’s given me lots of things to think about. I’m looking forward to getting back to coding tomorrow.

>New ChIP-Seq tool from Illumina

>Ok, I had to blog this. Someone on the SeqAnswers forum brought it to my attention that Illumina has a new tool for ChIP-Seq experiments. That in itself doesn’t bother me – the more people in this space, the faster we learn about what makes us tick.

What surprises me, though, is the tool itself (the BeadStudio data analysis software – ChIP sequencing module). It’s implemented only for Windows, for one. (Don’t most self-respecting scientists use Macs or Linux these days? Or at least use and develop tools that can run cross-platform?) Second, the feature set appears to be a re-implementation of the UCSC Genome Browser. Given the choice between the two, I don’t see any reason to buy the Illumina version. (Yes, you have to pay for it, whereas UCSC is free and flexible.) I can’t tell if it loads BED files or WIG files, but the screenshots show a rather inflexible tool that looks like a graphical version of Gap4 or Consed. I’m not particularly impressed.

Worse still, I can’t see this being implemented in a pipeline. If you’re processing hundreds of ChIP-Seq experiments in a year, or thousands once this technique really hits its stride, why would you want to force it all through a GUI? I just don’t get it.

Well, what do I know? Maybe there’s a big market for people out there who don’t want free cross-platform tools, and would rather pay for a brand name science application than use something that works. Come to think of it, I’m willing to bet there are a few pharma companies out there who do think like that, and Illumina is likely to conquer that market with their tool. Happy clicking, Vista users.