>ChIP-Seq normalization.

I’ve spent a lot of time working on ChIP-Seq controls recently, and wanted to raise an interesting point that I haven’t seen addressed much: how to normalize well. (I don’t claim to have read ALL of the ChIP-Seq literature, and someone may have already beaten me to the punch… but I’m not aware of anything published on this yet.)

The question of normalization occurs as soon as you raise the issue of controls or comparing any two samples. You have to take it into account when doing any type of comparison, really, so it’s somewhat important as the backbone to any good second-gen work.

The most common thing I’ve heard to date is to simply normalize by the number of tags in each data set. As far as I’m concerned, that really only works when your data sets come from the same library, or from two highly correlated samples – that is, when nearly all of your tags come from the same locations.

However, this method fails as soon as you move into doing a null control.

Imagine you have two samples. One is your null control, containing the “background” sequences; when you sequence it, you get ~6M tags, all of which represent noise. The other is your ChIP-Seq sample, so some background plus an enriched signal; when you sequence it, hopefully ~90% of the tags come from your signal and ~10% from the background, giving ~8M tags – of which ~0.8M are noise. When you do a compare, normalizing by the total number of tags clearly doesn’t do justice to the relationship between the two samples.
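
To make the arithmetic concrete, here’s a minimal sketch (in Java, using the hypothetical numbers above – this isn’t FindPeaks code) showing how far the naive tag-count ratio lands from the scaling factor you’d actually want for the background:

```java
// A minimal sketch of why per-tag-count scaling misleads when one
// library is pure background. The numbers are the hypothetical ones
// from the post, not real data.
public class TagCountNormalization {
    public static void main(String[] args) {
        double controlTags = 6_000_000;        // null control: all noise
        double chipTags    = 8_000_000;        // ChIP sample: signal + noise
        double chipNoise   = 0.10 * chipTags;  // ~0.8M background tags

        // Naive normalization: scale the control to the ChIP library size.
        double naiveRatio = chipTags / controlTags;        // ~1.33

        // What you'd actually want: scale the control so it matches only
        // the background portion of the ChIP library.
        double backgroundRatio = chipNoise / controlTags;  // ~0.13

        System.out.printf("naive ratio:      %.3f%n", naiveRatio);
        System.out.printf("background ratio: %.3f%n", backgroundRatio);
        // The naive ratio over-scales the control by ~10x, enough to
        // swamp real enrichment in any subtraction or comparison.
    }
}
```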

So what’s the real answer? Actually, I’m not sure – but I’ve come up with two different methods of doing controls in FindPeaks: one normalizes by fitting a (symmetrical) linear regression through the points that are found in both samples; the other identifies the points that appear in both samples and sums up their peak heights. Oddly enough, both work well, but in different scenarios. And clearly, both appear (so far) to work better than just assuming the number of tags is a good normalization ratio.
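
For the curious, here’s a rough sketch of both ideas, assuming the peak heights at positions shared between the two samples have already been extracted. The names are invented for illustration – this isn’t the actual FindPeaks implementation:

```java
// Sketch of two normalization methods over peaks called in BOTH samples.
public class SharedPeakNormalization {

    // Method 1: symmetrical regression through the origin. Unlike
    // ordinary least squares, this slope simply inverts if you swap
    // x and y: slope = sqrt(sum(y^2) / sum(x^2)).
    static double symmetricalSlope(double[] x, double[] y) {
        double sumXX = 0, sumYY = 0;
        for (int i = 0; i < x.length; i++) {
            sumXX += x[i] * x[i];
            sumYY += y[i] * y[i];
        }
        return Math.sqrt(sumYY / sumXX);
    }

    // Method 2: ratio of summed peak heights over the shared positions.
    static double summedHeightRatio(double[] x, double[] y) {
        double sumX = 0, sumY = 0;
        for (int i = 0; i < x.length; i++) {
            sumX += x[i];
            sumY += y[i];
        }
        return sumY / sumX;
    }

    public static void main(String[] args) {
        // Hypothetical peak heights at positions found in both samples.
        double[] control = { 10, 22, 35, 41, 18 };
        double[] sample  = { 21, 45, 66, 85, 40 };
        System.out.printf("regression factor:    %.3f%n",
                symmetricalSlope(control, sample));
        System.out.printf("summed-height factor: %.3f%n",
                summedHeightRatio(control, sample));
    }
}
```

Either way, the scaling factor is derived only from the positions where the two samples can actually be compared, rather than from total tag counts.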

More interesting yet is that the normalization ratio seems to change dramatically between chromosomes (as does the number of mapped reads), which leads you to ask why that might be. Unfortunately, I’m really not sure why it is. Why should one chromosome be over-represented in an “input DNA” control?

Either way, I don’t think any of us are getting to the bottom of the rabbit hole of doing comparisons or good controls yet. On the bright side, however, we’ve come a LONG way from just assuming peak heights should fall into a nice Poisson distribution!

>New ChIP-seq control

Ok, so I’ve finally implemented and debugged a second type of control in FindPeaks… It’s different, and it seems to be more sensitive, requiring fewer assumptions about the data set itself.

What it needs, now, is some testing. Is anyone out there willing to try a novel form of control on a dataset that they have? (I won’t promise it’s flawless, but hey, it’s open source, and I’m willing to bug fix anything people find.)

If you do, let me know, and I’ll tell you how to activate it. Let the testing begin!

>Why peak calling is painful.

>In discussing my work, I’m often asked how hard it is to write a peak calling algorithm. The answer usually surprises people: It’s trivial. Peak calling itself isn’t hard. However, there are plenty of pitfalls that can surprise the unwary. (I’ve found myself in a few holes along the way, which have been somewhat challenging to get out of.)

The pitfalls, when they do show up, can be very painful – masking how trivial the underlying task really is.

In reality, the three most frustrating things about peak calling are:

  1. Maintaining the software
  2. Peak calling without unlimited resources (e.g., 64 GB of RAM)
  3. Keeping on the cutting edge

On the whole, each of these things is a separate software design issue worthy of a couple of seconds of discussion.

When it comes to building software, it’s really easy to fire up a “one-off” script. Anyone can write something that can be tossed aside when they’re done with it – but code re-use and recycling is a skill. (And an important one.) Writing your peak finder to be modular is a lot of work, and a huge amount of time investment is required to keep the modules in good shape as the code grows. A good example of why this is important can be illustrated with file formats. Since the first version of FindPeaks, we’ve transitioned through two versions of Eland output, Maq’s .map format, and now on to SAM and BAM (not to mention BED, GFF, and several other more or less obscure formats). In each case, we’ve been able to simply write a new iterator and plug it into the existing modular infrastructure. In fact, SAM support was added quite rapidly by Tim with only a few hours of investment. That wouldn’t have been possible without the massive upfront investment in good modularity.
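
As a sketch of what that modularity buys (with invented class names – not the real FindPeaks internals): the core only ever consumes an iterator of aligned reads, so supporting a new format means writing one new parser class:

```java
import java.util.Iterator;
import java.util.List;

// Sketch of the modular-iterator idea: the core consumes a stream of
// aligned reads and never knows which file format produced them.
class AlignedRead {
    final String chromosome;
    final int start, end;
    AlignedRead(String chromosome, int start, int end) {
        this.chromosome = chromosome;
        this.start = start;
        this.end = end;
    }
}

// Each new file format only has to supply one of these.
interface ReadIterator extends Iterator<AlignedRead> { }

// Adding BED support, say, means writing a single parser class...
class BedReadIterator implements ReadIterator {
    private final Iterator<String> lines;
    BedReadIterator(Iterator<String> lines) { this.lines = lines; }
    public boolean hasNext() { return lines.hasNext(); }
    public AlignedRead next() {
        String[] f = lines.next().split("\t");  // chrom, start, end
        return new AlignedRead(f[0], Integer.parseInt(f[1]),
                Integer.parseInt(f[2]));
    }
}

// ...while the downstream peak-calling code stays untouched.
public class PeakCallerCore {
    static int countReads(ReadIterator reads) {
        int n = 0;
        while (reads.hasNext()) {
            reads.next();  // in real life: feed into the peak caller
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        ReadIterator reads = new BedReadIterator(
                List.of("chr1\t100\t135", "chr1\t120\t155").iterator());
        System.out.println(countReads(reads) + " reads parsed");
    }
}
```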

The second pitfall is memory consumption – and this one is somewhat more technical. When dealing with sequencing reads, you’re faced with a choice: you either sort the reads and then walk through them one at a time, determining where they land – OR – you pre-load all the reads and then move along the chromosome. The first model takes very little memory but requires a significant amount of pre-processing, which I’ll come back to in a moment. The second requires much less CPU time – but is intensely memory-hungry.

If you want to visualize this, the first method organizes all of your reads by position, then walks down the length of the chromosome with a moving window, only caring about the reads that fall into the window at any given point in time. This is how FindPeaks works now. The second builds a model of the chromosome, much like a “pileup” file, which can then be processed however you like. (This is how I do SNP calling.) In theory, it shouldn’t matter which one you use, as long as all your reads can be sorted correctly. The first can usually be run with a limited amount of memory, depending on the memory structures you use, whereas the second’s footprint is pretty much determined by the size of the chromosomes you’re using (multiplied by a constant that also depends on the structures you use).
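
Here’s a toy version of the first model, assuming reads arrive already sorted by start position: a window moves along the chromosome, admitting reads as their starts are reached and evicting them once passed, so memory tracks local read depth rather than chromosome length. (Positions and step size are made up.)

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the low-memory model: reads arrive sorted by start,
// and only the reads overlapping the current position stay in memory.
public class SlidingWindowDepth {
    public static void main(String[] args) {
        // Sorted read intervals {start, end}; hypothetical data.
        int[][] reads = { {100, 135}, {110, 145}, {150, 185}, {152, 187} };
        List<int[]> window = new ArrayList<>();
        int next = 0;
        for (int pos = 100; pos <= 190; pos += 10) {
            final int p = pos;
            // Admit reads whose start position has been reached...
            while (next < reads.length && reads[next][0] <= p) {
                window.add(reads[next++]);
            }
            // ...and evict reads that end before the current position,
            // so memory scales with depth, not chromosome length.
            window.removeIf(r -> r[1] < p);
            System.out.println("pos " + p + ": depth " + window.size());
        }
    }
}
```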

Unfortunately, using the first method isn’t always as easy as you might expect. For instance, when doing alignments with transcriptomes (or indels), you often have gapped reads. An early solution to this in FindPeaks was to break each portion of the read into separate aligned reads and process them individually – which works well when they’re correctly sorted. Unfortunately, newer formats no longer allow that: with a “pre-sorted” SAM/BAM file, you can now encounter multi-part reads, but there’s no real option of pre-fragmenting those reads and re-sorting them. Thus, FindPeaks now has an additional layer that must read ahead and buffer SAM reads in order to make sure that the next one returned is in the correct order. (You can get odd bugs otherwise – and yes, there are many other potential solutions.)
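
The buffering layer works roughly like this sketch (again with invented names – real SAM handling is considerably messier): fragments from split reads go into a priority queue keyed on start position, and a fragment is only released once the file has advanced far enough that nothing earlier-starting can still arrive:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the read-ahead buffer: gapped reads are split into aligned
// fragments, which may start later than reads that follow them in the
// file, so fragments are held back until they are provably in order.
public class FragmentReorderBuffer {
    record Fragment(int start, int end) { }

    public static void main(String[] args) {
        // Input sorted by READ start; the second read is gapped and
        // yields a fragment (400-450) starting after the third read.
        int[][][] readsAsFragments = {
            { {100, 150} },
            { {200, 250}, {400, 450} },   // gapped read, two fragments
            { {300, 350} },
        };
        PriorityQueue<Fragment> buffer =
                new PriorityQueue<>(Comparator.comparingInt(Fragment::start));
        for (int[][] read : readsAsFragments) {
            for (int[] f : read) {
                buffer.add(new Fragment(f[0], f[1]));
            }
            // Safe to emit any fragment starting before the current
            // read's start: no later read can start earlier than this.
            int horizon = read[0][0];
            while (!buffer.isEmpty() && buffer.peek().start() < horizon) {
                System.out.println("emit " + buffer.poll());
            }
        }
        while (!buffer.isEmpty()) {
            System.out.println("emit " + buffer.poll());
        }
    }
}
```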

Moving along to the last pitfall: the one thing that people want out of a peak finder is that it’s able to do the latest and greatest methods – and do them ahead of everyone else. That on its own is a near-impossible task. To keep a peak finder relevant, you not only need to implement what everyone else is doing, but also do things that they’re not. For a group of 30 people, that’s probably not too hard, but for academic peak callers, that can be a challenge – particularly since every user wants something subtly different from the next.

So, when people ask how hard it is to write their own peak caller, that’s the answer I give: It’s trivial – but a lot of hard work. It’s rewarding, educational and cool, but it’s a lot of work.

Ok, so is everyone ready to write their own peak caller now? (-;

>how recently was your sample sequenced?

>One more blog for the day. I was postponing writing this one because it’s been driving me nuts, and I thought I might be able to work around it… but clearly I can’t.

With all the work I’ve put into the controls and compares in FindPeaks, I thought I was finally clear of the bugs and pains of working on the software itself – and I think I am. Unfortunately, what I didn’t count on was that the data sets themselves may not be amenable to this analysis.

My control finally came off the sequencer a couple of weeks ago, and I’ve been working with it for various analyses (SNPs and the like – it’s a WTSS data set)… and I finally plugged it into my FindPeaks/FindFeatures pipeline. Unfortunately, while the analysis is good, the sample itself is looking pretty bad. In looking at the data sets, the only thing I can figure is that the year and a half of sequencing chemistry changes has made a big impact on the number of aligning reads and the quality of the reads obtained. I no longer get a linear correlation between the two libraries – it looks partly sigmoidal.

Unfortunately, there’s nothing to do except re-sequence the sample. But really, I guess that makes sense: if you’re doing a comparison between two data sets, you need them to have as few differences as possible.

I just never realized that the time between samples also needed to be controlled. Now I have a new question when I review papers: how much time elapsed between the sequencing of your sample and its control?

>Community

>This week has been a tremendous confluence of concepts and ideas around community. Not that I’d expect anyone else to notice, but it really kept building towards a common theme.

The first was just a community of co-workers. Last week, my lab went out to celebrate a lab-mate’s successful defense of her thesis (Congrats, Dr. Sleumer!). During the second round of drinks (Undrinkable dirty martinis), several of us had a half hour conversation on the best way to desalinate an over-salty martini. As weird as it sounds, it was an interesting and fun conversation, which I just can’t imagine having with too many people. (By the way, I think Obi’s suggestion wins: distillation.) This is not a group of people you want to take for granted!

The second community-related event was an invitation to move my blog over to a larger community of bloggers. While I’ve temporarily declined, it raised the question of what kind of community I have while I keep my blog on my own server. In some ways, it leaves me isolated, although it does provide a “distinct” source of information, easily distinguishable from other people’s blogs. (One of the reasons for not moving to the larger community is the lack of distinguishing marks – I don’t want to sink into a “borg” experience with other bloggers and just become assimilated entirely.) Is it worth moving over to reduce the isolation and become part of a bigger community, even if it means losing some of my identity?

The third event was a talk I gave this morning. I spent a lot of time trying to put together a coherent presentation – and ended up talking about my experiences without discussing the actual focus of my research. Instead, it was on the topic of “successes and failures in developing an open source community,” as applied to the Vancouver Short Read Analysis Package. Yes, I’m happy there is a (small) community around it, but there is definitely room for improvement.

Anyhow, at the risk of babbling on too much, what I really wanted to say is that communities are all around us, and we have to seriously consider our impact on them, and the impact they have on us – not to mention how we integrate into them, both in our work and outside. If you can’t maximize your ability to motivate them (or their ability to motivate you), then you’re at a serious disadvantage. How we balance all of that is an open question, and one I’m still working hard at answering.

I’ve attached my presentation from this morning, just in case anyone is interested. (I’ve decorated it with pictures from the South Pacific, in case all of the plain text is too boring to keep you awake.)

Here it is (it’s about 7 MB).

>FindPeaks 4.0

Well, I’ve finally gotten to it: the tag for FindPeaks 4.0. At this point, I’m more or less satisfied with what made it into this release: saturation, controls, compares, and a whole lot of changes to the underlying machinery. The documentation is still going through some changes (I have another two flags to add in), and there’s a lot more clarification to do on what some of the parameters actually accomplish, but it’s now in a reasonable state.

Despite the milestone, this project is really a constant evolution. I’m already thinking about what should be in the next version (4.1?): Support for SAM/BAM, “peakless peak calling” for regions instead of peaks, a vastly upgraded FindFeatures code and a host of small changes that I had thought weren’t worth the effort for this particular release. I’m even considering a GUI, if I can squeeze it in. (If anyone would like to help out on that project, I’d be thrilled to add them to the project!)

At this point, I’m happy to say I’m not aware of any outstanding coding bugs – although I do take seriously the open bug remarking that the documentation is insufficient. I’ve been working on improving it and reorganizing the manual, which should be done in the next couple of days. Once that’s done, I’ll jump back into using my code to do some analysis of my own. There are a few really neat things, based on work from my poster, that I’d like to play with. I guess that’s what they say about coders: when you write software for yourself, you never lack the motivation to add in one more feature. (=

>Poster – reprise

>In an earlier post, I said I’d eventually get around to putting up a thumbnail of the poster that I presented at the Canadian Institutes of Health Research National Research Poster Competition. (Yes, the word “research” appears twice in that sentence.) After a couple days of being busy with other stuff, I’ve finally gotten around to it.

I’m also happy to say that the poster was well received, despite the unconventional appearance. It was awarded an Award of Excellence (Silver category) from the judges.

[thumbnail of poster]

>Science Cartoons – 3

>I wasn’t going to do more than one comic a day, but since I just published it into the FindPeaks 4.0 manual today, I may as well put it here too, and kill two birds with one stone.

Just to clarify: under copyright law, you can certainly re-use my images for teaching purposes or your own private use (that’s generally called “fair use” in the US, and copyright laws in most countries have similar exceptions), but you can’t publish them, take credit for them, or profit from them without discussing it with me first. However, since people browse through my page all the time, I figure I should mention that I do hold copyright on the pictures, so don’t steal them, ok?

Anyhow, Comic #3 is a brief description of how the compare in FindPeaks 4.0 works. Enjoy!

>Science Cartoons – 1

>I’ve been busy putting together a poster for a student conference I’m attending next week in Winnipeg, which has distracted me from just about everything else I had planned to accomplish. Not to worry, though, I’ll be back on it in a day or two.

Anyhow, part of the fun this time around is that the poster competition includes criteria for judging that involve how pretty the poster is – so I decided to go all out and learn to draw with Inkscape. Instead of the traditional background section on a poster, I went with a comic theme. It’s a great way to use text and figures to get across complex topics very quickly. So, for the next couple of days, I thought I’d post some of them, one at a time.

Here’s the first, called “Second Generation Sequencing”. It’s my least favorite of the bunch, since the images feel somewhat sloppy, but it’s not a bad try for a first pass. Yes, I do own the copyrights – and you do need to ask permission to re-use them.

>Rumours..

>Apparently there’s a rumour going around that I think FindPeaks isn’t a good software program, and that I’ve said so on my blog. I don’t know who started this, or where it came from, but I would like to set the record straight.

I think FindPeaks is one of the best ChIP-Seq packages currently available. Yes, I would like to have better documentation, yes, there are occasionally bugs in it, and yes, the fact that I’m the only full-time developer on the project is a detriment…. But those are only complaints, and they don’t diminish the fact that I think FindPeaks can out-peak-call, out-control, out-compare…. and will outlast, outwit, and outplay all of its opponents. (=

FindPeaks is actually a VERY well written piece of code, and has the flexibility to do SO much more than just ChIP-Seq. If I had my way, I’d have another developer or two working with the code to expand many of the really cool features that it already has.

If people are considering using it, I’m always happy to help get them started, and I’m thrilled to add more people to the project if they would like to contribute.

So, just to make things clear – I think FindPeaks (and the rest of the Vancouver Short Read Analysis Package) are great pieces of software, and are truly worth using. If you don’t believe me, send me an email, and I’ll help make you a believer too. (=