>Quality vs Quantity

>Today was an interesting day, for many reasons. The first was the afternoon tour for high-school students who came by the Genome Sciences Centre and the labs. I’ve been taking part in an outreach program for students at two local high schools, which has involved visiting the students to teach them a bit about the biology and computing we do, as well as the tours that bring them by to see us “at work.” Honestly, it’s a lot of fun, and I really enjoy interacting with the kids. Their questions are always amusing and insightful – and are often a lot of fun to answer well. (How do you explain how the academic system works in 2 minutes or less?)

For my part, I introduced the kids to Pacific Biosciences’ SMRT technology. I came up with a relatively slick monologue that goes well with a video from PacBio. (If you haven’t seen their video, you should definitely check it out.) The kids seem genuinely impressed with the concept, and really enjoy the graphics – although they enjoy the desktop effects with Ubuntu too… so maybe that’s not the best criterion to use for evaluation.

Anyhow, aside from that distraction, I’ve also had the pleasure of working on some of my older code today. After months of people at the GSC ignoring the fact that I’d already written code to solve many of the problems they were trying to develop software for, a few people have decided to pick up some of the pieces of the Vancouver Short Read Package and give it a test spin.

One of them was looking at FindFeatures – which I’ve used recently to find exons of interest in WTSS libraries – and the other at the PSNPAnalysiPipeline code – which does some neat metrics for WTSS.

The fun part of it is that the code for both of those applications was written months ago – in some cases before I had the data to test it on. Revisiting them now and actually putting the code to use, I was really surprised by the number of options I’d tossed in to account for situations that hadn’t even been seriously anticipated. Someone renamed all of your fasta files? No worries, just use the -prepend option! Your junction library has a completely non-standard naming? No problem, just use the -override_mapname option! Some of your MAQ-aligned reads have indels? Well, ok, I can give you a 1-line patch to make that work too.

I suppose that really makes me wonder: if I were writing one-off scripts, which would obviously lack this kind of flexibility, I’d be able to move faster and more nimbly across the topics that interest me. (Several other grad students do that, and are well published because of it.) Is that a trade-off I’m willing to make, though?

Someone really needs to hold a forum on this topic: “Grad students: quality or quantity?” I’d love to sit through those panel discussions. As for myself, I’m still somewhat undecided on the issue. I’d love more publications, but having the code just work (which gets harder and harder as the codebase hits 30k lines) is also a nice thing. While I’m sure users of my software are happy when these options exist, I wonder what my supervisor thinks of the months I’ve spent building all of these tools – and not writing papers.

Ah well, I suppose when it comes time to defend, I’ll find out exactly what he thinks about that issue. :/

>2 weeks of neglect on my blog = great thesis progress.

>I wonder if my blogging output is inversely proportional to my progress on my thesis. I stopped writing two weeks ago for a little break, and ended up making big steps forward. The vast majority of my work went into FindPeaks, which included the following:

  • A complete threaded Saturation analysis for next-gen libraries.
  • A method of comparing next-gen libraries to identify peaks that are statistically significant outliers. (It’s also symmetric, unlike linear-regression-based methods.)
  • A better control method
  • A whole new way of analysing WTSS data, which gives statistically valid expression differences

And, of course, many many other changes. Not everything is bug-free yet, but it’s getting there. All that’s left on my task list is debugging a couple of things in the compare mode, relating to peaks present in only one of the two libraries, and an upgrade to my FDR cutoff prediction methods. Once those are done, I think I’ll be ready to push out FindPeaks 4.0. YAY!

Actually, what was surprising to me was the sheer amount of work that I’ve done on this since January. I compiled the change list since my last “quarterly report” for a project that used FindPeaks (but doesn’t support it, ironically…. why am I doing reports for them again?) and came up with 24 pages of commit messages – over 575 commits. Considering the amount of work I’ve done on my actual projects, away from FindPeaks, I think I’ve been pretty darn productive.

Yes, I’m going to take this opportunity to pat myself on the back in public for a whole 2 seconds… ok, done.

So, overall, blogging may not have been distracting me from my work – even at the height of my blogging (around AGBT), I was still getting lots done – but the past two weeks have really been a help. I’ll be back to blogging all the good stuff on Monday. And I’m looking forward to doing some writing now, on some of the cool things in FP4.0 that haven’t made it into the manual… yet.

Anyone want some fresh ChIP-Seq results? (-;

>Nifty little trick for debugging frozen applications

>This trick is just too cool not to mention. I was trying to debug an application that was getting stuck in an endless loop the other day. It was a rather complicated set of changes that was required, and I had no idea where the program was getting stuck.

In the past, I would have just ended the program with a control-c, and then started dropping in print statements until I could isolate exactly where the program was getting stuck. Instead, I stumbled upon a very nifty little trick: using the kill command to make the running program dump a stack trace for every thread to the screen (without actually killing it), with the command:

kill -3 [pid]

For Java code running from class files, the thread dump shows you exactly which line is being executed in each thread, allowing you to find out precisely where the problem is – making debugging go much more quickly.
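
If you want to see it in action, here’s a trivial, hypothetical test program (nothing to do with my real code) that gets stuck on purpose:

    // TightLoop.java - a deliberately stuck program to try "kill -3" on.
    public class TightLoop {
        public static void main(String[] args) {
            long counter = 0;
            while (true) { // the endless loop we want to locate
                counter++;
            }
        }
    }

Compile and run it (javac TightLoop.java; java TightLoop), then send kill -3 to the JVM’s pid from another terminal. The JVM catches the SIGQUIT, prints a dump of every thread to stdout – with the main thread shown sitting right at the counter++ line – and then carries on running.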

Anyhow, I haven’t yet checked whether this works on a .jar file, or what else you can do with a quick “kill -3”, but this certainly broadens my toolkit of debugging utilities, and gives me a whole new respect for the kill signals. I may have to test out a few of the other ones….

>Universal format converter for aligned reads

>Last night, I was working on FindPeaks when I realized what an interesting treasure trove of libraries I was sitting on. I have readers and writers for many of the most common aligned-read formats, and I have several programs that perform useful functions. That raised the distinctly interesting possibility that all of them could be applied together in one shot… and so I did exactly that.

I now have an interesting set of utilities that can be used to convert from one file format to another: bed, gff, eland, extended eland, MAQ .map (read only), mapview, bowtie…. and several other more obscure formats.

For the moment, the “conversion utility” forces the output to bed file format (since that’s the file type with the least information, and I don’t have to worry about unexpected information loss), which can then be viewed with the UCSC browser, or interpreted by FindPeaks to generate wig files. (BED files are really the lowest common denominator of aligned information.) But why stop there?

Why not add a very simple functionality that lets one format be converted to any other? Actually, there’s no good reason not to, but it does involve some heavy caveats. Conversion from one format to another is relatively trivial until you hit the quality strings. Since these aren’t being scaled or altered, you could end up with some rather bizarre conversions unless they’re handled cleanly. Unfortunately, doing this scaling is such a moving target that it’s just not possible to keep up with it and do all the other development work I have on my plate. (I think I’ll be asking for a co-op student for the summer to help out.)
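
To make the shape of the thing concrete, here’s a bare-bones sketch of the idea – the class names are invented for illustration, and this is not the actual API of the package. Note that the quality string is carried through verbatim, which is exactly where the bizarre conversions would creep in:

    // FormatConverter.java - an illustrative sketch only; the names are
    // invented and this is not the real package's API.
    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.PrintWriter;

    public class FormatConverter {

        // The common denominator carried between formats.
        static class AlignedRead {
            String chrom; int start; int end; String name; char strand;
            String quality; // copied verbatim - NOT rescaled between formats!
        }

        interface ReadSource { AlignedRead next() throws Exception; }
        interface ReadSink { void write(AlignedRead r); }

        // Example input: a simple tab-delimited format
        // (chrom, start, end, name, strand, quality).
        static class TabReader implements ReadSource {
            private final BufferedReader in;
            TabReader(String path) throws Exception {
                in = new BufferedReader(new FileReader(path));
            }
            public AlignedRead next() throws Exception {
                String line = in.readLine();
                if (line == null) return null;
                String[] f = line.split("\t");
                AlignedRead r = new AlignedRead();
                r.chrom = f[0];
                r.start = Integer.parseInt(f[1]);
                r.end = Integer.parseInt(f[2]);
                r.name = f[3];
                r.strand = f[4].charAt(0);
                r.quality = f[5];
                return r;
            }
        }

        // Example output: BED - qualities are simply dropped, so nothing
        // can go wrong on this path.
        static class BedWriter implements ReadSink {
            private final PrintWriter out;
            BedWriter(PrintWriter out) { this.out = out; }
            public void write(AlignedRead r) {
                out.println(r.chrom + "\t" + r.start + "\t" + r.end + "\t"
                        + r.name + "\t0\t" + r.strand);
            }
        }

        public static void main(String[] args) throws Exception {
            ReadSource reader = new TabReader(args[0]); // swap in an Eland/MAQ/bowtie reader here
            try (PrintWriter out = new PrintWriter(args[1])) {
                ReadSink writer = new BedWriter(out);
                for (AlignedRead r = reader.next(); r != null; r = reader.next()) {
                    writer.write(r);
                }
            }
        }
    }

Adding a new pair of formats is then just a matter of writing one reader and one writer – and deciding what to do about the qualities.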

Anyhow, I’ll be including this nifty utility in my new tags. Hopefully people will find the upgraded conversion utility to be helpful to them. (=

>FindPeaks 3.3… continued

>Patch, compile, read bug, search code, compile, remember to patch, compile, test, find bug, realized it’s the wrong bug, test, compile, test….

Although I really enjoy working on my apps, sometimes a whole day goes by where tons of changes are made, and I really don’t feel like I’ve gotten much done. I suppose it’s more about the scale of things left to do than the number of tasks completed. I’ve managed to solve a few mysteries and make an impact for some people using the software, but I haven’t gotten around to testing the big changes I’ve been working on for a few days: using different compare mechanisms for FindPeaks.

(One might then ask why I’m blogging instead of doing that testing… and that would be a very good question.)

Some quick ChIP-Seq things on my mind:

  • Samtools: there is a very complete Java/Samtools/Bamtools API that I could be integrating, but after staring at it for a while, I’ve realized that the complete lack of documentation on how to integrate it is really slowing the effort down. I will probably return to it next week.
  • Compare and Control: It seems people are switching to this paradigm on several other projects – I just need to get the new compare mechanism in, and then integrate it with the control at the same time. That will provide a really nice method for doing both at once, which is really key for moving forward.
  • Eland “extended” format: I ended up reworking all of the Eland Export file functions today. All of the original files I worked with were pre-sorted and pre-formatted. Unfortunately, that’s not how they exist in the real world. I’ve now updated the sort and separate-chromosome functions for eland ext. I haven’t done much testing on them, unfortunately, but that’s coming up too.
  • Documentation: I’m so far behind – writing one small piece of manual a day seems like a good target – I’ll try to hold myself to it. I might catch up by the end of the month, at that pace.

Anyhow, lots of really fun things coming up in this version of FindPeaks… I just have to keep plugging away.

>No More Maq?

>Another grad student at the GSC forwarded an email to our mailing list the other day, which was in turn from the maq-help mailing list. Unfortunately, the link on the maq-help mailing list takes you to another page, which incidentally (and erroneously) complains that FindPeaks doesn’t work with Maq .map files – which it does. Instead, I suggest checking out this post on SeqAnswers from Li Heng, the creator of Maq, which has a very similar message.

The main gist of it is that the .map file format will be deprecated, and there will be no new versions of the Maq software package in the future. Instead, they will be working on two other projects (from the forwarded email):

  1. Samtools: replaces maq’s (reference-based) “assembly”
  2. bwa: replaces maq’s “mapping” for whole human genome alignment.

I suppose it means that eventually FindPeaks should support the Samtools formats, which I’ll have to look into at some point. For those of you who are still using Maq, you may need to start following those projects as well, simply because it raises the question of long-term Maq support. As with many early generation Bioinformatics tools, we’ll just have to be patient and watch how the software landscape evolves.

It probably also means that I’ll have to start watching the Samtools development more carefully for use with my thesis project – many of the tools they are planning seem to replace the ones I’ve already developed in the Vancouver Short Read Alignment Package. Eventually, I’ll have to evaluate both sets against each other. (That could also be an interesting project.)

While this was news to me, it’s probably no more than the expected churn of a young technology field. I’m sure it’s not going to be long until even the 2nd generation sequencing machines themselves evolve into something else.

>The Future of FindPeaks

>At the end of my committee meeting last month, my advisors suggested I spend less time on engineering questions, and more time on the biology of the research I’m working on. Since that means spending more time on the cancer biology project, and less on FindPeaks, I’ve been spending some time thinking about how I want to proceed – and I think the answer is to work smarter on FindPeaks. (No, I’m not dropping FindPeaks development. It’s just too much fun.)

For me, the amusing part of it is that FindPeaks is already on its 4th major structural iteration. Matthew Bainbridge wrote the first; I duplicated it by re-writing its code for the second version; then came the first round of major upgrades in version 3.1; and then I did the massive cleanup that resulted in the 3.2 branch. After all that, why would I want to write another version?

Somewhere along the line, I’ve realized that there are several major engineering changes that could be made that would make FindPeaks faster, more versatile, and able to provide more insight into the biology of ChIP-Seq and similar experiments. Most of the changes are a reflection of the fact that the underlying aligners being used have changed. When I first got involved, we were using Eland 0.3 (?), which was simple compared to the tools we now have available. It just aligned each fragment individually and spat out the results, which left the filtering and sorting up to FindPeaks. Thus, early versions of FindPeaks were centred on those basic operations. As we moved to sorted formats like .map and _sorted.txt files, those issues have mostly disappeared, allowing more emphasis to be placed on the statistics and functionality.

At this point, I think we’re coming to the next generation of biology problems – integrating FindPeaks into the wider toolset and generating real knowledge about what’s going on in the genome – and I think it’s time for FindPeaks to evolve to fill that role, growing to make better use of the information available in the sorted aligner results.

Ever since the end of my exam, I haven’t been able to stop thinking of neat applications for FindPeaks and the rest of my tool kit – so, even if I end up focussing on the cancer biology that I’ve got in front of me, I’m still going to find the time to work on FindPeaks, to better take advantage of the information that FindPeaks isn’t currently using.

I guess that desire to do things well, and to get at the answers that are hidden in the data is what drives us all to do science. And probably what drives grad students to work late into the night on their projects…. I think I see a few more late nights in the near future. (-;

>SNP callers.

>I thought I’d switch gears a bit this morning. I keep hearing people say that the next project their company/institute/lab is going to tackle is a SNP calling application, which strikes me as odd. I’ve written at least 3 over the last several months, and they’re all trivial. They seem to perform as well as anyone else’s SNP calls, and if they take up more memory, I don’t think that’s too big a problem – machines with lots of RAM are relatively cheap these days.

What really strikes me as odd is that people think there’s money in this. I just can’t see it. The barrier to creating a new SNP calling program is incredibly low. I’d suggest it’s even lower than creating an aligner – and there are already 20 or so of those out there. There’s even an aligner being developed at the GSC (which I don’t care for in the slightest, I might add) that works reasonably well.

I think the big thing that everyone is missing is that it’s not the SNP calling that’s important – it’s the SNP management. In order to do SNP filtering, I have a huge postgresql database with SNPs from a variety of sources, in several large tables, which have to be compared against the SNPs and gene calls from my data set. Even then, I would have a very difficult time handing off my database to someone else – it’s scalable, but completely un-automated, and has nothing but the psql interface, which is clearly not the most user-friendly. If I were going to hire a grad student and allocate money to software development, I wouldn’t spend the money on a SNP caller and have the grad student write the database – I’d put the grad student to work on his own SNP caller and buy a SNP management tool. Unfortunately, it’s a big project, and I don’t think there’s a single tool out there that would begin to meet the needs of people managing output from massively-parallel sequencing efforts.
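
To give a flavour of what I mean by SNP management, here’s a toy sketch of the kind of query the database exists to answer – finding the observed SNPs that aren’t already known. The schema (observed_snps, known_snps) is invented for this example, and it assumes the postgresql JDBC driver is on the classpath; my real tables are considerably messier:

    // NovelSnps.java - a toy JDBC sketch of SNP filtering; the table and
    // column names are invented for illustration.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class NovelSnps {
        public static void main(String[] args) throws Exception {
            // Keep only the observed SNPs with no match in the known-SNP table.
            String sql =
                "SELECT o.chrom, o.position, o.observed_base "
                + "FROM observed_snps o "
                + "LEFT JOIN known_snps k "
                + "  ON o.chrom = k.chrom AND o.position = k.position "
                + "WHERE k.position IS NULL";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/snpdb", "user", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    System.out.println(rs.getString("chrom") + "\t"
                            + rs.getInt("position") + "\t"
                            + rs.getString("observed_base"));
                }
            }
        }
    }

The query itself is trivial; the hard part – the part a real SNP management tool would have to solve – is keeping those tables loaded, versioned, and usable by someone who isn’t me.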

Anyhow, just some food for thought, while I write tools that manage SNPs this morning.

Cheers.