Where’s the collaboration?

I had another topic queued up this morning, but an email from my sister-in-law reminded me of a more pressing beef: Lack of collaboration in the sciences. And, of course, I have no statistics to back this up, so I’m going to put this out there and see if anyone has anything to comment on the topic.

My contention is that the current methods for funding scientists are the culprit driving less efficient science, mixed with a healthy dose of zero-sum-game thinking.

First, my biggest pet peeve is that scientists – and bioinformaticians in particular – spend a lot of time reinventing the wheel.  How many SNP callers are currently available?  How many ChIP-Seq packages?  How many aligners?  And, more importantly, how can you tell one from the other?  (How many of the hundreds of SNP callers have you actually used?)

It’s a pretty annoying aspect of bioinformatics that people seem to feel the need to start from scratch on a new project every time they say “I could tweak a parameter in this alignment algorithm…” – and then off they go, writing aligner #23,483,337 from scratch instead of modifying the existing one.  At some point, we’ll have more aligners than genomes!  (Ok, that’s shameless hyperbole.)

But, the point stands.  Bioinformaticians create a plethora of software that solves problems that are not entirely new.  While I’m not saying that bioinformaticians are working on solved problems, I am asserting that the creation of novel software packages is an inefficient way to tackle problems that someone else has already invested time and money into building software for.  But I’ll come back to that in a minute.

But why is the default behavior to write your own package instead of building on top of an existing one?  Well, that’s clear: Publications.  In science, the method of determining your progress is how many journal publications you have, skewed by some “impact factor” for how impressive the name of the journal is.  The problem is that this is a terrible metric to judge progress and contribution.  Solving a difficult problem in an existing piece of software doesn’t merit a publication, but wasting 4 months to rewrite a piece of software DOES.

The science community, in general, and the funding community more specifically, will reward you for doing wasteful work instead of focusing your energies where it’s needed. This tends to squash software collaborations before they can take off simply by encouraging a proliferation of useless software that is rewarded because it’s novel.

There are examples of bioinformatics packages where collaboration is a bit more encouraged – and those provide models for more efficient ways of doing research.  For instance, in the molecular dynamics community, Charmm and Amber are the two software frameworks around which most people have gathered.  Grad students don’t start their degree by being told to rewrite one package or the other, but are instead told to learn one and then add modules to it.  Eventually the modules are released along with a publication describing the model.  (Or left to rot on a dingy hard drive somewhere if they’re not useful.)  Publications come from the work done and the algorithm modifications being explained.  That, to me, seems like a better model – and it means everyone doesn’t start from scratch.

If you’re wondering where I’m going with this, it’s not towards the Microsoft model where everyone does bioinformatics in Excel, using Microsoft generated code.

Instead, I’d like to propose a coordinated bioinformatics code-base.  Not a single package, but a unified set of hooks instead.  Imagine one code base, where you could write a module and add it to a big GitHub-style hub of bioinformatics code – and re-use a common (well-debugged) core set of functions that handle many of the common pieces.  You could swap out aligner implementations and have modular common output formats.  You could build a ChIP-Seq engine, and use modular functions for FDR calculations, replacing them as needed.  Imagine you could collaborate on code design with someone else – and when you’re done, you get a proper paper on the algorithm, not an application note announcing yet another package.
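As a very rough sketch of what such hooks could look like (every name here – the registry, the decorator, the toy aligner – is hypothetical and purely for illustration, not part of any existing package), a shared registry could let implementations be swapped behind one common interface:

```python
# Toy sketch of a "common hooks" registry: implementations plug into a
# shared dictionary, and downstream pipelines look them up by name.
from typing import Callable, Dict, Iterable, List, Tuple

# the shared registry: aligner name -> aligner function
ALIGNERS: Dict[str, Callable[[str], List[Tuple[str, int]]]] = {}

def register_aligner(name: str):
    """Decorator that adds an aligner implementation to the registry."""
    def wrap(fn):
        ALIGNERS[name] = fn
        return fn
    return wrap

@register_aligner("toy-exact")
def toy_exact_aligner(read: str, reference: str = "ACGTACGTGGCC"):
    """A stand-in 'aligner': report every perfect-match position."""
    hits = []
    for i in range(len(reference) - len(read) + 1):
        if reference[i:i + len(read)] == read:
            hits.append(("chr1", i))
    return hits

def align_all(reads: Iterable[str], aligner: str = "toy-exact"):
    """Downstream tools call whatever is registered, so swapping aligners
    means changing one string, not rewriting the pipeline."""
    return {read: ALIGNERS[aligner](read) for read in reads}
```

The point of the sketch is the shape, not the aligner: a ChIP-Seq engine or FDR module would register against the same kind of hook, and a new implementation becomes available to every consumer the moment it lands in the registry.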

(We have been better in the past couple years with tool sets like SAMTools, but that deals with a single common file format.  Imagine if that also allowed for much bigger projects like providing core functions for RNA-Seq or CNV analysis…  but I digress.)

Even better, if we all developed around a single set of common hooks, you can imagine that, at the end of the day (once you’ve submitted your repository to the main trunk), someone like the Galaxy team would simply vacuum up your modules and instantly make your code available to every bioinformatician and biologist out there.  Instant usability!

While this model of bioinformatics development would take a small team of core maintainers for the common core and hooks, much the same way Linux has Linus Torvalds working on the Kernel, it would also cut down severely on code duplication, bugs in bioinformatics code and the plethora of software packages that never get used.

I don’t think this is an unachievable goal, either for the DIY bioinformatics community, the Open Source bioinformatics community or the academic bioinformatics community.  Indeed, if all three of those decided to work together, it could be a very powerful movement.  Moreover, corporate bioinformatics could be a strong player in it, providing support and development for users, much the way corporate Linux players have done for the past two decades.

What is needed, however, is buy-in from some influential people, and some influential labs.  Putting aside their own home grown software and investing in a common core is probably a challenging concept, but it could be done – and the rewards would be dramatic.

Finally, coming back to the funding issue.  Agencies funding bioinformatics work would also save a lot of money by investing in this type of framework.  It would ensure that more time is spent on useful coding, that publications do more to describe algorithms, and that higher-quality code is produced at the end of the day.  The big difference is that they’d have to start accepting that bioinformatics papers shouldn’t be about “new software” available, but about “new statistics”, “new algorithms” and “new methods” – which may require a paradigm change in the way we evaluate bioinformatics funding.

Anyhow, I can always dream.

Notes: Yes, there are software frameworks out there that could be used to get the ball rolling.  I know Galaxy does have some fantastic tools, but (if I’m not mistaken) it doesn’t provide a common framework for coding – only for interacting with the software.  I’m also aware that Charmm and Amber have problems – mainly because they were developed by competing labs that failed to become entirely inclusive of the community, or to invest substantially in maintaining the infrastructure in a clean way.  Finally, yes, the licensing of this code would determine the extent of corporate participation, but the GPL provides at least one successful example of this working.

>How to be a better Programmer: Tactics.

>I’m a bit too busy for a long post, but a link was circulating around the office that I thought was worth passing on to any bioinformaticians out there.

http://dlowe-wfh.blogspot.com/2007/06/tactics-tactics-tactics.html

The article above is on how to be a better programmer – and I wholeheartedly agree with what the author proposed, with one caveat that I’ll get to in a minute.  The point of the article is that learning to see the big picture (not specific skills) will make you a better programmer.  In fact, this is the same advice Sun Tzu gives in “The Art of War”, where understanding the terrain, the enemy, etc. are the tools you need to be a better general.  [This would be in contrast to learning how to wield each weapon, which would only make you a better warrior.]  Frankly, it’s good advice, and it leads you down the path towards good planning and clear thinking – the keys to success in most fields.

The caveat, however, is that there are times in your life where this is the wrong approach: i.e., grad school.  As a grad student, your goal isn’t to be great at everything you touch – it’s to specialize in some small corner of one field, and tactics are no help here.  If grad school existed for ninjas, the average student would walk out being the best (pick one of: poisoner/dart thrower/wall climber/etc.) in the world – and likely knowing little or nothing about how to be a real ninja beyond what they learned in their ninja undergrad.  Tactics are never a bad investment, but they aren’t always what is being asked of you.

Anyhow, I plan to take the advice in the article and to keep studying the tactics of bioinformatics in my spare time, even though my daily work is more on the details and implementation side of it. There are a few links in the comments of the original article to sites the author believes are good comp-sci tactics… I’ll definitely be looking into those tonight. Besides, when it comes down to it, the tactics are really the fun parts of the problems, although there is also something to be said for getting your code working correctly and efficiently…. which I’d better get back to. (=

Happy coding!

>Link Roundup Returns – Dec 16-22

>I’ve been busy with my thesis project for the past couple weeks, which I think is understandable, but all work and no play kinda doesn’t sit well with me. So, over the weekend, I learned Go, Google’s new programming language, and wrote myself a simple application for keeping track of links – and dumping them out in a pretty HTML format that I can just cut and paste into my blog.

While I’m not quite ready to release the code for my little Go application, I am ready to test it out. I went back through the last 200 twitter posts I have (about 8 days worth), and grabbed the ones that looked interesting to me. I may have missed a few, or grabbed a few less-than-thrilling ones – simply a consequence of skimming some of the articles more hastily than others. I promise the quality of my links will be better in the future.

Anyhow, this experiment gave me a few insights into the process of “reprocessing” tweets. The first is that my app only records the person from whom I got the tweet – not the people from who they got it. I’ll try to address that in the future. The second is that it’s a very simple interface – and a lot of things I wanted to say just didn’t fit. (Maybe that’s for the better.. who knows.)

Regardless (or irregardless, for those of you in the U.S.) here are my picks for the week.

Bioinformatics:

  • Bringing back Blast (Blast+) (PDF) – Link (via @BioInfo)
  • Incredibly vague advice on how to become a bioinformatician – Link (via @KatherineMejia)
  • Cleaning up the Human Genome – Link (via @dgmacarthur)
  • Neat article on “4th paradigm of computing: exaflood of observational data” – Link (via @genomicslawyer)

Biology:

  • Gene/Protein Annotation is worse than you thought – Link (via @BioInfo)
  • Why are Europeans white? – Link (via @lukejostins)

Future Technology:

  • D-Wave Surfaces again in discussions about bioinformatics – Link (via @biotechbase)
  • Changing the way we give credit in science – Link (via @genomicslawyer)

Off topic:

  • On scientists getting quote-mined by the press – Link (via @Etche_homo)
  • Give away of the best science cookie cutters ever – Link (via @apfejes)
  • Neat early history of the electric car – Link (via @biotechbase)
  • Wild (inaccurate and funny) conspiracy theories about the Wellcome Trust Sanger Institute – Link (via @dgmacarthur)
  • The Eureka Moment: An Interview with Sir Alec Jeffreys (Inventor of the DNA Fingerprint) – Link (via @dgmacarthur)
  • Six types of twitter user (based on The Tipping Point) – Link (via @ritajlg)

Personal Medicine:

  • Discussion on mutations in cancer (in the press) – Link (via @CompleteGenomic)
  • Upcoming Conference: Personalized Medicine World Conference (Jan 19-20, 2010) – Link (via @CompleteGenomic)
  • deCODEme offers free analysis for 23andMe customers – Link (via @dgmacarthur)
  • UK government waking up to the impact of personalized medicine – Link (via @dgmacarthur)
  • Doctors not adopting genomic-based tests for drug suitability – Link (via @dgmacarthur)
  • Quick and dirty biomarker detection – Link (via @genomicslawyer)
  • Personal Genomics article for the masses – Link (via @genomicslawyer)

Sequencing:

  • Paper doing the rounds: Effect of read-mapping biases on detecting allele-specific expression from RNA-sequencing data – Link (via @BioInfo)
  • Archiving Next Generation Sequencing Data – Link (via @BioInfo)
  • Epigenetics takes aim at cancer and other illnesses – Link (via @BioInfo)
  • (Haven’t yet read) Changing economics of DNA Synthesis – Link (via @biotechbase)
  • Genomic players for investors. (Very light overview) – Link (via @genomicslawyer)
  • Haven’t read yet: Recommended review of 2nd and 3rd generation seq. technologies – Link (via @nanopore)
  • De novo assembly of Giant Panda Genome – Link (via @nanopore)
  • Wellcome Trust summary of 2nd Gen sequencing technologies – Link (via @ritajlg)

>Go from Google…

>Just a short post, since I’m actually (although you probably can’t tell) rather busy today. However, I’m absolutely fascinated by Google’s new language, Go. It’s taken the best from just about every existing language out there, and appears so clean!

I’m currently watching Google’s talk on it, while I write… I’m only a few minutes in, but it seems pretty good. Watching this seriously makes me want to start a new bio-go project… so nifty!

>Why peak calling is painful.

>In discussing my work, I’m often asked how hard it is to write a peak calling algorithm. The answer usually surprises people: It’s trivial. Peak calling itself isn’t hard. However, there are plenty of pitfalls that can surprise the unwary. (I’ve found myself in a few holes along the way, which have been somewhat challenging to get out of.)

The pitfalls, when they do show up, can be very painful – masking the triviality of the situation.

In reality, these are the three most frustrating things about peak calling:

  1. Maintaining the software
  2. Peak calling without unlimited resources (e.g., 64 GB of RAM)
  3. Keeping on the cutting edge

On the whole, each of these things is a separate software design issue worthy of a couple of seconds of discussion.

When it comes to building software, it’s really easy to fire up a “one-off” script.  Anyone can write something that can be tossed aside when they’re done with it – but code re-use and recycling is a skill.  (And an important one.)  Writing your peak finder to be modular is a lot of work, and a huge investment of time is required to keep the modules in good shape as the code grows.  A good example of why this is important can be illustrated with file formats.  Since the first version of FindPeaks, we’ve transitioned through two versions of Eland output, Maq’s .map format, and now on to SAM and BAM (not excluding BED, GFF, and several other more or less obscure formats).  In each case, we’ve been able to simply write a new iterator and plug it into the existing modular infrastructure.  In fact, SAM support was added quite rapidly by Tim with only a few hours of investment.  That wouldn’t have been possible without the massive upfront investment in good modularity.
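To illustrate the iterator pattern described above (with simplified, made-up field layouts rather than the real Eland or SAM specifications), each format gets its own parser, but everything downstream only ever sees one common record type:

```python
# Each format-specific iterator yields the same Read record, so the rest
# of the pipeline never cares which format the data arrived in.
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Read:
    chrom: str
    pos: int
    seq: str

def iter_tsv_reads(lines) -> Iterator[Read]:
    # a simple tab-separated toy format: chrom, pos, seq
    for line in lines:
        chrom, pos, seq = line.rstrip("\n").split("\t")
        yield Read(chrom, int(pos), seq)

def iter_samlike_reads(lines) -> Iterator[Read]:
    # a SAM-flavoured format: skip "@" headers, pull the reference name,
    # position and sequence columns (fields 3, 4 and 10, 1-based)
    for line in lines:
        if line.startswith("@"):
            continue
        f = line.rstrip("\n").split("\t")
        yield Read(f[2], int(f[3]), f[9])

def count_reads(reads: Iterator[Read]) -> int:
    # downstream analysis only ever touches Read objects
    return sum(1 for _ in reads)
```

Supporting a new format then means writing one new iterator, not touching the analysis code – which is essentially why the SAM transition could happen in hours rather than weeks.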

The second pitfall is memory consumption – and this is somewhat more technical.  When dealing with sequencing reads, you’re faced with a couple of choices: you either sort the reads and then move along them one at a time, determining where they land – OR – you can pre-load all the reads, then move along the chromosome.  The first model takes very little memory, but requires a significant amount of pre-processing, which I’ll come back to in a moment.  The second requires much less CPU time – but is intensely memory-hungry.

If you want to visualize this, the first method is to organize all of your reads by position, then to walk down the length of the chromosome with a moving window, only caring about the reads that fall into the window at any given point in time.  This is how FindPeaks works now.  The second is to build a model of the chromosome, much like a “pileup” file, which can then be processed however you like.  (This is how I do SNP calling.)  In theory, it shouldn’t matter which one you do, as long as all your reads can be sorted correctly.  The first can usually be run with a limited amount of memory, depending on the memory structures you use, whereas the second’s footprint is pretty much determined by the size of the chromosomes you’re using (multiplied by a constant that also depends on the structures you use).
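Here is a toy sketch of the two strategies, assuming reads are half-open (start, end) intervals; these simplifications are mine, not FindPeaks internals:

```python
# Two ways to compute maximum coverage depth over the same reads.
import heapq

def max_depth_streaming(sorted_reads):
    """Walk reads sorted by start once, keeping only the reads that
    overlap the current position in memory (the sliding-window model)."""
    active = []          # min-heap of end coordinates still in the window
    best = 0
    for start, end in sorted_reads:
        while active and active[0] <= start:
            heapq.heappop(active)          # drop reads that have ended
        heapq.heappush(active, end)
        best = max(best, len(active))
    return best

def max_depth_pileup(reads, chrom_len):
    """Pre-load everything into a per-base depth array (the pileup model):
    no sorting needed, but memory scales with chromosome length."""
    depth = [0] * chrom_len
    for start, end in reads:
        for i in range(start, end):
            depth[i] += 1
    return max(depth, default=0)
```

Both give the same answer; the first holds only the overlapping reads at any moment, while the second allocates an array the size of the chromosome, which is exactly the memory trade-off described above.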

Unfortunately, using the first method isn’t always as easy as you might expect.  For instance, when doing alignments with transcriptomes (or indels), you often have gapped reads.  An early solution to this in FindPeaks was to break each portion of the read into separate aligned reads, and process them individually – which works well when correctly sorted.  Unfortunately, new formats no longer allow that – in a “pre-sorted” BAM/SAM file, you can now find multi-part reads, but there’s no real option of pre-fragmenting those reads and re-sorting.  Thus, FindPeaks now has an additional layer that must read ahead and buffer SAM reads in order to make sure that the next one returned is in the correct order.  (You can get odd bugs otherwise – and yes, there are many other potential solutions.)
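One way to sketch that read-ahead layer (a simplification of mine, not the actual FindPeaks code) is a bounded buffer that fragments each gapped read into its aligned blocks and re-sorts the pieces before yielding them:

```python
# Fragment gapped reads and restore coordinate order with a bounded
# look-ahead buffer. Each "read" is a list of (start, length) blocks --
# a toy stand-in for a multi-part SAM record. Assumes no fragment is
# displaced further than `lookahead` buffered fragments, which is the
# same kind of assumption a real read-ahead layer has to make.
import heapq

def resorted_fragments(reads, lookahead=1000):
    heap = []                              # min-heap keyed on block start
    for blocks in reads:                   # reads arrive sorted by 1st block
        for block in blocks:
            heapq.heappush(heap, block)
        while len(heap) > lookahead:       # emit once the buffer is full
            yield heapq.heappop(heap)
    while heap:                            # drain the buffer at EOF
        yield heapq.heappop(heap)
```

The trailing block of a gapped read sits in the heap until enough later reads have arrived to guarantee nothing earlier is still coming – which is also where the "odd bugs" live if the look-ahead bound is too small.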

Moving along to the last pitfall: the one thing that people want out of a peak finder is that it is able to do the latest and greatest methods – and do it ahead of everyone else.  That on its own is a near-impossible task.  To keep a peak finder relevant, you not only need to implement what everyone else is doing, but also do things that they’re not.  For a group of 30 people, that’s probably not too hard, but for academic peak callers, that can be a challenge – particularly since every user wants something subtly different from the next.

So, when people ask how hard it is to write their own peak caller, that’s the answer I give: It’s trivial – but a lot of hard work. It’s rewarding, educational and cool, but it’s a lot of work.

Ok, so is everyone ready to write their own peak caller now? (-;

>Recursive MC solution to a simple problem…

>I’m trying to find balance between writing and experiments/coding. You can’t do both at the same time without going nuts, in my humble opinion, so I’ve come up with the plan of alternating days. One day of FindPeaks work, one day on my project. At that rate, I may not give the fastest responses (yes, I have a few emails waiting), but it should keep me sane and help me graduate in a reasonable amount of time. (For those of you waiting, tomorrow is FindPeaks day.)

That left today to work on the paper I’m putting together. Unfortunately, working on the paper doesn’t mean I don’t have any coding to do. I had a nice simulation that I needed to run: given the data sets I have, what are the likely overlaps I would expect?

Of course, I hate solving a problem once – I’d rather solve the general case and then plug in the particulars.

Today’s problem can be summed up as: “Given n data sets, each with i_n genes, what is the expected number of genes common to each possible overlap of 2 or more datasets?”

My solution, after thinking about the problem for a while, was to use recursion.  Not surprisingly, I haven’t written recursive code in years, so I was a little hesitant to give it a shot.  Regardless, I whipped up the code – and it worked the first time.  (That’s sometimes a rarity with my code – I’m a really good debugger, but can often be sloppy when writing code quickly the first time.)  Best of all, the code is extensible – if I have more data sets later, I can just add them in and re-run.  No code modification needed beyond changing the data.  (Yes, I was sloppy and hard-coded it, though it would be trivial to read it from a data file, if someone wants to re-use this code.)
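For the curious, here is a hedged re-creation of the kind of calculation described (my own sketch, not the code in the repository): if each dataset is drawn independently from a universe of G genes, the expected size of the intersection of a subset of datasets is G times the product of each dataset's fraction of the universe, and the recursion simply walks every subset of size two or more:

```python
# Expected overlap sizes for every subset of >= 2 datasets, assuming each
# dataset of size s_j is an independent uniform draw from `universe` genes.
def expected_overlaps(sizes, universe):
    """Return {subset (tuple of dataset indices): expected common genes}."""
    results = {}

    def recurse(next_index, chosen, prob):
        # prob = product of (sizes[j] / universe) over the chosen datasets
        if len(chosen) >= 2:
            results[tuple(chosen)] = universe * prob
        for j in range(next_index, len(sizes)):
            recurse(j + 1, chosen + [j], prob * sizes[j] / universe)

    recurse(0, [], 1.0)
    return results
```

Adding another dataset is just one more entry in `sizes` – the recursion picks up all the new subsets automatically, which matches the extensibility claim above.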

Anyhow, it turned out to be an elegant solution to a rather complex problem – and I was happy to see that the results I have for the real experiment stick out like a sore thumb: the observed overlap is far greater than random chance would predict.

If anyone is interested in seeing the code, it was uploaded into the Vancouver Short Read Analysis Package svn repository: here. (I’m doubting the number of page views that’ll get, but what the heck, it’s open source anyhow.)

I love it when code works properly – and I love it even more when it works properly the first time.

All in all, I’d say it’s been a good day, not even counting the 2 hours I spent at the fencing club. En garde! (-;

>SNP Database v0.1

>Good news, my SNP database seems to be in good form, and is ready for importing SNPs. For people who are interested, you can download the Vancouver Short Read Package from SVN, and find the relevant information in
/trunk/src/transcript_analysis/SNP_Database/

There’s a schema for setting up the tables and indexes, as well as applications for running imports from maq SNP calls and running a SNP caller on any form of alignment supported by FindPeaks (maq, eland, etc…).

At this point, there are no documents on how to use the software, since that’s the plan for this afternoon, and I’m assuming everyone who uses this already has access to a PostgreSQL database (i.e., a simple Ubuntu + psql setup).

But, I’m ready to start getting feature requests, requests for new SNP formats and schema changes.

Anyone who’s interested in joining onto this project, I’m only a few hours away from having some neat toys to play with!

>New Project Time… variation database

>I don’t know if anyone out there is interested in joining in – I’m starting to work on a database that will allow me to store all of the SNPs/variations that arise in any data set collected at the institution. (Or the subset from which I have the right to harvest SNPs, anyhow.) This will be part of the Vancouver Short Read Analysis Package and, of course, will be available to anyone allowed to look at GPL code.

I’m currently on my first pass – consider it version 0.1 – but I already have some basic functionality assembled. Currently, it uses a built-in SNP caller to identify locations with variations and to send them directly into a PostgreSQL database, but I will shortly be building tools to allow SNPs from any SNP caller to be migrated into the db.

Anyhow, just putting it out there – this could be a useful resource for people who are interested in meta analysis, and particularly those who might be interested in collaborating to build a better mousetrap. (=

>Aligner tests

>You know what I’d kill for? A simple set of tests for each aligner available. I have no idea why we didn’t do this ages ago. I’m sick of off-by-one errors caused by all sorts of slightly different formats available – and I can’t do unit tests without a good simple demonstration file for each aligner type.

I know SAM format should help with this – assuming everyone adopts it – but even for SAM I don’t have a good control file.

I’ve asked someone here to set up this test using a known sequence – and if it works, I’ll bundle the results into the Vancouver Package so everyone can use it.

Here’s the 50-mer I picked to do the test. For those of you with some knowledge of cancer, it comes from TP53. It appears to BLAST uniquely to this location.

>forward - chr17:7,519,148-7,519,197
CATGTGCTGTGACTGCTTGTAGATGGCCATGGCGCGGACGCGGGTGCCGG

>reverse - chr17:7,519,148-7,519,197
ccggcacccgcgtccgcgccatggccatctacaagcagtcacagcacatg
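As a quick sanity check that anyone can run, the two 50-mers above really are reverse complements of one another (ignoring case):

```python
# Reverse-complement check for the forward/reverse pair above.
COMP = str.maketrans("ACGTacgt", "TGCAtgca")

def revcomp(seq: str) -> str:
    """Complement each base, then reverse the sequence."""
    return seq.translate(COMP)[::-1]

forward = "CATGTGCTGTGACTGCTTGTAGATGGCCATGGCGCGGACGCGGGTGCCGG"
reverse = "ccggcacccgcgtccgcgccatggccatctacaagcagtcacagcacatg"

assert revcomp(forward).lower() == reverse.lower()
```

This is exactly the kind of tiny, known-answer fixture that makes per-aligner unit tests possible: feed both strands in, and any off-by-one or strand-handling error shows up immediately.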

>Picard code contribution

Update 2: I should point out that the subject of this post has been resolved. I’ll mark it down to a misunderstanding. The patches I submitted were initially rejected, but were accepted several days later, once the purpose of the patches was clarified with the developers. I will leave the rest of the post here, for posterity’s sake, and because I think there is some merit to the points I made, even if they were misguided in their target.

Today is going to be a very blog-ful day. I just seem to have a lot to rant about. I’ll be blaming it on the spider and a lack of sleep.

One of the things that thrills me about Open Source software is the ability for anyone to make contributions (above and beyond the ability to share and understand the source code) – and I was ecstatic when I discovered the java based Picard project, an open source set of libraries for working with SAM/BAM files. I’ve been slowly reading through the code, as I’d like to use it in my project for reading/writing SAM format files – which nearly all of the aligners available are moving towards.

One of those wonderful tools that I use for my own development is called Enerjy. It’s an Eclipse plug-in designed to help you write better Java code by making suggestions about things that can be improved. A lot of its suggestions are simple: re-order imports to make them alphabetical (and more readable), fill in missing javadoc flags, etc. They’re not key pieces, but they are important to maintaining your code’s good health. It also points the way to things that will likely cause bugs (such as doing string comparisons with the “==” operator).

While reading through the Picard libraries and code, Enerjy threw more than 1600 warnings. It’s not in bad shape, but it’s got a lot of little “problems” that could easily be fixed: mainly a lot of missing javadoc, un-cast generic types, arrays being passed between classes and the like. As part of my efforts to read through and understand the code, which I want to do before using it, I figured I’d fix these details. Before ramping up into the more complex warnings, I wanted to start small while still making a contribution. Open source at its best, right?

The sad part of the tale is that open source only works when the community’s contributions are welcome. Apparently, with Picard, code cleaning and maintenance isn’t. My first set of patches (dealing mainly with the trivial warnings) were rejected. With that reception, I’m not going to waste my time submitting the second set of changes I made. That’s kind of sad, in my opinion. I expressly told them that these patches were just a small start and that I’d begin making larger code contributions as my familiarity with the code improves – and at this rate, my familiarity with the code is definitely not going to mature as quickly, since I have much less motivation to clean up their warnings if they themselves aren’t interested in fixing them.

At any rate, perhaps I should have known. Open source in science usually means people have agendas about what they’d like to accomplish with the software – and including contributions may mean including someone on a publication downstream if and when it does become published. I don’t know if that was the case here: it was well within the project leader’s rights to reject my patches on any grounds they like, but I can’t say it makes me happy. I still don’t enjoy staring at 1600+ warnings every time I open Eclipse.

The only lesson I take away from this is that next time I see “Open Source” software, I’ll remember that just because it’s open source, it doesn’t mean all contributions are welcome – I should have confirmed with the developers before touching the code that they are open to small changes, and not just bug fixes. In the future, I suppose I’ll be tempering my excitement for open source science software projects.

update: A friend of mine pointed me to a link that’s highly related. Anyone with an open source project (or interested in getting started in one) should check out this blog post titled Teaching people to fish.