#AGBTPH – Ryan Hartmaier – Genomic analysis of 63,220 tumours reveals insights into tumour uniqueness and cancer immunotherapy strategy

Ryan Hartmaier, Foundation Medicine

Intersection of genomics and cancer immunotherapy: neoantigens are critical – identified through NGS and prediction algorithms. They can be used for immune checkpoint inhibitors or cancer vaccines.

Extensive genetic diversity exists within a given tumour (the "mutanome").

Difficult to manufacture and scale, thus expensive therapeutics. However, TCGA datasets (and others) reinforce that individualized therapies make sense. No comprehensive analysis of this approach across a large data set has yet been done.

NGS-based genomic profiling for solid tumours.  FoundationCore holds data.

At time of analysis, 63,220 tumours available.  Genetic diversity was very high.

Mutanomes are unique and rarely share more than 1-2 driver mutations. Thus, the aim is to define a smaller set of alterations found across many tumours. This can be done at the level of genes, variant types, or coding short variants. That approach led to about 25% of tumours having at least one overlap with a shortlist of 10 genes.

Instead of trying to build a single-immunogen therapy for each person, look for immunogens that could be used commonly across many people. Use MHC-I binding prediction to identify specific neoantigens. Only 1-2% of tumours will have at least one of these variants.

Multi-epitope, non-individualized vaccines could be used, but they would only apply to that 1-2%.

Evidence of immunoediting in driver alterations.  Unfortunately, driver mutations produce fewer neoantigens.

There was discussion of the limits of the method, but there is much room for improvement and expansion of the experiment.

Conclusion: tumour mutanomes are highly unique. About 25% of tumours have at least one coding mutation from the shortlist, but the potential to build common vaccines is limited to 1-2% of the population. Drivers tend not to produce neoantigens.


15 practical tips for bioinformaticians.

This is entirely inspired by a blog post of a very similar name from Xianjun Dong on the r-bloggers.com site.  The R-specific focus didn’t do much for me, given that R as a language leaves me annoyed and frustrated, although I do understand why others use it.  I haven’t come across Xianjun’s work before, and have never met him either online or in person, but I hope he doesn’t mind me revisiting his list with a broader scope.  Thanks to Xianjun for creating the original list!

I've paraphrased his points (underlined, in the original), written out my response to each, and highlighted what I feel is the takeaway. So, let's upgrade the list a bit, shall we?

1. Use a non-random seed.  Actually, that's pretty good, but the real point should extend this to all areas of your work: determinism is the key both to debugging and to science – you need to be able to recreate all of your work on demand.  That's the basis of how we do science.
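In Python, for example, that starts with something as simple as this (a minimal sketch of my own; the seed value is arbitrary):

import random

# Seed once, up front, and record the seed: identical inputs should
# produce identical outputs, on demand, every time you re-run.
SEED = 42
random.seed(SEED)
print("run seeded with", SEED)
print([random.randint(1, 100) for _ in range(3)])  # the same three numbers, every run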

2.  The original said "set your own tmp directory" so that you don't step on other applications' toes.  Frankly, I'd skip that, and instead suggest you learn how the other applications work!  If you're running a piece of code, take the time to learn it – and by extension, all of its parameters. The biggest mistake I see from novice bioinformaticians is trying to use code they're not familiar with, and doing something the author never intended.  Don't just run other people's tools, use them properly!

3. An R-specific file name hint.  This point was far too R-centric, so I’ll just point you back to another key point: Take the time to learn the biology.  Don’t get so caught up in the programming that you forget that underneath all of the code lies an actual biology or chemistry problem that you’re trying to study, simulate or interpret.  Most often, the best bioinformatics solutions are the ones that are inspired by the biology itself.

4. Create a Readme file for your work. This is actually just the tip of the iceberg – a Readme file is the bare minimum for any serious software project. A reasonable software project should have a wiki or a manual, as well as a host of other documentation (bug trackers, feature trackers, unit tests, example data files), and the list should grow with the size of the project.  If your project is going to last more than a couple of weeks, then a Readme file needs to grow into something larger.  Documentation should be an integral part of your coding practice, however you do it.

5. Comment your code.  Yes – please do.  But don't just comment your code, write code that doesn't need comments!  One of the reasons why I love python is that there is a pythonic way to do things, and minimal comments are necessary to make it obvious what it's supposed to do.  Of course, any time you think of a "clever" trick, that's a prime candidate for extra documentation – and the more clever you are, the more documentation I expect.
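To show you what I mean, here's a toy example of my own (not from the original post) – the same function written cleverly, and written pythonically:

# "Clever", and in need of a comment to decode:
rc = lambda s: s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

# Pythonic: the names and structure document the intent on their own.
def reverse_complement(sequence):
    complement = {"A": "T", "C": "G", "G": "C", "T": "A"}
    return "".join(complement[base] for base in reversed(sequence))

print(reverse_complement("GATTACA"))  # TGTAATC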

6. Backup your code.  Yep – I'm going to agree with the original, though I disagree with the execution.  Don't just back up your code to an extra disk; get your code into version control.  The only person who doesn't need version control is the person who never edits their code… and I haven't met them yet.  If you expect your project to be successful, then expect it to mature over time – and, in turn, that you'll have multiple versions.  Trust me, version control doesn't just give you backups: it makes code management and collaboration possible.  Three for the price of one… or for free, if you use GitHub.

7. Clean up your intermediate data.  Actually, I think keeping intermediate data around is a useful thing while you're working. Yes, biological data can create big files, and you should definitely clean up after yourself, but the more important lesson is to be aware of the resources that are available to you – of which disk space is just one.  Indeed, all of programming is a tradeoff between CPU, memory and disk, and they're interchangeable, of course.  If you're not aware of the space-time tradeoff, then you really haven't started your journey as a bioinformatician.  Really – this is probably the most important lesson you can learn as a programmer.
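If the space-time tradeoff is new to you, here's a toy demonstration of my own, trading memory for CPU:

from functools import lru_cache

# Spend memory on a cache to avoid recomputing the same results:
# the classic space-time tradeoff, in three lines.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))  # instant with the cache; effectively never finishes without it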

8. .bam, not .sam. This point is a bit limited in scope, so let's widen it.  All of the data you'll ever deal with is going to be in a less-than-optimal format for storage, and it's on you to figure out what the right format is going to be.  Have VCFs?  Gzip them!  Have .sam files?  Make them .bam files!  Of course, this doesn't just go for storage: do the same for how you access them.  That gzipped VCF?  You should have bgzipped it and then tabix-indexed it.  The same goes for your Fasta file (FAIDX?), or whatever else you have.  Don't just use compression, use it to your advantage.
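As a minimal sketch of that workflow, driven from Python – this assumes htslib's bgzip and tabix are on your PATH, and the file name is hypothetical:

import subprocess

# bgzip (block gzip), not plain gzip: the block structure is what
# makes the compressed file indexable for random access.
subprocess.check_call(["bgzip", "calls.vcf"])
subprocess.check_call(["tabix", "-p", "vcf", "calls.vcf.gz"])

# Region queries are now cheap, without decompressing the whole file:
print(subprocess.check_output(["tabix", "calls.vcf.gz", "chr1:10000-20000"]))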

9. Parallelize your code.  Oh man, this is a can of worms.  On the one hand, much of bioinformatics is embarrassingly parallelizable.  That's the good news.  The bad news is that threaded/multiprocessed code is harder to debug and maintain.  This should be the last path you go down, after you've optimized the heck out of your code.  Don't parallelize what you can optimize – use parallelization to overcome resource limitations, and only when you can't access the resources in any other way.  (If you work with a cluster, though, this may be a quick and dirty way to get more resources…)
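And when you do finally go down that path, keep it as simple as the problem allows. A minimal sketch with Python's multiprocessing – the per-region function is just a placeholder:

from multiprocessing import Pool

def process_region(region):
    # Placeholder for real, independent, per-region work.
    return region, len(region)

if __name__ == "__main__":
    regions = ["chr1", "chr2", "chr3", "chrX"]
    pool = Pool(processes=4)  # match this to the cores you actually have
    for region, result in pool.map(process_region, regions):
        print(region, result)
    pool.close()
    pool.join()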

10. Clean up and back up.  This was just a repeat of earlier points, so let's talk about networking instead.  The best way to keep yourself current is to listen to what others have to say.  That means making time to go to conferences, and to read papers, blogs, or even twitter.  Talk to other bioinformaticians, because they'll always have new ideas, and it's far too easy to get into a routine where you're not exposing yourself to whatever is new and exciting.

11. OOP: Inheritance, Encapsulation, Polymorphism. Actually, on this point, I completely agree.  Understanding object oriented programming takes you from being able to write scripts to being able to write a program.  A subtle distinction, but it will broaden your horizons in so many ways, of which the most important is clearly code re-use.  And reusing your existing code means you start developing a toolkit instead of making everything a one-off.
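If you need convincing, here's a tiny, contrived sketch of my own showing all three concepts at once:

class Variant(object):
    """Encapsulation: position and alleles live behind one interface."""
    def __init__(self, pos, ref, alt):
        self.pos, self.ref, self.alt = pos, ref, alt

    def describe(self):
        return "%d:%s>%s" % (self.pos, self.ref, self.alt)

class Deletion(Variant):
    """Inheritance: a Deletion is a Variant, with one method overridden."""
    def describe(self):
        return "%d:del%s" % (self.pos, self.ref[len(self.alt):])

# Polymorphism: callers don't care which subclass they're holding.
for v in [Variant(100, "C", "G"), Deletion(42, "GAA", "G")]:
    print(v.describe())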

12. Save the URL of your references. Again, great start, but don’t just save the URL of your references.  Make notes on everything. Whatever you find useful or inspiring, make a note in your lab book.  Wait, you think bioinformaticians don’t have lab books?  If that’s true, it’s only because you’ve moved on to something else that keeps a permanent record, like version control for your code, or electronic notebooks for your commands.  Make sure everything you do is documented.

13. Keep Learning.  YES!  This!  If you find yourself treading water as a bioinformatician, you're probably not far from sinking.  Neither programming nor biology ever really stands still – there's always something new that you should get to know.  Keeping up with both fields is tough, but absolutely necessary.

14. Give back what you learn.  Again, gotta agree here.  There are lots of ways to engage the community: share your code, share your experience, share your opinions, share your love of science… but get out and share it somehow.

15. Stand up on occasion.  Ok, I’ll go with this too.  The sitting/standing desks are fantastic, and definitely worth the money, if you can get one.  Bioinformaticians spend way too much time sitting, and you shouldn’t neglect your health.  Or your family, actually.  Don’t forget to work hard, and play hard.

A stab at the future of bioinformatics

I had a conversation the other day about where bioinformatics is headed, and it has left me thinking about it for the past few days.  Generally, the question was more about whether bioinformatics (and biotechs) are at the start of something big, or whether this is all a fad.  Unfortunately, I can't tell the future, but that doesn't mean I shouldn't take a wild stab in the dark.

Some things are clear because some things never change.  Unless armageddon is upon us or aliens land, we can be sure that sequencing will continue to get cheaper until it hits bottom – by which I mean about the same cost as any other medical test. (At which point, profit margins go up while sequencing costs go down, of course!)  But, that means that for the foreseeable future, we should expect the volume of human sequencing data to continue to rise.

That, naturally, translates pretty directly to an increase in the amount of data that needs to be processed.  Bioinformatics, unlike many other fields, is all about automation and discovery – and in this case, automation is really the big deal.  (I’ll get back to discovery later.)  Pipelines that take care of the human data are going to be more and more valuable, particularly when they add value to the automation and interpretation.  (Obviously, I should disclose that I work for a company that does this.)  I can’t say that I see this need going away any time soon.  However, doing it well requires significant investment and (I’d like to think) skill.  (As an aside, sorry for all of the asides.)

Clearly, though, automation will probably be a big employer of bioinformaticians going forward.  A great pipeline is one that is entirely invisible to the people using it, and keeping a pipeline for the automation of bioinformatics data current isn’t an easy task.  Anyone who has ever said “Great! We’re done building this pipeline!” isn’t on the cutting edge.  Or even on the leading edge.  Or any edge at all.  If you finish a pipeline, it’ll be obsolete before you can commit it to your git repository.

But, the state of the art in any field, bioinformatics included, is all about discovery.  For the most part, I suspect that it means big data.  Sometimes big databases, but definitely big data sets.  (Are you old enough to remember when big data in bioinformatics came in a fasta file, and people thought perl was going to take over the world?)  There are seven billion people on earth, and they all have genomes to be sequenced.  We have so much to discover that every bioinformatician on the planet could work on that full time, and we could keep going for years.

So yes, I'm pretty bullish on the prospects of bioinformaticians in the future.  As long as we perceive knowledge about ourselves as useful, and as long as our own health preoccupies us – for insurance purposes or diagnostics – there will be bioinformatics jobs out there.  (Whether there are too many bioinformaticians is a different story for another post.)  Discovery and re-discovery will really come sharply into focus for the next few decades.

We can figure out some of the more obvious points:

  • Cancer will be a huge driver of sequencing because it changes over time, and so we'll constantly be driven to sequence again and again, looking for markers or subpopulations. It's a genetic disease, and sequencing will give us a window into what it's doing where nothing else can.  Like physicists and the hunt for subatomic particles, bioinformaticians are going to spend the next hundred years analyzing cancer data sets over and over and over.  There are 3 billion bases in the human genome, and probably as many unique variations that can make a cell oncogenic. (Big time discovery)
  • Rare disease diagnostics should become commonplace.  Can you imagine catching every single childhood disease within two weeks of the birth of a child?  How much suffering would that prevent?  Bioinformaticians will be at the core of that, automating systems to take genetic counsellors out of the picture. (discovery turning to automation)
  • Single cell sequencing will eventually become a thing…. and then we’ll have to spend the next decade figuring out how the heck we should interpret it.  That’ll be a whole new field of tools. (discovery!)
  • Integration with medical records will probably happen.  Currently, it’s far from ideal, but mostly because (as far as I can tell) electronic medical records are built for doctors. Bioinformaticians will have to step in and have an impact.  Not that we haven’t seen great strides, but I have yet to hear of an EMR system that handles whole genome sequencing.  (automation.)
  • LIMS.  ugh. It’ll happen and drain the lives from countless bioinformaticians.  No further comment necessary. (automation)

At some point, however, it’s going to become glaringly obvious that the bioinformatics component is the most expensive part of all of the above processes.  Each will drive massive cost savings in healthcare and efficiency, but the actual process of building the tools doesn’t scale the same way as the data generation.

Where does that leave us?  I’d like to think that it’s a bright future for those who are in the field.  Interesting times ahead.

Ants…

This is a strange way to begin, but moving to California has reminded me of an interest in an algorithm that I've always found fascinating: Ant Walks.

I hadn't expected to return to that particular algorithm, but it turns out there's a reason why people become fascinated with it: it's essentially an attempt to describe the behaviour of ants… which California has given me an opportunity to study first hand.

I'm moving in a week or two, but I have to admit, I have a love/hate relationship with the ant colony in the back yard. I won't really miss them, because they're seriously everywhere. Although I've learned how to keep them out of the house, and they don't really bother me much, they're persistent and highly effective at finding food – especially crumbs left on the kitchen floor. (By the way, with strategic placement of ant repellent, the ants actually have a pretty hard time finding their way in… but that's another post for another day.)

Regardless, the few times that the ants have found their way inside have inspired me to watch them and learn a bit about how they do what they do – and it's remarkably similar to the algorithm based on their behaviour. First, they take advantage of sheer numbers. They don't really care about any one individual, and thus they just send each ant out to wander around. Basically, it's just divide and conquer, with zero planning. The more ants they send out, the more likely they are to find something. If you had only two or three ants, it would be futile… but 50-100 ants all wandering in a room with a small number of crumbs will result in all of the crumbs being found.

And then there's the whole thing about the trails. Watching them run back and forth along the trails shows you that the ants do know exactly where they're going when they have somewhere to be. When they get to the end of a trail, they seem to go back into "seeking" mode, so the colony can concentrate the search on a smaller area – a more directed random search.
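To show how little machinery that behaviour actually needs, here's a minimal sketch of an ant walk – grid size, ant count and crumb positions are all made up for illustration:

import random

# Many cheap agents, unplanned random walks, no coordination:
# the crumbs still get found through sheer numbers.
SIZE, ANTS, STEPS = 20, 50, 200
crumbs = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(5)}

found = set()
for _ in range(ANTS):
    x, y = SIZE // 2, SIZE // 2          # every ant starts at the nest
    for _ in range(STEPS):
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        x, y = (x + dx) % SIZE, (y + dy) % SIZE
        if (x, y) in crumbs:
            found.add((x, y))            # a fuller model would lay a pheromone trail here
            break

print("crumbs found by %d ants: %d of %d" % (ANTS, len(found), len(crumbs)))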

All in all, it's fascinating. Unfortunately, unlike Richard Feynman, I haven't had the time to set up ant ferries as a method of discouraging the ants from returning – my daughter and wife are patient, but not THAT patient – but that doesn't mean I haven't had a chance to observe them.  I have to admit, of all the things that I thought would entertain me in California, I didn't expect that ants would be on that list.

Anyone interested in doing some topology experiments? (-;

A surprise revelation today.

I feel like blogging about blogging tonight… but I’ll keep it short.

I’ve realized that blogging and twitter were an extension of my interest in living on the cutting edge of bioinformatics. I’m always interested in new technologies and new developments, and when I cut back on blogging (mainly because my priorities shifted as a parent… sleep, glorious sleep.), I also cut back on my interactions with the field around me.

That was mostly OK for a while. When I was working at UBC, I was able to attend lectures and interact in the academic world, so I had a bit of a life line. However, in Oakland, I haven’t been tied in to that greater flow of information. It sort of snuck up on me this morning, and I realized two things today.

The first is that I'm way out of touch, and it's time to re-engage. I actually don't begrudge the loss of interactions over the past year or so, really. The things my team and I have accomplished in the past year have been nothing short of awe-inspiring, and I've learned a lot about my work, pipeline bioinformatics, and what you can really make a computer do if you try hard enough. (Order-of-magnitude performance increases just make me feel warm and tingly… one day I should post the performance graph of Omicia's software.) But, it's time.

The second thing I realized, is that the lack of engagement drove me to reddit’s bioinformatics forum for a similar reason. I love writing, but without stimulation, you have nothing to write about. Reddit gives you a series of writing prompts, which can be fun, but I can get the same thing from reading other blogs and twitter – and that’s far more interesting than Reddit’s usual repertoire. (How many times can you give the same advice to people who want to get into bioinformatics?)

Regardless, if you're looking for me, I'm going to be back on twitter, feeding my addiction to science and bioinformatics.

And yes, as of a few days ago, my daughter finally learned to sleep through the night. Strange how everything is connected, isn’t it?

Pac Bio Sequel

This isn’t anything others haven’t heard about, I’m sure, but I just saw the announcement for the Pac Bio Sequel.

It's a pretty looking machine, and its promise (according to the press release) is pretty awesome. Actually, I've always had a soft spot for Pac Bio, despite never having worked with Pac Bio data. It's just that I want it to work so badly. There's just something appealing to me about tethered enzymes and single molecule sequencing.

Anyhow, I don’t have much commentary, though I’d love to hear if others do, about the Sequel.

http://blog.pacificbiosciences.com/2015/09/introducing-sequel-system-scalable.html

Something they don’t tell you about PyMongo 3.0 and Multiprocessing.

EDIT: This post turned into a bug report over at the mongo python driver wiki, where it was confirmed to be a bug, and not a feature. Ultimately, the issue hasn’t been resolved yet, but version 3.0.4 will now throw a warning, preventing this issue from failing silently. Thanks to A. Jesse Jiryu Davis for suggesting I file it as a bug, and Anna Herlihy for the patch!

I had an interesting bug in a piece of software that I've been working on that involves some heavy multiprocessing.  Running 18 processes simultaneously, of which at least 9 require some form of database interaction with MongoDB, is really not all that complicated… but I hit something that tossed a wrench into the works and confused me for 2 days.  What was it, you might ask?

Well, it looked like this:

 File "something.py", line 177, in flush
  b.execute()
File "/Users/afejes/sandboxes/pipeline4/lib/python2.7/site-packages/pymongo/bulk.py", line 582, in execute
  return self.__bulk.execute(write_concern)
File "/Users/afejes/sandboxes/pipeline4/lib/python2.7/site-packages/pymongo/bulk.py", line 430, in execute
  with client._socket_for_writes() as sock_info:
File "/usr/local/Cellar/python/2.7.10/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
  return self.gen.next()
File "/Users/afejes/sandboxes/pipeline4/lib/python2.7/site-packages/pymongo/mongo_client.py", line 663, in _get_socket
  server = self._get_topology().select_server(selector)
File "/Users/afejes/sandboxes/pipeline4/lib/python2.7/site-packages/pymongo/topology.py", line 121, in select_server
address))
File "/Users/afejes/sandboxes/pipeline4/lib/python2.7/site-packages/pymongo/topology.py", line 97, in select_servers
  self._error_message(selector))
ServerSelectionTimeoutError: No servers found yet

Basically, the new pymongo drivers (3.0.x) have changed their initialization, so that they no longer actually create the connection pool when you initialize them.  You say:

mongo = MongoClient()

and it goes off and does a non-blocking initialization of everything pymongo needs to talk to the server. All is good.

However, if you're doing multiprocessing, the temptation is to allow each of your processes to launch a new instance of the MongoClient. Indeed, I've done that before with the 2.8.x series of pymongo, and it worked well. However, in this case, pymongo 3.0.2 REALLY doesn't like it, and you'll get the "No servers found yet" error when you try to retrieve results from your database. Oddly enough, it's especially hard to figure out, because pymongo has one more hidden surprise for you: serverSelectionTimeoutMS.

You've probably never heard of this parameter, but it's kinda important now. It goes into the initialization of your MongoClient:

self.mongo = MongoClient(mongo_url, mongo_port, serverSelectionTimeoutMS=500) 

If you don't put it there, the default value is 30 seconds… which means your application sits there for 30 seconds, waiting to see if the mongo database will connect, before it concedes that the database is missing. When it finally does fail, you'll get the error above… 30 seconds after your database went down. That's cool… except when the issue is actually not related to the database going down.

In my case, the issue was not that the database went down, but that each process should not have been initializing its own instance of MongoClient! The only solution: have the parent process create one instance of MongoClient, and then pass that as a parameter to the child processes. Tada! – the error disappears, and your program starts to run, instead of failing and waiting 30 seconds to tell you.
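For what it's worth, here's a minimal sketch of the pattern that worked for me – the database and collection names are hypothetical:

from multiprocessing import Process
from pymongo import MongoClient

def worker(mongo, worker_id):
    # Use the client handed down from the parent; don't create your own.
    mongo.mydb.results.insert_one({"worker": worker_id})

if __name__ == "__main__":
    # One client, created once, in the parent.
    mongo = MongoClient("localhost", 27017, serverSelectionTimeoutMS=500)
    procs = [Process(target=worker, args=(mongo, i)) for i in range(9)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()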

On the subject of indels..

Ah, a blog post. It’s been a while, as life has been busy lately. My daughter turned 3 last week, and I’ve moved half way across the world and back, but I have slowly found myself with things to say again.

And, the one that needs saying first is that, as a community, NGS people have done a terrible job of standardizing how we deal with indels. SNVs aren't bad – we only have half a dozen ways to mess them up – but indels are just something else.

After a year of working hard on SNVs, indels have fallen back on the menu, and I’ve been beating my head on the wall trying to solve it all in one shot. Needless to say, it’s not going to be that easy, but there are a few things that are really worth pointing out:

If you can represent something in the genome two different ways, you should pick the easiest, right? Wrong – there are people who don't agree with this, and I can give you an example. Let's say you have a reference sequence GAAAC, and you delete two As. Personally, I'd pick the left justified version and say GAA -> G. That's pretty clear: you've removed two A's after the G. Using the single redundant G makes it left justified, anchored (or rooted), and intuitively obvious. However, other people might disagree.

For instance, if you use a more old-school style that pre-dates next-gen sequencing, you'd probably right justify it: AAC -> C… or take it one step further and drop the C, giving you AA -> -. Yes, that's a dash. Between left and right justification, there's not much to say: it's either one standard or the other. Right justification is used by a lot of databases, such as ClinVar, where many (most? all?) of the known deletions are pulled from clinical papers, which adopted that as the standard.

However, that’s far from the worst you can do.  You can also add one step to the confusion and pad your variant.  For instance, you could also represent the deletion of the two As with GAAAC->GAC.  Now, you’ll see it’s anchored on the left and the right, which is not necessarily a bad thing, but it is redundant.  You don’t need both for an unambiguous representation of the indel.  This is a non-reduced representation of the variant.  You can make them more confusing, if you try, though.  There are no bounds to the padding you can add.  Want a simple SNV to look more complicated?  How about: ACGTACTCGGCTAG->AGGTACTCGGCTAG. I would probably just shift the position over by one to the right and call it a C->G variant, and drop the padding.

Why do people not use reduced representations, though?  Because the padding is more convenient for them.  Here's an example I got from ExAC:  GAAA -> G,GA,GAA.  See what they've done there?  It's actually three variants at the same position that I would represent with three different reference sequences, but by padding the variants, they can place them all on one line: GA->G, GAA->G and GAAA->G.  If you don't know that they've done this, it's a bit surprising.  Indeed, I had to write to them to ask about it, because it wasn't intuitively obvious to me why they show reduced variants on their web page, but distribute a VCF file with non-reduced variants.  There is a blog post about how to reduce variants, but as of last week, it wasn't referenced in the readme files of their FTP site.
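Reducing a padded variant is mechanical enough to fit in a few lines. Here's a minimal sketch of my own, using 1-based positions and plain Python strings:

def reduce_variant(pos, ref, alt):
    """Trim the shared suffix, then the shared prefix, keeping one anchor base."""
    while len(ref) > 1 and len(alt) > 1 and ref[-1] == alt[-1]:
        ref, alt = ref[:-1], alt[:-1]
    while len(ref) > 1 and len(alt) > 1 and ref[0] == alt[0]:
        ref, alt = ref[1:], alt[1:]
        pos += 1
    return pos, ref, alt

# The padded "SNV" from above reduces to a plain C->G, one base to the right:
print(reduce_variant(100, "ACGTACTCGGCTAG", "AGGTACTCGGCTAG"))  # (101, 'C', 'G')

# And the ExAC-style multi-allele line decomposes into anchored deletions:
for alt in ("G", "GA", "GAA"):
    print(reduce_variant(100, "GAAA", alt))  # GAAA->G, GAA->G, GA->G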

Regardless, ExAC isn’t the only one to use non-reduced representations – dbSNP does it as well, and I haven’t even begun to look at the myriad of other data sources we depend on for indel interpretation. It was rightly pointed out to me that non-reduced representations are not forbidden in the VCF 4.2 standard.  It’s definitely not forbidden, but then again, as a community, taking the position that anything not forbidden is allowed is a dangerous path for those who would like to see a unified standard.  We’re just not going to converge on the same page, if we keep stuff like this going.

Alas, indels are a difficult minefield.  They are hard to call, hard to represent, and hard to interpret.  We have a long path ahead of us to straighten it all out, but I don't doubt we'll get there.  This is just one more step we'll have to take in order to make sure we start getting these things right.


AMA for fun.

I’ve been asked by a few people to do an AMA, since I seem to be one of the few PhD-level Bioinformaticians working in industry who are active on the Reddit bioinformatics forum.  There are probably a lot of others, but I suspect that the bulk of people there are mostly graduate students or academics.

Anyhow, if anyone is interested in such silliness, here's the link.

Of course, I’m going to feel pretty silly about the whole thing if no one asks any questions…

Frontiers in Science LaTeX missing packages.

I'm working on a manuscript to be sent in to a Frontiers journal, and discovered a few missing dependencies for LaTeX, so I figured I'd share them here.

If you find you’re missing chngpage.sty, install texlive-latex-extra

If you find you're missing lineno.sty, install texlive-humanities

On a Mac, that’s:

sudo port -v install texlive-humanities texlive-latex-extra

Happy compiling.