Bioinformatics toolchain

Once again, it’s a Monday morning, and I’ve found myself on the ferry headed across the bay, thinking to myself: what could be better than crowdsourcing my bioinformatics toolchain, right?

Actually, this serves two purposes: it’s a handy guide for myself of useful things to install on a clean system, as well as an opportunity to open a conversation about what a bioinformatician should have on their computer. Obviously we don’t all do the same things, but the concepts should be the same.

My first round of installs was pretty obvious:

  • An IDE (PyCharm, community edition)
  • A programming language (Python 3.6)
  • A text editor (BBEdit… for now, and nano)
  • A browser (Chrome)
  • A package manager (Brew)
  • A python package manager (pip)
  • Some very handy tools (virtualenv, Cython)
  • A code cleanliness tool (pylint)

I realized I also needed at least one source control tool, so the obvious choice was a private GitHub repository.

My first order of business was to create a useful wrapper for running embarrassingly parallel processes on computers with multiple cores. I wrote a similar tool at my last job, and it was invaluable for getting compute-heavy tasks done quickly, so I rebuilt it from scratch, including unit tests. The good thing about that exercise was that it also gave me an opportunity to deploy my full toolchain, including configuring pylint (“Your code scores 10.0/10.0”) and GitHub, so that I now have some basic organization and a working environment. Unit testing also forced me to configure the virtual environment and the dependency chains of libraries, and ensured that what I wrote was doing what I expect.

All in all, a win-win situation.

I also installed a few other programs:

  • Slack, with which I connect with other bioinformaticians
  • Twitter, so I can follow along with stuff like #AMA17, which is going on this weekend.
  • Civ V, because you can’t write code all the time. (-:

What do you think, have I missed anything important?

A few hints about moving to Python 3.6 (from 2.7) with multiprocessing

To those who’ve worked with me over the past couple of years, you’ll know I’m a big fan of multiprocessing, which is a Python package that effectively spawns new processes, much the same way you’d use threads in any other programming language.  Mainly, that’s because Python’s GIL (Global Interpreter Lock) more or less throttles any serious attempt you might make to get threads to work.  However, multiprocessing is a nice replacement that effectively sidesteps those issues, allowing you to use as much of your computer’s resources as are available to you.

Consequently, I’ve spent part of the last couple of days building up a new set of generic processes that will let me parallelize pretty much any piece of code that can work with a queue.  That is to say, if I can toss a bunch of things into a pile, and have each piece processed by a separate running instance of code, I can use this library.  It’ll be very handy for processing individual lines in a file (e.g., VCF or FASTQ, or anything where the lines are independent).
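To make the pattern concrete, here’s a minimal sketch of a queue-based worker pool – not my actual wrapper library, just the general shape of it, with a trivial stand-in for the per-line work:

```python
# Minimal sketch of the queue-based pattern described above; the upper-casing
# "work" and the variable names are illustrative stand-ins, not the real library.
import multiprocessing as mp


def worker(task_queue, result_queue):
    # Pull items off the queue until the None sentinel arrives.
    for line in iter(task_queue.get, None):
        result_queue.put(line.upper())  # stand-in for real per-line processing


if __name__ == "__main__":
    tasks, results = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(tasks, results))
               for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()

    lines = ["acgt", "ggcc", "atat"]  # e.g. lines read from a VCF or FASTQ
    for line in lines:
        tasks.put(line)
    for _ in workers:
        tasks.put(None)               # one sentinel per worker

    print([results.get() for _ in lines])
    for w in workers:
        w.join()
```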

Of course, this post only has any relevance because I’ve also decided to move from Python 2.7 to 3.6 – and to no one’s surprise, things have changed.  In 2.7, I spent time creating objects that had built-in locks, and shared ctypes variables that could be passed around.  In 3.6, all of that becomes irrelevant.  Instead, you create a new object: a Manager().

The Manager is a relatively complex object, in that it has built-in locks – I haven’t figured out how efficient they are yet; that’s probably down the road a bit – which makes all of the Lock wrapping I’d done in 2.7 obsolete.  My first attempt at making it work was a failure, as it constantly threw errors that you can’t put Locks into the Manager.  In fact, you also can’t put objects containing locks (such as a multiprocessing Value) into the Manager. You can, however, replace them with Value objects from the Manager class.
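For instance, here’s a minimal sketch of the Manager-based version (the names are illustrative, not from my library): the counter and the lock both come from the Manager, so they can be passed to child processes without complaint.

```python
# Minimal sketch: share a counter and a lock via a Manager (Python 3.6).
# The names here are illustrative, not from the actual library.
import multiprocessing as mp


def increment(counter, lock):
    with lock:                # a manager-provided lock is safe to pass around
        counter.value += 1


if __name__ == "__main__":
    with mp.Manager() as manager:
        counter = manager.Value('i', 0)   # use the manager's Value...
        lock = manager.Lock()             # ...and its Lock, not mp.Value/mp.Lock
        procs = [mp.Process(target=increment, args=(counter, lock))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()
        print(counter.value)              # 4
```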

The part of the Manager that I haven’t played with yet is that it also seems to have the ability to share information across computers, if you launch it as a server process.  Although likely overkill (and network latency makes me really shy away from that), it seems like it could be useful for building big cluster jobs.  Again, something much further down the road for me.

Although not a huge milestone, it’s good to have at least one essential component back in my toolkit: My unit test suite passes, doing some simple processing using the generic processing class.  And yes, good code requires good unit tests, so I’ve also been writing those.

Lessons learned the hard way are often remembered the best.  Writing multiprocessing code out from scratch was a great exercise, and learning some of the changes between 2.7 and 3.6 was definitely worthwhile.

Dealing with being a lone bioinformatician – social media.

As I settle into my new job, I’ve quickly realized that I’m going to be a “lone bioinformatician” for a little while, and that I’m going to have to go back to my old habits of Twitter and blogging in order to keep up with the world around me.  In addition, I’m finding myself on Slack as well, in the reddit bioinformatics channel.  The idea is that I’ll be able to keep in touch with developments in my field better this way.

That said, my current following list is heavily tilted towards non-bioinformatics, so I’ve begun the long journey of purging my list.  (If I’ve unfollowed you… sorry!)  The harder part will be trying to figure out who it is that I should be following.

The bright side of this is that the long ferry rides at either end of my day are giving me time to do some of this work, which is an unexpected bonus. I had no idea that adding to my commute time would also add to my productivity.

That said, if anyone has any suggestions about who I should be following on Twitter or in blog format, please let me know – I’ll cheerfully compile a list of twittering/blogging bioinformaticians, or if you already know of a current list, I’d love to hear about it.

In the meantime, if you’re interested in joining a bioinformatics slack, please let me know, and I’d be happy to add you.

#AGBTPH – Kenna Mills Shaw, Precision oncology decision support: Building a tool to deliver the right drug(s) at the right time(s) to the right patient(s).

[I have to head to the airport to catch a flight, so I can’t stay for the whole talk… d’oh]

@kennamshaw

Very narrow definition of precision medicine:  Use NGS to find patients who may respond better to one drug or another, or be resistant to a class of drugs: just matching patients to drugs.

Precision medicine is completely aspirational for patients.  We still do a bad job of figuring out how to match patients with drugs.  Right now, we don’t do it well – or at all.

We’re all bad at it, actually.

  • which patients should get tested?
  • use data to impact care
  • demonstrating that data changes outcomes
  • deciding how much of genome to sequence
  • how do we pay for it?

Why was MD Anderson bad at it?

Patients of concern are those who have exhausted standard therapies, for instance.

Drop in cost leads to increases in data generation.  We all suck at using this data to impact outcomes for patients.  MD Anderson was only able to impact 11% of patients with potentially actionable information.

Whole exome sequencing at other institutes was getting 5% (Beltran et al.).

There are only 125 “actionable” genes.

NGS is not sufficient or necessary to drive personalized medicine.

Why?

  • Solid tumours lag behind liquid tumours because it’s hard to get the DNA.
  • Accessibility  – timing of data
  • Attitudes of doctors as well.

Leukaemia docs also use the molecular signature as well as other data to clarify.  Solid tumour docs do not.

Ignoring copy number, only 40% of patients have actionable variants.  (goes way up with copy number.)

Clinical trials categorized by type of match – even broadly, that’s 11% of patients.  Lack of enrolment not due to lack of available matched trials.

[Ok… time to go… alas, can’t stay to see the end of this talk.]

#AGBTPH – Imran Haque, Overcoming artificial selection to realize the potential of germline cancer screening

@imranshaque – Counsyl

Selfie-related deaths: an indiscriminate killer – equal risk for men vs. women.  40% of related deaths occurred in India.  10% of those who sing in the car…  About on par with shark attack deaths.

Cancer genomics is about 30 years old.  RB1 (1986).  Today many genes are known to be implicated in cancer.  Many of the more recent ones are less penetrant.

You can now get a commercial NGS test for 39-42 genes – and it’s relatively cheap.  How to get it: 1) get cancer, or 2) be related to someone who had cancer.

This model is under strain.

Access to “free” genetic testing for cancer risk is gated by personal and family history.

Very complicated decision tree.  Personal history of breast cancer (long list of tree)… or other cancers or many, many other factors.  Why is this bad?  Requires a 3rd-degree pedigree, which may be too complex for an appointment.  Only a small number of patients who qualify actually get the test: 7%.

Counsyl – First Care (product).  Helps you do your pre-test consult before you go into the clinic.  Then, offers follow-up with a genetic counsellor.  Reports it back to the physician for appropriate treatment.  Anecdotally doing very well and increasing the number of patients who qualify for free testing.

Some insurers require additional barriers to get testing.  Patients may also be required to do pre-testing.  This helps to bring genetic counselling into the picture, and guarantees that the right tests are being used.

Counsyl can evaluate that – a large segment of the population cancels the test if the requirement of pre-counselling is put in place.  Pre-test counselling is not being seen as a bonus.

Yield:

A good amount of cancers are driven by the same 2 genes (BRCA1/2).

Ability to integrate all high-risk genes into a single test + discovery of new “moderate risk” genes has nearly doubled the yield of diagnostic germline testing.  Expanded tests help, but still, total yields are around 7%.

Twin study: 1/3 of cancer risk comes from genetics.  Up to 55% for prostate cancer, but it depends widely on the type of cancer.

Breast cancer: 20% heritability from single-gene penetrant alleles

Prostate Cancer: 55% heritability, but <5% from known single gene effects.

[Survey of literature, covering screening, risk and actionability.]

Equity:

Most genetic studies are done on non-diverse cohorts.  VUS rates differ systematically by ethnicity: BRCA1/2 ~3% for Europeans, ~7% for Africans and Asians. Similar for larger cancer panels.  Correlates with panel size as well, and systematic across other diagnostic panels.

Lack of diversity in discovery cohorts leads to a seriously skewed ability to process non-European populations. Worse, possible misdiagnoses for non-white populations.

Conclusions:

Better systems to improve access; better studies to demonstrate the utility of bringing testing to a wider population.

Polygenic risk is important and needs to be studied.

Issues of diversity are still plaguing us.  Need to include much more diverse populations.

#AGBTPH – Stephan Kingsmore, Delivering 26-hour diagnostic genomes to all NICU infants who will benefit in California and Arizona: Potential impact, bottlenecks and solutions.

Rady Children’s Hospital.

Translating rapid whole genome sequencing into precision medicine for infants in intensive care units.

60 slides, 30 minutes… buckle up.

Largely, this was triggered by Obama and Collins.  In San Diego, Rady donated $160M and said “make this a reality.”

This is all still at an early stage.  We’re at the 0.25% mark… it’s going to take 10 years to deliver on this dream and make it medicine.

Scope: 35M people in California, and we can make it into a precision medicine centre.  Focus on newborns – when a baby is born, doctors will do anything to save the baby’s life.  In CA, all babies feed into a network of hospitals, down to specialized centres for expert care.  It’s a small number of health care systems that deliver care for babies.

Can we provide a scalable service like the NIH’s, and make an impact?

Why?  14% of newborns are admitted to the NICU or PICU. The leading cause of death is genetic disease: 8,250 genetic diseases.  Individually, they are rare, but aggregated they are common.  Conventional testing is too slow, and the cost of care is $4,000/day, so genomics is cheap comparatively.

Surviving: 35 babies in level 5 NICU… median survival is 60 days with genetic diseases…

Why single gene diseases?  They are tractable.  Looking for 1-2 highly penetrant variants that will poison a protein.  We have infrastructure that can deal with this information.  Orphan drugs are becoming a part of the scene.  Potentially, gene therapy might be scalable and real.

GAP: how do you scale the 26-hour diagnosis nationally?  Any clinic?  Where there are no geneticists… etc.

It is possible to have dynamic EHR agents that monitor constantly.  How do you do it for babies?  [Review case presented earlier in conference.]

Disease heterogeneity is an issue – children may not have yet grown into phenotype.  Vast number of diseases, limited number of presentations.  So, start by Data mining medical record, then translate into a differential diagnosis.  Use HPO to calculate a projection of symptoms, which can be checked against other disorders.

Computer-generated list of 341 diseases that may fit the features.

Also, then, need a genome/exome.  Which one do we do?  Speed, sensitivity and specificity.  Genomes: one day faster, exomes are cheaper.

[An old Elaine Mardis slide: Fiscal environment:  $1000 genome is still a $100,000 analysis.]

Have a big bioinformatics infrastructure.  Analytics are very good.  But, diagnostic metrics may not be as good.  Use standard filtering tools to work out causative variants.

Major goal should be to automate ACMG style classification.

Structural variants should be included.  Not yet applied in clinical practice.  We are also missing de novo genome assemblies… but that’s coming as well.

When 26 hour process works, it really works.

Big gap: genome reimbursement.  Quality of evidence is pretty poor.  Need more original research, more randomized controlled studies.  Standard testing of new diagnostic tests is not good enough.  Payers are far more interested in other metrics.

Other groups have studied this around the world, using exome sequencing.  Diagnosis rate ~28%, making it the most effective method.  (Can be 25-50%, depending on unknown characteristics.)  Quality of phenotype may be a big issue.

WES + EHR can help to raise to 51% diagnosis.

de novo mutations are leading cause of genetic diseases in infants.  Really, forced to test trios.  This is a “sea-change” for the field.

Study: Trio exome sequencing yields 7.4% more diagnoses over sequencing the proband alone.  [Not entirely convincing…]

Another study: 58% by WES vs. 14% by standard methods.  [And more studies – can’t show numbers fast enough.]

The faster you can turn around the diagnostic, the faster you can get a change in care.

No recurrent mutations in infants treated… but some presentations are enriched for successful diagnoses.

Move on to a randomized controlled study: just completed; admitted any NICU patient with a phenotype suggestive of genetic disease.  15% molecular diagnosis by standard tests, 41% diagnosis with rapid WGS.  Had to end the trial early because it was clear that WGS was making a massive impact.

Problems and solutions: Focus back on parents and families, who may have a different impression/understanding of testing or methods.  Don’t have enough experts to fill the gap: 850,000 MDs, but only 1,100 medical geneticists and 4,000 genetic counsellors. (Solution: more training, and possibly other experts?)

Triangle of priorities: Pick 2…

Scalable Clinical Utility <-> Rapid <-> Low Cost

Solution:

  • Process engineering – scalable highly efficient methods
  • Clinical Research – much better evidence than we have now
  • Education and Engagement – med students need more training, for instance.  (Currently they only get a day or a week of genetics…)


#AGBTPH – Mary Majumder, Prenatal testing

Baylor College of Medicine

Major worries: conveying the screening vs diagnostic distinction.  (Do we convey that well to those who need to know?)  Also, what to test for and report.  (How to support pregnant women and their partners.)

It’s hard to really communicate the difference between a diagnostic and a screen, when the screen is 99% accurate.

The personal toll of screens vs diagnostics can be significant.

When results come in, sometimes even the counsellors have to do research online.  Definitive information can be hard to come by.

[This presentation is being told through comments from people who went through the process – entirely anecdotally based.  Hard to take notes on. Basically, support is lacking, and information is frequently unclear and difficult to communicate.]

Responses to challenges:  Professional societies are trying hard to improve on current state.  General predictive power calculator.  Still some distance to go.

[I’m way out of my depth – this talk is delving into social problems in the U.S. as much as the technology and the biology.  Much of this is related to terminating pregnancies, which carries social stigma here.  It’s interesting, but I can’t separate the salient points from the asides.  The solutions to the problem mainly involve U.S.-specific government structures.  I can follow, but I don’t feel that I can take notes for others that accurately reflect what’s being communicated.]


#AGBTPH – Nicolas Robine, NYGC glioblastoma clinical outcome study: Discovering therapeutic potential in GBM through integrative genomics.

Nicolas Robine, New York Genome Center  (@NotSoJunkDNA)

Collaborating with IBM to study glioblastoma.

Big workup: tumour/normal WGS, tumour RNA-Seq, methylation array.

Pipeline: FASTQ, BAM, 3 callers each for {SNV, INDEL, SV}.  RNA-Seq uses FusionCatcher and STAR-Fusion, alignment with STAR.

It’s hard to do tumour/normal comparison, so you need to get an estimation of gene baselines.  Use TCGA RNA-Seq as background so you can compare.  Z-score normalization was suspicious in regions of high GC content.  Used EDASeq to do normalization, batch-effect correction with ComBat.  Z-scores change over the course of the study, which is uncomfortable for clinicians.

Interpretation: 20h FTE/sample.  Very time-consuming with lots of steps, culminating in a clinical report delivered to the referring physician.  Use Watson for Genomics to help.  Oncoprint created as well.

Case study presented: very nice example of evidence, with variants found and RNA-Seq used to identify complementary deletion events, which culminated in the patient being enrolled in a clinical trial.

Watson was fed the same data – solved the issue in 9 minutes!  (Recommendations were slightly different, but the same issues were found.)  If the same sample is given to two different people, the same issue arises.  It’s not perfect, but it’s not completely crazy either.

Note: don’t go cheap!  Sequence the normal sample.

[Wow]: 2/3rd of recommendations were done based on CNVs.

Now in second phase, with 200 cases, any cancer type.  29 cases complete.

What was learned: identified novel variants in most samples; big differences between gene panel testing and WGS.  Built a great infrastructure, and Watson for Genomics can be a great resource for scaling this.

More work needed, incorporating more data – and more data needed about the biology – and more drugs!

[During questions – First project: 30 recommendations, zero got the drugs. Patients are all at advanced phases of cancer, and it has been difficult to convince doctors to start new therapies.  Better response with the new project.]

#AGBTPH – Ryan Hartmaier – Genomic analysis of 63,220 tumours reveals insights into tumour uniqueness and cancer immunotherapy strategy

Ryan Hartmaier, Foundation Medicine

Intersection of genomics and cancer immunotherapy: neoantigens are critical – identified through NGS and prediction algorithms.  Can be used for immune checkpoint inhibitors or cancer vaccines.

Extensive genetic diversity within a given tumour (the “mutanome”).

Difficult to manufacture and scale, thus expensive therapeutics.  However, TCGA datasets (and others) reinforce that individualized therapies make sense.  No comprehensive analysis of this approach on a large data set has yet been done.

NGS-based genomic profiling for solid tumours.  FoundationCore holds data.

At time of analysis, 63,220 tumours available.  Genetic diversity was very high.

Mutanomes are unique and rarely share more than 1-2 driver mutations.  Thus, define a smaller set of alterations that are found across many tumours.  Can be done at the level of genes, types, variants or coding short variants.  Led to about 25% of tumours having at least one overlap with a shortlist of 10 genes.

Instead of trying to do single immunogen therapy for each person, look for those that could be used commonly across many people.  Use MHC-I binding prediction to identify specific neoantigens.  1-2% will have at least one of these variants.

Multi-epitope, non-individualized vaccines could be used, but would only apply to 1-2%.

Evidence of immunoediting in driver alterations.  Unfortunately, driver mutations produce fewer neoantigens.

Discussion of the limits of the method, but much room for improvement and expansion of the experiment.

Conclusion: tumour mutanomes are highly unique.  25% of tumours have at least one overlapping coding mutation; the potential to build vaccines is limited to 1-2% of the population.  Drivers tend not to produce neoantigens.


15 practical tips for bioinformaticians.

This is entirely inspired by a blog post of a very similar name from Xianjun Dong on the r-bloggers.com site.  The R-specific focus didn’t do much for me, given that R as a language leaves me annoyed and frustrated, although I do understand why others use it.  I haven’t come across Xianjun’s work before, and have never met him either online or in person, but I hope he doesn’t mind me revisiting his list with a broader scope.  Thanks to Xianjun for creating the original list!

I’ve paraphrased his points in underline, written out my response, and highlighted what I feel is the takeaway.  So, let’s upgrade the list a bit, shall we?

1. Use a non-random seed.  Actually, that’s pretty good, but the real point should extend this to all areas of your work:  determinism is the key both to debugging and to science – you need to be able to recreate all of your work upon demand.  That’s the basis of how we do science.
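A trivial illustration – the seed value itself doesn’t matter, what matters is that it’s fixed and recorded so the run can be reproduced:

```python
# Pin the seed so "random" draws are identical on every run.
import random

random.seed(42)                        # any fixed, recorded value will do
print(random.sample(range(100), 5))    # same five numbers, every time
```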

2.  The original said “set your own tmp directory” so that you don’t overlap toes with other applications.  Frankly, I’d skip that, and instead suggest you learn how the other applications work!  If you’re running a piece of code, take the time to learn it – and by extension, all of its parameters. The biggest mistake I see from novice bioinformaticians is trying to use code they’re not familiar with, and doing something the author never intended.  Don’t just run other people’s tools, use them properly!

3. An R-specific file name hint.  This point was far too R-centric, so I’ll just point you back to another key point: Take the time to learn the biology.  Don’t get so caught up in the programming that you forget that underneath all of the code lies an actual biology or chemistry problem that you’re trying to study, simulate or interpret.  Most often, the best bioinformatics solutions are the ones that are inspired by the biology itself.

4. Create a Readme file for your work. This is actually just the tip of the iceberg – Readme files are the last resort for any serious software project. A reasonable software project should have a wiki or a manual, as well as a host of other documentation. (Bug trackers, feature trackers, unit tests, example data files.)  The list should grow with the size of the project.  If your project is going to last more than a couple of weeks, then a readme file needs to grow into something larger.  Documentation should be an integral part of your coding practice, however you do it.

5. Comment your code.  Yes – please do.  But don’t just comment your code, write code that doesn’t need comments!  One of the reasons why I love Python is because there is a pythonic way to do things, and minimal comments are necessary to make it obvious what it’s supposed to do.  Of course, any time you think of a “clever” trick, that’s a prime candidate for extra documentation, and the more clever you are, the more documentation I expect.
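A toy illustration of the difference – both lines do the same thing, but only the first one makes the reader stop and think:

```python
# "Clever": abusing sum() to flatten a list of lists -- works, but needs a comment.
flat = sum([[1, 2], [3, 4]], [])

# Pythonic: the intent is obvious without any comment at all.
flat = [x for sublist in [[1, 2], [3, 4]] for x in sublist]
```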

6. Back up your code.  Yep – I’m going to agree with the original.  However, I do disagree with the execution.  Don’t just back up your code to an extra disk, get your code into version control.  The only person who doesn’t need version control is the person who never edits their code… and I haven’t met them yet.  If you expect your project to be successful, then expect it to mature over time – and in turn, that you’ll have multiple versions.  Trust me, version control doesn’t just back up, it makes code management and collaboration possible.  Three for the price of one… or for free if you use GitHub.

7. Clean up your intermediate data.  Actually, I think keeping intermediate data around is a useful thing while you’re working. Yes, biological data can create big files, and you should definitely clean up after yourself, but the more important lesson is to be aware of the resources that are available to you – of which disk space is just one.  Indeed, all of programming is a tradeoff between CPU, memory and disk, and they’re interchangeable, of course.  If you’re not aware of the space-time tradeoff, then you really haven’t started your journey as a bioinformatician.  Really – this is probably the most important lesson you can learn as a programmer.
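A tiny illustration of trading memory for time, using nothing but the standard library:

```python
# Cache results in memory (space) so they never have to be recomputed (time).
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(200))   # instant with the cache; effectively never finishes without it
```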

8. .bam, not .sam. This point is a bit limited in scope, so let’s widen it.  All of the data you’ll ever deal with is going to be in a less-than-optimal format for storage, and it’s on you to figure out what the right format is going to be.  Have VCFs?  Gzip them!  Have .sam files?  Make them .bam files!  Of course, this doesn’t just go for storage: do the same for how you access them.  That gzipped VCF?  You should have bgzipped it and then tabix indexed it.  Same goes for your FASTA file (faidx?), or whatever else you have.  Don’t just use compression, use it to your advantage.
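As a sketch of what that buys you (assuming pysam is installed; the file name here is purely illustrative), a bgzipped, tabix-indexed VCF lets you jump straight to a region instead of scanning the whole file:

```python
# Sketch only: assumes pysam is installed and variants.vcf.gz is bgzip-compressed.
import pysam

# Build the tabix index (force=True overwrites any existing index).
pysam.tabix_index("variants.vcf.gz", preset="vcf", force=True)

# Fetch just the records in a region, without reading the rest of the file.
vcf = pysam.TabixFile("variants.vcf.gz")
for record in vcf.fetch("chr1", 100000, 200000):
    print(record)
```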

9. Parallelize your code.  Oh man, this is a can of worms.  On the one hand, much of bioinformatics is embarrassingly parallelizable.  That’s the good news.  The bad news is that threaded/multiprocessed code is harder to debug and maintain.  This should be the last path you go down, after you’ve optimized the heck out of your code.  Don’t parallelize what you can optimize – but use parallelization to overcome resource limitations, and only when you can’t access the resources in any other way.  (If you work with a cluster, though, this may be a quick and dirty way to get more resources…)
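When you do go down that road, the quick and dirty version really is quick – a multiprocessing Pool maps a function over your items across all your cores (gc_content here is just an illustrative stand-in for the real work):

```python
# Minimal Pool example; gc_content is a stand-in for whatever per-item work you have.
from multiprocessing import Pool


def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)


if __name__ == "__main__":
    reads = ["ACGT", "GGCC", "ATAT", "GCGC"]
    with Pool() as pool:                    # one worker per core by default
        print(pool.map(gc_content, reads))  # [0.5, 1.0, 0.0, 1.0]
```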

10. Clean up and back up.  This was just a repeat of earlier points, so let’s talk about networking instead.  The best way to keep yourself current is to listen to what others have to say.  That means making time to go to conferences, reading papers, blogs or even Twitter.  Talk to other bioinformaticians because they’ll always have new ideas, and it’s far too easy to get into a routine where you’re not exposing yourself to whatever is new and exciting.

11. OOP: Inheritance, Encapsulation, Polymorphism. Actually, on this point, I completely agree.  Understanding object-oriented programming takes you from being able to write scripts to being able to write a program.  A subtle distinction, but it will broaden your horizons in so many ways, of which the most important is clearly code reuse.  And reusing your existing code means you start developing a toolkit instead of making everything a one-off.
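A toy example of all three ideas at once – a base class holding shared state, a subclass reusing it, and both answering the same call (the classes are made up purely for illustration):

```python
# Illustrative only: encapsulation (state lives in the class), inheritance
# (FastqRecord reuses SequenceRecord), and polymorphism (same method, both classes).
class SequenceRecord:
    def __init__(self, name, seq):
        self.name = name
        self.seq = seq

    def length(self):
        return len(self.seq)


class FastqRecord(SequenceRecord):
    def __init__(self, name, seq, quals):
        super().__init__(name, seq)
        self.quals = quals

    def length(self):
        return len(self.seq.strip("N"))   # toy override, just to differ


for record in (SequenceRecord("a", "ACGT"), FastqRecord("b", "ACGTN", "IIIII")):
    print(record.name, record.length())   # same call works on both
```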

12. Save the URL of your references. Again, great start, but don’t just save the URL of your references.  Make notes on everything. Whatever you find useful or inspiring, make a note in your lab book.  Wait, you think bioinformaticians don’t have lab books?  If that’s true, it’s only because you’ve moved on to something else that keeps a permanent record, like version control for your code, or electronic notebooks for your commands.  Make sure everything you do is documented.

13. Keep Learning.  YES!  This!  If you find yourself treading water as a bioinformatician, you’re probably not far from sinking.  Neither programming nor biology ever really stands still – there’s always something new that you should get to know.  Keeping up with both fields is tough, but absolutely necessary.

14. Give back what you learn.  Again, gotta agree here.  There are lots of ways to engage the community: share your code, share your experience, share your opinions, share your love of science… but get out and share it somehow.

15. Stand up on occasion.  Ok, I’ll go with this too.  The sitting/standing desks are fantastic, and definitely worth the money, if you can get one.  Bioinformaticians spend way too much time sitting, and you shouldn’t neglect your health.  Or your family, actually.  Don’t forget to work hard, and play hard.