#AGBTPH – AmirAli Talasaz, NGS analysis of circulating tumour DNA from 20,000 advanced cancer patients demonstrates similarity to tissue alteration patterns

AmirAli Talasaz, Guardant Health

Spatial and temporal heterogeneity are becoming more important.  Resistance clones are a major challenge.

Risk and cost of lung biopsies: 19.3% of patients experience adverse events, mostly pneumothorax.  Liquid biopsies bypass this.  Tumours release cell-free DNA – active tumours actively shed it – and we can collect it via a simple blood test.  ctDNA can be used to study tumour heterogeneity.

Tumour cell-free signals are very dilute, and standard NGS limits what you can see.  Take two samples: biobank one, process one.  Build a digital sequencing library, which involves non-random tagging; target capture of 70 genes, followed by error correction and bioinformatics.  After tens of thousands of samples, the performance has improved very well.
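
A minimal sketch of the molecular-tagging idea behind this kind of error correction (my illustration, not Guardant's actual pipeline): reads carrying the same tag at the same position form a molecular family, and only bases the whole family agrees on are kept, which suppresses PCR and sequencer errors.

```python
from collections import Counter, defaultdict

def consensus_calls(reads, min_family_size=3, min_agreement=0.9):
    """Group reads by (tag, position) into molecular families and keep only
    bases supported by a near-unanimous family.  Sequencer/PCR errors rarely
    recur across a whole family, so they are suppressed.  `reads` holds
    (tag, position, base) tuples; field names and thresholds are illustrative."""
    families = defaultdict(list)
    for tag, pos, base in reads:
        families[(tag, pos)].append(base)

    consensus = {}
    for key, bases in families.items():
        if len(bases) < min_family_size:
            continue                       # too few duplicates to trust
        base, count = Counter(bases).most_common(1)[0]
        if count / len(bases) >= min_agreement:
            consensus[key] = base          # family agrees: keep the call
    return consensus

# Toy data: one concordant family and one family carrying a PCR error.
reads = [("AAGT", 101, "T"), ("AAGT", 101, "T"), ("AAGT", 101, "T"),
         ("CCGA", 101, "T"), ("CCGA", 101, "C"), ("CCGA", 101, "T")]
print(consensus_calls(reads))  # {('AAGT', 101): 'T'}; the discordant family is dropped
```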

Call CNVs, SNVs, SVs and epigenetic signals.  Variant calling and interpretation.

Half of reported variants occur below 0.4% MAF.  Reported somatic variants are highly variable – up to 97% MAF – but the median is 0.4%.

Accuracy at low VAF is excellent using this method – it correlates very well.

High detection rate across most cancers – better for cancers like liver (92%); brain is in the 50s because of the blood–brain barrier [57%?].

Typical driver mutations are frequently found, ranging from 100% for one EGFR variant down to 13–27% for a different EGFR variant.

TCGA and ctDNA have similar mutation patterns (with some exceptions, where ctDNA reflects the general heterogeneity of cancer).

Fusion calls are similar.

When you have access to treatment data at time of blood draw, you can see actionable resistance variants.  27% of resistance mutations found in ctDNA are potentially actionable.

Example with NSCLC – biomarkers were only complete for 37%.

Example with a clinical trial – using ctDNA, they were able to make excellent predictions in a large fraction of cases, with favourable outcomes in the majority of cases.

[Wasn’t a real summary so here’s mine: – ctDNA is a thing, and their protocol seems to be working.  Very cool.]

#AGBTPH – David A. Shaywitz, Building a Global Network for Transcriptional Discovery and Clinical Application

[Missed the title – Edit: Here it is!  Thanks to Dr. Shaywitz]

Two futures: do we have the will to harness all of the information we’ve collected and build a future that takes advantage of it?

Balanced between Centralized and Globally Distributed.

DNAnexus in clinical care: distributed patient care, centralized data processing, delivering consistent results.

precisionFDA Appathon – https://precision.fda.gov/challenges/app-a-thon-in-a-box

[Edit – thanks to Dr. Shaywitz for the link!]

Ideal Global Network capabilities: global reach, security, rich tool set, qualified collaboration, indexing.

Three examples.

Regeneron: Revolutionize how pharma is done. Driven by science instead of trends.

Data is not equal to impact.  Innovation is driven by people, by intention, impact. [A lot of rapid fire hypothetical examples, including nature vs. environment, tipping points, important to understand data.]

The role of DNAnexus is to empower people with the tools they need.

Singapore Data Federation: government sponsorship -> improved hospital care.  Improve care by understanding data and understanding populations.

ORIEN Cancer Research Network: cancer centres brought together with others who want to consume the data they have (access to patients).  Cancer centres benefit from collaboration and access to data they couldn’t afford; companies get access to the patients they need to fuel research.  The beginning of a Networked Future.

DNAnexus is at the centre of all of this.  Drive precision medicine by being the hub that connects all of it.

 

#AGBTPH – Alan Shuldiner, DiscovEHR(y) of new drug targets and the implementation of precision medicine.

Alan Shuldiner, Regeneron Genetics Center.

A lot of R&D dollars are spent on new drugs, and the vast majority fail, mainly in late phases for lack of efficacy.  The pre-clinical models that get you that far just aren’t good predictors of efficacy in humans.

Successful use of genetics based approaches:

Example with PCSK9, which led to PCSK9 antibodies. (2003 -> 2012)

Example with Null APOC3, which led to antisense inhibitors (2008 -> 2015)

Target discovery is great application.  Also: Indication discovery, Biomarkers and De-risking.

Created ultra-high-throughput sequencing and analysis at Regeneron, focused on exomes: 200,000 samples prepped per year, 150,000 samples sequenced per year.  Cloud-based platform, with DNAnexus involved.

What do they sequence?  General populations, family studies, founder populations, phenotype specific cohorts.

Mendelian disease collaborations:  126 families with novel disease genes, 92 with novel variants in known genes…  some others that have not yet been solved.

DRIFT consortium: discovery research investigating founder population traits.

Geisinger-Regeneron DiscovEHR Collaboration: comprehensive genotype/phenotype resource combining de-identified genomic and clinical data.  Transform genotype into improvements in patient care.  >115,000 patients in the biobank, 60,000+ exomes sequenced.  Large, unselected population, with some focus on specific conditions where the data is particularly interesting.

Constantly growing library of phenotypes to associate with genomic studies.  Includes some case/control sets, rich quantitative traits, and measurements like EKGs.

Discovered over 4 million variants (from 50,726 patients) – the vast majority have frequency < 1%.

Some examples of how the data is being put to use. Protective variant for cardiovascular disease.

What if everybody’s genome were available in their EHR?  The GenomeFirst program.

Return actionable results into the EHR: 56 ACMG genes + 20 more they believe are actionable.  The exome is done in a non-CLIA lab, but a second sample is run in a CLIA environment to confirm.

In 50,000 exomes, ~4.4% of study participants will test positive for one of the actionable events.

This has helped early diagnosis become much more effective and allows preventative treatment.

Conclusions:  Genome based discovery for drug design, actionable items possible, and can have a huge impact on human health.

 

 

#AGBTPH – Nicolas Robine, NYGC glioblastoma clinical outcome study: Discovering therapeutic potential in GBM through integrative genomics.

Nicolas Robine, New York Genome Center  (@NotSoJunkDNA)

Collaborate with IBM to study Glioblastoma.

Big workup: Tumour normal WGS, tumour RNA-Seq, methylation array.

Pipeline: FASTQ, BAM, 3 callers each for {SNV, INDEL, SV}.  RNA-Seq uses FusionCatcher and STAR-Fusion, with alignment by STAR.

It’s hard to do tumour/normal comparison for expression, so you need an estimate of each gene’s baseline.  Use TCGA RNA-Seq as a background for comparison.  Some z-score normalization results looked suspicious, corresponding to regions of high GC content, so they used EDASeq for normalization and ComBat for batch-effect correction.  Z-scores change over the course of the study, which is uncomfortable for clinicians.
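
A minimal sketch of the per-gene z-score step described here, assuming the sample and the TCGA background have already been normalized upstream (the EDASeq GC correction and ComBat batch correction are not reproduced):

```python
import numpy as np

def expression_zscores(sample_tpm, background_tpm):
    """Per-gene z-score of one tumour sample against a background cohort.

    sample_tpm: 1-D array of expression values for the sample, one per gene.
    background_tpm: 2-D array (cohort samples x genes) from the reference
    cohort, e.g. TCGA.  Values are log-transformed first so outliers do not
    dominate the mean and standard deviation."""
    log_sample = np.log2(np.asarray(sample_tpm, float) + 1.0)
    log_bg = np.log2(np.asarray(background_tpm, float) + 1.0)
    mu = log_bg.mean(axis=0)
    sigma = log_bg.std(axis=0, ddof=1)
    return (log_sample - mu) / np.where(sigma > 0, sigma, np.nan)

# Toy example: gene 2 is strongly over-expressed relative to the background.
bg = np.array([[10.0, 5.0], [12.0, 6.0], [11.0, 5.5], [9.0, 4.5]])
print(expression_zscores([10.5, 40.0], bg))
```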

Interpretation: 20 h FTE per sample.  Very time consuming, with lots of steps, culminating in a clinical report delivered to the referring physician.  They use Watson for Genomics to help.  An Oncoprint is created as well.

Case study presented: a very nice example of evidence, with variants found and RNA-Seq used to identify complementary deletion events, culminating in the patient being enrolled in a clinical trial.

Watson was fed the same data – it solved the case in 9 minutes!  (Recommendations were slightly different, but the same issues were found.)  If the same sample is given to two different analysts, the same issue arises.  It’s not perfect, but it’s not completely crazy either.

Note: don’t go cheap!  Sequence the normal sample.

[Wow]: 2/3rd of recommendations were done based on CNVs.

Now in second phase, with 200 cases, any cancer type.  29 cases complete.

What was learned: they identified novel variants in most samples, and there are big differences between gene panel testing and WGS.  They built a great infrastructure, and Watson for Genomics can be a great resource for scaling this.

More work needed, incorporating more data – and more data needed about the biology – and more drugs!

[During questions – First project: 30 recommendations, zero got the drugs.  Patients are all at advanced stages of cancer, and it has been difficult to convince doctors to start new therapies.  Better response with the new project.]

#AGBTPH – Ryan Hartmaier – Genomic analysis of 63,220 tumours reveals insights into tumour uniqueness and cancer immunotherapy strategy

Ryan Hartmaier, Foundation Medicine

Intersection of genomics and cancer immunotherapy: neoantigens are critical – identified through NGS and prediction algorithms.  Can be used for immune checkpoint inhibitors or cancer vaccines.

Extensive genetic diversity within a given tumour.  (The mutanome.)

Difficult to manufacture and scale, thus expensive therapeutics.  However, TCGA datasets (and others) reinforce that individualized therapies make sense.  No comprehensive analysis of this approach across a large data set has yet been done.

NGS-based genomic profiling for solid tumours.  FoundationCore holds data.

At time of analysis, 63,220 tumours available.  Genetic diversity was very high.

Mutanomes are unique and rarely share more than 1-2 driver mutations.  Thus, define a smaller set of alterations that are found across many tumours.  This can be done at the level of genes, alteration type, variant, or coding short variants.  It led to about 25% of tumours having at least one overlap with a shortlist of 10 genes.

Instead of trying to design an individual immunogen therapy for each person, look for immunogens that could be used commonly across many people.  Use MHC-I binding prediction to identify specific neoantigens.  1-2% of patients will have at least one of these variants.
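
A sketch of the kind of shortlist filter this implies, assuming predicted MHC-I binding affinities are already available per peptide/allele pair (real pipelines use predictors such as NetMHC; the 500 nM "binder" cutoff is a common convention, not necessarily the one used in the talk):

```python
def shortlist_neoantigens(peptides, patient_hla_alleles, affinity_nM, cutoff_nM=500.0):
    """Keep mutant peptides predicted to bind at least one of the patient's
    MHC-I alleles.  `affinity_nM` maps (peptide, allele) -> predicted IC50;
    in practice these predictions come from an external tool.  The 500 nM
    threshold is a conventional binder cutoff, used here only for illustration."""
    hits = []
    for pep in peptides:
        for allele in patient_hla_alleles:
            ic50 = affinity_nM.get((pep, allele))
            if ic50 is not None and ic50 <= cutoff_nM:
                hits.append((pep, allele, ic50))
                break   # one binding allele is enough to keep the peptide
    return hits

# Toy example with made-up affinities for two recurrent-mutation peptides.
affinities = {("KLVVVGAGGV", "HLA-A*02:01"): 320.0,   # predicted binder
              ("KLVVVGADGV", "HLA-A*02:01"): 4200.0}  # predicted non-binder
print(shortlist_neoantigens(["KLVVVGAGGV", "KLVVVGADGV"],
                            ["HLA-A*02:01"], affinities))
```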

Multi-epitope, non-individualized vaccines could be used, but they only apply to 1-2%.

Evidence of immunoediting in driver alterations.  Unfortunately, driver mutations produce fewer neoantigens.

Discussion of the limits of the method, but there is much room for improvement and expansion of the experiment.

Conclusion: tumour mutanomes are highly unique.  About 25% of tumours share at least one coding mutation with a common shortlist, but the potential to build non-individualized vaccines is limited to 1-2% of the population.  Drivers tend not to produce neoantigens.

 

#AGBTPH – Jeff Barrett, Open innovation partnerships to bridge the gap from GWAS to drug targets.

Jeff Barrett, Wellcome Trust Sanger Institute

Drug development almost always fails (Hay et al., 2014).  85%+ of molecules never make it to the clinic.  A large fraction of late-phase failures happen because of lack of efficacy.

What are ways we can improve that rate?  Human genetics can be very helpful – targets with genetic evidence are dramatically more likely to work.  (That’s a retrospective look, but can we use it to predict?)  Can we develop preclinical models that help?

GSK, EMBL-EBI and Sanger came together, and added Biogen as a partner.  We all need to collaborate – based at the Wellcome Genome Campus in the UK.

Two things they want to do:  1. Create a bioinformatics platform that integrates as many data sources as possible in a systematic way.  2. Do large-scale investigation (high-throughput genomics).

The first one is hard: combining all these data sources meant developing a unified model for integrating them.  TargetValidation.org.

[Very cool demo here!]

They don’t want to be the database of record for this – instead they are a portal and integration point for others.

http://opentarget.org/projects

Target information is the key outcome.  Genome scale where possible – physiological relevance to disease.  Key technologies: use strengths from the partners (Ensembl).

20 open projects, 3 disease areas, human cellular experiments, leveraging genetics to build and enable resources to do more experiments.  Encourage a cycle that improves data, which improves platforms, etc.

Example using Genetic screens of Immune cell function. (Use iPS derived macrophages.)  Pragmatic approach to make it possible to do this.

Mission: pre-competitive approach, committed to rapid publication, non-exclusive partnerships.

Example: IBD.  Lots of data to sift through.  We have an amazing machine that finds gene associations, but we need a new one for understanding causal variants.

  • high marker density and big sample size to do fine mapping. (Using chips, 60,000 samples + imputation.)
  • Super clean dataset & novel stats technique.  Data QC is very important
  • Building cell-specific maps (cell specificity is critical)
  • Zoom in with Sequencing (WGS)
  • Disease relevant cellular models.

We are doing a good job of finding variants, and almost all of them are non-coding.  Non-coding variants may be altering expression.  eQTL analysis done: not much more found than you would expect by chance.  We don’t have a good handle on what expression changes are doing (in IBD).

Part of what’s hiding this is that we’re looking in the wrong types of cells.  The effects of non-coding variants are likely to be specific to particular cell types.

Discussion of cases where a target’s presence in other cell types makes a drug dangerous, because it has adverse effects in tissues other than the one desired.

All this data can be brought together – tissue sample all the way to in vitro testing.

What makes this unique?  Bringing everything together.

#AGBTPH – Jonathan Berg, ClinGen and ClinVar: Approaches to variant curation and dissemination for genomic medicine.

Jonathan Berg, University of North Carolina, Chapel Hill

Views himself as an ambassador for ClinGen – What can ClinGen do for you?  What can you do for ClinGen?

The problem: our ability to detect variants has outpaced our ability to interpret their clinical impact.

Data is fragmented into many different academic databases, or held in clinical testing databases.  Want all this to be freely available and rich for use in medicine and research.

Many partners who are involved.

Building a genomic knowledge base to improve patient care.  Includes clinical domain working groups to curate the clinical genome.  Cover lots of different areas that are clinically relevant: cancer, mendelian, etc.

Engaging communities directly to encourage variant deposition.  

ClinVar: global, archival, aggregates information.  Takes assertions about a specific variant.  NCBI maintains provenance, so you know the origin – who, what, why, etc.  Submissions continue to climb for ClinVar.

Also worth noting that data has a rating system, so you can rapidly work out the reliability of the data.  Discussion of rating system, including “Expert Panels” who have brought together many different sources to collaborate on a single variant classification scheme.

GenomeConnect – another way to get the data into the public domain.

Is a gene associated with a disease?

Set out to define qualitative descriptors.  Six categories of gene-disease assertions: definitive, strong, moderate, limited, disputed, refuted.  Based on strength of genetic evidence, strength of functional evidence, replication, test of time, and strength of curation.  Have an expert classification system to work through the rating.

ACMG standards and guidelines have been adopted as a framework, and expert groups are using them as well.  A huge amount of work goes into interpretation, and it may be slightly different gene by gene.

Building a Genomic Knowledge base:

Website for ClinGen.  You can follow along with the working groups there.  They would love feedback on the website and usability.

Major focus on Electronic Health Records.

Need ways to integrate genomics into the EHR, e.g. the OpenInfobutton system for links to external resources.

Conclusions:

Big group of people doing many many different things – working hard to make the data accessible.

 

Jonathan Marchini, University of Oxford – Phasing, imputation and analysis of 500,000 UK individuals genotyped for UK biobank #AGBTPH

Major health resource aimed at improving the prevention and treatment of disease.  Available to academic and commercial researchers worldwide.  (Not completely free – you have to have a good reason to use it, etc.)

Baseline questionnaire (touch screen), 4-minute interview, baseline measures.  Some subsets had additional tests.  Participants with enhanced phenotyping were asked to do further specific tests and questionnaires as well.

Whole genome genotyping with a bespoke array.

Axiom SNP array – 830k.  Run on all participants.

First step: quality control.  Provide a robust set of quality-control measures, and also provide researchers with useful genetic properties such as genetic ancestry.

PCA done on individuals, showing geographic genetic ancestry.  [Very typical plot of first 2 PCs.]
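
A toy version of the ancestry PCA behind a plot like that, assuming a genotype dosage matrix (individuals x SNPs, values 0/1/2); biobank-scale tools do the same thing with far more efficient solvers.

```python
import numpy as np

def genotype_pcs(genotypes, n_pcs=2):
    """Top principal components of a genotype dosage matrix
    (individuals x SNPs, values 0/1/2).  Each SNP is mean-centred and scaled
    by its expected binomial standard deviation before SVD, as is standard
    for ancestry PCA; this is a toy illustration, not the UK Biobank pipeline."""
    G = np.asarray(genotypes, dtype=float)
    p = G.mean(axis=0) / 2.0                       # allele frequency per SNP
    sd = np.sqrt(2.0 * p * (1.0 - p))
    X = (G - 2.0 * p) / np.where(sd > 0, sd, 1.0)  # standardise each SNP
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_pcs] * S[:n_pcs]                # PC coordinates per individual

# Toy example: 4 individuals, 3 SNPs; the first two rows cluster together.
G = [[0, 1, 2], [0, 1, 2], [2, 1, 0], [2, 0, 0]]
print(genotype_pcs(G))
```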

Family relatedness: “Found a considerable number, rather more than we expected” – 148,000 individuals with a relative (cousin or closer) in the cohort.  Can be useful, but it’s important to know that not all individuals are independent data points.

3.2 billion bases in the genome, but only 800,000 positions were measured.  What can be said about the unmeasured fraction?  Use statistical methods to estimate haplotypes (haplotype estimation – phasing).  Used their tool SHAPEIT2, which was OK but not great because one step had O(n^2) behaviour; the code was modified to O(n log n), using hierarchical clustering in a local region.

Applied method to data set – (Nature Genetics)

Tested the software using 72 trios.  Run time: 15 minutes; switch error rate: 2.6%; total sample size: 1,072.  The method was to phase the children using the trios, then remove the parents and phase again in a group.  If the phase changes, that’s an error.

If the sample size is increased to 10,000, you do much better: the error rate goes to 1.5%.  At 150,000 samples, the error rate goes to 0.3% (run time: 38 hours).  “Making just a handful of errors.”
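
Switch error rate is the metric being quoted; a minimal sketch of how it can be computed against trio-derived truth phase (one haplotype per sample, restricted to its heterozygous sites):

```python
def switch_error_rate(truth_hap, inferred_hap):
    """Fraction of adjacent heterozygous-site pairs whose relative phase is
    flipped in the inferred haplotype compared with the trio-derived truth.
    Both inputs are sequences of 0/1 alleles for ONE haplotype of the sample,
    restricted to its heterozygous sites (the other haplotype is the complement)."""
    opportunities = len(truth_hap) - 1
    switches = 0
    # Track whether we are currently "in phase" or "out of phase" with truth.
    flipped = truth_hap[0] != inferred_hap[0]
    for t, i in zip(truth_hap[1:], inferred_hap[1:]):
        now_flipped = (t != i)
        if now_flipped != flipped:
            switches += 1
            flipped = now_flipped
    return switches / opportunities if opportunities > 0 else 0.0

# One switch halfway along -> 1 error over 5 opportunities = 20%.
print(switch_error_rate([0, 1, 1, 0, 1, 0], [0, 1, 1, 1, 0, 1]))
```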

Imputation.

Use existing data sets where haplotypes are known.  This is called imputation: you match your genotyped SNPs against existing haplotypes to guess what lies in between.  (In practice, you can use many matches, and an HMM to make the best guess at the answer.)  The algorithm is called IMPUTE4 – 10 minutes per sample.
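
A toy sketch of the haplotype-copying idea, not IMPUTE4 itself (IMPUTE4 and the underlying Li-Stephens model average over many reference haplotypes with an HMM and allow switching between them along the chromosome):

```python
def impute_from_reference(typed_pos, typed_alleles, reference_haps):
    """Toy single-haplotype imputation: pick the reference haplotype agreeing
    best with the observed alleles at genotyped positions, then copy its
    alleles at all untyped positions.  Real imputation combines many reference
    haplotypes probabilistically rather than copying a single best match."""
    best = max(reference_haps,
               key=lambda h: sum(h[p] == a for p, a in zip(typed_pos, typed_alleles)))
    imputed = list(best)
    for p, a in zip(typed_pos, typed_alleles):
        imputed[p] = a            # keep the directly observed alleles
    return imputed

# Toy panel of 3 reference haplotypes over 8 sites; only sites 0, 3, 7 were typed.
panel = [[0, 0, 1, 1, 0, 0, 1, 1],
         [1, 1, 0, 0, 1, 1, 0, 0],
         [0, 1, 1, 1, 0, 1, 1, 1]]
print(impute_from_reference([0, 3, 7], [0, 1, 1], panel))
```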

800,000 SNPs –>  80 Million Imputed SNPs.  [Mostly accurate, from tests shown and getting better all the time.]

Example: standing height.

Using biobank SNPs, you don’t see much with 10,000 individuals.  With 150,000 biobank individuals, you can see a few more regions of interest.  At 350,000 individuals (the subset with homogeneous ancestry), you can find several relevant regions.  If you apply imputation on top, you can see many regions that are likely to interact with the trait.  (Adding imputation actually lets you see details on genes that aren’t present in the original SNP set.)
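
For reference, the core of a per-SNP association test like those behind these height results is just a regression of phenotype on genotype dosage; real biobank analyses add covariates (age, sex, ancestry PCs) and mixed models for relatedness. A minimal sketch:

```python
import numpy as np
from scipy import stats

def snp_association(dosages, phenotype):
    """Simple additive-model test: regress the phenotype (e.g. standing height)
    on genotype dosage (0/1/2) for one SNP and return the effect size and
    p-value.  Illustration only -- no covariates or relatedness adjustment."""
    res = stats.linregress(np.asarray(dosages, float), np.asarray(phenotype, float))
    return res.slope, res.pvalue

# Toy example: carriers of the alternate allele are slightly taller.
dos = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
height = [168, 170, 172, 171, 175, 176, 169, 173, 174, 172]
print(snp_association(dos, height))
```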

They leave the validation of this data to other researchers.

Full release will probably happen early next year.

http://www.ukbiobank.ac.uk

#AGBTPH – Fowzan Alkuraya – It’s your variant, it’s your problem, and mine

Fowzan Alkuraya, Alfaisal University

We currently know that a small number of variants are benign, and a smaller number are pathogenic.  The idea is to drive this towards knowing every possible variant.  Even if we could classify every variant, it would be outdated shortly.  However, we can use phenotype, which keeps up with the gene pool – that way we can ask how the genotype translates to phenotype.  It’s not really that easy…

The formidable challenge of heterozygosity.

We are robust to heterozygous mutations, obviously.

Gene-level challenge: Is the gene dispensable?  Is there a non-disease phenotype?  Is it a recessive disease phenotype?

Variant level: some we’ll never see because they’re embryonically lethal.  Some may never be clinically consequential.  Non-coding variants?  Truncating variants in dominant genes with no phenotype?

Fortunately, it’s all in the same species!  And, if we can show something is pathogenic, we can know that for next time.  Exploiting the special structure of the Saudi population to improve our understanding of the human genome.

  • High rates of consanguinity – endless source of homozygotes.
  • Large family size – great segregation power

Examples for Discovery of novel disease genes.

The general workflow: use predictive tools, frequency data, model organisms, etc., and use family data to identify how a variant exerts its effect.
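
A sketch of what such a filtering workflow might look like in code; the field names and thresholds (population allele frequency, CADD-style damage score) are illustrative only, not the speaker's actual criteria:

```python
def candidate_variants(variants, max_pop_af=0.001, min_cadd=20.0):
    """Filter candidate recessive-disease variants the way the workflow above
    describes: rare in population databases, predicted damaging, homozygous in
    affected family members and not homozygous in unaffected ones."""
    keep = []
    for v in variants:
        rare = v["pop_af"] <= max_pop_af
        damaging = v["cadd"] >= min_cadd
        segregates = (all(gt == "hom_alt" for gt in v["affected_gts"]) and
                      all(gt != "hom_alt" for gt in v["unaffected_gts"]))
        if rare and damaging and segregates:
            keep.append(v["id"])
    return keep

# Toy example: only the first variant is rare, damaging and segregating.
variants = [
    {"id": "chr1:123A>G", "pop_af": 0.0001, "cadd": 28.0,
     "affected_gts": ["hom_alt", "hom_alt"], "unaffected_gts": ["het"]},
    {"id": "chr2:456C>T", "pop_af": 0.02, "cadd": 25.0,
     "affected_gts": ["hom_alt"], "unaffected_gts": ["hom_alt"]},
]
print(candidate_variants(variants))
```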

At the end of the day, this data can be shared so that everyone can benefit from this knowledge.

In the second example, finding novel “lethal” genes.  You can’t do it statistically because it’s so rare; the best hope is to observe biallelic variants in non-viable embryonic tissue.  Showed a case in which a homozygous variant was present in all non-viable embryos from a single family.  They were able to do that without knowing anything about the biology of the gene.

What do they do with it?  They put it out so everyone can share in the knowledge.  You never know which family is going to be making life-altering decisions based on the variant.

Published it – it turned out to be the most frequent mutation in fetal losses in the Saudi population, and to be important in an endothelial protein (cerebral haemorrhages).

Now in ClinVar.

Example where it’s hard to understand the mechanism of the disease, and an example where prediction tools aren’t able to get it right.

How many variants are we just missing because they’re in the dark matter of the genome?  The ratio of variants in non-coding parts of the genome to variants in the coding part = ?

We don’t know either of these numbers, so it’s a hard problem.  Homozygosity mapping to the rescue for the challenge of non-coding mutations.
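
Homozygosity mapping reduces to finding long runs of homozygous genotypes shared by affected individuals from consanguineous families; a toy sketch of detecting such runs in one sample:

```python
def runs_of_homozygosity(genotypes, min_run=50):
    """Return (start, end) index ranges of long homozygous stretches along one
    chromosome.  `genotypes` is a list of 'hom'/'het' calls in genomic order;
    min_run is a toy threshold -- real homozygosity mapping uses physical
    length (e.g. megabases) and tolerates occasional genotyping errors."""
    runs, start = [], None
    for i, g in enumerate(genotypes):
        if g == "hom":
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(genotypes) - start >= min_run:
        runs.append((start, len(genotypes)))
    return runs

# Toy example with a lowered threshold so the run is visible.
calls = ["het"] * 5 + ["hom"] * 12 + ["het"] * 3
print(runs_of_homozygosity(calls, min_run=10))   # [(5, 17)]
```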

104 families with a recessive disease that maps to a single locus.  101 of 104 were found to have genic mutations – the vast majority of disease-causing mutations are in genes, then.

Good news: presumed non-genic mutations are <3%.

Bad news: many others will be missed for other reasons.

Demonstrated this with a sample cohort (33 families)

Catalogue of biallelic LoF variants in well-phenotyped individuals.  Able to find several genes that had been erroneously linked to disease phenotypes.

[My paraphrasing: So, in the end, we should all be concerned about all of the variants, and getting them right.]

#AGBTPH – Hakon Hakonarson, CHOP – Genomics-driven biomarker discovery and utilization: The precision medicine experience from CHOP

How they’ve leveraged their biobank into discoveries.

Novel gene editing technologies are coming – human examples are here as well.

CAG @ CHOP, founded in June 2006.  Recruit enough children that even rare conditions become common.  About 100,000 kids have been recruited, 70,000 can be reconnected with.  Almost 300,000 samples through collaborations in addition.

An early disease that was worked on: neuroblastoma – usually found in advanced stages and hard to treat.

Found some markers.  1% hereditary, 99% sporadic.

12% of sporadic cases had ALK mutations.  Existing drug for that, so was able to go straight to trial, which was successful and rapid.

Another project: neurocognitive phenotyping.  [Huge data-collection effort, covering a very broad set of data-gathering methods.]  ADHD was a component of it.  Identified CNVs in the cohort, which clustered on glutamate receptors in the brain (Elia, Glessner et al., 2011); replicated in 5 different cohorts.  CNVRs were overrepresented in the cohort.

Have seen similar things in other neurocognitive diseases.

At the time, there was no drug for the mGluR pathway.  There was, however, a drug that was indicated for another disease but didn’t make it to market.  They found that up to 20% of patients have glutamate-pathway copy number variants, and undertook new studies to demonstrate that the drug was useful for ADHD patients who have these mutations.  It ended up approved by the FDA down to 12 years of age.  IRB approval in November 2014, completed by May 2015.  Efficacy was extremely robust in this preliminary setting: 80% of patients had improvement following the highest dose.

Expanded to include new mutations that influence mGluR signalling, then expanded further to genes that influence those.

The tier-one response was much stronger; those with mutations in the expansion groups did not have as high a response.

Some overlap with children who had co-morbid autism (including 22q deletion).  Major improvement in social behaviour and language.

Started a separate trial for 22q11.2 deletion syndrome based on effects seen in earlier results.

Repurposing compounds that already have safety data makes for rapid drug trials.