#AGBTPH – Nicolas Robine, NYGC glioblastoma clinical outcome study: Discovering therapeutic potential in GBM through integrative genomics.

Nicolas Robine, New York Genome Center  (@NotSoJunkDNA)

Collaboration with IBM to study glioblastoma.

Big workup: tumour/normal WGS, tumour RNA-seq, methylation array.

Pipeline: FASTQ -> BAM, then 3 callers each for {SNV, indel, SV}.  RNA-seq uses FusionCatcher, STAR-Fusion, and alignment with STAR.
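The notes don't say how the three callers per variant class are reconciled. A common pattern – an assumption here, not a documented detail of the NYGC pipeline – is to keep calls supported by at least two of the three callers. A minimal sketch:

```python
# Toy consensus step for somatic variant calls from three callers.
# Assumes each caller's output has already been reduced to a set of
# (chrom, pos, ref, alt) tuples; the ">=2 of 3" rule is an assumption,
# not a documented detail of the pipeline described in the talk.
from collections import Counter

def consensus_calls(caller_outputs, min_support=2):
    """Return variants reported by at least `min_support` callers."""
    counts = Counter(v for calls in caller_outputs for v in set(calls))
    return {v for v, n in counts.items() if n >= min_support}

caller_a = {("chr7", 140453136, "A", "T")}
caller_b = {("chr7", 140453136, "A", "T"), ("chr1", 115258747, "C", "T")}
caller_c = {("chr1", 115258747, "C", "T"), ("chr7", 140453136, "A", "T")}

print(consensus_calls([caller_a, caller_b, caller_c]))
# -> both variants, each supported by at least two callers
```

For SVs the same idea is usually applied to approximate breakpoints rather than exact coordinates.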

It’s hard to do a tumour/normal comparison for expression, so you need an estimate of each gene’s baseline.  They use TCGA RNA-seq as background for comparison.  Some z-score-normalized values looked suspicious, corresponding to regions of high GC content.  Used EDASeq for normalization and ComBat for batch-effect correction.  Z-scores change over the course of the study, which is uncomfortable for clinicians.
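A minimal sketch of the per-gene z-score step described above, assuming the tumour and the TCGA background have already been normalized (EDASeq-style GC correction) and batch-corrected (ComBat-style); the gene names and values here are made up:

```python
import numpy as np
import pandas as pd

# Toy background cohort (stand-in for TCGA RNA-seq): rows = genes, cols = samples.
rng = np.random.default_rng(0)
background = pd.DataFrame(
    rng.lognormal(mean=5, sigma=1, size=(3, 100)),
    index=["EGFR", "CDKN2A", "PTEN"],
)
tumour = pd.Series({"EGFR": 4000.0, "CDKN2A": 3.0, "PTEN": 30.0})

# Work in log space, then z-score each gene against the background cohort.
log_bg = np.log2(background + 1)
log_tumour = np.log2(tumour + 1)
z = (log_tumour - log_bg.mean(axis=1)) / log_bg.std(axis=1)
print(z.round(2))  # high z for over-expressed EGFR, low for deleted CDKN2A
```

This also illustrates why z-scores can drift over a study: they depend on the background cohort and how it was normalized.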

Interpretation: ~20 hours of FTE time per sample.  Very time-consuming, with lots of steps, culminating in a clinical report delivered to the referring physician.  They use Watson for Genomics to help.  An OncoPrint is created as well.

Case study presented: a very nice example of evidence, with variants found and RNA-seq used to identify complementary deletion events, which culminated in the patient being enrolled in a clinical trial.

Watson was fed the same data – it solved the case in 9 minutes!  (The recommendations were slightly different, but the same issues were found.)  If the same sample is given to two different human analysts, the same issue arises.  It’s not perfect, but it’s not completely crazy either.

Note: don’t go cheap!  Sequence the normal sample.

[Wow]: two-thirds of recommendations were based on CNVs.

Now in the second phase, with 200 cases of any cancer type.  29 cases are complete.

What was learned: novel variants are identified in most samples; there are big differences between gene-panel testing and WGS; they built a great infrastructure; and Watson for Genomics can be a great resource for scaling this.

More work needed, incorporating more data – and more data needed about the biology – and more drugs!

[During questions – first project: 30 recommendations, zero patients actually got the drugs.  Patients are all at advanced stages of cancer, and it has been difficult to convince doctors to start new therapies.  Better response with the new project.]

#AGBTPH – Ryan Hartmaier – Genomic analysis of 63,220 tumours reveals insights into tumour uniqueness and cancer immunotherapy strategy

Ryan Hartmaier, Foundation Medicine

Intersection of genomics and cancer immunotherapy: neoantigens are critical – identified through NGS and prediction algorithms.  They can be used for immune checkpoint inhibitors or cancer vaccines.

Extensive genetic diversity within a given tumour (the mutanome).

Individualized therapeutics are difficult to manufacture and scale, and thus expensive.  However, TCGA data sets (and others) reinforce that individualized therapies make sense.  No comprehensive analysis of this approach on a large data set has yet been done.

NGS-based genomic profiling for solid tumours.  FoundationCore holds data.

At the time of analysis, 63,220 tumours were available.  Genetic diversity was very high.

Mutanomes are unique and rarely share more than 1-2 driver mutations.  Thus, define a smaller set of alterations that are found across many tumours.  This can be done at the level of genes, variant type, specific variant, or coding short variants.  It led to about 25% of tumours having at least one overlap with a 10-gene shortlist.
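A toy sketch of the tally described above – counting tumours that carry at least one alteration from a fixed shortlist (gene names and calls are made up):

```python
# Each tumour is represented by the set of genes carrying a qualifying alteration.
shortlist = {"KRAS", "PIK3CA", "BRAF", "EGFR"}   # stand-in for the 10-gene shortlist

tumours = {
    "T1": {"TP53", "KRAS"},
    "T2": {"CDKN2A"},
    "T3": {"PIK3CA", "PTEN"},
    "T4": {"APC", "SMAD4"},
}

covered = [name for name, genes in tumours.items() if genes & shortlist]
print(f"{len(covered)}/{len(tumours)} tumours overlap the shortlist")  # 2/4 here
```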

Instead of trying to build an individualized single-immunogen therapy for each person, look for immunogens that could be used commonly across many people.  Use MHC-I binding prediction to identify specific shared neoantigens.  Only 1-2% of tumours will have at least one of these variants.

Multi-epitope, non-individualized vaccines could be used, but they would only apply to that 1-2%.

Evidence of immunoediting in driver alterations.  Unfortunately, driver mutations produce fewer neoantigens.

Discussion of the limits of the method, but much room for improvement and expansion of the experiment.

Conclusion: tumour mutanomes are highly unique.  About 25% of tumours share at least one coding mutation from a small shortlist, but the potential to build common vaccines is limited to 1-2% of the population.  Drivers tend not to produce neoantigens.

 

#AGBT – Jeff Barrett, Open Innovation Partnerships to bridge the gap from GWAS to drug targets.

Jeff Barrett, Wellcome Trust Sanger Institute

Drug development almost always fails (Hay et al., 2014): 85%+ of molecules never make it to the clinic.  A large fraction of late-phase failures happen because of lack of efficacy.

What are ways we can improve that rate?  Human genetics can be very helpful – targets that have genetic evidence are dramatically more likely to work.  (That’s a retrospective look, but can we use it to predict?)  Can we develop preclinical models that help?

GSK, EMBL-EBI and Sanger came together, and Biogen was added as a partner.  We all need to collaborate – based at the Wellcome Genome Campus in the UK.

Two things they want to do: 1. Create a bioinformatics platform that integrates as many data sources as possible in a systematic way.  2. Do large-scale experimental investigation (high-throughput genomics).

The first one is hard: combining all these data sources is difficult – they had to develop a unified model for combining them.  TargetValidation.org.

[Very cool demo here!]

They don’t want to be the database of record for this – instead they are a portal and integration point for others.

http://opentarget.org/projects

Target information is the key outcome.  Genome scale where possible – physiological relevance to disease.  Key technologies: use strengths from partners (e.g. Ensembl).

20 open projects, 3 disease areas, human cellular experiments, leveraging genetics to build and enable resources to do more experiments.  Encourage a cycle that improves data, which improves platforms, and so on.

Example: genetic screens of immune cell function (using iPS-derived macrophages).  A pragmatic approach to make this feasible.

Mission: pre-competitive approach, committed to rapid publication, non-exclusive partnerships.

Example: IBD.  Lots of data to sift through.  We have an amazing machine that finds gene associations, but we need a new one for understanding causal variants.

  • High marker density and big sample size to do fine mapping (using chips: 60,000 samples + imputation).
  • Super clean data set and novel stats techniques – data QC is very important.
  • Building cell-specific maps (cell specificity is critical).
  • Zoom in with sequencing (WGS).
  • Disease-relevant cellular models.

We are doing a good job of finding variants, but almost none of them are coding.  Non-coding variants may be altering expression.  eQTL analysis has been done: not much more was found than you would expect by chance.  We don’t have a good handle on what expression changes are doing (in IBD).

Part of what’s hiding this is that we’re looking in the wrong types of cells.  The effect of non-coding variants is likely to be specific to particular cell types.

Discussion of cases where a target makes a drug dangerous because it has adverse effects in tissue types other than the one desired.

All this data can be brought together – tissue sample all the way to in vitro testing.

What makes this unique?  Bringing everything together.

#AGBTPH – Jonathan Berg, ClinGen and ClinVar: Approaches to variant curation and dissemination for genomic medicine.

Jonathan Berg, University of North Carolina, Chapel Hill

Views himself as an ambassador for ClinGen – what can ClinGen do for you?  What can you do for ClinGen?

The problem: our ability to detect variants has outpaced our ability to interpret their clinical impact.

Data is fragmented into many different academic databases, or held in clinical testing databases.  Want all this to be freely available and rich for use in medicine and research.

Many partners are involved.

Building a genomic knowledge base to improve patient care.  Includes clinical domain working groups to curate the clinical genome, covering lots of different clinically relevant areas: cancer, Mendelian disease, etc.

Engaging communities directly to encourage variant deposition.  

ClinVar: global, archival, aggregates information.  It takes assertions about a specific variant.  NCBI maintains provenance, so you know the origin – who, what, why, etc.  Submissions to ClinVar continue to climb.

Also worth noting that data has a rating system, so you can rapidly work out the reliability of the data.  Discussion of rating system, including “Expert Panels” who have brought together many different sources to collaborate on a single variant classification scheme.

GenomeConnect – another way to get the data into the public domain.

Is a gene associated with a disease?

Set out to define qualitative descriptors.  Six categories of gene-disease assertion: definitive, strong, moderate, limited, disputed, refuted.  Based on strength of genetic evidence, strength of functional evidence, replication, test of time, and strength of curation.  There is an expert classification system to work through the rating.

ACMG standards and guidelines have been adopted as a framework as well, and expert groups are using them.  There is a huge amount of work that goes into interpretation, and it may differ slightly gene by gene.

Building a Genomic Knowledge base:

There is a website for ClinGen; you can follow along with the working groups there.  They would love feedback on the website and its usability.

Major focus now: electronic health records.

Need ways to integrate genomics into the EHR, e.g. the Open Infobutton system for links to external resources.

Conclusions:

Big group of people doing many many different things – working hard to make the data accessible.

 

Jonathan Marchini, University of Oxford – Phasing, imputation and analysis of 500,000 UK individuals genotyped for UK Biobank #AGBTPH

Major health resource aimed at improving the prevention and treatment of disease.  Available to academic and commercial researchers worldwide.  (Not completely free – you have to have a good reason to use it, etc.)

Baseline questionnaire (touch screen), 4-minute interview, baseline measures.  Some subsets had additional tests.  Participants selected for enhanced phenotypes were asked to do further specific tests and questionnaires as well.

Whole genome genotyping with a bespoke array.

Axiom SNP array – ~830k markers.  Run on all participants.

First step: quality control.  Provide a robust set of quality-control measures.  Also provide researchers with useful derived genetic properties, such as genetic ancestry.

PCA done on individuals, showing geographic genetic ancestry.  [Very typical plot of first 2 PCs.]
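A minimal sketch of the genotype PCA behind a plot like that, assuming a samples-by-SNPs dosage matrix coded 0/1/2 (real pipelines also prune for linkage disequilibrium and handle relatedness, which this ignores):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy genotype dosage matrix: 500 samples x 2,000 SNPs, coded 0/1/2.
G = rng.integers(0, 3, size=(500, 2000)).astype(float)

# Standardize each SNP by its allele frequency before PCA.
p = G.mean(axis=0) / 2
Gs = (G - 2 * p) / np.sqrt(2 * p * (1 - p) + 1e-9)

# Top principal components via SVD of the standardized matrix.
U, S, _ = np.linalg.svd(Gs, full_matrices=False)
pcs = U[:, :2] * S[:2]   # per-sample coordinates on PC1/PC2
print(pcs.shape)         # (500, 2): what gets scatter-plotted for ancestry
```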

Family relatedness: “found a considerable number, rather more than we expected” – 148,000 individuals with a relative (cousin or closer) in the cohort.  This can be useful, but it’s important to know that not all individuals are independent data points.

3.2 billion bases in the genome, but only ~800,000 positions measured.  What can be said about the unmeasured fraction?  Use statistical methods to estimate haplotypes (haplotype estimation, i.e. phasing).  They used their tool SHAPEIT2, which was OK but not great, because one step had O(n²) behaviour in the number of samples.  They modified the code to O(n log n), using hierarchical clustering within a local region.

Applied method to data set – (Nature Genetics)

Tested the software using 72 trios.  Run time: 15 minutes; switch error rate: 2.6%; total sample size: 1,072.  The method was to phase the children using the trios, then remove the parents and phase again as a group.  If the phase changes, that’s an error.
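A toy sketch of that switch-error calculation, comparing a statistically phased individual against the trio-derived truth at heterozygous sites (the haplotypes here are made up):

```python
def switch_error_rate(truth, test):
    """Fraction of consecutive heterozygous-site pairs whose relative phase
    differs between truth and test. Each argument is a (hapA, hapB) pair of
    0/1 allele lists for one individual."""
    het = [i for i, (a, b) in enumerate(zip(*truth)) if a != b]
    switches = sum(
        (truth[0][i] == truth[0][j]) != (test[0][i] == test[0][j])
        for i, j in zip(het, het[1:])
    )
    return switches / max(len(het) - 1, 1)

truth = ([0, 1, 0, 1, 1], [1, 0, 1, 0, 0])   # trio-resolved haplotypes
test  = ([0, 1, 1, 0, 0], [1, 0, 0, 1, 1])   # statistical phasing, flips after site 1
print(switch_error_rate(truth, test))        # 0.25 -> 1 switch over 4 intervals
```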

If the sample size is increased to 10,000, you do much better: the error rate drops to 1.5%.  At 150,000 samples, the error rate drops to 0.3% (run time: 38 hours) – “making just a handful of errors”.

Imputation.

Use existing reference data sets where the haplotypes are known – this is imputation.  You match your typed SNPs to existing haplotypes to guess what lies in between.  (In practice, you use many matches and an HMM to arrive at the best guess.)  The algorithm is called IMPUTE4 – about 10 minutes per sample.
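A deliberately over-simplified sketch of that idea – match a sample’s typed SNPs against reference haplotypes and copy the untyped alleles across. Real tools such as IMPUTE4 model each sample as an HMM mosaic of many reference haplotypes rather than taking a single best match:

```python
# Phased reference haplotypes with all positions known (toy data).
reference = [
    [0, 1, 1, 0, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [1, 0, 1, 1, 0, 0],
]
typed_idx = [0, 2, 5]      # positions present on the genotyping array
sample_typed = [0, 1, 0]   # the sample's alleles at those typed positions

def impute(sample_typed, reference, typed_idx):
    # Pick the reference haplotype with the fewest mismatches at typed sites.
    def mismatches(hap):
        return sum(hap[i] != a for i, a in zip(typed_idx, sample_typed))
    best = min(reference, key=mismatches)
    # Keep the sample's typed alleles; fill untyped positions from the match.
    imputed = list(best)
    for i, a in zip(typed_idx, sample_typed):
        imputed[i] = a
    return imputed

print(impute(sample_typed, reference, typed_idx))   # -> [0, 1, 1, 0, 1, 0]
```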

800,000 genotyped SNPs -> ~80 million imputed SNPs.  [Mostly accurate, from the tests shown, and getting better all the time.]

Example: standing height.

Using the genotyped biobank SNPs, you don’t see much with 10,000 individuals.  With 150,000 biobank individuals, you can see a few more regions of interest.  At 350,000 individuals (the subset with homogeneous ancestry), you can find several relevant regions.  If you apply imputation on top, you can see many more regions that are likely to be associated with the trait.  (Adding imputation actually lets you see signal at genes that aren’t covered by the original SNP set.)

The validation of this data is left to other researchers.

Full release will probably happen early next year.

http://www.ukbiobank.ac.uk

#AGBTPH – Fowzan Alkuraya – It’s your variant, it’s your problem, and mine

Fowzan Alkuraya, Alfaisal University

We currently know that a small number of variants are benign, and a smaller number are pathogenic.  The idea is to drive this towards knowing every possible variant.  But even if we could classify every variant, the catalogue would be outdated shortly.  However, we can use phenotype, which keeps up with the gene pool – that way we can ask how genotype translates to phenotype.  It’s not really that easy…

The formidable challenge of heterozygosity.

We are robust to heterozygous mutations, obviously.

Gene-level challenges: is the gene dispensable?  Is there a non-disease phenotype?  Is there a recessive disease phenotype?

Variant-level challenges: some variants we’ll never see because they’re embryonic lethal.  Some may never be clinically consequential.  Non-coding variants?  Truncating variants in dominant genes with no phenotype?

Fortunately, it’s all in the same species!  And, if we can show something is pathogenic, we can know that for next time.  Exploiting the special structure of the Saudi population to improve our understanding of the human genome.

  • High rates of consanguinity – endless source of homozygotes.
  • Large family size – great segregation power

Examples of discovery of novel disease genes.

Typical workflow: use predictive tools, use frequency data, use model organisms, etc.  Use family data to identify how the variant exerts its effect.

At the end of the day, this data can be shared so that everyone can benefit from this knowledge.

The second example: finding novel “lethal” genes.  You can’t do it statistically because such events are so rare; the best hope is to observe biallelic variants in non-viable embryonic tissue.  He showed a case in which a homozygous variant was present in all non-viable embryos from a single family – and they were able to do that without knowing anything about the biology of the gene.

What do they do with it?  They put it out so everyone can share in the knowledge.  You never know which family is going to be making life-altering decisions based on the variant.

They published it – it turned out to be the most frequent mutation in fetal losses in the Saudi population, and to be important in an endothelial protein (cerebral haemorrhages).

Now in ClinVar.

Example where it’s hard to understand the mechanism of the disease, and an example where prediction tools aren’t able to get it right.

How many variants are we just missing because they’re in the dark matter of the genome?  (Variants in non-coding parts of the genome) / (variants in the coding part) = ?

We don’t know either of these numbers, so it’s a hard problem.  Homozygosity mapping to the rescue for the challenge of non-coding mutations.
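A toy sketch of the core of homozygosity mapping as just introduced: flag SNPs at which every affected relative is homozygous, then look for long shared stretches (genotypes are 0/1/2; real pipelines use genetic distances, allele frequencies and error models):

```python
def shared_homozygous_stretches(affected, min_len=4):
    """affected: list of 0/1/2 genotype lists over the same SNP positions.
    Returns (start, end) index ranges homozygous in every affected individual."""
    hom = [all(g[i] in (0, 2) for g in affected) for i in range(len(affected[0]))]
    stretches, start = [], None
    for i, h in enumerate(hom + [False]):      # sentinel closes the final run
        if h and start is None:
            start = i
        elif not h and start is not None:
            if i - start >= min_len:
                stretches.append((start, i))
            start = None
    return stretches

affected = [
    [1, 2, 2, 2, 2, 2, 1, 0, 1],
    [0, 2, 2, 2, 2, 2, 2, 1, 2],
]
print(shared_homozygous_stretches(affected))   # [(1, 6)] -> candidate locus
```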

104 families with a recessive genotype that maps to a single locus: 101 of the 104 were found to have genic mutations.  The vast majority of disease-causing mutations are in genes, then.

Good news: presumed non-genic mutations are <3%.

Bad news: many others will be missed for other reasons.

Demonstrated this with a sample cohort (33 families).

Catalogue of biallelic LOF variants in well-phenotyped individuals.  They were able to find several genes that had been erroneously linked to disease phenotypes.

[My paraphrasing: So, in the end, we should all be concerned about all of the variants, and getting them right.]

#AGBTPH – Hakon Hakonarson, CHOP – Genomics-driven biomarker discovery and utilization: The precision medicine experience from CHOP

How they’ve leveraged their biobank into discoveries.

Novel gene editing technologies are coming – human examples are here as well.

CAG (Center for Applied Genomics) @ CHOP, founded in June 2006.  Recruit enough children that even rare conditions become common.  About 100,000 kids have been recruited, 70,000 of whom can be recontacted.  Almost 300,000 additional samples through collaborations.

An early disease that was worked on: neuroblastoma – usually found at advanced stages, hard to treat.

Found some markers.  1% hereditary, 99% sporadic.

12% of sporadic cases had ALK mutations.  An existing drug was available for that target, so they were able to go straight to trial, which was rapid and successful.

Another project: neurocognitive phenotyping.  [Huge data collection effort, covering a very broad set of data-gathering methods.]  ADHD was a component of it.  Identified CNVs in the cohort, which clustered on glutamate receptor genes in the brain (Elia, Glessner et al., 2011); replicated in 5 different cohorts.  The CNV regions were overrepresented in the cohort.

Have seen similar things in other neurocognitive diseases.

At the time, there was no drug for the mGluR pathway.  There was, however, a drug that had been indicated for another disease but didn’t make it to market.  They found that up to 20% of patients have glutamate-pathway copy number variants, and undertook new studies to demonstrate that the drug was useful for ADHD patients who carry these mutations.  Ended up approved by the FDA down to 12 years of age.  IRB approval in November 2014, completed by May 2015.  Efficacy was extremely robust in this preliminary setting: 80% of patients had improvement following the highest dose.

Expanded to include new mutations that influence mGluR signalling, then expanded further to genes that influence those.

The tier-one response was much stronger; those with mutations in the expansion groups did not respond as strongly.

Some overlap with children who had co-morbid autism (including 22q deletion).  Major improvement in social behaviour and language.

Started a separate trial for 22q11.2 deletion syndrome based on effects seen in earlier results.

Repurposing compounds that already have safety data makes for rapid drug trials.

 

#AGBTPH – Teri Manolio, NHGRI – Genomics and the Precision Medicine Initiative

Substitute speaker for Eric Green, who could not attend. “Sends his regrets… and me.”

The Precision Medicine Initiative was announced by the President in January 2015 – a foundation for something that will change the way we practice medicine.

  • Genomics: ClinGen is one resource that will be a huge help.
  • EHR have changed a lot in the last 12 years. (Paper replaced by banks of computers.)
  • Technologies, such as wearable devices and sensors.
  • Data science/Big Data is also transformative.
  • Participant Partnerships, patients become partners, not subjects.

PMI Cohort: one million volunteers, reflecting the make-up of the U.S., with a focus on underrepresented groups.  A longitudinal cohort.  (Anyone can volunteer, or participants can come in via selection processes.)

Reflect: People, health status, geography, data types.

Benefits:

  • Large and diverse,
  • support focus on underserved,
  • complementing existing cohorts, not duplicating them.

Possible issue: Biasing towards Geeky people. [nice!]

Initial awards were made for pilot studies.  Developing the brand, etc.

In July, $55M for Cohort Program Components.

Collaborating with the Million Veteran Program.

Start with the basic usual information, but will expand as the project grows.

Transformational approach to data access – data sharing with researchers and participants: colleges, high schools, etc.; industry; citizen science.

Will launch when ready and right – want to launch before current administration leaves office, but will happen “when it’s ready”.  Anticipate 3-4 years to reach one million participants.

Funnel of innovation being used: Exploration R&D -> Platform definition -> Advance definition -> Production -> Launch.  Also, Landing Zones: MVP, Goals and Stretch Goals.  Divided into areas that must be done.  [Basically, using industry practices for R&D on academic research?]

#AGBTPH – Howard Jacob, Hudson Alpha – Clinical sequencing for patients, adoptees and the health curious

Market segments: reference labs, sequencing technology companies, bioinformatics companies, data storage companies.

How do we get all this implemented into healthcare?

Why isn’t insurance paying?  Researchers are publishing conflicting information on many questions – ethics, costs, accuracy, etc. – and NGS is not seen as a validated test.

Rare disease is a huge problem

Lots of genes… lots of possible errors, and therefore many possible combinations.  Diagnosis can be a long way off – 8 appointments and 7 years on average.

How much of the genome should we test?  ENCODE ascribes function to ~80% of the genome; the exome is 1.5% of the genome.  Which would you pick?

Panels are standard, but only useful relative to clinical phenotype.  Whole genome adds value over time.

We need WGS and bioinformatics to unlock the value of the non-coding genome; we need the non-coding data to make sense of it all.

3,000 genomes in the St. Jude LIFE study.  But how do we do this clinically?  Example: can you find genes for developmental delay?  376 families (primarily trios); 339 families done – just passed 100 diagnoses this week (102).  28% diagnosed.

Families not diagnosed are open to reanalysis – the data can be revisited over and over again.

Also part of Undiagnosed Diseases Network.  This is about patients.

Genetic testing is largely underused.  Policy is state by state – mainly because we’re still arguing over how accurate the data is.  The literature shows we’re not completely accurate; different labs get different results.  Exomes are being funded, but genomes aren’t – that doesn’t make a lot of sense.

Picking on insurance companies: let’s start getting companies to pay for sequencing.

Is it really that inaccurate?  They lined up Baylor vs. HudsonAlpha – not easy to do an apples-to-apples comparison.  Do they come up with the same thing?  There will, of course, be differences; however, both analytical teams came down to the same variants being diagnostic.

Reproducibility: It’s possible, requires new tests, still evolving.  More genomes -> More accuracy.

What data to return?

They have a lot of ethicists at HudsonAlpha.  Options are presented to parents: primary findings; child, no Rx; adult, actionable; and adult, no action.

Asked audiences: 31% of geneticist-heavy audiences say yes, they want it, compared to ~50% of lay people.  Not all that different.

Huge implications:  ethical, legal and social.

Some paediatric geneticists consider “diagnosis” as “actionable” because it prevents you from having to run from place to place.

The way you view the data influences how you interact with it.  Personal decisions / personal medicine.  Precision medicine is for physicians.

Many excellent examples of where genomic medicine would have been really helpful and either saved lives, saved money or prevented suffering.

ROI is impressive.

The average workup for a patient at each new hospital on the way to a diagnosis is $20,000.  If it takes 8 hospitals on average to get a diagnosis, that’s roughly 8 × $20,000 = $160,000 – a huge cost.

WGS can be done once, and re-used over and over.

Healthcare is about taking averages.  Dosing is based on averages – is that always useful?  No.

Rolling out the Insight Genome, driven by utility.  What data will people use?  On average, very few variants will have a major effect at the population level.  Physicians make decisions every day with incomplete data.

How do we get the system to care?

 

Julie Segre – Microbial Genomics in a clinical setting. #AGBTPH

Two cases.

Genetic disorders and microbial disorders often interact.  Nearly all microbes can be uniquely identified by shotgun nucleic acid sequencing.

Topic 1: infectious diseases in hospitalized patients.  Sometimes you can’t even tell which kingdom the pathogen belongs to.  Sample -> sequencing -> bioinformatics -> hopefully identifying the agent.

The human genome is often considered the contamination here – you can’t physically extract it all out.  Opening fungal cells requires some harsh treatment.

The SURPI bioinformatics pipeline was used.  What do you get out, and is it even in your database?

Case 1: 3 hospitalizations over 4 months – 44 days in the ICU, over 100 inconclusive tests.  Cured 2 weeks after the NGS diagnosis with appropriate treatment.

A very clear hit was found: Leptospira santarosai.  The patient had travelled to Puerto Rico, and Leptospira is a water-borne disease.  Appropriate treatment was used, and the infection resolved.  (Tests were run that validated the diagnosis.)

CLIA validation of these methods is required.  It’s a step-by-step process that happens over a year.  [Appears to take nearly 2 years?  April 2015 to March 2017.]

Aside: nanopore sequencing may also be a hugely exciting development for this field because it’s so fast.

Topic 2: using sequencing to investigate healthcare-associated infections.

CRE – carbapenem-resistant Enterobacteriaceae.  We have no antibiotics left to fight these bacteria.  (Here, Klebsiella pneumoniae.)

Patient 1 appeared in June, but then several patients appeared in August.  Either patient 1 was unrelated, or transmission had occurred.

Sequencing happened: the patient 1 urine sample was taken as the reference genome.  There were 3 variants in the throat isolate, and 3 different SNPs in the lung isolate.  Patients 2 and 3 were identical to the throat sample from the first patient (with one extra SNP in patient 3).

Patients 1 and 3 overlapped in the ICU; patients 3 and 2 overlapped in the ICU.

Patient 4 had variants matching the lung isolate(?), so that was a separate transmission.

This data showed that transmission was happening – ultimately, a transmission map was created including other patients, and it became clear how the organism was transmitted.  It helped to identify which transmission routes needed to be shut down, e.g. by cohorting patients.
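A toy sketch of the isolate comparison behind a transmission map like this, assuming each isolate has been reduced to its set of SNP positions relative to the patient-1 urine reference (the SNP positions are made up; real investigations combine this with the ward-overlap information described above):

```python
from itertools import combinations

# Each isolate = the set of SNP positions at which it differs from the reference.
isolates = {
    "P1_urine":  set(),                  # the reference itself
    "P1_throat": {101, 205, 309},
    "P1_lung":   {412, 518, 627},
    "P2":        {101, 205, 309},
    "P3":        {101, 205, 309, 733},
}

def snp_distance(a, b):
    """Number of positions at which two isolates differ (symmetric difference)."""
    return len(isolates[a] ^ isolates[b])

# Isolates within a couple of SNPs of each other are candidate transmission links.
for a, b in combinations(isolates, 2):
    d = snp_distance(a, b)
    if d <= 2:
        print(f"{a} -- {b}: {d} SNPs apart (possible transmission link)")
```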

Resistance genes are generally on plasmids, so we need to be aware of the possibility of transmission of the plasmid to other organisms.

National Pathogen Reference Database – CDC, FDA and NIH.

If you have a reference, you can pretty much assemble anything.