#AGBTPH – Patricia Deverka, Payer perspectives on NIPT.

University of North Carolina.

Payer framework for coverage of diagnostic tests.

  • Analytic Validity (CLIA, FDA, Payers)
  • Clinical Validity (FDA, Payers)
  • Clinical Utility (Payers)

Clinical coverage criteria:  mostly based on publications, expert opinion, and independent technology assessments.  Have test developers done their own studies?  What are other health care organizations doing?

Key questions: How do you evaluate the benefits and risks of tests that analyze many different genes and variants at once?  (And how do you design the tests that get you there?) When evaluating, why are more genes better?  How can payers support appropriate clinical integration?

Challenges: What are the barriers?  Data sharing, and the fact that different payers have different evidentiary standards for assessing clinical utility.

Beyond common trisomies, how do you use the information in the screens?

We need standards to have greater predictability – what evidence is required for consistent payer adoption?

Coverage policy study background.  cfDNA screening has been rapidly integrated into clinical practice, but the decision-making process has not been systematically examined.

Method: first looked at 5 payers, covering 128 million people!   The second version had 19 payers, covering [180 million?] people.

Some insights into U.S. insurance… Blue Cross/Blue Shield is a dominant player.

Looked into every policy in detail.

All private payers cover high-risk pregnancies.  8/19 cover average-risk pregnancies.  None cover microdeletions.  One covers sex chromosome aneuploidy.  None require prior authorization from a genetic counsellor.

10/19 looked at analytic validity, but recognized they have no way to assess it independently: they lack access to public data and standards to do validation, and there is no FDA regulation.  The majority referenced the Blue Cross/Blue Shield study.

Most of them emphasized clinical validity.  Rich evidence base for clinical validity.

Payers looking at the same evidence for average risk pregnancies came to different results – 8 consider it medically necessary, the rest don’t.

Modelled data was considered sufficient for determining that outcomes are worth it.  The models may not have included test failures.

Clinical utility: consistently defined as “change in health outcomes.”

Conclusions:

  • For non-invasive prenatal testing, the vast majority are using the standard analytic framework for evaluating tests.
  • They are evaluating evidence for each chromosome abnormality separately, even if bundled.
  • Payers cite the same evidence, but can come to different conclusions.
  • Most payers couldn’t independently assess validity.

Medicaid covers almost half of births in the U.S.  The policies are hard to interpret.

Blue Cross Blue Shield says that different [chapters?] make decisions separately, but judging by the data, there is dependence between them.

Summary:

  • cfDNA screening adopted rapidly in certain indications
  • Payers used standard framework for making decisions.
  • Genetic counselling a foreseeable bottleneck.

 

 

#AGBTPH – Diana Bianchi, Noninvasive prenatal DNA testing: The vanguard of genomic medicine.

Why is NIPT the Vanguard of Genomic Medicine?  More than 2M tests performed worldwide since tests became available.  Industry has driven innovation.  Clinical impact: 70% reduction in invasive procedures (worldwide).  Has had consequences on maternal medicine.

Expanding test menus have changed the paradigm for prenatal medicine.

Prenatal is the most mature and translated area of genomic medicine.  NIPT also functions as a crude liquid biopsy.

Why is prenatal the most mature?  We are measuring cell-free placental DNA mixed with maternal DNA – a liquid biopsy.  Placental, really.

Tests for Down syndrome, Edwards syndrome, and Patau syndrome in high-risk populations have high sensitivity and specificity.  The chances of a false negative are 1 in 1,054, 1 in 930, and 1 in 4,265 respectively.  These are just screening tests, but they are VERY good screens.

This contrasts with the general population, where the probability of a false negative is lower, but the prevalence of these conditions is also lower.
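To make the prevalence point concrete, here is a minimal Bayes' rule sketch; the sensitivity, specificity, and prevalence values are illustrative assumptions, not numbers from the talk.

```python
# Minimal sketch: why the same screen looks different in high-risk vs.
# average-risk populations. Sensitivity, specificity, and prevalences are
# illustrative assumptions, not figures from the talk.

def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive value for a screen."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    fn = (1 - sens) * prevalence
    tn = spec * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.999, 0.999                       # assumed test performance
for label, prev in [("high risk", 1 / 100), ("average risk", 1 / 1000)]:
    ppv, npv = ppv_npv(sens, spec, prev)
    print(f"{label:12s} PPV = {ppv:.2f}   risk after a negative = {1 - npv:.6f}")
# With identical test performance, PPV drops from ~0.91 to ~0.50 as prevalence
# falls, while the chance of a missed case after a negative result also shrinks.
```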

Case for/against routine screening for CNVs:

For: may impact care, independent of maternal age, 1.7% have significant CNVs.

Against: clinical use unproven, high false-positive rates, and an increase in procedures, which may be invasive.

A study showed you need at least 10 million reads to detect a 1 Mb copy number variant.
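A back-of-the-envelope counting argument (my own sketch – the fetal fraction and detection threshold are assumptions, not from the talk) lands in the same ballpark:

```python
# Rough Poisson sketch of why ~10M reads are needed to see a 1 Mb fetal CNV
# in cfDNA read-count data. Assumed inputs, not the study's actual model.
genome_bp  = 3.1e9     # assumed mappable genome size
bin_bp     = 1e6       # CNV size of interest
fetal_frac = 0.10      # assumed fetal (placental) fraction
z          = 3.0       # detection threshold in Poisson standard deviations

# A single-copy gain/loss at fetal fraction f shifts the expected bin count by
# a factor of f/2; requiring (f/2)*lam > z*sqrt(lam) gives lam > (2z/f)^2.
lam_needed   = (2 * z / fetal_frac) ** 2        # reads needed inside the 1 Mb bin
reads_needed = lam_needed * genome_bp / bin_bp  # total reads, uniform coverage

print(f"bin depth needed  ~ {lam_needed:.0f} reads")
print(f"total reads needed ~ {reads_needed / 1e6:.0f} million")   # ~11 million
```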

Common microdeletions in testing panels: DiGeorge, Prader-Willi/Angelman, Jacobsen, Langer-Giedion, Cri du chat, Wolf-Hirschhorn, 1p36.  All of them are large (3-9.8 Mb).  They do not align with ultrasound abnormalities.

Example focusing on DiGeorge.

RAT (Rare Autosomal Trisomies)

Used a bioinformatics quality-control parameter to identify potentially abnormal cases.

MT16 (mosaic trisomy 16) is interesting: anecdotally, since trisomy 16 never exists in non-mosaic form, the percentage of affected cells in the child can be a marker for complications.

Decisions should not be made based on NIPT screens alone.  6% of pregnancies with positive screen results for trisomies were terminated without properly confirming the results! [Hope I got that right… might be misunderstood.]  You should do amniocentesis, because placental results may not agree, and the screen may not actually reflect what’s going on in the fetus.

Tumours (maternal) are often caught this way, and may be a source of false positives.

Bioinformatics errors are possible.

Not much published on pregnancy termination based on the tests.

[Notes are incomplete at the end…]

 

 

#AGBTPH – AmirAli Talasaz, NGS analysis of circulating tumour DNA from 20,000 advanced cancer patients demonstrates similarity to tissue alteration patterns

AmirAli Talasaz, Guardant Health

Spatial and Temporal heterogeneity is becoming more important.  Resistance clones are a major challenge.

Risk and cost of lung biopsies:  19.3% of patients experience adverse events, mostly pneumothorax.  Bypass this by doing liquid biopsies.  Tumours release cell-free DNA – active tumours actively shed.  We can get this via simple blood tests.  ctDNA can be used to study tumour heterogeneity.

Tumour cell-free signals are very dilute; standard NGS would limit what you can see.  Take two samples: biobank one, process one.  The digital sequencing library involves non-random tagging, then target capture of 70 genes followed by error correction and bioinformatics.  After tens of thousands of samples, the performance has improved a great deal.
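As a rough illustration of what tag-based error correction buys you, here is a toy consensus-calling sketch – just the general idea, not Guardant's actual pipeline (the barcodes, cutoff, and data layout are assumptions):

```python
# Toy sketch of molecular-barcode (tag-based) error correction for cfDNA reads.
from collections import Counter, defaultdict

def consensus_per_molecule(reads):
    """reads: iterable of (barcode, position, base) tuples from aligned reads.
    Returns one consensus base per (barcode, position) -- i.e. per original
    cfDNA molecule -- so isolated sequencing errors get voted out."""
    by_molecule = defaultdict(Counter)
    for barcode, pos, base in reads:
        by_molecule[(barcode, pos)][base] += 1
    consensus = {}
    for key, counts in by_molecule.items():
        base, n = counts.most_common(1)[0]
        if n / sum(counts.values()) >= 0.7:   # arbitrary "clear majority" cutoff
            consensus[key] = base             # otherwise drop the molecule
    return consensus

# Four reads from one tagged molecule and one from another; the lone 'T' is
# treated as a sequencing error rather than a sub-1% variant.
reads = [("AAC", 1042, "G"), ("AAC", 1042, "G"), ("AAC", 1042, "G"),
         ("AAC", 1042, "T"), ("GTT", 1042, "G")]
print(consensus_per_molecule(reads))   # {('AAC', 1042): 'G', ('GTT', 1042): 'G'}
```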

Call CNVs, SNVs, SVs, epigenetic changes.  Variant calling and interpretation.

Half of reported variants occur below 0.4% MAF.  Reported somatic variants are highly variable, ranging up to 97% MAF, but the median is 0.4%.

Accuracy of Low VAF is excellent using this method – correlates perfectly.

High detection rate across most cancers – better for some, like liver at 92%; brain is in the 50s because of the blood-brain barrier [57%?].

Typical driver mutations are frequently found, ranging from 100% for an EGFR variant down to 13-27% for a different EGFR variant.

TCGA and ctDNA have similar mutation patterns (with some exceptions, where ctDNA reflects the general heterogeneity of cancer).

Fusion calls are similar.

When you have access to treatment data at time of blood draw, you can see actionable resistance variants.  27% of resistance mutations found in ctDNA are potentially actionable.

Example with NSCLC – biomarkers were only complete for 37%.

Example with a clinical trial – using ctDNA, they were able to make excellent predictions in a large fraction of cases, with favourable outcomes in the majority of cases.

[Wasn’t a real summary so here’s mine: – ctDNA is a thing, and their protocol seems to be working.  Very cool.]

#AGBTPH – David A. Shaywitz, Building a Global Network for Transcriptional Discovery and Clinical Application

[Missed the title – Edit: Here it is!  Thanks to Dr. Shaywitz]

Two futures: do we have the will to harness all of the information we’ve collected to build a future that takes advantage of it?

Balanced between Centralized and Globally Distributed.

DNAnexus in clinical care: distributed patient care, with centralized data processing to deliver consistent results.

precisionFDA Appathon – https://precision.fda.gov/challenges/app-a-thon-in-a-box

[Edit – thanks to Dr. Shaywitz for the link!]

Ideal global network capabilities: global reach, security, a rich tool set, qualified collaboration, indexing.

Three examples.

Regeneron: Revolutionize how pharma is done. Driven by science instead of trends.

Data is not equal to impact.  Innovation is driven by people, by intention, impact. [A lot of rapid fire hypothetical examples, including nature vs. environment, tipping points, important to understand data.]

DNAnexus’ role is to empower people with the tools they need.

Singapore Data Federation: government sponsorship -> improved hospital care.  Improve care by understanding the data and understanding populations.

ORIEN Cancer Research Network: cancer centres brought together with others who want to consume the data they have (and their access to patients).  Cancer centres benefit from collaboration and access to data they couldn’t afford on their own; companies get access to the patients they need to fuel research.  The beginning of a networked future.

DNAnexus is at the centre of all of this.  Drive precision medicine by being the hub that connects all of it.

 

#AGBTPH – Alan Shuldiner, DiscovEHR(y) of new drug targets and the implementation of precision medicine.

Alan Shuldiner, Regeneron Genetics Center.

A lot of R&D dollars are spend on new drugs, and the vast majority fail, and mainly in late phases for lack of efficacy.  Pre-clinical models that get you that far, just aren’t good predictors of efficacy in Humans.

Successful use of genetics based approaches:

Example with PCSK9, which led to PCSK9 antibodies. (2003 -> 2012)

Example with Null APOC3, which led to antisense inhibitors (2008 -> 2015)

Target discovery is great application.  Also: Indication discovery, Biomarkers and De-risking.

Created “ultra-high-throughput sequencing and analysis at Regeneron”, with a focus on exomes.  200,000 samples prepped per year.   Sequencing 150,000 samples per year.  Cloud-based platform, DNAnexus involved.

What do they sequence?  General populations, family studies, founder populations, phenotype specific cohorts.

Mendelian disease collaborations:  126 families with novel disease genes, 92 with novel variants in known genes…  some others that have not yet been solved.

DRIFT consortium: discovery research investigating founder population traits.

Geisinger-Regeneron DiscovEHR Collaboration: a comprehensive genotype/phenotype resource combining de-identified genomic and clinical data.  Transform genotypes into improvements in patient care.  >115,000 patients in the biobank.  60,000+ exomes sequenced.  Large, unselected population.  Some focus on specific conditions where the data is particularly interesting.

A constantly growing library of phenotypes to associate with genomic studies.  Includes some case/control data, rich quantitative traits, and measurements like EKGs.

Discovered over 4 million variants (from 50,726 patients) – the vast majority have frequency < 1%.

Some examples of how the data is being put to use. Protective variant for cardiovascular disease.

What if everybody’s genome were available in their EHR?  That’s the GenomeFirst program.

Return actionable results into the EHR: the 56 ACMG genes plus 20 new ones they believe are actionable.   The exome is done in a non-CLIA setting, but a second sample is run in a CLIA environment to confirm.

In 50,000 exomes, ~4.4% of study participants will test positive for one of the actionable variants.

This has helped early diagnosis become much more effective and allows preventative treatment.

Conclusions:  genome-based discovery for drug design and actionable findings are possible, and can have a huge impact on human health.

 

 

#AGBTPH – Nicolas Robine, NYGC glioblastoma clinical outcome study: Discovering therapeutic potential in GBM through integrative genomics.

Nicolas Robine, New York Genome Center  (@NotSoJunkDNA)

Collaborate with IBM to study Glioblastoma.

Big workup: Tumour normal WGS, tumour RNA-Seq, methylation array.

Pipeline: FASTQ, BAM, 3 callers each for {SNV, INDEL, SV}.  RNA-Seq uses FusionCatcher and STAR-Fusion; alignment with STAR.

It’s hard to do a tumour/normal comparison for expression, so you need an estimate of each gene’s baseline.  Use TCGA RNA-Seq as background so you can compare.  Some Z-score normalization results were suspicious, corresponding to regions of high GC content.  Used EDASeq for normalization and ComBat for batch-effect correction.  Z-scores change over the course of the study, which is uncomfortable for clinicians.
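A minimal sketch of the basic idea – scoring one patient's expression against a reference cohort – is below; it is illustrative only, deliberately omits the GC and batch corrections (EDASeq/ComBat) the NYGC pipeline applies, and uses made-up data.

```python
# Sketch: per-gene Z-scores of a patient's expression vs. a reference cohort
# (e.g. TCGA). Not the NYGC pipeline -- no GC or batch correction here.
import numpy as np

def expression_z_scores(patient_tpm, reference_tpm):
    """patient_tpm: (n_genes,) array; reference_tpm: (n_samples, n_genes).
    Returns per-gene Z-scores of the patient against the reference cohort,
    computed on log2(TPM + 1) to tame the dynamic range."""
    log_ref = np.log2(reference_tpm + 1.0)
    log_pat = np.log2(patient_tpm + 1.0)
    mu = log_ref.mean(axis=0)
    sd = log_ref.std(axis=0, ddof=1) + 1e-8   # avoid divide-by-zero
    return (log_pat - mu) / sd

rng = np.random.default_rng(0)
reference = rng.lognormal(mean=3.0, sigma=1.0, size=(200, 5))   # fake cohort
patient = reference.mean(axis=0) * np.array([1, 1, 8, 1, 0.1])  # gene 2 up, gene 4 down
print(np.round(expression_z_scores(patient, reference), 2))
```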

Interpretation: 20 hours of FTE time per sample.   Very time consuming, with lots of steps, culminating in a clinical report delivered to the referring physician.  Use Watson for Genomics to help.  An oncoprint is created as well.

Case study presented: a very nice example of evidence, with variants found and RNA-Seq used to identify complementary deletion events, culminating in the patient being enrolled in a clinical trial.

Watson was fed the same data – solved the case in 9 minutes!  (The recommendations were slightly different, but the same issues were found.)  If the same sample is given to two different people, the same issue arises.  It’s not perfect, but it’s not completely crazy either.

Note: don’t go cheap!  Sequence the normal sample.

[Wow]: 2/3rds of recommendations were based on CNVs.

Now in second phase, with 200 cases, any cancer type.  29 cases complete.

What was learned:  identified novel variants in most samples; big differences between gene-panel testing and WGS.  Built a great infrastructure, and Watson for Genomics can be a great resource for scaling this.

More work needed, incorporating more data – and more data needed about the biology – and more drugs!

[During questions – First project: 30 recommendations, zero got the drugs. Patients are all at advanced stages of cancer, and it has been difficult to convince doctors to start new therapies.  Better response with the new project.]

#AGBTPH – Ryan Hartmaier – Genomic analysis of 63,220 tumours reveals insights into tumour uniqueness and cancer immunotherapy strategy

Ryan Hartmaier, Foundation Medicine

Intersection of genomics and cancer immunotherapy:  neoantigens are critical – identified through NGS and prediction algorithms.  They can be used for immune checkpoint inhibitors or cancer vaccines.

Extensive genetic diversity within a given tumour.  (mutanome)

Difficult to manufacture and scale, thus expensive therapeutics.  However, TCGA data sets (and others) reinforce that individualized therapies make sense.  No comprehensive analysis of this approach across a large data set has yet been done.

NGS-based genomic profiling for solid tumours.  FoundationCore holds data.

At time of analysis, 63,220 tumours available.  Genetic diversity was very high.

Mutanomes are unique and rarely share more than 1-2 driver mutations.  Thus, define a smaller set of alterations that are found across many tumours; this can be done at the level of genes, variant type, variant, or coding short variants.  This led to about 25% of tumours having at least one overlap with a 10-gene shortlist.
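Conceptually this is a coverage problem; a toy greedy-shortlist sketch (my own illustration with made-up alteration names, not Foundation Medicine's actual method) is:

```python
# Toy greedy set-cover sketch: pick a small shortlist of recurrent alterations
# that covers as many tumours as possible. Illustrative data only.
from collections import Counter

def greedy_shortlist(tumour_alterations, k):
    """tumour_alterations: list of sets of alteration IDs, one set per tumour.
    Picks k alterations, each time choosing the one that covers the most
    still-uncovered tumours; returns (shortlist, fraction_covered)."""
    uncovered = set(range(len(tumour_alterations)))
    shortlist = []
    for _ in range(k):
        counts = Counter(a for i in uncovered for a in tumour_alterations[i])
        if not counts:
            break
        best, _ = counts.most_common(1)[0]
        shortlist.append(best)
        uncovered -= {i for i in uncovered if best in tumour_alterations[i]}
    covered = 1 - len(uncovered) / len(tumour_alterations)
    return shortlist, covered

tumours = [{"KRAS_G12D", "TP53_R175H"}, {"BRAF_V600E"}, {"KRAS_G12D"},
           {"EGFR_L858R", "TP53_R273C"}, {"PIK3CA_E545K"}]
print(greedy_shortlist(tumours, k=2))   # e.g. (['KRAS_G12D', 'BRAF_V600E'], 0.6)
```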

Instead of trying to do individualized immunogen therapy for each person, look for immunogens that could be used commonly across many people.  Use MHC-I binding prediction to identify specific neoantigens.  1-2% of patients will have at least one of these variants.

Multi-epitope, non-individualized vaccines could be used, but would only apply to 1-2% of patients.

Evidence of immunoediting in driver alterations.  Unfortunately, driver mutations produce fewer neoantigens.

Discussion of the limits of the method, but there is much room for improvement and expansion of the experiment.

Conclusion:  tumour mutanomes are highly unique.  25% of tumours share at least one coding mutation with the shortlist, but the potential to build common vaccines is limited to 1-2% of the population.  Drivers tend not to produce neoantigens.

 

#AGBTPH – Jeff Barrett, Open Innovation Partnerships to bridge the gap from GWAS to drug targets.

Jeff Barrett, Wellcome Trust Sanger Institute

Drug development almost always fails (Hay et al., 2014): 85%+ of molecules never make it to the clinic.  A large fraction of late-phase failures happen because of lack of efficacy.

What are ways we can improve that rate?  Human genetics can be very helpful – The ones that have genetic evidence are dramatically more likely to work.  (Retrospective look, but can we use it to predict?)  Can we develop preclinical models that help?

GSK, EMBL-EBI, and Sanger came together, and Biogen was added as a partner.  We all need to collaborate – based at the Wellcome Genome Campus in the UK.

Two things they want to do:  1. Create a bioinformatics platform that integrates as many data sources as possible in a systematic way.  2. Do large-scale investigation (high-throughput genomics).

The first one is hard: combining all these things required developing a unified model for integrating data sources.  TargetValidation.org.

[Very cool demo here!]

They don’t want to be the database of record for this – instead they are a portal and integration for others.

http://opentarget.org/projects

Target information is the key outcome.  Genome scale where possible – physiological relevance to disease.  Key technologies: use strengths from partners (Ensembl).

20 open projects, 3 disease areas, human cellular experiments, leveraging genetics to build and enable resources to do more experiments.  Encourage a cycle that improves data, which improves platforms, etc.

Example using Genetic screens of Immune cell function. (Use iPS derived macrophages.)  Pragmatic approach to make it possible to do this.

Mission: pre-competitive approach, committed to rapid publication, non-exclusive partnerships.

Example: IBD.  Lots of data to sift through.  We have an amazing machine that finds gene associations, but we need a new one for understanding causal variants.

  • High marker density and big sample size to do fine mapping (using chips, 60,000 samples + imputation).
  • Super clean data set & novel stats techniques – data QC is very important.
  • Building cell-specific maps (cell specificity is critical).
  • Zoom in with sequencing (WGS).
  • Disease-relevant cellular models.

We are doing a good job of finding variants, and almost all of it is coding.  Non-coding variants may be altering expression.  eQTLs have been done: not much more found than you would expect by chance.  We don’t have a good handle on what expression changes are doing (in IBD).

Some of what’s hiding this is that we’re looking in the wrong types of cells.  Effect of non-coding variants is going to significantly affect specific cell types.

Discussion of cases where a target’s role in specific cell types makes drugs risky, because of adverse effects in tissues other than the ones desired.

All this data can be brought together – tissue sample all the way to in vitro testing.

What makes this unique?  Bringing everything together.

#AGBTPH – Jonathan Berg, ClinGen and ClinVar: Approaches to variant curation and dissemination for genomic medicine.

Jonathan Berg, University of North Carolina, Chapel Hill

Views himself as an ambassador for ClinGen – What can clinGen do for you?  What can you do for ClinGen?

The Problem:  Ability to detect variants has outpaced our ability to interpret their clinical impact. 

Data is fragmented into many different academic databases, or held in clinical testing databases.  Want all this to be freely available and rich for use in medicine and research.

Many partners are involved.

Building a genomic knowledge base to improve patient care.  Includes clinical domain working groups to curate the clinical genome.  Cover lots of different areas that are clinically relevant: cancer, mendelian, etc.

Engaging communities directly to encourage variant deposition.  

ClinVar:  global, archival, aggregates information.  It takes assertions about a specific variant.  NCBI maintains provenance, so you know the origin: who, what, why, etc.  Submissions to ClinVar continue to climb.

Also worth noting that data has a rating system, so you can rapidly work out the reliability of the data.  Discussion of rating system, including “Expert Panels” who have brought together many different sources to collaborate on a single variant classification scheme.

GenomeConnect – another way to get the data into the public domain.

Is a gene associated with a disease?

Set out to define qualitative descriptors.  Six categories of gene-disease assertions:  definitive, strong, moderate, limited, disputed, refuted.  Based on strength of genetic evidence, strength of functional evidence, replication, test of time, and strength of curation.  They have an expert classification system to work through the rating.

ACMG standards and guidelines have been adopted as a framework as well, and expert groups are using them.   There is a huge amount of work that goes into interpretation, and it may be slightly different gene by gene.

Building a Genomic Knowledge base:

Website for ClinGen: you can follow along with the working groups there.  They would love feedback on the website and its usability.

Major focus on Electronic Health Records.

Need ways to integrate genomics into the EHR, e.g. the Open Infobutton system for links to external resources.

Conclusions:

Big group of people doing many many different things – working hard to make the data accessible.

 

Jonathan Marchini, University of Oxford – Phasing, imputation and analysis of 500,000 UK individuals genotyped for UK biobank #AGBTPH

Major health resource aimed at improving the prevention and treatment of disease. Available for academic and commercial researchers worldwide. (Not completely free.. have to have good reason to use it, etc.)

Baseline questionnaire (touch screen), 4-minute interview, baseline measures.  Some subsets had additional tests.  Enhanced-phenotype subsets were asked to do further specific tests and questionnaires as well.

Whole genome genotyping with a bespoke array.

Axiom SNP array – 830k.  Run on all participants.

First step: quality control.  Provide a robust set of quality control measures.  Also provide researchers with useful genetic properties such as genetic ancestry.

PCA done on individuals, showing geographic genetic ancestry.  [Very typical plot of first 2 PCs.]

Family relatedness: “Found a considerable number, rather more than we expected” – 148,000 individuals with a relative (cousin or closer).  This can be useful, but it is important to know that not all individuals are independent data points.

3.2 billion bases in the genome, but only 800,000 positions measured.  What can be said about the unmeasured fraction?  Use statistical methods to estimate haplotypes (haplotype estimation – phasing).  Used their tool SHAPEIT2, which was OK but not great, because one step had O(n^2) behaviour.  Modified the code to O(n log n); it uses hierarchical clustering in a local region.

Applied method to data set – (Nature Genetics)

Tested the software using 72 trios.  Run time: 15 minutes; switch error rate: 2.6%; total sample size: 1,072.  The method was to phase the children using the trios, then remove the parents and phase again as a group – if the phase changes, that’s an error.

If the sample size is increased to 10,000, you do much better: the error rate goes to 1.5%.  At 150,000 samples, the error goes to 0.3% (run time: 38 hours).  “Making just a handful of errors”.
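For reference, a toy sketch of how a switch error rate is computed, with trio-phased haplotypes as truth (the data here is made up; this is not the UK Biobank evaluation code):

```python
# Toy switch-error-rate calculation: compare an estimated haplotype against a
# trio-derived "truth" haplotype at heterozygous sites and count how often the
# phase flips between consecutive sites.

def switch_error_rate(truth_hap, est_hap):
    """truth_hap, est_hap: lists of 0/1 alleles for one haplotype of the same
    individual at the same heterozygous sites, in genomic order."""
    # At each het site, note whether the estimate matches truth or is flipped.
    same = [t == e for t, e in zip(truth_hap, est_hap)]
    # A switch error is a change of that state between consecutive sites.
    switches = sum(1 for a, b in zip(same, same[1:]) if a != b)
    return switches / (len(same) - 1)

truth = [0, 1, 1, 0, 1, 0, 0, 1]
est   = [0, 1, 0, 1, 0, 1, 0, 1]   # phase flips after site 2 and after site 6
print(f"switch error rate = {switch_error_rate(truth, est):.2f}")   # 2/7 ~ 0.29
```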

Imputation.

Use existing reference data sets where the haplotypes are known.  You can match your measured SNPs to those existing haplotypes to guess at what lies in between – this is imputation.  (In practice, you use many matches and an HMM to make the best guess.)  The algorithm is called IMPUTE4 – 10 minutes per sample.
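A minimal sketch of the underlying idea – copying untyped alleles from the best-matching reference haplotype – is below.  Real tools like IMPUTE4 use an HMM over mosaics of reference haplotypes, so this single-best-match version is only illustrative, and all the data is invented:

```python
# Naive imputation sketch: fill in untyped sites by copying from the reference
# haplotype that best matches the sample at the typed (array) sites.
import numpy as np

def impute_naive(typed_alleles, typed_idx, reference_haps):
    """typed_alleles: (t,) 0/1 alleles observed on the array.
    typed_idx: indices of those t sites within the full m reference sites.
    reference_haps: (n_haps, m) 0/1 matrix of reference haplotypes.
    Returns a length-m haplotype with untyped sites copied from the reference
    haplotype that agrees best at the typed sites."""
    matches = (reference_haps[:, typed_idx] == typed_alleles).sum(axis=1)
    best = reference_haps[np.argmax(matches)].copy()
    best[typed_idx] = typed_alleles          # keep the observed alleles
    return best

reference = np.array([[0, 0, 1, 1, 0, 1],
                      [1, 0, 1, 0, 0, 0],
                      [0, 1, 0, 1, 1, 1]])
typed_idx = np.array([0, 2, 5])              # only 3 of 6 sites were genotyped
typed     = np.array([0, 1, 1])
print(impute_naive(typed, typed_idx, reference))   # fills in sites 1, 3, 4
```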

800,000 SNPs –>  80 Million Imputed SNPs.  [Mostly accurate, from tests shown and getting better all the time.]

Example: standing height.

Using the biobank SNPs, you don’t see much with 10,000 individuals.  With 150,000 biobank individuals, you can see a few more regions of interest.  At 350,000 individuals (the subset with homogeneous ancestry), you can find several relevant regions.  If you apply imputation on top, you can see many more regions that are likely to be associated with the trait.  (Adding imputation actually lets you see details on genes that aren’t in the original SNP set.)

The validation of this data is left for other researchers.

Full release will probably happen early next year.

http://www.ukbiobank.ac.uk