Alan Shuldiner, DiscovEHR(y) of new drug targets and the implementation of precision medicine.

Alan Shuldiner, Regeneron Genetics Center.

A lot of R&D dollars are spent on new drugs, and the vast majority fail, mainly in late phases for lack of efficacy.  Pre-clinical models that get you that far just aren’t good predictors of efficacy in humans.

Successful use of genetics based approaches:

Example with PCSK9, which led to PCSK9 antibodies. (2003 -> 2012)

Example with Null APOC3, which led to antisense inhibitors (2008 -> 2015)

Target discovery is a great application.  Also: indication discovery, biomarkers and de-risking.

Created “Ultra high-throughput sequencing and analysis at Regeneron”, with a focus on exomes.  200,000 samples prepped per year, 150,000 samples sequenced per year. Cloud-based platform, with DNAnexus involved.

What do they sequence?  General populations, family studies, founder populations, phenotype specific cohorts.

Mendelian disease collaborations:  126 families with novel disease genes, 92 with novel variants in known genes…  some others that have not yet been solved.

DRIFT consortium: discovery research investigating founder population traits.

Geisinger-Regeneron DiscovEHR Collaboration: a comprehensive genotype/phenotype resource combining de-identified genomic and clinical data. The goal is to transform genotypes into improvements in patient care.  >115,000 patients in the biobank.  60,000+ exomes sequenced.  Large, unselected population.  Some focus on specific conditions where the data is particularly interesting.

Constantly growing library of phenotypes to associate with genomic studies.  Includes some case/control, rich quantitative traits, and measurements like EKGs.

Discovered over 4 Million variants – vast majority have frequency < 1%.   (from 50,726 patients)

Some examples of how the data is being put to use. Protective variant for cardiovascular disease.

What if everybody’s genome was available in their EHR? – the GenomeFirst program.

Results that are actionable are returned into the EHR.  56 ACMG genes + 20 more they believe are actionable.   The exome is done in a non-CLIA lab, but a second sample is run in a CLIA environment to confirm.

In 50,000 exomes, ~4.4% of study participants test positive for one of the actionable findings.

This has helped early diagnosis become much more effective and allows preventative treatment.

Conclusions:  Genome based discovery for drug design, actionable items possible, and can have a huge impact on human health.

 

 

#AGBT – Jeff Barrett, Open Innovation Partnerships to bridge the gap from GWAS to drug targets.

Jeff Barrett, Wellcome Trust Sanger Institute

Drug development almost always fails (Hay et al., 2014): 85%+ of molecules never make it to the clinic.  A large fraction of late-phase failures happen because of lack of efficacy.

What are ways we can improve that rate?  Human genetics can be very helpful – targets that have genetic evidence are dramatically more likely to work.  (A retrospective observation, but can we use it to predict?)  Can we develop preclinical models that help?

GSK, EMBL-EBI and Sanger came together, and Biogen was added as a partner.  We all need to collaborate – based at the Wellcome Genome Campus in the UK.

Two things they want to do:  1. Create a bioinformatics platform that integrates as many data sources as possible in a systematic way.  2. Do large-scale investigation (high-throughput genomics).

The first one is hard: combining all these things required developing a unified model for merging data sources.  TargetValidation.org.

[Very cool demo here!]

They don’t want to be the database of record for this – instead they are a portal and integration for others.

http://opentarget.org/projects

Target information is the key outcome.  Genome scale where possible – physiological relevance to disease.  Key technologies: use strengths from partners (Ensembl).

20 open projects, 3 disease areas, human cellular experiments, leveraging genetics to build and enable resources to do more experiments.  Encourage a cycle that improves data, which improves platforms, etc.

Example using Genetic screens of Immune cell function. (Use iPS derived macrophages.)  Pragmatic approach to make it possible to do this.

Mission: pre-competitive approach, committed to rapid publication, non-exclusive partnerships.

Example: IBD.  Lots of data to sift through.  We have an amazing machine that finds gene associations, but we need a new one for understanding causal variants.

  • high marker density and big sample size to do fine mapping. (Using chips, 60,000 samples + imputation.)
  • Super clean dataset & novel stats technique.  Data QC is very important
  • Building cell-specific maps (cell specificity is critical)
  • Zoom in with Sequencing (WGS)
  • Disease relevant cellular models.

We are doing a good job of finding variants, but almost none of them are coding.  Non-coding variants may be altering expression.  eQTLs done: not much more found than you would expect by chance.  We don’t have a good handle on what expression changes are doing (in IBD).

Part of what’s hiding this is that we’re looking in the wrong types of cells.  The effect of non-coding variants is likely to be confined to specific cell types.

Discussion of cases where a target expressed in specific cell types makes a drug dangerous to use, because it has adverse effects in tissue types other than the one desired.

All this data can be brought together – tissue sample all the way to in vitro testing.

What makes this unique?  Bringing everything together.

#AGBTPH – Jonathan Berg, ClinGen and ClinVar: Approaches to variant curation and dissemination for genomic medicine.

Jonathan Berg, University of North Carolina, Chapel Hill

Views himself as an ambassador for ClinGen – What can ClinGen do for you?  What can you do for ClinGen?

The Problem:  Ability to detect variants has outpaced our ability to interpret their clinical impact. 

Data is fragmented into many different academic databases, or held in clinical testing databases.  Want all this to be freely available and rich for use in medicine and research.

Many partners who are involved.

Building a genomic knowledge base to improve patient care.  Includes clinical domain working groups to curate the clinical genome.  Cover lots of different areas that are clinically relevant: cancer, mendelian, etc.

Engaging communities directly to encourage variant deposition.  

ClinVar:  Global, archival, aggregates information.  Takes assertions about a specific variant.  NCBI maintains provenance so you know the origin – who, what, why, etc.  Submissions continue to climb for ClinVar.

Also worth noting that data has a rating system, so you can rapidly work out the reliability of the data.  Discussion of rating system, including “Expert Panels” who have brought together many different sources to collaborate on a single variant classification scheme.

GenomeConnect – another way to get the data into the public domain.

Is a gene associated with a disease:

Set out to define qualitative descriptors.  Six categories of gene-disease assertions:  definitive, strong, moderate, limited, disputed, refuted.  Based on strength of genetic evidence, strength of functional evidence, replication, test of time, and strength of curation.  Have an expert classification system to work through the rating.

ACMG standards and guidelines have been adopted as a framework, and expert groups are using them as well.   A huge amount of work goes into interpretation, and it may differ slightly gene by gene.

Building a Genomic Knowledge base:

Website for ClinGen.  You can follow along with the working groups there. Would love feedback on the website and its usability.

Major focus on Electronic Health Records

Need ways to integrate genomics into the EHR, e.g. the OpenInfobutton system for links to external resources.

Conclusions:

Big group of people doing many many different things – working hard to make the data accessible.

 

Jonathan Marchini, University of Oxford – Phasing, imputation and analysis of 500,000 UK individuals genotyped for UK Biobank #AGBTPH

Major health resource aimed at improving the prevention and treatment of disease. Available to academic and commercial researchers worldwide. (Not completely free – you have to have a good reason to use it, etc.)

Baseline questionnaire (touch screen), 4 minute interview, baseline measures.  Some subsets had additional tests.  Enhanced phenotypes were asked to do further specific tests and questionnaires as well.

Whole genome genotyping with a bespoke array.

Axiom SNP array – 830k.  Run on all participants.

First step: Quality control.  Provide a robust set of quality-control measures. Also provide researchers with useful information about genetic ancestry.

PCA done on individuals, showing geographic genetic ancestry.  [Very typical plot of first 2 PCs.]
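The ancestry PCA mentioned above can be sketched in a few lines. This is a minimal illustration of the standard approach (center each SNP by its mean and scale by the binomial SD, then take an SVD), not the actual UK Biobank pipeline; the toy data and frequencies are made up.

```python
import numpy as np

def genotype_pca(G, n_components=2):
    """PCA of a genotype matrix (individuals x SNPs, coded 0/1/2).

    Columns are centered and scaled by the expected binomial SD
    sqrt(2p(1-p)), as is standard for ancestry PCA."""
    G = np.asarray(G, dtype=float)
    p = G.mean(axis=0) / 2.0              # per-SNP allele frequency
    sd = np.sqrt(2.0 * p * (1.0 - p))
    sd[sd == 0] = 1.0                     # guard against monomorphic SNPs
    X = (G - 2.0 * p) / sd
    # SVD of the standardized matrix gives the principal components
    U, S, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_components] * S[:n_components]

# toy example: two populations differing in allele frequencies
rng = np.random.default_rng(0)
G = np.vstack([rng.binomial(2, 0.1, size=(50, 200)),
               rng.binomial(2, 0.6, size=(50, 200))])
pcs = genotype_pca(G)
print(pcs.shape)  # (100, 2); PC1 separates the two populations
```

On real data, plotting the first two PC score columns against each other gives exactly the kind of geographic-ancestry plot shown in the talk.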

Family relatedness: “Found a considerable number, rather more than we expected”  148,000 individuals with a relative (cousin or closer).  Can be useful, but important to know that not all individuals are independent data points.

3.2 billion bases in the genome, but only 800,000 positions measured.  What can be said about the unmeasured fraction?  Use statistical methods to estimate haplotypes (haplotype estimation – phasing). Used their tool SHAPEIT2, which was OK but not great because one step had O(N²) behaviour.  Modified the code to O(N·log N), using hierarchical clustering in a local region.

Applied method to data set – (Nature Genetics)

Tested the software using 72 trios.  Run time: 15 minutes. Switch error rate: 2.6%. Total sample size: 1,072.  The method was to phase the children using the trios, then remove the parents and phase again as a group.  If the phase changes, that’s an error.

If the sample size is increased to 10,000, you do much better: the error rate drops to 1.5%.  At 150,000 samples, it drops to 0.3% (run time: 38 hours). “Making just a handful of errors.”
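The switch-error metric used above is simple to state in code: walk along the heterozygous sites and count how often the phase orientation flips relative to the truth. A minimal sketch (the site encoding here is illustrative, not SHAPEIT2's internal format):

```python
def switch_error_rate(truth, inferred):
    """Switch error rate between two phasings of the same diploid sample.

    Each argument is a list of (hap1_allele, hap2_allele) tuples at
    heterozygous sites.  A 'switch' is counted whenever the phase
    orientation flips between consecutive het sites."""
    # orientation at each site: True if inferred ordering matches truth
    orient = [inf == tru for inf, tru in zip(inferred, truth)]
    switches = sum(1 for a, b in zip(orient, orient[1:]) if a != b)
    return switches / max(len(orient) - 1, 1)

truth    = [(0, 1), (0, 1), (1, 0), (0, 1)]
inferred = [(0, 1), (1, 0), (0, 1), (1, 0)]  # phase flips after site 1
print(switch_error_rate(truth, inferred))    # 1 switch / 3 intervals
```

In the trio design described in the talk, `truth` comes from the trio-phased child and `inferred` from re-phasing the same child without its parents.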

Imputation.

Use existing data sets where the haplotypes are known – this is imputation.  You can match your known SNPs against existing haplotypes to guess what is in between.  (In practice, you use many matches, and an HMM to best guess the answer.)  The algorithm is called IMPUTE4 – 10 min per sample.

800,000 SNPs –>  80 Million Imputed SNPs.  [Mostly accurate, from tests shown and getting better all the time.]
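The core idea – fill in untyped sites by copying from reference haplotypes that match at the typed sites – can be sketched crudely. This toy version copies the single best-matching haplotype; real tools like IMPUTE4 instead average over many partial matches with a Li-Stephens-style HMM, so treat this purely as an illustration (the panel and indices are made up).

```python
import numpy as np

def impute_nearest(ref_haps, typed_idx, typed_alleles):
    """Impute untyped positions by copying the reference haplotype that
    best matches the typed SNPs - a crude stand-in for the HMM used by
    tools like IMPUTE4."""
    ref = np.asarray(ref_haps)            # (n_haplotypes, n_sites), 0/1
    obs = np.asarray(typed_alleles)
    # Hamming distance at the typed sites only
    dist = (ref[:, typed_idx] != obs).sum(axis=1)
    best = int(np.argmin(dist))
    return ref[best].copy()               # full haplotype, gaps filled in

# reference panel of 4 phased haplotypes over 8 sites
panel = np.array([[0, 0, 1, 0, 1, 1, 0, 0],
                  [0, 1, 1, 0, 0, 1, 1, 0],
                  [1, 0, 0, 1, 1, 0, 0, 1],
                  [1, 1, 0, 1, 0, 0, 1, 1]])
typed_idx = [0, 3, 7]                     # only 3 of 8 sites were genotyped
imputed = impute_nearest(panel, typed_idx, [1, 1, 1])
print(imputed.tolist())                   # [1, 0, 0, 1, 1, 0, 0, 1]
```

This is the sense in which 800,000 typed SNPs can stand in for 80 million: the typed positions identify which reference haplotypes a sample is mosaicked from, and the reference fills in the rest.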

Example: standing height.

Using biobank SNPs, you don’t see much with 10,000 individuals.  With 150,000 biobank individuals, you can see a few more regions of interest.  At 350,000 individuals (the subset with homogeneous ancestry), you can find several relevant regions.  If you apply imputation on top, you can see many regions that likely interact with the trait. (Adding imputation actually lets you see details on genes that aren’t in the original SNP set.)

They leave the validation of this data to other researchers.

Full release will probably happen early next year.

http://www.ukbiobank.ac.uk

#AGBTPH – Fowzan Alkuraya – It’s your variant, it’s your problem, and mine

Fowzan Alkuraya, Alfaisal University

We currently know a small number of variants are benign, and a smaller number are pathogenic.  The idea is to drive toward knowing every possible variant.  But even if we could classify every variant, the catalogue would be outdated shortly.  However, we can use phenotype, which keeps up with the gene pool – that way we can ask how genotype translates to phenotype.  It’s not really that easy…

The formidable challenge of heterozygosity.

We are robust to heterozygous mutations, obviously.

Gene-level challenge: Is the gene dispensable? Is there a non-disease phenotype?  Is it a recessive disease phenotype?

Variant level: Some we’ll never see because they’re embryonically lethal.  Some may never be clinically consequential.  Non-coding?  Truncating variants in dominant genes with no phenotype?

Fortunately, it’s all in the same species!  And, if we can show something is pathogenic, we can know that for next time.  Exploiting the special structure of the Saudi population to improve our understanding of the human genome.

  • High rates of consanguinity – endless source of homozygotes.
  • Large family size – great segregation power

Examples for Discovery of novel disease genes.

Some workflow: use predictive technologies, use frequency data, use model organisms, etc.  Use family data to identify how the variant exerts its effect.

At the end of the day, this data can be shared so that everyone can benefit from this knowledge.

In the second example, finding novel “lethal” genes.  Can’t do it statistically because it’s so rare.  The best hope is to observe biallelic variants in non-viable embryonic tissue.  Showed a case in which a homozygous variant was present in all non-viable embryos from a single family.  They were able to do that without knowing anything about the biology of the gene.

What do they do with it?  They put it out so everyone can share in the knowledge.  You never know which family is going to be making life-altering decisions based on the variant.

Published it – it turned out to be the most frequent mutation in fetal losses in the Saudi population.  Turned out to be important in an endothelial protein. (Cerebral haemorrhages.)

Now in ClinVar.

Example where it’s hard to understand the mechanism of the disease, and an example where prediction tools aren’t able to get it right.

How many variants are we just missing because they’re in the dark matter of the genome?  The ratio of variants in non-coding parts of the genome to variants in the coding part = ?

We don’t know either of these, so it’s a hard problem:  Homozygosity mapping to the rescue.  Challenge of non-coding mutations.
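Homozygosity mapping leans on runs of homozygosity (ROH): in consanguineous families, the disease locus sits inside a long stretch that is homozygous in every affected relative. A minimal ROH detector might look like this (window length and encoding are illustrative; real tools also handle genotyping error and physical distance):

```python
def runs_of_homozygosity(genotypes, min_len=10):
    """Find runs of consecutive homozygous genotype calls.

    genotypes: list of 0/1/2 allele counts along a chromosome
    (1 = heterozygous).  Returns (start, end) index pairs of runs at
    least min_len sites long - candidate regions to intersect across
    affected relatives in homozygosity mapping."""
    runs, start = [], None
    for i, g in enumerate(genotypes + [1]):   # sentinel het closes last run
        if g != 1 and start is None:
            start = i                         # run of homozygosity begins
        elif g == 1 and start is not None:
            if i - start >= min_len:
                runs.append((start, i))
            start = None
    return runs

# toy chromosome: a short hom stretch, a long one, then another short one
g = [0] * 5 + [1] + [2] * 12 + [1] + [0] * 3
print(runs_of_homozygosity(g, min_len=10))    # [(6, 18)]
```

Intersecting these intervals across affected siblings narrows the search to a single locus, which is exactly the mapping step described for the 104 families below.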

104 families with a recessive genotype that maps to a single locus.  101 of 104 were found to have genic mutations.  The vast majority of disease-causing mutations are in genes, then.

Good news: presumed non-genic mutations <3%.

Bad news: many others will be missed for other reasons.

Demonstrated this with a sample cohort (33 families)

Catalogue of biallelic LOF in well-phenotyped individuals.    Able to find several genes that had been erroneously linked to disease phenotypes.

[My paraphrasing: So, in the end, we should all be concerned about all of the variants, and getting them right.]

#AGBTPH – Hakon Hakonarson, CHOP – Genomics-driven biomarker discovery and utilization: The precision medicine experience from CHOP

How they’ve leveraged their biobank into discoveries.

Novel gene editing technologies are coming – human examples are here as well.

CAG @ CHOP, founded in June 2006.  Recruit enough children that even rare conditions become common.  About 100,000 kids have been recruited, 70,000 of whom can be recontacted.  Almost 300,000 additional samples through collaborations.

An early disease that was worked on, Neuroblastoma – usually found in advanced stages. Hard to treat.

Found some markers.  1% hereditary, 99% sporadic.

12% of sporadic cases had ALK mutations.  Existing drug for that, so was able to go straight to trial, which was successful and rapid.

Another project: Neurocognitive phenotyping. [Huge data collection effort, covering a very broad set of data-gathering methods.]  ADHD was a component of it.  Identified CNVs in the cohort, which clustered on glutamate receptors in the brain (Elia, Glessner et al., 2011). Replicated in 5 different cohorts. CNVRs overrepresented in the cohort.

Have seen similar things in other neurocognitive diseases.

At the time, there was no drug for the mGluR pathway.  There was, however, a drug indicated for another disease that didn’t make it to market.   Found that up to 20% of patients have glutamate-pathway copy number variants.  They undertook new studies to demonstrate that the drug was useful for ADHD patients with these mutations.  Ended up approved by the FDA down to 12 years of age.  IRB approval in November 2014, completed by May 2015.   Efficacy was extremely robust in this preliminary setting: 80% of patients had improvement following the highest dose.

Expanded to include new mutations that influence mGluR signalling, then expanded further to genes that influence those.

Tier-one response was much stronger; those with mutations in the expansion groups did not have as high a response.

Some overlap with children who had co-morbid autism (including 22q deletion).  Major improvement in social behaviour and language.

Started a separate trial for 22q11.2 deletion syndrome based on effects seen in earlier results.

Repurposing compounds that already have safety data makes for rapid drug trials.

 

#AGBTPH – Teri Manolio, NHGRI – Genomics and the Precision Medicine Initiative

Substitute speaker for Eric Green, who could not attend. “Sends his regrets… and me.”

Precision medicine initiative, announced by president in Jan 2015.  Foundation for something that will change the way we practice medicine.

  • Genomics: ClinGen is one resource that will be a huge help.
  • EHR have changed a lot in the last 12 years. (Paper replaced by banks of computers.)
  • Technologies, such as wearable devices. Sensors, for instance
  • Data science/Big Data is also transformative.
  • Participant Partnerships, patients become partners, not subjects.

PMI Cohort: One million volunteers, reflecting the make-up of the U.S., with a focus on underrepresented groups.  Longitudinal cohort. (Anyone can volunteer, or be recruited via selection processes.)

Reflect: People, health status, geography, data types.

Benefits:

  • Large and diverse,
  • support focus on underserved,
  • complementing existing cohorts, not duplicating.

Possible issue: Biasing towards Geeky people. [nice!]

Initial awards were made for pilot studies.  Developing the brand, etc.

In July, $55M for Cohort Program Components.

Collaborate with Million Veteran program.

Start with the basic usual information, but will expand as the project grows.

Transformational approach to data access – data sharing with researchers and participants. Colleges, high school etc.  Industry, citizen science.

Will launch when ready and right – want to launch before current administration leaves office, but will happen “when it’s ready”.  Anticipate 3-4 years to reach one million participants.

Funnel of innovation being used: Exploration R&D -> Platform definition -> Advance definition -> Production -> Launch.  Also, Landing Zones: MVP, Goals and Stretch Goals.  Divided into areas that must be done.  [Basically, using industry practices for R&D on academic research?]

#AGBTPH – Howard Jacob, Hudson Alpha – Clinical sequencing for patients, adoptees and the health curious

Market segments: reference labs, sequencing technology companies, bioinformatics companies, data storage companies.

How do we get all this implemented into healthcare?

Why isn’t insurance paying?  Researchers are publishing conflicting information on many questions, ethics, costs, accuracy, etc.   NGS is not a validated test.

Rare disease is a huge problem

Lots of genes… lots of possible errors, therefore many possible combinations.  Diagnosis can be far off – 8 appointments, 7 years on average.

How much of the genome should we test?  80% according to ENCODE.  The exome is 1.5% of the genome.  Which would you pick?

Panels are standard, but only useful relative to clinical phenotype.  Whole genome adds value over time.

Need WGS and bioinformatics to solve value of non-coding.  We need the data in the non-coding to make sense of it all.

3,000 genomes at St. Jude’s.  But how do we do this clinically?  Example: can you find genes for developmental delay?  376 families (primarily trios).  339 families done – just passed 100 diagnoses this week (102).  28% diagnosed.

Families not diagnosed are open to reanalysis…. can revisit the data over and over again.

Also part of Undiagnosed Diseases Network.  This is about patients.

Genetic testing is largely underused.  Policy is state by state – mainly because we’re still arguing over how accurate the data is.  The literature shows we’re not completely accurate; different labs get different results.  Exomes are being funded, but genomes aren’t.  Doesn’t make a lot of sense.

Picking on insurance companies: let’s start getting companies to pay for sequencing.

Is it really that inaccurate?  Lined up Baylor vs HudsonAlpha – not easy to do an apples-to-apples comparison.  Do they come up with the same thing?  There will, of course, be differences.  However, both analytical teams came down to the same variants being diagnostic.

Reproducibility: It’s possible, requires new tests, still evolving.  More genomes -> More accuracy.

What data to return?

Have a lot of ethicists at HudsonAlpha – options are presented to parents: Primary, Child No-Rx, Adult Actionable and Adult Non-Actionable.

Asked audiences: 31% of geneticist-heavy audiences say yes, they want it, compared to ~50% of lay people.  Not all that different.

Huge implications:  ethical, legal and social.

Some paediatric geneticists consider “diagnosis” as “actionable” because it prevents you from having to run from place to place.

The way you view the data influences how you interact with it.  Personal decisions/Personal Medicine.  Precision medicine is for physicians.

Many excellent examples of where genomic medicine would have been really helpful and either saved lives, saved money or prevented suffering.

ROI is impressive.

Average workup for patients at each new hospital on your way to diagnosis is $20,000.  If it takes 8 hospitals on average to get a diagnosis, that’s a huge cost.

WGS can be done once, and re-used over and over.

Healthcare is about taking averages. Dosing is based off of averages, is it always useful that way? No.

Rolling out the Insight Genome, driven by utility.  What data will people use?  On average, very few variants will have a major effect at the population level. Physicians make decisions every day with incomplete data.

How do we get the system to care?

 

Julie Segre – Microbial Genomics in a clinical setting. #AGBTPH

Two cases.

Genetic disorders and microbial disorders often interact.  Nearly all microbes can be uniquely identified by shotgun nucleic acid sequencing.

Topic 1.  Infectious diseases in hospitalized patients.  Sometimes can’t tell the kingdom, even.   Sample -> sequencing -> Bioinformatics ->  hopefully identifying agent.

The human genome is often the contamination here – you can’t physically extract it out.  Opening cells for fungi requires harsh treatment.

SURPI bioinformatics pipeline used.  What do you get out, and is it even in your database?
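The overall shape of such a pipeline – subtract reads matching the human host, then assign the remainder to whatever reference they best match – can be sketched as a toy k-mer classifier. This is only a conceptual illustration; pipelines like SURPI actually use fast read alignment against curated databases, and all sequences and k values below are made up.

```python
def kmers(seq, k):
    """All k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def classify_reads(reads, host_kmers, ref_kmers_by_taxon, k):
    """Drop reads sharing any k-mer with the host, then assign each
    remaining read to the taxon whose reference shares the most k-mers."""
    counts = {}
    for read in reads:
        ks = kmers(read, k)
        if ks & host_kmers:            # host (human) subtraction
            continue
        best, best_hits = None, 0
        for taxon, ref in ref_kmers_by_taxon.items():
            hits = len(ks & ref)
            if hits > best_hits:
                best, best_hits = taxon, hits
        if best is not None:
            counts[best] = counts.get(best, 0) + 1
    return counts

# toy data: k=4 for readability; real pipelines use much longer matches
host_ref = "AAAAAAAAAA"
refs = {"Leptospira": kmers("ACGTGCTAGCTA", 4)}
reads = ["AAAAAAA",    # host read, removed
         "GTGCTAGC",   # matches the Leptospira reference
         "TTTTTTT"]    # matches nothing, left unclassified
print(classify_reads(reads, kmers(host_ref, 4), refs, 4))
```

The "is it even in your database?" question from the talk maps directly onto the unclassified read here: anything absent from the reference set simply cannot be called.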

Case 1: 3 hospitalizations over 4 months – 44 days in the ICU, over 100 inconclusive tests.  Cured 2 weeks after NGS diagnosis with appropriate treatment.

A very clear hit was found for Leptospira santarosai.   There had been travel to Puerto Rico, and leptospirosis is a water-borne disease.  Appropriate treatment was used, and the infection resolved. (Tests were run that validated the diagnosis.)

CLIA validation of these methods is required.  It’s a step-by-step process that happens over a year.  [Appears to take nearly 2 years?  April 2015 to March 2017.]

Aside: Nanopore sequencing may also be a hugely exciting development for this field because it’s so fast.

Topic 2:  Using sequencing to inform on healthcare-associated infections.

CRE – carbapenem-resistant Enterobacteriaceae.  We have no antibiotics left to fight these bacteria.  (Klebsiella pneumoniae.)

Patient 1 in June, then several patients in August.  Either patient 1 was unrelated or transmission occurred.

Sequencing happened: patient 1’s urine sample (taken as the reference genome).  3 variants in the throat isolate, 3 different SNPs in the lung.  Patients 2 and 3: identical to the throat sample from the first patient (one extra SNP in patient 3).

Patient 1 and 3 overlapped in ICU.  3 and 2 overlapped in ICU.

Patient 4 had variants matching the lung isolate(?), so a separate transmission.

This data showed that transmission was happening – ultimately, a transmission map was created with the other patients, and it became clear how the organism was transmitted.  This helped identify which avenues needed to be tracked down by cohorting patients.
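The transmission-mapping logic is essentially pairwise SNP distance between isolates: each isolate is a set of variant positions relative to the shared reference, and the most plausible source is the previously seen isolate at the smallest distance. A sketch, with variant positions loosely modeled on (but not taken from) the case described:

```python
def snp_distance(a, b):
    """Number of SNPs differing between two isolates, each given as the
    set of variant positions relative to a shared reference genome."""
    return len(a ^ b)   # symmetric difference

# illustrative variant sets mirroring the pattern from the talk
isolates = {
    "pt1_urine":  set(),                 # chosen as the reference
    "pt1_throat": {101, 202, 303},
    "pt1_lung":   {404, 505, 606},
    "pt2":        {101, 202, 303},       # identical to the throat isolate
    "pt3":        {101, 202, 303, 707},  # throat plus one extra SNP
    "pt4":        {404, 505, 606},       # matches the lung isolate instead
}

# closest previously seen isolate = most plausible transmission source
for name in ["pt2", "pt3", "pt4"]:
    src = min(("pt1_throat", "pt1_lung"),
              key=lambda s: snp_distance(isolates[name], isolates[s]))
    print(name, "->", src, snp_distance(isolates[name], isolates[src]))
```

Combined with the ICU-overlap dates, distances like these are what let the team distinguish the throat-derived transmission chain (patients 2 and 3) from the separate lung-derived event (patient 4).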

Resistance genes are generally on plasmids, so we need to be aware of the possibility of transmission of the plasmid to other organisms.

National Pathogen Reference Database – CDC, FDA and NIH.

If you have a reference, you can pretty much assemble anything.

 

Notes from NGS applications – Stephen Kingsmore, Fowzan Alkuraya, Hakon Hakonarson

I’m dumping my notes as I’ve done for other conferences – obvious mistakes are obviously my fault, and not those of the speakers.


Fowzan Alkuraya, Alfaisal University

Case 1 – 13 month old girl – developmental delay.
Unrelated parents
MRI brain atrophy
Karyotype: 45,X (non-mosaic Turner syndrome)

In the past, would have said “atypical Turner syndrome”.
Now, that’s not good enough – we can find something else. “Atypical” should really not be used anymore – there’s probably more than one “lesion”.

Exome sequencing:
ADRA2B – Arg222* -> homozygous truncating mutation.

Lesson: Don’t assume – there’s no excuse for “atypical” in the genomics era.

Case 2 – 4 year old suspected autism.
Non-contributory family history with a healthy brother – should raise a flag: autism is more common in boys than girls. A Mendelian form of autism?
Documented cognitive impairment – otherwise normal.
All guidelines : molecular karyotype: de novo 300kb deletion on chr10.
Is it pathogenic?
Use the DECIPHER database of structural variants. ***** why don’t we use this?
found a match.

Were conducting a study that included clinical genomics approach, and exome sequencing found:
Homozygous mutation in CC2D1A, causing skipping of exon 6. Not in ExAC, but found in Saudi Arabians (1 in 500). Known to be entirely correlated with mental development.

Lessons:
Beware of founder mutations in different ethnic groups.
Exome sequencing in parallel with molecular karyotyping for neurodevelopmental disorders.

But, when do we stop? Do we always need Exome sequencing?

Case 3 – (Consanguineous) couple lost two children to severe, unexplained lactic acidosis.
First child died on 2nd day
Second child died within hours.

Normal electron transport chain. Sequencing of candidate genes was negative. Clinical exome sequencing: negative.

clinical Whole Genome sequencing: Negative.

Research-grade exome sequencing: found a splicing mutation in ECHS1, a known cause of acidosis.

Severe reduction in NMD.

30-50% of exome sequencing cases remain without diagnosis. Are we missing the mutation at the capture and sequencing stage, or at the interpretation stage?

Analyzed 33 cases with negative clinical exome/genome sequencing. Found the mutation in 29 cases.

In 18 cases, the gene was novel or had only become known within 6 months of the time of diagnosis – probably not reported for that reason.
In 11 cases, the mutations were in known genes.

These were probably not reported by clinical labs because of filtering and interpretation issues.

Lesson:
If you have a mutation in a novel gene, it’s likely to be missed by clinical sequencing.


Stephen Kingsmore, Rady Children’s Hospital:

2 cases:
First: at birth, acute liver failure,
surgically corrected.
Spine defects, renal defects, surgically correctable.
Doing well until day 40, when he started to develop liver dysfunction. Diagnostic workup was unrevealing.
On day 55, Rady was brought in. Race against the clock.
Whole Genome Sequencing time cut to just 26 hours.

1. Consent at time 0:00
Sample transport
DNA isolation: 1 hour
18-hour genome sequencing, completed at time 24:30

40x genome: 120,000,000,000 bases
2.8M bases
5.1M variants
1.3M variants after the 1% frequency filter was applied
1.3k pathogenic or likely pathogenic
2 variants that could cause one of the 341 conditions (below), both in the same gene: perforin 1.

– Very typical, but it has to be done fast. FPGA-based informatics.

ACMG guidelines on how to build cases.

Focus on pathogenic and likely pathogenic

Big issue: which conditions are related to the genotype of interest? Used Phenomizer etc., and were able to narrow down to 341 conditions that might match the symptoms.
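The triage funnel described here – frequency filter, then pathogenicity classification, then intersection with genes for the phenotype-matched conditions – can be sketched as a few list comprehensions. The field names, thresholds and all genes other than PRF1 below are illustrative, not Rady's actual pipeline schema:

```python
def triage(variants, max_af=0.01, condition_genes=frozenset()):
    """Sketch of a rapid-WGS triage funnel: keep rare variants, keep
    (likely) pathogenic ones, then keep only those in genes tied to the
    phenotype-matched conditions."""
    rare = [v for v in variants if v["af"] < max_af]
    path = [v for v in rare
            if v["class"] in ("pathogenic", "likely_pathogenic")]
    return [v for v in path if v["gene"] in condition_genes]

# toy variant list; PRF1 is the gene from the case, the rest are made up
variants = [
    {"gene": "PRF1",  "af": 0.0001, "class": "pathogenic"},
    {"gene": "GENEA", "af": 0.05,   "class": "pathogenic"},         # too common
    {"gene": "GENEB", "af": 0.004,  "class": "benign"},             # not pathogenic
    {"gene": "GENEC", "af": 0.002,  "class": "likely_pathogenic"},  # wrong phenotype
]
hits = triage(variants, condition_genes={"PRF1"})
print([v["gene"] for v in hits])  # ['PRF1']
```

Each stage mirrors a step in the numbers quoted above: millions of raw variants, ~1.3M after the frequency filter, ~1.3k classified pathogenic or likely pathogenic, and finally the handful that sit in phenotype-matched genes.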

The 1st variant was very rare.
The 2nd variant was at 3% frequency.

If the second variant is in trans with another pathogenic variant, it’s likely pathogenic as well.
Provisional diagnosis – the FDA gave permission to give a verbal putative diagnosis in cases where a child’s life is in imminent danger.

Confirmatory testing was done, and the diagnosis was positive. Fortunately, there is a treatment, and the child is now thriving. He still has the disorder, which may require a bone marrow transplant, but the child’s life was saved.

Case 2: firstborn with transient hypoglycemia. Transferred to the NICU.

At 1 month, the nurse practitioner noticed low blood sugar – hyperinsulinemia.

Similar numbers to the previous case, whittled down to 160 conditions, with only 1 variant that matched the disease.

Known pathogenic mutation : ABCC8.

Recessive condition, inherited from the father. There is also focal hyperinsulinism, which presents from the paternal allele (uniparental disomy).

The second event was a de novo mutation in the child, which was shown to be present only at the head of the pancreas – so they were able to remove just the damaged segment of the pancreas.

Pancreatectomy was scheduled.

Total time: 7 days from start to cure.

Avoided major morbidity – probably major neurologic damage.

Does it scale?

35-case cohort: 57% diagnosis rate. In contrast, standard tests achieve 9%.

These cases were cherry-picked as likely genetic diseases, but it still demonstrates the power.

2nd version:
80-case cohort: 58% diagnosis rate.

Brand new info: Kansas City test. However, the trial was discontinued because it was obvious that the diagnostics were working: 15% rate with standard tests, 41% with clinical exome-based tests.

Makes a significant impact in all aspects of care.

For every child tested: 2.9 quality-adjusted life years of improvement, or $3,500 per quality year.


Hakon Hakonarson – Children’s Hospital of Philadelphia

Centre for Applied Genomics at CHOP. Collaborate with Penn.

Case from Lipid Cohort. Familial form of lipid disease. 1700 subjects, 900 families.

Case: 55 year old man – phenotype described (no fat, mild diabetes, lipoprotein panel appeared normal.) [Missing much of it – don’t know the terms]

Many features overlapped with adult progeria.

Initial genetic analysis: turned out to be homozygous for PLIN1 and heterozygous for WRN.
Balanced translocation t(8;10) as well.

Pedigree shown. Two brothers, both with much milder phenotypes; both had liver issues.

The goal became mapping the breakpoint: were there any additional genes or elements contributing to the phenotype? The condition is far more advanced in the proband.

Used linked-read technology for translocation breakpoint mapping. (Quick review of the barcoding used by this technology: gel beads in emulsion.)

Fine mapping of the region: near CYP26C1 and CYP26A1, with ADHFE1 on the other side.

No single gene jumped out.

Many hypotheses were considered – not clear what is going on. The next step is investigation into WRN. Cell lines are being used for WRN activity, protein expression and transcript assessment.

Assess changes in genes near the breakpoint.

Not totally solved, but very interesting case.