Tuesday, January 12, 2016

Big Data at JP Morgan (Panel)

At the January 2016 JP Morgan biotech conference, Astellas and Deloitte sponsored an expert panel on where we stand with big data in the life sciences, and where we should be by 2025. A brief summary is provided, along with a link to more detailed notes.

Neil Lesser, Principal, Deloitte Consulting (Overview)
John Carroll, Editor in Chief, Fierce Biotech (Moderator)
   Atul Butte, Director, UCSF Institute for Computational Health Sciences
   David Glazer, Engineering Director, Google
   Michael Pellini, CEO, Foundation Medicine
   Anthony Philippakis, Chief Data Officer, Broad Institute
   Matthew Trunnell, VP/CIO, Fred Hutchinson Cancer Research Center
Jim Robinson, President, Astellas Pharma USA (Closing remarks)

SUMMARY
John Carroll of Fierce Biotech introduced the meeting, describing a challenging period for biopharma: increasingly complex development of specialty drugs, and regulatory uncertainty in which accelerated pathways are balanced by increased postmarketing requirements.  And of course, coverage and payment have become an "echo chamber" about the need for value-based market entry.

Neil Lesser of Deloitte introduced the year-end 2015 report on biopharma productivity, which profiles 12 large biopharmas (for the fifth year) and, this year, compares them to five specialty biotechs.  Pharma seems to lack economies of scale in R&D; the largest companies may actually be less efficient.  Revenue per new drug is down, and cost per new drug is up.  The report is online; a summary from Reuters is here.

David Glazer of Google described the massive big data analysis capabilities that Google originally developed for marketing.  These are an enormous resource to refine and apply to big data in the life sciences.

Anthony Philippakis of the Broad Institute discussed its collaboration with Google.  ("We taught Glazer to spell DNA.")  Big data in life sciences is underway, but it will continue to demand large-scale development of platforms and infrastructure.  Right now bioinformatics experts are scarce, and they seem to be "recreating the wheel" at every large center, which is inefficient.  Matthew Trunnell of the Fred Hutchinson Cancer Research Center described its big data efforts and goals as well.

Michael Pellini of Foundation Medicine described their work: they are already pushing genomics and bioinformatics very far into the clinical world, and doing it today.  They integrate many kinds of information, from genomics to clinical drug/gene data to clinical trial data, and deliver it to all clinicians, including the 85% who work in the community.  However, the rest of the pipeline is faulty: payer understanding is limited, payment for off-label drugs selected by genomics is uncertain, and the clinical trials infrastructure, which today encompasses far too few cancer patients, needs reform and improvement.

Atul Butte of UCSF discussed the tremendous increase in genomic data from new technology.  However, he also described the enormous need to harvest the patient and clinical information that already exists.  UCSF wants to organize all five University of California medical centers into a data network.  He mentioned a cancer for which UC had 20,000 patients, 2,000 of whom had received a particular drug.  That is ten times the roughly 200 patients in the original trials, the data that the pharma company and the FDA had.  UC implicitly has 10X more data already, and it is just one provider network.

The panel discussed the need to harvest and benefit from the enormous amounts of public data, in combination with private data.  One vision is that you can seamlessly take proprietary data (from large labs or pharma) and meld it into "one view" with public data (e.g., the Cancer Genome Atlas, TCGA).  Public data will continue to grow exponentially, with the US Precision Medicine Initiative, the UK genome initiative, and others.  On this point, it was argued that if and when you "can do" things (when you have the petabytes of public data, EHRs, and so on), then the "policy" questions, such as privacy and collaboration, will eventually be worked out.

There was a viewpoint that industry (pharma) won't be able to do it, but academia can.  Others disagreed, arguing that pharma was very much abreast of all the topics being discussed, interested in them, and very active in them.

Glazer offered the viewpoint that there are three parts, three "applications," that need to function together: the "data" application (petabytes of data), the "tools" application (for analysis), and the "clinical" application (bringing these into healthcare, to patients).

What is happening now in cancer is still early, but at some point it will be applied to other conditions as well.  It is not only tools and big data; the "insight" and "asking the right questions" need to be developed with new skills, experience, and time.  There was also discussion that we will need many more bio software engineers; they lack a place in traditional academia (they are not PhDs), but this cohort of professionals is mission-critical.  That said, it was also noted that we should develop interfaces sophisticated enough that non-programmers can ask and answer questions of big data as well.

Question and answer turned to big data in research.  Can we do "synthetic" or in silico biology experiments?  Can we unite big data and big molecular biology, such as using CRISPR to run thousands of gene-editing cell culture experiments in parallel?


Jim Robinson, President of Astellas Pharma USA, wrapped up.  The field is very promising, and Astellas has invested in a large-scale US center for big data and real-world evidence.

More detailed notes are in the cloud, here:

http://tinyurl.com/JPM2016BigDataPharma