Wednesday, December 24, 2014

Theranos: A Library of Articles and Links 2006-2016

This blog began in December 2014, when a now-defunct blog ran a critical review of the favorable New Yorker article on Elizabeth Holmes and Theranos.

From December 2014 to December 2016, this blog entry served as an informal log of online news articles about Theranos.   I do not review it for dead links, which likely accumulate over time.



The original lead paragraphs of this blog, written back in December 2014, were archived here.
For an update on the March 2018 SEC action against Theranos, see here.

I'll add one note from June 2018: an article reporting that "Pathologists predicted the Theranos debacle, but their voices were missing from most news coverage"!  Here.  (Archived here.)

Monday, December 22, 2014

FDA to hold workshop on NGS regulation on February 20, 2015; publishes 9-page white paper

The FDA has announced a workshop on regulation of genetic tests and next-generation sequencing, to be held in Bethesda on February 20, 2015.  The FDA announcement is here.  The FDA conference page is here.

The agenda for the day has not yet been announced.  However, FDA has published a nine-page white paper outlining the agency's view of key topics and issues in NGS regulation, here.


  • Update 2/23/2015: The meeting was held on February 20; for the Genomeweb meeting review by Turna Ray, see here (subscription).


Key points are:

SUMMARY: 
The Food and Drug Administration (FDA) is announcing the following public workshop entitled “Optimizing FDA’s Regulatory Oversight of Next Generation Sequencing Diagnostic Tests.” (here) The purpose of this workshop is to discuss and receive feedback from the community on the questions in the discussion paper on diagnostic tests for human genetics or genomics using next generation sequencing (NGS) technology.

WHEN:  February 20, 2015, from 8:30 a.m. to 5 p.m.
ADDRESS:  Natcher Center at the National Institutes of Health Campus, 9000 Rockville Pike, Bldg. 45 Auditorium, Bethesda, MD 20814.

WHITE PAPER:  Section I is a brief introduction; Section II discusses the FDA's regulation of laboratory tests.  Section III is a two-page overview of the "challenges and opportunities" raised by NGS.  It also notes FDA's several workshops and prior efforts in this area, and the clearance of Illumina's cystic fibrosis NGS tests.  The longest section, Section IV, discusses NGS analytical validity topics (IV-A) and clinical validity topics (IV-B).  There is a discussion of ClinVar and other public data repositories, and a discussion of how the FDA, clinicians, and the public should handle variants of uncertain significance.  There is a short closing Section V.

I've also provided online the full text of the FDA discussion paper on NGS, after the break.

For an NIH workshop on Precision Medicine and whole genome sequencing, Feb 11-12, 2015, see here.

On February 18, 2015, Eric Lander of the President's Council of Advisors on Science and Technology (PCAST) published an op-ed in the NEJM on the need for special FDA treatment of NGS (here).  Trade press coverage at Genomeweb (here, subscription).


Monday, December 15, 2014

Can "Clinical Utility" Vary For Two Tests, if "Clinical Validity" is the Same?

Last summer, I was discussing molecular tests with a commercial plan medical director, and he referred to two gene panel breast cancer tests. In his opinion, the two tests had equal clinical validity, or if anything, the second test had higher clinical validity. However, he felt the first test had "greater clinical utility."
This puzzled me, because I had just published a paper with coauthor Felix Frueh on bringing some structure and order to communications about clinical utility. It had never occurred to me that a test could have greater clinical validity than another, similar test, yet lower clinical utility. I believe one can only draw this conclusion because there is too much "slippage" in the way clinical validity and clinical utility are defined and used. It is more useful to define them in a way such that clinical utility depends on clinical validity plus a use context. If the clinical validity of one of the tests is higher, and the use context is the same, the clinical utility of the other test cannot be better, although it might be the same.
Returning to the Definitions
In a 2014 comprehensive review, Parkinson and colleagues defined clinical validity as "the association between the biomarker and the pathophysiological state or clinical presentation of illness." [1] This is essentially the same as that used by Hayes and colleagues in 2013: clinical validity is "how well the test relates to the clinical outcome of interest, such as survival or response to therapy." [2]
Following these definitions, the test relates what happens in a test tube (a chemical or molecular analysis) to a clinically relevant phenomenon. To be comprehensive, sometimes the test report is analytical (glucose = 125), sometimes it is genomic (we find mutation BRAF V600E is present), sometimes it is an abstract score ("recurrence score = 35"), and sometimes it is binary (a strep lateral flow immunotest is "positive").
Purely analytical tests are related to clinical states by common knowledge or definitions. For combination diagnostics, the clinical correlation is established statistically (BRAF V600E is strongly associated with a high chance of response to vemurafenib). For multiple analyte tests with algorithms (MAAA tests), there is usually a double report, one being algorithmic and one being a clinical variable (a recurrence score of 35 correlates with a 23% chance of 10-year recurrence). A small illustrative sketch follows the table below.

| Test | Analytical Report | Clinical Validity |
|------|-------------------|-------------------|
| Glucose | Glucose = 125 | Common knowledge, definitions, or protocols (not on report) |
| BRAF | BRAF mutation = V600E | High correlation with vemurafenib response in malignant melanoma (usually on report, "interpretation") |
| MAAA | Recurrence score = 35 | Correlated with 23% risk of 10-year recurrence in ER+, node-negative breast cancer if tamoxifen-treated (on report) |
| Strep test | Binary (immunoreaction positive) | Strep present (on report) |
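
To make the two layers in the table concrete, here is a minimal sketch, assuming a hypothetical Python data structure of my own (the field names are illustrative and not drawn from any of the papers cited below):

```python
# A minimal sketch (hypothetical names, for illustration only) of the two
# layers in the table above: the analytical report is what the instrument
# measures; the clinical validity statement is the correlation attached to it.
from dataclasses import dataclass

@dataclass
class TestResult:
    test_name: str          # e.g., "Glucose", "BRAF", "MAAA"
    analytical_report: str  # what the assay itself produced
    clinical_validity: str  # the correlation layered on top of that report

examples = [
    TestResult("Glucose", "glucose = 125 mg/dL",
               "interpreted by common knowledge or protocols (not on report)"),
    TestResult("BRAF", "V600E mutation detected",
               "high correlation with vemurafenib response in melanoma"),
    TestResult("MAAA", "recurrence score = 35",
               "correlated with 23% risk of 10-year recurrence (ER+, node-negative)"),
]

for t in examples:
    print(f"{t.test_name}: report [{t.analytical_report}] -> validity claim [{t.clinical_validity}]")
```

The point of keeping the two fields separate is that the analytical report exists on its own, even before anyone attaches a clinical correlation to it.
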
We do something with tests, which brings about their clinical utility. For Parkinson et al. (2014), "the results of the assay lead to a clinical decision that has been shown with a high level of evidence to improve outcomes." [3] For Hayes et al. (2013), "whether the results of the test provide information that contributes to and improves current optimal management of the patient's disease." [4]
The commercial payer’s question, with which I opened this essay, led me to think that these definitions of clinical validity and clinical utility were not designed to help people agree where the two concepts start and stop, or how they meet at a border zone. However, it is possible to think about these concepts in such a way that there is a bright-line border between them.
The Three Buckets for "AV," "CV," and "CU"
A metaphor that makes the difference clear is this:
  • Analytical validity lives in a test tube.
  • Clinical validity lives on a report and in a data file.
  • Clinical utility lives in a patient.
The simplest concept is probably analytical validity: it relates to the test measurements themselves, such as repeatability, reproducibility, interfering substances, or analytical sensitivity (e.g., in ng/mL). Thus, "analytical validity lives in a test tube," or at least inside the equipment that is doing the measuring.
Clinical validity lives in a report, in a data file, on a chalkboard. We know that BRAF V600E is associated with vemurafenib response in malignant melanoma because clinical trials showed us this was the case. We know that a range of breast cancer genomic tests – including MammaPrint, Oncotype DX, the BCI, and Prosigna – correlate with breast cancer recurrence rates because well-designed large databases tell us so.
It's best to view clinical validity and clinical utility as separate categories even when the contents, for a test in question and its use case, are similar. For example, we might say offhand that the clinical validity and the clinical utility of a BRAF test are the same – a V600E mutation is associated with clinical response, and the lack of this mutation is associated with non-response. In health technology assessments, tests like BRAF get a fast pass, because the clinical validity and clinical utility are so similar. [5] However, we avoid a host of later problems if we take the position that even for these tests, the clinical validity and clinical utility are not "the same."
Combination Diagnostics

| Test | Analytical Report | Clinical Validity | Clinical Utility |
|------|-------------------|-------------------|------------------|
| BRAF | BRAF mutation = V600E | V600E is correlated with increased survival when treated with vemurafenib | When a V600E patient is treated with vemurafenib, he or she lives longer. |
| Her2Neu | Her2Neu positive | Positive Her2Neu is correlated with increased survival if treated with Herceptin | When a Her2Neu-positive patient is treated with Herceptin, she lives longer. |
Here, while "clinical validity" and "clinical utility" sound the same, they are not the same thing. Clinical validity is a correlation between an analytical test report and the outside clinical world. Clinical utility is something that really happens as a result of what we "do" for a patient. This captures the idea that clinical validity "lives" on a test report, on a chalkboard, or in a paper, whereas clinical utility "lives" in the patient herself or himself.

Another way of using our metaphor is to say that analytical validity happens in the real world – although in a very tiny real world inside a test tube – and clinical validity "lives" on a report that we are confident is true because of various prior data. Clinical utility again lives in the real three-dimensional living world of drugs, therapies, and patients.

Yet another way of saying this: if only one factory could make Herceptin, and it blew up, and there were no more Herceptin drug supplies, the clinical utility of a Her2Neu report for a doctor and her patient would be gone (at least for now, and in regard to prescribing Herceptin). However, the clinical validity would be just as true: her test report is in our hand, and cohorts of test-positive patients were reported to live "N" months longer, with survival varying greatly based on Her2Neu status.
A Graphic Model of Clinical Utility
In 2014, Frueh and Quinn published a six-question approach to communications about clinical utility, including a figure that shows a one-way relationship between clinical validity and clinical utility. The model focuses on decision-making for new tests, where clinical utility is assessed relative to the status quo:

graphic

However, there is an increase in clinical utility for that patient only because we did something different from the status quo, a "change in management":

graphic
And there could only be a change in management because there was something different about the information we had with the new test, relative to the status quo with the old test (or no test):

graphic

This last graphic, shown above, only makes sense if two tests that had "the same clinical validity" would have the same impact on clinical utility in the same use case. This is easy to see if we have two thermometers, or two glucose meters, that produce exactly the same reports. It's also easy to see if we have two different BRAF tests that have the same (or very nearly the same) V600E mutation or wild-type reports. The model shown above is generalized for diagnostic tests. For example, if an old MRI scanner has 0.5 cm resolution, and a new MRI scanner has 0.2 cm resolution, the new scanner may produce more accurate radiology reports (clinical validity) which lead to better outcomes (correct surgical decisions and other clinical decisions not misled by false positives and false negatives). On the other hand, if two MRI scanners have exactly the same image field size, slice thickness, contrast ratio, and resolution, it would be difficult to imagine that two indistinguishable imaging results from, say, a Siemens and a Philips MRI scanner would have different diagnoses in the reading room or different clinical actions.
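
As a closing illustration, here is a toy sketch in Python, entirely my own and not drawn from the Frueh and Quinn paper (the decision rule and names are hypothetical), of the reasoning above: in this simple model, clinical utility is a function of the report's information content plus the use context, so two tests that produce indistinguishable reports in the same use context cannot differ in clinical utility.

```python
# Toy model (hypothetical decision rule, for illustration only): clinical
# utility exists when the test report, in a given use context, changes
# management relative to the status quo.

def management_decision(report: str, use_context: str) -> str:
    """All the test can contribute to the decision is its report."""
    if use_context == "metastatic melanoma" and report == "BRAF V600E detected":
        return "treat with BRAF inhibitor"
    return "standard therapy"

def has_clinical_utility(report: str, use_context: str, status_quo: str) -> bool:
    """In this toy model, utility requires a change in management vs. the status quo."""
    return management_decision(report, use_context) != status_quo

# Two vendors' tests with indistinguishable reports, same patient context:
report_vendor_a = "BRAF V600E detected"
report_vendor_b = "BRAF V600E detected"
context = "metastatic melanoma"
status_quo = "standard therapy"

# Same report + same context -> same decision -> same clinical utility.
assert has_clinical_utility(report_vendor_a, context, status_quo) == \
       has_clinical_utility(report_vendor_b, context, status_quo)
```

If the two reports differed (that is, one test had higher clinical validity), or if the use context changed, the decision function could diverge; in this framing, that is the only route by which clinical utility can differ.
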
-------------------------

1. Parkinson DR et al. (2014) Evidence of clinical utility: an unmet need in molecular diagnostics for patients with cancer. Clin Cancer Res 20:1428-44.
2. Hayes DF et al. (2013) Tumor biomarker diagnostics: Breaking a vicious cycle. Sci Transl Med 5:196cm6.
3. Parkinson et al. (ref. 1); citing Olson S, Berger AC (2012) Genome-based diagnostics: clarifying pathways to clinical use [Workshop]. Institute of Medicine.
4. Hayes et al. (ref. 2).
5. The Palmetto MolDX evaluation process provides a "fast track" for combination diagnostics approved with drugs in FDA pivotal drug trials. The BCBS Tech Evaluation Center provided a rapid satisfactory report on BRAF kinase testing in 2011. http://www.bcbs.com/blueresources/tec/vols/26/26_07.pdf

Tuesday, December 9, 2014

House Energy & Commerce: Committee Wants Feedback on Regulation of Lab Tests

On December 9, 2014, the "21st Century Cures" subcommittee of the House Energy & Commerce Committee released ten questions on which it seeks public feedback by Monday, January 5, 2015.   The 21st Century Cures committee website is here; the feedback request is here.  The request for comments is online as a PDF, here.

The request for comments notes that the committee held a roundtable on personalized medicine on July 23, 2014, and a meeting specific to the FDA's proposal to regulate LDTs on September 9, 2014.  (For my coverage of the latter, see here, including a very detailed discussion report, here.)



I've also cut-pasted the eleven questions in full, below the break.

Thursday, December 4, 2014

My talk at California Society of Pathologists

On December 5, I had the chance to give a short talk as part of a panel on changes in payer policy and how they are impacting practicing pathologists.  The location was the nationally popular winter California Society of Pathologists meeting, which gives both Californians and pathologists from cold wintry states a December trip to San Francisco for CME credits.   The final program is available here.

I was pinch hitting for a senior Medicare speaker, but did my best both to talk about what is going on at Medicare vis-a-vis pathology and to convey how the field, profession, or industry (take your pick) looks from the viewpoint of a Medicare policymaker.  My deck for the presentation is available in the cloud, here.

My key points were...

•  Federal policy changes for pathology are legion
•  Local changes to pathology policy
•  MolDX Program
     - MolDX LCDs
•  How pathology looks to policymakers
•  Can changes be more selective?