Thursday, March 12, 2026

Reliving an LDT Saga and Debate from 2007-2010 (PGx, Rituximab)

In the past couple of years, we've lived through FDA regulation of LDTs, court cases, and the expanding capabilities of genomic diagnostics, many of them LDTs. 

However, it's worthwhile to recall the period 2007-2010, when a PGx test to predict rituximab responders led to pushback from Genentech against LDTs, a Citizen Petition to FDA, and a National Academies review in 2010. The PGx FCGR rituximab test had largely sunk out of view by then, and later meta-analyses were negative. Here is a retelling of the whole story from ChatGPT 5.4.

The article below is written entirely by ChatGPT, and as a sidebar I provide a link to the whole ChatGPT dialog in its original form of prompts and answers:  Here.

It would have taken me hours to research and write this essay by hand.  With AI, it took half an hour from my first vague prompt about a half-remembered something.

###

 


Genentech, Rituximab, PGxPredict, and the LDT Wars of 2008:
A ChatGPT Retelling

In the past couple of years, we have lived through FDA regulation of LDTs, court cases, and rapidly expanding genomic diagnostic capabilities, many of them laboratory-developed tests. 

It is worth recalling an earlier episode, circa 2007-2010, when a pharmacogenomic test for rituximab response triggered a sharp pushback from Genentech, a Citizen Petition to FDA, and a National Academies discussion about whether CLIA alone was an adequate framework for predictive tests used to guide therapy. In the dialogue reproduced in my attached PDF [fn 1], ChatGPT 5.4 identified the test as PGxPredict:RITUXIMAB, marketed by PGxHealth, and traced how it became less a durable product than a historical marker in the debate over clinical validity, therapeutic claims, and FDA oversight of LDTs [1].

The LDT Assay of 2007

The basic story is easy to summarize and hard to forget. PGxPredict:RITUXIMAB was launched in 2007 as a pharmacogenetic assay intended to predict likelihood of response to rituximab, based on the FCGR3A 158V/F polymorphism [1]. The underlying biology was plausible. FCGR3A encodes an Fc receptor involved in antibody-dependent cellular cytotoxicity, and rituximab is an anti-CD20 antibody. If one variant binds IgG more avidly than another, it is not crazy to hypothesize that patients with the “better binding” genotype might derive more benefit from rituximab. The ChatGPT dialogue rightly emphasizes that this was not foolish biology. It was pilot data for a biologically attractive hypothesis that seemed, for a time, to promise a shortcut to personalized treatment selection [1,4].

But what looked elegant in concept quickly ran into a larger regulatory and evidentiary reality. The 2010 National Academies workshop summary lays out the fault line very clearly. FDA reviewed companion diagnostics for analytic validity and clinical validity, with transparency about claims on the label. By contrast, CLIA regulated laboratories and laboratory processes, not the clinical validity of the test itself. The workshop summary states bluntly that CLIA “does not directly regulate clinical validity,” and that laboratories offering LDTs could begin testing once the laboratory director deemed the performance specifications suitable [2]. That framework was tolerable for many conventional laboratory services, but it became increasingly controversial when the test in question was being used, or marketed, to influence use of expensive and consequential drug therapy [2].

Genentech is Unhappy and Shows It

That is exactly where Genentech stepped in. In its December 5, 2008 Citizen Petition, Genentech asked FDA to require that all in vitro diagnostic tests used or intended for use in guiding therapeutic decisions be held to the same scientific and regulatory standards, whether they were test kits sold broadly or so-called “home brew” LDTs [3]. The petition argued that the proliferation of predictive LDTs had created a disparity in oversight, and that tests making claims about clinical effectiveness, patient selection, or therapeutic decision-making should not escape FDA review simply because they were offered through a single laboratory [3]. The National Academies summary later described this petition in nearly the same terms, noting that Genentech sought “the same scientific and regulatory standards” for all tests used in therapeutic decision-making [2].

The petition did not complain about LDTs in the abstract. It pointed to concrete examples, and one of the most vivid was rituximab. The National Academies report of 2010 quotes Genentech’s representative, Dr. Mass, describing a “broad proliferation of assays” allegedly being used to make decisions about patient care without FDA clearance for efficacy or safety, and specifically cites a predictive test for rituximab in lymphoma that claimed physicians could “confidently predict” whether patients would respond [2]. The report further summarizes PGxHealth data in follicular lymphoma monotherapy suggesting that homozygotes for a specific gene variant had a 100 percent response rate, while others had a 67 percent rate [2]. Yet the same discussion immediately undercuts the stability of that claim: Dr. Mass noted that the confidence intervals were “quite wide and overlapping,” and therefore one could question whether the claims being made by the assay were relevant [2].

A Test, A Case Study

Here the rituximab story became larger than one test. Genentech’s objection was not simply commercial annoyance that a third party was making an unauthorized assertion about Rituxan. It was a deeper objection that a laboratory test was crossing the line from exploratory biomarker work into actionable therapeutic guidance without the level of review expected for a drug-diagnostic pair [2,3]. The workshop summary captures that concern with unusual force. Dr. Mass argued that predictive safety, in this context, meant “the right patient getting the right drug, and the wrong patient not getting the wrong drug” [2]. He pointed out that CLIA did not create the sort of record-keeping that would readily reveal whether patients were harmed by false or unsupported predictive claims [2]. FDA’s Dr. Gutierrez concurred that tests “intimately tied to a therapeutic” should be approved by FDA because “for the drug to be safe and effective, the device itself has to be controlled” [2]. That exchange reads today like an early prototype of the same argument we would hear again and again in later LDT policy fights.

The National Academies discussion is also striking because it does not simply endorse a maximal FDA takeover. The workshop participants saw the patchwork problem, but they also worried that forcing every LDT through a conventional FDA pathway could stifle innovation and eliminate the rapid-availability route that had made some useful tests possible [2]. One speaker warned against “throwing the baby out with the bathwater” [2]. Another stressed that investors wanted not necessarily more regulation, but clarity about regulation and reimbursement [2]. In that sense, the PGxPredict episode captures a permanent tension in diagnostics policy: the more a test aspires to shape treatment choice, the more it begins to look like a product that should be regulated like a product; yet the more one imposes full premarket burdens, the more one risks slowing exactly the sort of iterative development that has always characterized laboratory medicine.

The Rest of the Story

The ChatGPT dialogue in my attached blog PDF adds a useful business epilogue. It notes that PGxHealth did not vanish in a dramatic crash over this single assay. Rather, its parent company changed direction [1]. According to the dialogue, PGxHealth’s diagnostics business was sold to Transgenomic in 2010, while the parent company pivoted toward therapeutics and was later acquired [1]. The important detail is not just corporate genealogy. It is that FAMILION, the inherited cardiology franchise, seems to have had continuing commercial life, whereas PGxPredict:RITUXIMAB faded from view and did not become an enduring commercial franchise [1]. In other words, the rituximab assay survived mainly as a policy touchstone, not as a successful long-term product.

Early Data Doesn't Hold; the Meta-Analysis

Why did the assay fade? The answer seems to lie in the familiar arc from promising early biomarker signal to mixed replication to later disillusion. The 2014 paper by Liu et al. is especially important because it did not merely report one more small study. It combined a retrospective analysis of 164 newly diagnosed DLBCL patients treated with R-CHOP with a meta-analysis of 731 cases from seven data sets [4]. The authors found no association between FCGR3A 158V/F genotype and overall response rate or complete response rate in their own cohort, and likewise no significant association in the pooled meta-analysis [4]. Their conclusion was crisp: “no clear relationship” existed between FCGR3A 158V/F and response to frontline R-CHOP in DLBCL [4].

That paper is worth dwelling on because it shows how biomarker ideas fail in the real world even when the original mechanistic intuition is respectable. The early literature, as Liu summarizes, had been controversial from the outset: two studies suggested predictive value, while four others did not [4]. The 2014 analysis found ORR of 87.6 percent and CR of 62.0 percent in the local cohort, with no statistically significant differences by genotype [4]. In the meta-analysis, none of the main genetic contrasts showed a significant improvement in response for V/V patients [4]. The authors even performed an equivalence-based nonsuperiority analysis and reported p < 0.0001, strengthening the case for nonassociation rather than simply “failure to prove” [4]. This is stronger than saying the result was merely inconclusive. It is close to saying that, within the limits of the available evidence, the hoped-for predictive signal was not there in a clinically useful way.

There was, to be sure, one nuance. Liu and colleagues found that the F/F genotype correlated with shorter progression-free survival, borderline overall and significant in the non-GCB subtype, while showing no overall survival difference [4]. That is interesting biology. It suggests there may still have been a subtle host-immune effect on disease course or duration of response. But that is a very different proposition from a robust predictive biomarker that would justify telling clinicians they could use a blood test to decide who would or would not benefit from rituximab-based therapy. A modest survival nuance in subsets is not the same as a clinically reliable selection rule [4].

A Time Capsule Worth Remembering

The more one reads these documents together, the more the episode looks like a classic cautionary tale in precision oncology. First comes a plausible mechanism. Then a small study or company analysis generates an arresting result. Then marketing language reaches beyond the evidentiary base. Then a drug sponsor objects, partly from self-interest and partly from real concern that unsupported claims about treatment response can misdirect care [2,3]. Then regulators and policy bodies discover that their existing categories—test kit, home brew, CLIA lab, FDA review—do not map neatly onto this new hybrid object, the predictive laboratory assay that aspires to alter the use of a specific therapy [2]. Finally, a few years later, broader studies often show that the original biomarker effect was smaller, weaker, more context-dependent, or less reproducible than first advertised [4].

In hindsight, PGxPredict:RITUXIMAB mattered less because it changed oncology practice than because it illuminated a structural problem. If an LDT is making claims that are functionally similar to a companion diagnostic—claims about who should or should not get a drug—then the old distinction between “laboratory service” and “regulated medical device” begins to break down [2,3]. The National Academies workshop made that plain. FDA’s representatives emphasized risk, intended use, and the importance of supporting claims with evidence [2]. CMS’s CLIA framework, by contrast, assured laboratory quality but did not “close the loop” on clinical validation [2]. That was the heart of Genentech’s complaint, and it was not an irrational complaint. The company’s commercial interests and public-health arguments were aligned, at least to a substantial degree [2,3].

Real Life Sometimes is Messy

At the same time, it would be too easy to turn this into a morality play in which Genentech was simply right and PGxHealth simply wrong. The real lesson is messier. The pressure for LDT flexibility existed for good reasons: speed, iteration, clinical curiosity, and the ability to test ideas before industry or NIH would fund definitive trials [2]. The problem comes when that flexibility is used not to explore, but to market. Once a test begins telling physicians that it can “confidently predict” therapeutic response, it is no longer living in the modest world of exploratory translational science [2]. It is entering the high-stakes world of treatment selection, where weak evidence does not merely create academic embarrassment; it can create the wrong treatment path for a patient.

Closing

That, I think, is why this small and now largely forgotten rituximab assay deserves to be remembered. The story prefigured almost everything we still argue about today: whether CLIA is enough; how much clinical validity is required before a test can shape care; when a predictive LDT starts to look like a companion diagnostic; how much uncertainty should be tolerated in early commercialization; and whether FDA intervention will save patients, slow innovation, or both [2,3]. The later negative meta-analytic literature did not merely sink one biomarker. It exposed how fragile many early therapeutic-response signals can be, especially when they are extracted from small studies and converted too quickly into clinical claims [4].

So, yes: the PGx rituximab test largely sank out of view. But the debate it triggered did not. In a sense, we are still living inside the same argument [1-4].

References

[1] Quinn B. 2026 Blog Chat GPT on FDA Genentech LDTs PGXPREDICT.pdf. Attached dialogue summary and historical reconstruction of PGxPredict:RITUXIMAB, PGxHealth, Genentech, and later developments.
https://bqwebpage.blogspot.com/2026/03/fcgr-polymorphisms-fda-genentech-ldts.html

[2] National Academies. 2010 Natl Acad on Precis Med and Regulation Summ.pdf. Workshop summary on precision medicine, diagnostics regulation, LDT oversight, and discussion of the Genentech petition and rituximab-response testing.
https://www.ncbi.nlm.nih.gov/books/NBK220030/

[3] Genentech. 2008 Genentech FDA citizen petition LDT 32p.pdf. Citizen Petition to FDA requesting consistent oversight for in vitro diagnostics used to guide therapeutic decisions.
https://www.aab.org/images/aab/pdf/Genentch%20FDA%20Petition.pdf

[4] Liu F, et al. FCGR3A 158V/F Polymorphism and Response to Frontline R-CHOP Therapy in Diffuse Large B-Cell Lymphoma [with meta-analysis]. DNA Cell Biol. 2014. Attached as 2014 DNA Cell Biol LIU 158VF and Rituximab and MetaAnalysis.pdf.
https://pmc.ncbi.nlm.nih.gov/articles/PMC4144364/