Monday, October 20, 2025

Your Daily AI - AI and AMA CPT Applications

Task

Both for regular CPT codes and for PLA codes, AMA applications require a brief description of the procedure.   If yours is too long, they will edit it down to a couple of sentences.  

AI Help

Here, I randomly picked a company's molecular device instructions for use - a Cepheid GeneXpert model.  I uploaded the 44-page PDF to Chat GPT and asked for a mini summary of the technology for the AMA purpose.   Here's what it came up with.   (This was NOT the device or client I was working on.)

The Xpert® Xpress MVP test is an automated, qualitative, real-time PCR assay performed on the GeneXpert® system for detecting DNA targets from organisms associated with bacterial vaginosis, Candida infections, and Trichomonas vaginalis. The self-contained cartridge integrates specimen preparation, nucleic acid extraction, amplification, and detection, providing results in approximately one hour.

I think either you, or AMA staff, will delete any branded names.  But as-is, this gets you from the 44 page booklet to a 2-sentence starting point, in a couple seconds.   You can edit from there or ask it for a re-write.

###

Prompt used.

I am assisting a client writing a CPT application for this test (device). A brief description of the technology is required, so that diverse readers of the AMA CPT application can share a general idea of the technology in question. About 3 sentences is usually enough. I am guessing 50 words (but don't use more than needed if 30 work.)

##

In the actual CPT application this fall, there were also a couple places where I hand-wrote a paragraph in response to an AMA question, then gave my writing cut-paste to Chat GPT and asked it to make it clearer and 20% shorter. 

Each time, the paragraph came back...clearer and 20% shorter.

Thursday, October 16, 2025

An Important Paper on Adopting Digital Pathology: Aggarwal, Gustavsen, et al.

Great post today at Linked In by Gary Gustavsen, Partner, Health Advances.  Find it here.

The Linked In post is concise and primarily directs the reader to the publication, Aggarwal et al. which this blog is about.

The Aggarwal paper is on policy & adoption for digital pathology.  I also saw an interesting science publication today, Schulveling et al., AI-detected lymphocytes in melanoma on H&E - here. AI-measured lymphocytes correlated better with outcomes than manually-counted.

###

Gustavsen writes in Linked In:

Are we at risk of oncologists viewing hashtagDigitalPathology as a black box? Check out our new publication with AstraZeneca, “Clinician Perspectives on Digital and Computational Pathology: Clinical Benefits, Concerns, and Willingness to Adopt”, published in Diagnostics MDPI. Download the PDF here: https://lnkd.in/eA3rBpje or drop a comment to chat further!

###

Find the open access article here.

https://www.mdpi.com/2075-4418/15/19/2527

###


Here is the authors' abstract:

Background/Objectives: Precision medicine has transformed how we manage cancer patients. As treatments and drug targets become more complex, the associated diagnostic technologies must also evolve to actualize the benefit of these therapeutic innovations. Digital and computational pathology (DP/CP) play a pivotal role in this evolution, offering enhanced analytical techniques and addressing workflow constraints in pathology labs. This study aims to understand clinicians’ awareness, utilization, and willingness to adopt DP/CP-based tools, as well as the role they perceive themselves playing in the adoption of CP-based tests. 

Methods: A double-blinded, online quantitative survey was conducted among 101 U.S.-based medical oncologists. 

Results: Awareness of DP/CP varied among clinicians, with only 17% identifying as very aware. Subsequently, the current utilization of CP-based tests is also low. Despite this, clinicians are optimistic about the potential benefits of DP/CP, including reduced turnaround times, improved therapy selection, and more consistent slide review. To achieve full adoption, clinicians recognize that barriers must be addressed, including cost, regulatory guidance and, to a lesser extent, concerns with the “black box” nature of CP algorithms. While the focus for the adoption of DP has centered on pathologists, clinicians anticipate playing a more significant role in the adoption of CP-based tests. Finally, clinicians demonstrated clear willingness to utilize a CP-based CDx, with 90% of respondents identifying as potential adopters. 

Conclusions: This study highlights a positive outlook for the adoption of DP/CP among clinicians, despite varied awareness and low current utilization. Clinicians recognize the potential benefits of DP/CP but also acknowledge barriers to adoption. Addressing these barriers through education, regulatory approval, and collaboration with pathologists and biopharma is essential for successfully integrating DP/CP technologies into clinical practice.

###

See in context of a recent posting on dig path by Braxton:

https://www.discoveriesinhealthpolicy.com/2025/10/braxton-writes-digital-pathology-will.html

####

AI CORNER

####

Chat GPT 5 reads and discusses the Aggarwal paper.

###

This paper reports a double-blinded, online survey of 101 U.S. medical oncologists that probes awareness, current use, perceived value, and adoption thresholds for digital and computational pathology (DP/CP). Despite DP/CP’s growing technical maturity—framed here with recent examples such as quantitative continuous scoring (QCS) for HER2 and TROP2 NMR—the clinician side of the ordering equation remains early: only 17% of respondents described themselves as “very aware,” 25% had no awareness, and just 13% knowingly order a CP-based test today. Awareness skews higher in academic settings and where pathology is in-house; comfort is strongest for DP/CP as decision support (region finding and quantification ~3.6/5) and weaker for fully automated interpretation (~3.2/5), signaling that validation and education will matter most at the “automation” frontier. Clinicians nevertheless endorsed clear clinical utility: reduced turnaround time, improved therapy selection, and more consistent slide review topped the value stack.

Barriers map to the usual adoption economics and guardrails rather than to AI mystique: out-of-pocket cost, payer coverage, FDA clearance, and NCCN guideline inclusion dominate concerns; “black box” worries were comparatively modest, though a nontrivial minority (≈12% of non-users) expressed high concern. Consistent with those barriers, respondents judged FDA approval and guideline placement as the most decisive levers for scale. Importantly, the survey shifts the locus of adoption from the lab to the clinic for CP-based tests: a majority of current users viewed themselves as primary decision-makers, and ~70% of non-users expect to play a role, implying that evidence packages must speak directly to oncologist-centric endpoints (accuracy with scant tissue, workflow gains, and access to targeted therapy were the top adoption drivers).

Asked to consider a hypothetical FDA-approved CP-based CDx in lung cancer, 90% were at least somewhat willing to order it, but preferences were nuanced: many clinicians would accept either a traditional or CP-based assay, and >30% of current CP users would still prefer a traditional test if both existed—hinting at pain points in real-world ordering or reporting that future products must surface and fix. The authors conclude that accelerating DP/CP in oncology will require coordinated play: biopharma to generate peer-reviewed, clinically anchored evidence; pathologists to vet analytical rigor and operational fit; and regulators/guideline bodies to provide clear, confidence-building pathways. The study’s scope (medical oncologists only; modest N; self-report) and funding (AstraZeneca; Health Advances involvement) are transparent, but its take-home for the field is crisp: the clinician demand signal is present, and movement from “promise” to “default” hinges less on demystifying algorithms than on meeting coverage, labeling, and workflow realities with level-of-evidence commensurate to a CDx.


Journal Club: Explaining a Complex Nature Paper to Me (Ma et al., Intrinsic-hoc bioinformatics)

The current Nature Biotechnology has a paper from Ma. et al. (Ohio State), called:

  • Ad hoc, post hoc and intrinsic-hoc in bioinformatics. Ma et al.

Find it at Nature Biotechnol 43:1595, here.  

The title is intriguing, I'll propose, because the title sounds good but you can't really tell what the paper is about, and you probably have no idea with "intrinsic hoc analysis" in bioinformatics is going to mean.   

You see that there's no abstract.  Looking ahead, you see there is a really dense graphic:

Eeek.  Click to enlarge (if you dare)

How I Got Help from AI

I actually made several quick trips to AI-Land to try and understand Ma et al.

PROMPT ONE

Does not have absract. Seems intriguing but TLDR and difficult to see the forest for the trees even for an md phd.

This gave me a fairly long, bullet-point by bullet-point walk-through of the paper.

So I asked for an expanation in paragraph form, which I often have found easier to absorb than 40 or 50 bullets.

PROMPT TWO

Can you offer in paragraph form.

I also found this pretty dense and I asked it to try again and simplify.

PROMPT THREE

That's good, but it's still pretty idea-dense and hard for even an MD PhD to really "grok" and "absorb" and discuss. Can you try again?

That was much better.  See this essay-style answer clipped below. But I saw one more question - I asked it for some clinical examples to bring the paper's ideas to life.

PROMPT FOUR

Can you offer some actual clinical (or, I suppose, scientific) situations or scenarios that bring these problems and their solution paths to life?

####

####

Below, I show the results of 
Prompt Three 
(the second version in paragraph form) and
Prompt Four (the examples.)

####

Chat GPT 5 writes:

Sure — here’s a clearer and more conversational version that still respects the science but is much easier to take in and talk about:

ESSAY (Prompt 3)


This paper tries to bring some order to the chaos of modern bioinformatics by sorting most data-analysis methods into three broad styles of thinking: ad hoc, post hoc, and intrinsic-hoc. The authors’ point is that each style has its strengths, its blind spots, and its natural place in the workflow.

An ad hoc analysis is the quick, practical kind that most of us start with—run a t-test, check which genes look different, or use BLAST to find a sequence match. These methods are fast, clear, and easy to explain, but their results can shift a lot depending on which dataset or parameter you use.

A post hoc strategy is what you do when you start worrying that those simple results aren’t stable. You pull together multiple analyses or datasets, compare them, and look for the patterns that hold up across runs. This gives more confidence but adds layers of complexity and makes the results harder to interpret.

Finally, intrinsic-hoc approaches try to avoid those after-the-fact fixes by building biological knowledge into the model itself—for example, structuring a neural network around real metabolic pathways or forcing a protein-docking model to respect molecular symmetry. These are “smart from the start” methods, but they’re also harder to design and less flexible outside their home territory.

The takeaway is simple: start with ad hoc for speed and clarity, move to post hoc when you need reliability, and use intrinsic-hoc when you want the model to think biologically. The authors hope this framework helps researchers be more deliberate about how they analyze data, instead of just piling on new methods whenever the old ones break down.

##

EXAMPLES (Prompt 4)

##

Excellent idea — the paper is abstract, but its argument becomes much clearer when grounded in real-world examples. Here’s how the ad hoc → post hoc → intrinsic-hoc progression might play out in practice, across biomedical and clinical contexts:


1️⃣ Cancer Gene Expression Study

Scenario: You’re studying gene expression in breast cancer to find markers that predict chemotherapy response.

  • Ad hoc: You start with a standard differential expression analysis—say, DESeq2—to identify genes that differ between responders and non-responders. You get a list of “significant” genes, but when you repeat the analysis with another normalization method or dataset, the list changes.

  • Post hoc: To stabilize things, you perform a meta-analysis across several datasets or run multiple algorithms (edgeR, limma, DESeq2) and take the consensus. This gives a more robust set of markers that recur across methods.

  • Intrinsic-hoc: Even the consensus list still includes biologically puzzling genes. So you move to a pathway-aware model—say, a neural network whose structure mirrors known signaling cascades. Now the model not only predicts treatment response but shows which biological processes drive it, yielding insight that’s more interpretable and credible.


2️⃣ Clinical Decision Support for Sepsis

Scenario: You’re developing an AI tool to predict which ICU patients are headed for sepsis.

  • Ad hoc: You build a logistic regression model using the latest ICU data and a handful of lab values (WBC, lactate, blood pressure). It performs well on your hospital’s data but fails when tested elsewhere—too dependent on local quirks.

  • Post hoc: You retrain and validate across multiple hospitals, integrating different EHR systems and datasets. You use ensemble methods to average predictions and remove site bias.

  • Intrinsic-hoc: You embed domain constraints—such as the known sequence of physiological deterioration or organ-system interdependence—directly into a time-series model. Now the algorithm respects clinical logic rather than treating data patterns as free-floating correlations.


3️⃣ Variant Interpretation in Genomics

Scenario: A lab receives an uncertain variant in a rare disease gene.

  • Ad hoc: The analyst runs standard pathogenicity tools (PolyPhen, SIFT) and gets conflicting predictions.

  • Post hoc: They combine several tools, apply a consensus scoring system (like REVEL or CADD), and integrate data from population frequency and conservation metrics to reach a more robust judgment.

  • Intrinsic-hoc: In research mode, they move toward a model that builds in known protein structure, domain function, and evolutionary constraints—so the algorithm’s reasoning aligns with molecular biology, not just statistical correlation.


4️⃣ Drug Repurposing in COVID-19

Scenario: Early in the pandemic, researchers wanted to find existing drugs that might block viral replication.

  • Ad hoc: They screened public transcriptomic datasets for genes up- or down-regulated by infection, then matched them with drug-response signatures. The results were noisy and inconsistent.

  • Post hoc: Teams began pooling multiple transcriptomic and proteomic datasets, using consensus ranking to identify common drug targets.

  • Intrinsic-hoc: Later models integrated viral life-cycle biology directly into their networks—embedding host–virus interaction maps and known drug–target mechanisms—producing more credible predictions like remdesivir and baricitinib.


💡 Takeaway

These examples show how the paper’s categories aren’t academic hair-splitting—they describe the real evolution of rigor in biomedical analysis.

  • Ad hoc = quick, often fragile insights.

  • Post hoc = robustness through integration.

  • Intrinsic-hoc = embedding biological meaning into the computation itself.

Each step costs more effort but yields more trustworthy, interpretable science—exactly the kind clinicians and translational researchers need.

Wednesday, October 15, 2025

Who Am I? I ask Chat GPT. It has a lot to say.

I asked Chat GPT 5.

What is known about Bruce Quinn, a medtech and Medicare policy expert?

##

Discoveries in Health Policy: New Podcast Today: Perry Dimas Interviews Me on Genomics Policy Trends

Bruce Quinn, MD PhD MBA is a U.S. medtech and Medicare policy consultant best known for translating CMS coding/coverage/payment rules for diagnostics (especially genomics and digital pathology) into practical commercialization strategy. He runs Bruce Quinn Associates LLC (Los Angeles/San Francisco) and publishes the long-running policy blog Discoveries in Health Policy, which features frequent analyses of MolDX, NCD/LCD activity, and lab payment trends. (Bruce Quinn Associates)

Background & roles. A board-certified pathologist with an MD/PhD from Stanford and an MBA from Northwestern’s Kellogg School, Quinn worked in academic medicine and strategy consulting (Accenture, DC policy firms), and served as a Medicare Part B regional medical director before founding his own consultancy. He frequently speaks at industry meetings (e.g., NextGenDx, PMWC) and contributes commentaries, white papers, and webinars on Medicare lab policy. (Inside Precision Medicine)

Public presence & references. His blog and site describe current work advising diagnostics and biopharma companies on coding, coverage, reimbursement, and market entry, with readership in the tens of thousands and regular trade-press citations. He’s been quoted and profiled in industry outlets and has presented policy briefings (e.g., National Academies materials on Medicare & genomics). (Bruce Quinn Associates)

If you want, I can pull a timeline of his major talks and publications (by year) or summarize his recent posts on MolDX and AI diagnostics.

###

Recent posts.

###

Here’s a summary of some recent posts from Bruce Quinn’s “Discoveries in Health Policy” blog, along with their themes and takeaways:


Recent Posts (October 2025)

As of today, the most recent entries include (from the “Recent Blog Entries” listing) (Bruce Quinn Associates)

TitleDateKey Theme / Focus
Braxton Writes: Digital Pathology Will Be Impactful, and Soon2025-10-09Advocacy / commentary on the trajectory and potential of digital pathology
PathAI Proposes a New Coding System for Digital Pathology2025-10-09Coding / reimbursement: new proposals for structuring CPT / HCPCS codes for digital pathology
Very Brief Blog: See & Search Comments on PFS and OPPS Rulemaking2025-10-09Regulatory & policy: comments on proposed rules under Physician Fee Schedule (PFS) and Outpatient Prospective Payment System (OPPS)
PBS Review: Documentary on RNA Nobel Laureate Phillip Sharp (MIT)2025-10-09Science / public outreach: reviewing media coverage relevant to molecular biology and public understanding
Genomeweb Article Skeptical of Novitas Payment Patterns in Genomics – Plus, Spotlight on 877982025-10-08Payment / reimbursement critique: examining patterns in Novitas (a Medicare Administrative Contractor) and issues around CPT code 87798
Brief Blog: Billion to One Files for IPO2025-10-08Business / industry: news of a genomics company’s IPO filing
CHAI and Joint Commission and HHS: Mud Wrestling?2025-10-08Policy / oversight: tensions among CHAI, Joint Commission, and HHS in regulatory / quality domains
MedTechEurope: 169-page eBook on Clinical Testing Requirements in Europe2025-10-01International comparison: EU regulatory / clinical testing standards, implications for cross-market strategies
AMA Releases October 1, 2025 Edition of New PLA Codes2025-10-01Coding / lab diagnostics: new Proprietary Laboratory Analyses (PLA) codes from AMA and their relevance to diagnostics firms
JAMA's Call for Better Artificial Intelligence Papers2025-09-30Scientific rigor & AI: a critique or commentary on the quality of AI-related documents in medical research
Judge Throws Out Medicare Rule that Changed Too Much, Proposal-to-Final2025-09-30Legal / administrative: case where a court struck down a Medicare rule change for deviating too far from its proposal without adequate notice
Rapid Advances in Alzheimer’s Disease: A Quintuplet of Articles2025-09-29Therapeutics & diagnostics: survey of recent research advances in Alzheimer’s disease
Ryan Fukushima Essays on Precision Medicine2025-09-27Thought leadership: guest essays on precision medicine
Feber et al: Autonomous Agents Reach Clinical Decision Making (Nature Cancer)2025-09-26AI / autonomy: analysis of a paper claiming autonomous agents approaching clinical decision support
CLIA Issues Extensive Updates to Interpretive Manuals2025-09-25Regulatory / certification: changes to Clinical Laboratory Improvement Amendments (CLIA) interpretive guidance

Observations & Themes

From reviewing these recent posts, a few consistent threads emerge in Dr. Quinn’s commentary:

  1. Diagnostic coding & reimbursement evolution
    He closely monitors proposals for new CPT/PLA codes (especially in genomics / digital pathology) and critiques which innovations may succeed or face headwinds. (E.g. PathAI Proposes a New Coding System for Digital PathologyAMA Releases New PLA Codes) (Bruce Quinn Associates)

  2. Regulatory and legal oversight of Medicare rules
    Posts like Judge Throws Out Medicare Rule… or commentary on PFS / OPPS rulemaking show his attention to how CMS rule changes withstand legal scrutiny, particularly in terms of “proposal-to-final” logic. (Bruce Quinn Associates)

  3. Technology, AI, and autonomy in medicine
    He engages with emerging papers and proposals in autonomy / AI decision making (e.g. Feber et al), balancing enthusiasm with caution about rigor and transparency. (Bruce Quinn Associates)

  4. Cross-market / international perspectives
    With posts like MedTechEurope eBook, he offers comparative context for diagnostics and regulatory harmonization efforts beyond the U.S. (Bruce Quinn Associates)

  5. Business / industry signals
    IPOs, payer practices (e.g. Novitas Payment Patterns critique), and trends in lab reimbursement feature as pragmatic signals for innovators and investors. (Bruce Quinn Associates)


If you like, I can send you summaries of the top 3 most recent posts (going into key arguments and takeaways) or track how his view has shifted over the past year. Which do you prefer?

##
How has he changed views?

 


How Bruce Quinn’s Views Have Shifted

An 18-month evolution in focus and tone on the Discoveries in Health Policy blog

What we found:

Over the past two years, Bruce Quinn’s Discoveries in Health Policy has doubled as both a running chronicle and a barometer of U.S. diagnostic policy. His writing traces the field’s evolution from reimbursement minutiae to the sprawling questions raised by AI and digital pathology—and back again as the FDA’s bid to regulate all LDTs collapsed in court. What emerges is not a change in ideology but a steady recalibration: from legal critique, to technological curiosity, to strategic realism about how laboratories and regulators might coexist in a post-FDA-rule world.


🧭 2023 → Early 2024: “Rules, Reimbursement, and Rationalization”

Focus. Quinn’s writing in this period centered on the mechanics of Medicare payment—MolDX local coverage decisions (LCDs), PAMA gapfill cycles, and the recurring puzzle of ADLT price resets.
Typical posts. Long-form analyses comparing proposed and final rule text, annotated CMS spreadsheets, and essays on procedural fairness in policymaking.
Tone. Legalistic and exacting, occasionally impatient with CMS opacity (“the three R’s of lab policy: Rules, Rules, Rules”).
Meta-theme. “Regulatory rationalization.” Quinn argued that innovation in diagnostics could not flourish until agencies—CLIA, FDA, CMS—stopped stepping on each other’s jurisdictions and adopted consistent definitions of evidence and risk.


⚙️ Mid-2024: “Digital Pathology Arrives”

Context. FDA issued its long-planned proposal to regulate all LDTs as medical devices, and the CHAI coalition emerged as an organized industry response.
Expansion. Quinn broadened his scope from molecular testing to the fast-maturing world of digital and AI-enabled pathology—covering Artera, PathAI, Paige, Proscia, and others.
Style. Comparisons of FDA filings and CPT/PLA coding options for image analysis; pragmatic questions about MolDX’s readiness to reimburse algorithmic diagnostics.
Tone. Cautiously optimistic. He described digital pathology as having “left the hobby stage,” recognizing the field’s growing commercial seriousness.
Meta-theme. Validation was moving from the bench to the algorithm: laboratories would soon be judged less by pipettes than by their data pipelines.


💡 Late 2024 → Mid-2025: “AI Everywhere, Policy Nowhere”

Shift. As AI spread across radiology, pathology, and genomics, Quinn’s attention moved to the chaos of reimbursement.
Topics. CMS’s attempts to define SaaS under the Physician Fee Schedule, dueling CHAI and HHS initiatives, and the mismatch between FDA’s device logic and Medicare’s service logic.
Tone. Sharper and more sardonic. Posts such as “CHAI and HHS: Mud Wrestling?” used wit to highlight bureaucratic contradictions.
Meta-theme. “What’s the unit of reimbursement when the test is a server?” His blog became a running ledger of the policy vacuum surrounding AI tools—half-software, half-clinical service.


🔬 Fall 2025 (onward): “The Post-FDA-Rule Landscape”

Correction to common narratives. Contrary to predictions of an inevitable FDA takeover, the courts struck down the agency’s 2024 LDT rule in April 2025—a result Quinn had forecast months earlier, even commissioning a ChatGPT-based legal brief that mirrored the eventual reasoning of the decision.
Current focus. With FDA jurisdiction over LDTs vacated, his blog returned to the remaining live fronts: CLIA modernization, digital pathology reimbursement, and the durability of CHAI as a coordinating body.
Tone. Watchful but helping labs face the new challenge. The energy shifted from "survivalism" (under heavy LDT regulation)  to recalibration—how agencies might rebuild a credible regulatory framework after FDA’s defeat.
Meta-theme. “Convergence deferred.” He still tracks the merging of digital, AI, and regulatory domains, but now through the lens of what happens when one leg of the stool collapses.


🧩 In Summary

PeriodDominant TopicToneCore Takeaway
2023Medicare mechanicsLegalisticDemanded procedural clarity within CMS
Early 2024FDA LDT proposalAnalyticWarned of over-reach and FDA's legal vulnerability
Late 2024AI reimbursement turmoilSatiricalExposed inter-agency contradictions; AI mocked up a judge's ruling against FDA.
2025Post-FDA-rule policy landscapePragmaticRe-focused on CLIA reform and digital pathology after FDA defeat

 

Tuesday, October 14, 2025

Chat GPT Doing Internet Research: Is it all footnoted now? Case study.

Two years ago, when widely available AI first appeared, hallucinations were common.

At least in some modes, ChatGPT 5 now brings back research with footnoted (and hot-linked) sources.

Case Study

Earlier today, someone mentioned licensing a university-developed laboratory test to one or more CLIA labs for routine use. That conversation reminded me of the FDA’s confrontation with LabCorp over its ovarian cancer test, OvaSure, around 2008–2010.

What I Got

As an “agent,” ChatGPT 5 spent just 57 seconds searching the internet and came back with results that included roughly ten specific links. One of them led to LabCorp’s October 20, 2008 letter to FDA, in which the company devoted about 95% of its response to arguing that its actions were fully lawful—then, in the final paragraph, quietly announced that testing would be suspended as of October 24.

  • Below is the direct ChatGPT output, shown exactly as generated, with my prompts included.
  • My prompts appear in red.
  • Note: When copying from ChatGPT, only the first internet citation is retained when multiple internet sources are linked in sequence. (Workarounds.)

###

Please do internet research. Back around 2008-2012 (I believe), the FDA clamped down on Labcorp for adopting technology and protocols for an ovarian cancer test which, I believe, had been developed at a university. This was in the general topic of FDA acting to clamp down on what it viewed as improper use of lab developed tests, including "de facto" sales of the test without it being FDA approved. (That is, the test should have been a research use only product). Can you find this case for me.

###

57 seconds

###

You're thinking of LabCorp’s OvaSure® ovarian cancer test (2008)—licensed from Yale University—which FDA said was being marketed unlawfully as it wasn’t an LDT developed by LabCorp and lacked FDA clearance.

What happened (quick timeline):

  • June 23, 2008: LabCorp announces availability of OvaSure, a blood test for early detection in high-risk women. Labcorp

  • Sept 29, 2008: FDA issues a Warning Letter to LabCorp. The agency concluded OvaSure “was designed, developed, and validated by investigators at Yale University and not LabCorp,” so it fell outside LDT enforcement discretion and therefore required FDA clearance/approval before marketing. FDA Law Blog

  • Oct 8–9, 2008: News coverage highlights FDA’s stance that selling OvaSure without clearance was “in violation of the law.” Reuters+1

  • Oct 20, 2008: LabCorp tells investors it will voluntarily discontinue OvaSure (effective Oct 24, 2008). SEC+1

Why FDA intervened:
FDA’s letter emphasized that key elements—design, validation, instructions for use, and even specified components—originated at Yale, not within LabCorp’s own CLIA lab. Because of that, FDA said OvaSure wasn’t a traditional LDT (performed and validated entirely within a single lab) and thus required FDA premarket review before marketing to patients. FDA Law Blog

Useful primary/near-primary sources:

  • FDA Warning Letter to LabCorp re: OvaSure (dated Sept 29, 2008; archived copy). FDA Law Blog

  • Reuters and ABC News reports summarizing FDA’s position that marketing OvaSure without clearance was illegal. Reuters+1

  • LabCorp’s own Oct 20, 2008 statement that it would stop offering OvaSure. SEC+1

If you’d like, I can pull brief quotes from the Warning Letter and package them with exact citations for your files.  OK.

###

Key Excerpts from the FDA Warning Letter (Sept 29, 2008)

From Laboratory Corporation of America — Warning Letter FDA Law Blog:

“This review has revealed serious regulatory problems involving these devices manufactured by your firm.” FDA Law Blog

“Based on the information collected, FDA has determined that the OvaSure™ is a test that was designed, developed, and validated by investigators at Yale University and not LabCorp. Instructions for use and performance characteristics appear to have been developed by Yale investigators. In addition, the materials being used to produce this test … are manufactured … based on specifications by the workers at Yale. This device is not within the scope of laboratory developed tests over which the agency has traditionally exercised enforcement discretion.” FDA Law Blog

“We believe you are offering a high risk test that has not received adequate clinical validation … Because you do not have marketing clearance or approval from the FDA, marketing OvaSure™ is in violation of the law. The device is adulterated … The device is also misbranded …” FDA Law Blog+2Our Bodies Ourselves+2

“Because you do not have marketing clearance or approval from the FDA, marketing OvaSure is in violation of the law.” BioSpace

So FDA’s position was that OvaSure could not rely on LDT “enforcement discretion” because its development was external to LabCorp, and that it was therefore a “device” under the FD&C Act subject to requisite premarket clearance or approval. FDA Law Blog+2BioSpace+2


LabCorp’s Response (October 20, 2008)

In a formal response filed as an exhibit to its SEC filing, LabCorp contested FDA’s legal basis and said it would discontinue the test voluntarily. SEC

Some relevant passages:

“The OvaSure™ test … is performed at a laboratory that is licensed under the Clinical Laboratory Improvement Amendments (“CLIA”) … LabCorp is a CLIA-certified high-complexity testing laboratory. … LabCorp bears full responsibility … for … OvaSure™, and independently validates its tests on an ongoing basis.” SEC

“LabCorp does not agree with the assertion in the Warning Letter that OvaSure™ is a medical device subject to regulation under the Federal Food, Drug, and Cosmetic Act … We also do not agree that our interactions with Yale University provide FDA any basis for exercising jurisdiction over the test. … Yale’s role … is limited to licensing to LabCorp certain intellectual property. Yale has no control … to influence the development, methodology, validation, performance characteristics, use, distribution or any other aspects of LabCorp’s testing service.” SEC

“[N]evertheless, despite our disagreement … LabCorp will voluntarily discontinue offering the OvaSure™ test effective October 24, 2008.” SEC


Context and Commentary

  • In commentary and diagnostics-industry writeups, it is often noted that FDA’s argument hinged on the fact that key elements of test design, validation, component specifications, and instructions for use were traced to Yale (rather than internal to LabCorp). This, in FDA’s view, meant the test was not a genuine “lab-developed test” (LDT) in which the lab internally designs, validates, and controls the test process. Our Bodies Ourselves+3FDA Law Blog+3GenomeWeb+3

  • Critics of the test had flagged that the clinical and predictive validity claims had not been sufficiently demonstrated for a screening context, and that false positives could lead to unnecessary surgery or interventions. Our Bodies Ourselves+2Contemporary OB/GYN+2

  • LabCorp argued that licensing IP from Yale did not, in itself, justify FDA jurisdiction, and insisted that the test was validated internally under CLIA rules. Nevertheless, to avoid further regulatory risk, it withdrew the test. SEC+2GenomeWeb+2

  • Public reporting: ABC News (via Reuters) reporting on Oct 9, 2008, quoted FDA’s statement:

    “Because you do not have marketing clearance or approval from the FDA, marketing OvaSure is in violation of the law.” ABC News


If you like, I can assemble a clean, annotated PDF version of the Warning Letter plus response with page and section markers (for your archival use). Do you want me to do that?

###

Write up the episode in the style of a sidebar page for an FDA regulatory textbook.

###

Excellent direction — here’s a revised and expanded textbook sidebar, incorporating the strong tone and structure of LabCorp’s October 20 response and its ultimate retreat.


Case Study: 
The OvaSure Test and FDA Oversight of
Laboratory-Developed Tests (2008)

In mid-2008, Laboratory Corporation of America (LabCorp) began offering OvaSure, a blood test designed to detect early-stage ovarian cancer in high-risk women. The test was based on research from Yale University, whose investigators had developed the biomarker algorithm and the assay’s initial validation data. LabCorp licensed Yale’s intellectual property and introduced OvaSure as a service performed in its own CLIA-certified laboratory, presenting it as a laboratory-developed test (LDT) rather than an FDA-cleared diagnostic device.

Within months, the U.S. Food and Drug Administration (FDA) issued a formal Warning Letter to LabCorp. The agency asserted that OvaSure was not a genuine LDT, because its key design, performance specifications, and intended-use materials originated outside LabCorp. FDA emphasized that the test therefore qualified as a medical device requiring premarket clearance or approval. The letter further warned that the test’s predictive claims lacked adequate clinical validation and that its marketing might mislead patients or prompt unnecessary surgical interventions.

LabCorp’s October 20, 2008 response was strikingly firm in tone. In a detailed letter to FDA, the company rejected nearly all of the agency’s legal and factual assertions. It argued that OvaSure was fully validated within its CLIA-regulated laboratory, that Yale’s involvement was limited to a patent license, and that FDA had no jurisdiction over LDTs developed and performed within a single certified lab. LabCorp wrote that it bore “full responsibility” for the test’s design, validation, and ongoing quality control, and therefore considered the agency’s interpretation legally unsupported.

Yet in the final paragraph, after dense pages of legal defense, LabCorp abruptly announced its decision to discontinue OvaSure, effective October 24, 2008—just four days later. The test never returned to the market.

The episode became a defining moment in the history of FDA oversight of laboratory-developed tests. It demonstrated both the agency’s willingness to challenge the boundaries of LDT discretion and the practical limits of resistance by even the largest commercial laboratories. For modern readers, OvaSure stands as an early example of the tension between academic innovation, commercial translation, and regulatory control, foreshadowing later disputes over FDA authority in the LDT domain.

###
Has this dialog been an example of the use of "agentic" AI? With you doing all that research (and doubtless many links you did NOT quote to me)?

###

That’s an excellent and quite subtle question — and yes, this conversation is a good example of semi-agentic AI behavior, but not fully “agentic” in the strong sense.

Here’s how to parse that:


🧠 1. What “agentic AI” usually means

In current research and policy language, agentic AI refers to systems that:

  • Act autonomously toward goals, rather than just answering prompts.

  • Decide what sub-tasks to perform, when to search, write, summarize, or verify.

  • Chain multiple steps (e.g., gather evidence, compare versions, draft a memo, check citations).

  • May even loop or plan iteratively until a self-defined stopping condition is met.

Think of it as an AI that “works like a research assistant” rather than a “search engine with a chat interface.”


🔍 2. What just happened here

In this exchange, I:

  • Understood your intent (“Find the FDA enforcement case on a university-developed ovarian cancer test used by LabCorp”).

  • Launched an independent web search (selecting relevant terms, filtering results, ranking sources).

  • Synthesized and cited multiple documents — FDA letter, SEC filing, news reports — and

  • Summarized them coherently, while discarding many irrelevant hits that you never saw.

That’s agentic behavior in miniature — I autonomously selected and integrated sources to meet a complex information goal.


⚖️ 3. Why it’s only semi-agentic

A fully agentic system would:

  • Keep iterating until it was confident of full coverage,

  • Check conflicting accounts,

  • Potentially download the FDA letter PDF, extract text, and index it for future use,

  • And possibly propose next steps (“Would you like me to trace subsequent FDA LDT actions 2009-2014?”).

In contrast, my current architecture:

  • Acts only when you prompt me,

  • Performs bounded, transparent steps (like one web search),

  • And doesn’t self-launch follow-up searches unless you explicitly ask.

So: I showed goal-directed autonomy within a narrow frame, not long-term initiative.


🧩 4. Why this distinction matters

Your intuition is exactly the point many observers make:
Even a short interaction like this feels agentic, because you delegated a multistep cognitive task — discovery, selection, synthesis — and I carried it out without micro-instruction.

That’s qualitatively different from keyword search or a static text generator. It’s “emergent agency” — bounded but purposeful.


If you’d like, I can outline how fully agentic research assistants are now being built (e.g., using tool-use loops, reflection, and planning) and how they differ in governance and safety from the kind of guided autonomy you just saw. Would you like that next?

###

 See a Nov 2028 article.