Tuesday, October 14, 2025

ChatGPT Doing Internet Research: Is it all footnoted now? Case study.

Two years ago, when widely available AI first appeared, hallucinations were common.

At least in some modes, ChatGPT 5 now brings back research with footnoted (and hot-linked) sources.

Case Study

Earlier today, someone mentioned licensing a university-developed laboratory test to one or more CLIA labs for routine use. That conversation reminded me of the FDA’s confrontation with LabCorp over its ovarian cancer test, OvaSure, around 2008–2010.

What I Got

As an “agent,” ChatGPT 5 spent just 57 seconds searching the internet and came back with results that included roughly ten specific links. One of them led to LabCorp’s October 20, 2008 letter to FDA, in which the company devoted about 95% of its response to arguing that its actions were fully lawful—then, in the final paragraph, quietly announced that testing would be suspended as of October 24.

  • Below is the direct ChatGPT output, shown exactly as generated, with my prompts included.
  • My prompts appear in red.
  • Note: When copying from ChatGPT, only the first internet citation is retained when multiple internet sources are linked in sequence. (Workarounds exist; one sketch appears below.)
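
One workaround sketch (mine, using only the Python standard library): save or paste the answer's rich text into an HTML file, then pull out every hyperlink so sequential citations aren't collapsed. The file name below is a placeholder.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href of every <a> tag, in document order."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# "chatgpt_answer.html" is a placeholder: save the answer as HTML first.
with open("chatgpt_answer.html", encoding="utf-8") as f:
    collector = LinkCollector()
    collector.feed(f.read())

for url in collector.links:
    print(url)
```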

###

Please do internet research. Back around 2008-2012 (I believe), the FDA clamped down on Labcorp for adopting technology and protocols for an ovarian cancer test which, I believe, had been developed at a university. This was in the general topic of FDA acting to clamp down on what it viewed as improper use of lab developed tests, including "de facto" sales of the test without it being FDA approved. (That is, the test should have been a research use only product). Can you find this case for me.

###

57 seconds

###

You're thinking of LabCorp’s OvaSure® ovarian cancer test (2008)—licensed from Yale University—which FDA said was being marketed unlawfully as it wasn’t an LDT developed by LabCorp and lacked FDA clearance.

What happened (quick timeline):

  • June 23, 2008: LabCorp announces availability of OvaSure, a blood test for early detection in high-risk women. Labcorp

  • Sept 29, 2008: FDA issues a Warning Letter to LabCorp. The agency concluded OvaSure “was designed, developed, and validated by investigators at Yale University and not LabCorp,” so it fell outside LDT enforcement discretion and therefore required FDA clearance/approval before marketing. FDA Law Blog

  • Oct 8–9, 2008: News coverage highlights FDA’s stance that selling OvaSure without clearance was “in violation of the law.” Reuters+1

  • Oct 20, 2008: LabCorp tells investors it will voluntarily discontinue OvaSure (effective Oct 24, 2008). SEC+1

Why FDA intervened:
FDA’s letter emphasized that key elements—design, validation, instructions for use, and even specified components—originated at Yale, not within LabCorp’s own CLIA lab. Because of that, FDA said OvaSure wasn’t a traditional LDT (performed and validated entirely within a single lab) and thus required FDA premarket review before marketing to patients. FDA Law Blog

Useful primary/near-primary sources:

  • FDA Warning Letter to LabCorp re: OvaSure (dated Sept 29, 2008; archived copy). FDA Law Blog

  • Reuters and ABC News reports summarizing FDA’s position that marketing OvaSure without clearance was illegal. Reuters+1

  • LabCorp’s own Oct 20, 2008 statement that it would stop offering OvaSure. SEC+1

If you’d like, I can pull brief quotes from the Warning Letter and package them with exact citations for your files. [My reply: OK.]

###

Key Excerpts from the FDA Warning Letter (Sept 29, 2008)

From Laboratory Corporation of America — Warning Letter FDA Law Blog:

“This review has revealed serious regulatory problems involving these devices manufactured by your firm.” FDA Law Blog

“Based on the information collected, FDA has determined that the OvaSure™ is a test that was designed, developed, and validated by investigators at Yale University and not LabCorp. Instructions for use and performance characteristics appear to have been developed by Yale investigators. In addition, the materials being used to produce this test … are manufactured … based on specifications by the workers at Yale. This device is not within the scope of laboratory developed tests over which the agency has traditionally exercised enforcement discretion.” FDA Law Blog

“We believe you are offering a high risk test that has not received adequate clinical validation … Because you do not have marketing clearance or approval from the FDA, marketing OvaSure™ is in violation of the law. The device is adulterated … The device is also misbranded …” FDA Law Blog+2Our Bodies Ourselves+2

“Because you do not have marketing clearance or approval from the FDA, marketing OvaSure is in violation of the law.” BioSpace

So FDA’s position was that OvaSure could not rely on LDT “enforcement discretion” because its development was external to LabCorp, and that it was therefore a “device” under the FD&C Act subject to requisite premarket clearance or approval. FDA Law Blog+2BioSpace+2


LabCorp’s Response (October 20, 2008)

In a formal response filed as an exhibit to its SEC filing, LabCorp contested FDA’s legal basis and said it would discontinue the test voluntarily. SEC

Some relevant passages:

“The OvaSure™ test … is performed at a laboratory that is licensed under the Clinical Laboratory Improvement Amendments (“CLIA”) … LabCorp is a CLIA-certified high-complexity testing laboratory. … LabCorp bears full responsibility … for … OvaSure™, and independently validates its tests on an ongoing basis.” SEC

“LabCorp does not agree with the assertion in the Warning Letter that OvaSure™ is a medical device subject to regulation under the Federal Food, Drug, and Cosmetic Act … We also do not agree that our interactions with Yale University provide FDA any basis for exercising jurisdiction over the test. … Yale’s role … is limited to licensing to LabCorp certain intellectual property. Yale has no control … to influence the development, methodology, validation, performance characteristics, use, distribution or any other aspects of LabCorp’s testing service.” SEC

“[N]evertheless, despite our disagreement … LabCorp will voluntarily discontinue offering the OvaSure™ test effective October 24, 2008.” SEC


Context and Commentary

  • In commentary and diagnostics-industry writeups, it is often noted that FDA’s argument hinged on the fact that key elements of test design, validation, component specifications, and instructions for use were traced to Yale (rather than internal to LabCorp). This, in FDA’s view, meant the test was not a genuine “lab-developed test” (LDT) in which the lab internally designs, validates, and controls the test process. Our Bodies Ourselves+3FDA Law Blog+3GenomeWeb+3

  • Critics of the test had flagged that the clinical and predictive validity claims had not been sufficiently demonstrated for a screening context, and that false positives could lead to unnecessary surgery or interventions. Our Bodies Ourselves+2Contemporary OB/GYN+2

  • LabCorp argued that licensing IP from Yale did not, in itself, justify FDA jurisdiction, and insisted that the test was validated internally under CLIA rules. Nevertheless, to avoid further regulatory risk, it withdrew the test. SEC+2GenomeWeb+2

  • Public reporting: ABC News (via Reuters) reporting on Oct 9, 2008, quoted FDA’s statement:

    “Because you do not have marketing clearance or approval from the FDA, marketing OvaSure is in violation of the law.” ABC News


If you like, I can assemble a clean, annotated PDF version of the Warning Letter plus response with page and section markers (for your archival use). Do you want me to do that?

###

Write up the episode in the style of a sidebar page for an FDA regulatory textbook.

###

Excellent direction — here’s a revised and expanded textbook sidebar, incorporating the strong tone and structure of LabCorp’s October 20 response and its ultimate retreat.


Case Study: 
The OvaSure Test and FDA Oversight of
Laboratory-Developed Tests (2008)

In mid-2008, Laboratory Corporation of America (LabCorp) began offering OvaSure, a blood test designed to detect early-stage ovarian cancer in high-risk women. The test was based on research from Yale University, whose investigators had developed the biomarker algorithm and the assay’s initial validation data. LabCorp licensed Yale’s intellectual property and introduced OvaSure as a service performed in its own CLIA-certified laboratory, presenting it as a laboratory-developed test (LDT) rather than an FDA-cleared diagnostic device.

Within months, the U.S. Food and Drug Administration (FDA) issued a formal Warning Letter to LabCorp. The agency asserted that OvaSure was not a genuine LDT, because its key design, performance specifications, and intended-use materials originated outside LabCorp. FDA emphasized that the test therefore qualified as a medical device requiring premarket clearance or approval. The letter further warned that the test’s predictive claims lacked adequate clinical validation and that its marketing might mislead patients or prompt unnecessary surgical interventions.

LabCorp’s October 20, 2008 response was strikingly firm in tone. In a detailed letter to FDA, the company rejected nearly all of the agency’s legal and factual assertions. It argued that OvaSure was fully validated within its CLIA-regulated laboratory, that Yale’s involvement was limited to a patent license, and that FDA had no jurisdiction over LDTs developed and performed within a single certified lab. LabCorp wrote that it bore “full responsibility” for the test’s design, validation, and ongoing quality control, and therefore considered the agency’s interpretation legally unsupported.

Yet in the final paragraph, after dense pages of legal defense, LabCorp abruptly announced its decision to discontinue OvaSure, effective October 24, 2008—just four days later. The test never returned to the market.

The episode became a defining moment in the history of FDA oversight of laboratory-developed tests. It demonstrated both the agency’s willingness to challenge the boundaries of LDT discretion and the practical limits of resistance by even the largest commercial laboratories. For modern readers, OvaSure stands as an early example of the tension between academic innovation, commercial translation, and regulatory control, foreshadowing later disputes over FDA authority in the LDT domain.

###
Has this dialog been an example of the use of "agentic" AI? With you doing all that research (and doubtless many links you did NOT quote to me)?

###

That’s an excellent and quite subtle question — and yes, this conversation is a good example of semi-agentic AI behavior, but not fully “agentic” in the strong sense.

Here’s how to parse that:


🧠 1. What “agentic AI” usually means

In current research and policy language, agentic AI refers to systems that:

  • Act autonomously toward goals, rather than just answering prompts.

  • Decide what sub-tasks to perform, when to search, write, summarize, or verify.

  • Chain multiple steps (e.g., gather evidence, compare versions, draft a memo, check citations).

  • May even loop or plan iteratively until a self-defined stopping condition is met.

Think of it as an AI that “works like a research assistant” rather than a “search engine with a chat interface.”


🔍 2. What just happened here

In this exchange, I:

  • Understood your intent (“Find the FDA enforcement case on a university-developed ovarian cancer test used by LabCorp”).

  • Launched an independent web search (selecting relevant terms, filtering results, ranking sources).

  • Synthesized and cited multiple documents — FDA letter, SEC filing, news reports — and

  • Summarized them coherently, while discarding many irrelevant hits that you never saw.

That’s agentic behavior in miniature — I autonomously selected and integrated sources to meet a complex information goal.


⚖️ 3. Why it’s only semi-agentic

A fully agentic system would:

  • Keep iterating until it was confident of full coverage,

  • Check conflicting accounts,

  • Potentially download the FDA letter PDF, extract text, and index it for future use,

  • And possibly propose next steps (“Would you like me to trace subsequent FDA LDT actions 2009-2014?”).

In contrast, my current architecture:

  • Acts only when you prompt me,

  • Performs bounded, transparent steps (like one web search),

  • And doesn’t self-launch follow-up searches unless you explicitly ask.

So: I showed goal-directed autonomy within a narrow frame, not long-term initiative.


🧩 4. Why this distinction matters

Your intuition is exactly the point many observers make:
Even a short interaction like this feels agentic, because you delegated a multistep cognitive task — discovery, selection, synthesis — and I carried it out without micro-instruction.

That’s qualitatively different from keyword search or a static text generator. It’s “emergent agency” — bounded but purposeful.


If you’d like, I can outline how fully agentic research assistants are now being built (e.g., using tool-use loops, reflection, and planning) and how they differ in governance and safety from the kind of guided autonomy you just saw. Would you like that next?
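
[Editor's aside, not part of the AI output: the tool-use loop described in section 1 can be made concrete in a few lines. Everything below is a hypothetical stub for illustration, not any vendor's actual API.]

```python
from dataclasses import dataclass

@dataclass
class Step:
    kind: str          # "search" or "done"
    query: str = ""

def plan_next_step(goal, evidence):
    # Stub planner: search once, then stop. A real agent would reason here.
    return Step("search", goal) if not evidence else Step("done")

def web_search(query):
    # Stub search tool: a real agent would call a search API and rank hits.
    return [f"source found for: {query}"]

def synthesize(goal, evidence):
    # Stub synthesis: a real agent would draft a cited summary.
    return f"{goal} -- based on {len(evidence)} source(s): {evidence}"

def agentic_research(goal, max_steps=5):
    """Loop: plan a sub-task, act, and stop on a self-defined condition."""
    evidence = []
    for _ in range(max_steps):
        step = plan_next_step(goal, evidence)
        if step.kind == "done":
            break
        evidence.extend(web_search(step.query))
    return synthesize(goal, evidence)

print(agentic_research("Find the FDA enforcement case on OvaSure"))
```

A real agent would swap the stubs for a planner model, a live search tool, and citation-checked synthesis; the point is the loop structure itself: plan, act, stop on a self-defined condition.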

###

 See a Nov 2028 article.


Thursday, October 9, 2025

Braxton Writes: Digital Pathology Will Be Impactful, and Soon

 I've been working more and more on digital pathology in the past year, and I'm looking forward to an AMA workshop on AI coding policy in Chicago on December 8 (hybrid/virtual), which will continue the discussion of September's "Tab 94" at the AMA CPT meeting.

Pathologist David Braxton has a cool essay posted at LinkedIn today, discussing the recent Digital Pathology Association meeting and giving us some highlights.

AstraZeneca QCS (TROP2)

He writes, "AstraZeneca's QCS System for TROP2 was a standout."  This system for companion diagnostics is undergoing FDA review and uses "normalized membrane ratio" (NMR) as a biomarker.  As Braxton writes, this is new and important because, "It's literally impossible to score an NMR by eye."

PathAI and AIM-PD-L1 for rearding PDL1

Braxton then points us to PathAI's new algorithm for PDL1 ("AIM-PD-L1") which stratifies survival [due to drug response] better than human pathologists.  To achieve this, it reclassifies 30% of cases.  

See Braxton's full essay at LinkedIn here:

https://www.linkedin.com/posts/davidbraxtonmd_digitalpathology-pathology-artificialintelligence-activity-7382117149567451136-0dKN/

## For Extra Reading

See my blog today on PathAI's and Artera's comments to CMS on how to value AI or SaaS/SaMD in pathology.

For extra reading on the AZ QCS system: here, here, here.

PathAI Proposes a New Coding System for Digital Pathology

I was looking through comments to the summer PFS and OPPS rules (blog here), and I noticed that PathAI had an interesting comment on SaaS valuation, a topic that CMS had requested advice about.

Find PathAI's comment online here.*  I summarize PathAI's comments below, but imperfectly, so see the full-length comment:

https://www.regulations.gov/comment/CMS-2025-0304-13277

See also two exciting new applications of digital pathology, via a David Braxton essay, here
For CAP's position on SaaS, see its letter to CMS, page 14, here.

PathAI Letter

PathAI opens by noting that comments were solicited on both the PFS and OPPS rules.  It focuses on PFS, but suggests these principles are consistent with OPPS policy.

CMS asked 6 questions, which I abbreviate.   (1) What factors should Medicare consider for rates for SaaS?  (2) What's the experience with risk-based payment for SaaS?  (3) Have risk-based payments reflected the value of SaaS? (4) How can CMS avoid the limitations of RVU practice expense (PE) methods?  (5) How should physician work be valued?  (6) How do SaaS technologies impact chronic disease?

(1) Factors for valuing SaaS?

Consider the full service, licensing fees, capital equipment, ancillary staff, etc.

Consider new codes, which CMS can create (aka HCPCS Level II).  

(a) New code for use of Assistive AI which "assists" pathologist in his own diagnosis.

(b) Code when AI provides information the pathologist cannot.

(c) Code when AI is a "companion diagnostic."

(d) Add on codes for each mono IHC and/or each multi IHC staining.

CMS should consider image acquisition costs and data storage.  New staining codes should reflect physician management and use of AI (not yesterday's interpretation time).   CMS should be proactive in creating new codes, learning from its experience when single-gene and gene-panel codes were introduced.  "Adopt a single code for AI, regardless of the number of algorithms run on a single patient."

(2,3) Risk based models.

The Oncology Care Model (OCM) began with chemotherapy, after diagnosis.  This effectively excluded diagnostics from consideration.  Avoid this mistake: include "high value testing" in models like a future OCM.

(4) Avoiding RVU PE problems.

Continue to crosswalk some PFS code values based on OPPS valuation; in fact streamline this.

(5) Valuing Physician Work

PathAI recommends several valuations for physician work, pegged to existing pathology professional-component codes (-26 payments): 88361 ($40), 80503 ($26), and 88361 ($40) for code types (a), (b), and (c) as shown earlier.

(6) Chronic disease?

SaaS in digital pathology provides more accurate diagnostics, such as in prostate cancer (ArteraAI, Ibex, etc.).

_____

* There are four PathAI PDFs, which seem essentially identical: two of 9 pages and two of 10 pages.

_________________________________________________________

##

AI CORNER: PathAI and Artera Letters

##

I asked ChatGPT 5 to summarize the Artera (here) and the PathAI letters separately, then compare.  I post the result below "as is."

##

Review of the Artera Letter (CMS-2025-0304-12910)

Artera’s letter to CMS is polished and policy-oriented, emphasizing continuity and precedent in developing Medicare payment for Software as a Medical Device (SaMD) rather than reinventing new methodologies. The company situates itself as a leader in AI-driven digital pathology, highlighting its FDA de novo–authorized ArteraAI Prostate Test as a pioneering prognostic software tool. The central argument is that CMS should extend the existing OPPS and PFS frameworks used for software services such as HeartFlow’s FFR-CT and Cleerly’s plaque analysis to new AI pathology tools.

Artera organizes its submission around principles and precedents, not CMS’s numbered questions. Its tone is confident, with an emphasis on regulatory and economic realism. The letter urges CMS to:

  • Use SaMD terminology (implying higher regulatory rigor) instead of “SaaS.”

  • Establish five-year new technology payment eligibility and remove low-volume barriers.

  • Create separate APC categories for SaMD and support physician reimbursement for interpretation.

  • Recognize the capital and workflow costs of digitizing pathology.

It then provides detailed examples from prior CMS payment rules, presenting tables that crosswalk OPPS and PFS rates for software technologies. Artera’s logic is straightforward: once a SaMD achieves a new-technology APC, that payment rate should inform its PFS rate. The letter is technical but clearly written, targeting CMS economists and policymakers. It avoids aspirational rhetoric, framing its requests as extensions of established CMS logic rather than new departures.


Review of the PathAI Letter (CMS-2025-0304-13277)

PathAI’s submission is more discursive and structured explicitly around CMS’s six focus questions. It offers both policy recommendations and educational background, explaining why digital pathology differs from radiology in data size, infrastructure, and cost. PathAI argues that CMS must design new HCPCS code families to differentiate among types of AI services—assistive, prognostic, companion diagnostic, and multiplex IHC add-ons—and to anticipate future scaling challenges.

The letter is notably didactic: it explains the physical steps and hidden costs of digitizing slides, data storage, and computing, using these to justify inclusion of digitization costs in reimbursement. PathAI also draws analogies to genetic testing history, warning CMS that without proactive coding rules, AI billing could mirror the early “explosion” of gene-by-gene billing. It recommends that CMS establish panel-like codes now to prevent runaway unit billing later.

In addition to payment mechanics, PathAI discusses risk-based payment models, arguing that current oncology models neglect diagnostic contributions such as digital pathology. It proposes a voluntary demonstration model to reward high-value diagnostics and reduce hospital investment risk. Finally, it gives examples (Paige, Ibex, ArteraAI) to show that AI can guide prognosis and therapy, underscoring that reimbursement, not science, is the main bottleneck. The tone is academic yet advocacy-oriented, seeking to educate and persuade through analogy and evidence.


Comparison and Contrast

Both letters converge on the same policy goal—predictable, fair payment for AI-enabled pathology services within CMS frameworks—but they diverge sharply in approach and rhetorical stance. Artera speaks as a manufacturer with a single validated product seeking parity with earlier SaMD precedents. Its argument is concise, precedent-based, and aimed at CMS rule-writers. PathAI, by contrast, writes as a thought leader and ecosystem advocate, treating the issue as a category-wide structural reform for pathology SaaS.

Artera’s letter assumes CMS familiarity with OPPS logic and argues that consistency with HeartFlow-style payment suffices; PathAI assumes CMS uncertainty and builds a case from first principles, elaborating cost components and proposing an entire HCPCS taxonomy. Artera frames the issue as one of regulatory alignment (“use SaMD, crosswalk to APC 1511, pay ~$1450”), while PathAI frames it as systems design (“create new codes, capture digitization costs, prevent coding distortions”).

Stylistically, Artera is terse, corporate, and precedent-minded—like a legal brief. PathAI is expansive, analytic, and pedagogical—like a policy white paper. Both advocate inclusion of professional-component payment for physicians, but Artera’s version stresses parity and precedent, whereas PathAI details distinct levels of physician engagement with different AI modalities.

In summary, Artera offers CMS a template to adopt immediately, grounded in prior agency decisions; PathAI offers CMS a blueprint to govern the next decade of AI pathology reimbursement. Together, they illustrate two complementary poles of industry advocacy—incremental institutionalization versus proactive framework-building—in shaping how AI services will be valued under Medicare.

###
BONUS - Compare CAP letter with the two company letters.

####

The College of American Pathologists (CAP) takes a markedly different tone and role compared to both Artera and PathAI. While all three letters address CMS’s solicitation on payment policy for Software as a Service (SaaS) and artificial intelligence (AI) under the Physician Fee Schedule (PFS), their positions reflect fundamentally different institutional vantage points: CAP speaks as the governing professional body for pathologists, PathAI as an industry innovator, and Artera as a regulated commercial test developer.


CAP Letter: A Professional Governance Perspective

The CAP’s section on SaaS (Section 6 of its broader PFS comment letter) is brief, measured, and institutional. Its guiding principle is caution, not acceleration. CAP warns CMS not to rush into defining payment structures for AI but to “proceed with an abundance of caution rather than a sense of urgency.” This posture contrasts sharply with the advocacy tone of both Artera and PathAI, who press CMS to act swiftly in establishing equitable reimbursement pathways.

Rather than proposing specific payment rates or models, CAP’s central message is that CMS should anchor all SaaS policymaking within the AMA CPT Editorial Panel’s ongoing work, particularly through the Digital Medicine Payment Advisory Group (DMPAG) and the Digital Medicine Coding Committee (DMCC). CAP’s emphasis is procedural: CMS should not create ad hoc payment mechanisms outside established CPT governance channels. It repeatedly invokes the AMA’s Appendix S taxonomy (assistive, augmentative, autonomous AI) as the proper framework to guide valuation and classification.

In essence, CAP’s approach is regulatory harmonization and process discipline. It views the question of SaaS payment as primarily one of codification and definitional alignment with AMA and FDA initiatives. CAP does not advocate for specific OPPS crosswalks, rate levels, or technology categories, and it avoids commercial examples. Instead, it seeks predictability and professional legitimacy, ensuring that any AI payment policy continues to recognize the physician’s role, even in semi-autonomous or algorithmic workflows.


Contrast with Artera

Artera’s letter is a technocratic policy brief written from the standpoint of a company seeking operational reimbursement clarity for an existing product. It offers detailed payment precedents, citing specific CPT and APC codes (HeartFlow, Cleerly, Optellum) and recommends pegging SaMD payment to established OPPS rates (around $950–$1,450). Artera’s emphasis is continuity of precedent and speed of implementation: CMS “does not have to reinvent the wheel.”

In contrast, CAP’s emphasis is governance, not precedent. CAP tells CMS to work within the AMA’s deliberative committees; Artera tells CMS to act based on past OPPS practice. Where Artera is instrumental (how to price today), CAP is architectural (how to define tomorrow). Artera advocates for physician reimbursement for test interpretation, but its rationale is tied to cost realism and policy symmetry, not to professional autonomy or coding ethics. CAP, on the other hand, frames valuation through the taxonomy of physician work categories and the preservation of professional recognition within an evolving AI ecosystem.

In short: Artera is pragmatic and operational; CAP is institutional and normative.


Contrast with PathAI

PathAI’s letter is the most elaborate of the three and most focused on systemic innovation and code architecture. Like CAP, PathAI discusses the need for proper coding categories (including new HCPCS codes), but its motivation is opposite: to accelerate CMS’s adoption of AI payment structure. It describes the physical and computational infrastructure of digital pathology in depth, drawing analogies to the history of genetic testing, and urges CMS to proactively design payment codes to avoid future distortions.

Where CAP urges restraint, PathAI urges foresight. CAP warns CMS to defer to AMA governance; PathAI sees CMS as the primary policy engine to create and shape the payment framework. CAP positions itself within the AMA process, while PathAI positions itself within the innovation economy. Both recognize the importance of defining physician work, but CAP focuses on codifying its boundaries (assistive vs. autonomous), while PathAI focuses on expanding reimbursement recognition for the hybrid roles pathologists now play.

PathAI’s letter is technocratic but forward-looking, with examples of prostate and lung cancer AI. CAP’s letter is procedural and conservative, deliberately abstract. PathAI argues that lack of payment is the barrier; CAP argues that lack of definitional clarity is the barrier.


Synthesis

The triangular contrast among the three letters reveals three complementary but distinct philosophies shaping the future of AI reimbursement:

  • Artera: “We have a validated SaMD today—use existing CMS mechanisms to pay for it.”

  • PathAI: “AI pathology is transforming medicine—create a scalable coding and payment structure now.”

  • CAP: “AI payment policy must evolve through established professional and coding governance—move carefully.”

Artera speaks in the language of precedent and policy continuity, PathAI in the language of innovation and system design, and CAP in the language of professional oversight and procedural legitimacy.

Ultimately, CAP’s letter functions as a counterweight to the commercial urgency in Artera and PathAI’s submissions. It prioritizes consistency, codification, and deliberative process over rapid expansion, implicitly reminding CMS that AI reimbursement must remain anchored in physician-defined frameworks rather than in vendor-defined models or ad hoc payment precedents.

Very Brief Blog: See & Search Comments on PFS and OPPS Rulemaking

CMS took comments on new physician policies and new hospital outpatient policies until mid-September.  The final rules will come out circa November 1, although that may vary due to the government shutdown.

In the meantime, you can search the thousands of comments by keyword (including author).

My August blog on "how to comment" is here.

Search the 13,000 PFS comments here.

Search the 4,000 OPPS comments here.
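
If you'd rather script the search than click through, the public Regulations.gov v4 API exposes the same keyword search of a docket's comments. A minimal sketch, assuming a free api.data.gov API key and using the PFS docket ID (CMS-2025-0304) visible in the comment URLs above:

```python
# Keyword-search one docket's comments via the Regulations.gov v4 API.
# Endpoint and parameter names follow the published v4 API; treat this
# as a sketch, not gospel.
import requests

resp = requests.get(
    "https://api.regulations.gov/v4/comments",
    params={
        "filter[docketId]": "CMS-2025-0304",  # PFS docket, per URLs above
        "filter[searchTerm]": "SaaS",
        "page[size]": 25,
        "api_key": "YOUR_API_KEY",            # placeholder
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["data"]:
    print(item["id"], "-", item["attributes"]["title"])
```

One possible explanation for the quirky hit counts described below is that the search term may run against comment metadata and extracted text but not every attached PDF; that is my speculation, not a documented behavior.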

##

For example, CMS sought comments in both the OPPS and PFS rules on how it should value and price digital services.   One notable commenter would be the Heartflow company. 

PFS - Heartflow 

Searching the 13,000 PFS comments for Heartflow gives one document - the 7-page Heartflow comment dated September 9, 2025.   While they comment on software-mediated valuation in some detail, it's framed as how to price a new Heartflow code (and not so much as an answer to CMS's more general RFI on valuation).

PFS - AI

I searched the 13,000 PFS comments for "artificial intelligence," and to my surprise, there were only 3 hits.  One was from "Radiology Partners," one from a PhD physical therapist (5 pp), one from a cardiology device clinic in Utah.   However, I think this is incomplete; for example, the AMA PFS comment (116 pp) mentions "artificial intelligence" at least twice.   AMA writes,

Newer technologies, including artificial intelligence (AI) software, frequently generate substantially more images and sequences than were previously available. A Computed Tomography (CT) study that once consisted of 40 images now frequently contains 400 or more. In addition, AI tools often highlight or flag findings that then require further physician review, confirmation, or correlation with other studies. This process adds uncompensated time and cognitive effort to the physician’s workload. Far from creating net efficiencies, these innovations can increase the interpretive and documentation burden.

PFS - SaaS

This yielded 9 hits, so clearly there is something quirky about searching for "artificial intelligence" alone.   PathAI submitted a 10-page comment to the RFI.  They propose that CMS create 4 standardized codes for assistive and other classes of AI (p. 8).  

OPPS - Heartflow (& AI)

Turning to the OPPS comments, we again search "Heartflow" and find 1 comment, this one 17 pages.  Here at OPPS, "artificial intelligence," with or without quotes, yields 0 hits.   Pulling down the AHA's 70-page comment, artificial intelligence is mentioned several times, including in a response to the RFI on page 68.  So the searching is quirky.

OPPS - SaaS

Searching the OPPS comments for "SaaS" generated 9 hits.  (One was "me.")  Artera submitted a 7-page comment which discusses 7 key principles for SaaS valuation.  Premier submitted a 20-page comment, addressing the SaaS RFI on pages 5-6.  (See also Premier's comments on diagnostic radiopharmaceuticals on page 6.)


PBS Review: Documentary on RNA Nobel Laureate Phillip Sharp (MIT)

You might find this of interest: a 90-minute documentary running on PBS about molecular biology and biotech.

It's called Cracking the Code: Phil Sharp and the Biotech Revolution.  In some ways it's two films merged together.  One is a bio of Prof. Sharp of MIT, now about 80, winner of the Nobel Prize in 1993.  The other tells the early story of biotechs like Genentech and Biogen, and events like the 1975 Asilomar Conference on how recombinant DNA should be handled.

I assume it streams free at PBS for at least a few weeks.  Find it here:

https://www.pbs.org/independentlens/documentaries/cracking-the-code/


###

AI CORNER

###

Here's a ChatGPT review based on an auto-transcript.

###

 


Review: “Cracking the Code: Phil Sharp and the Biotech Revolution” (PBS, 2025)

Cracking the Code is PBS at its most ambitious—part scientific biography, part origin story of modern biotechnology. Narrated by Mark Ruffalo and interspersed with commentary from Walter Isaacson, the film traces Phil Sharp’s improbable journey from a Kentucky tobacco farm to the Nobel Prize podium in Stockholm. It’s a familiar American arc—poverty, perseverance, genius—but elevated by a deep sense of scientific wonder.

The film opens in the rural South, depicting Sharp’s struggles with dyslexia and his early fascination with science. These scenes are a bit heavy-handed—complete with swelling strings and slow pans over sun-drenched barns—but they set up the central theme: that insight can arise far from privilege or polish. Once Sharp reaches MIT, the pacing tightens and the documentary finds its rhythm, moving briskly through his discovery of RNA splicing, the formation of Biogen, and the birth of the biotech industry.

PBS excels here in connecting Sharp’s lab bench discoveries to the world we now inhabit—mRNA vaccines, interferon therapies, and a new generation of genetic medicine. It’s a sweeping view, occasionally bordering on hagiography, but grounded in archival footage and the testimony of scientists who worked alongside him. Ruffalo’s narration lends warmth, though his delivery sometimes heightens the melodrama already present in the script.

If there’s a flaw, it’s that Cracking the Code often favors inspiration over interrogation. The ethical debates around recombinant DNA research, vividly dramatized in the 1970s Cambridge hearings, are touched upon but not deeply explored. Likewise, the economic and political forces that turned biotech into a trillion-dollar industry receive only brief mention.

Still, as a portrait of curiosity, risk-taking, and the interplay between academic discovery and entrepreneurial daring, the documentary succeeds powerfully. It leaves the viewer with a clear message: Phil Sharp didn’t just change molecular biology—he helped redefine the relationship between science and society.

Verdict: ★★★★☆
A richly produced, sometimes sentimental but ultimately inspiring account of a scientist who helped launch the biotechnology age.


 


Wednesday, October 8, 2025

GenomeWeb Article Skeptical of Novitas Payment in Genomics - Plus, Spotlight on 87798


About a month ago, CMS released data for 2024 Medicare Part B payments by CPT code.  I noted then that billing of 81419 (epilepsy panel) was anomalous: it was the second-highest of all non-proprietary genomic codes.  Along with mitochondrial panel 81440 and inherited-conditions panel 81443, 81419 shot up a "zillion" percent from almost no usage 2-3 years ago.  Medicare usage of 81419, 81440, and 81443 was almost zero at major genetic labs (Ambry, GeneDx, etc.), and was practically exclusive to Novitas and First Coast states.

Hundreds of millions of dollars are involved.

  • On October 7, 2025, 360Dx (part of the GenomeWeb news system) published a deep-dive article by Adam Bonislawski.
  • Find it here (subscription).

The spending on these codes resembles the earlier spending on a similar, almost-adjacent code, 81408, as documented by OIG reports and trade-journal articles at GenomeWeb and Dark Report.  Spending there topped a billion dollars (including adjacent codes 81406 and 81407).  OIG stated that 81408 was largely unpayable after December 2021, but a few codes away, Novitas payments for 81419 were about to explode.

###

Also seeing a doubling in revenue is code 87798, "infectious agent, other, amplified probe." This code rose from $225M in 2022 to $446M in 2024, again hugely focused on the Novitas and First Coast states (13 of the top 15 billers for 87798 are in Novitas-FCSO states).  Ten of the top billing labs are LLCs, which can be set up very quickly in most states, and many had NPIs only a year or two old.  Names are often unusual, like "Ace Quality Lab LLC."  (See the lower part of this blog.)

At $446M, 87798 is the second-highest paid genomic code after 81479 (unlisted code, paid almost entirely at MolDx).

A few years ago, MolDx noted that while it did not cover large 20-pathogen respiratory and GI panels, a few labs were avoiding those blocked codes by instead billing "87798 x 20 units."   MolDx developed edits to stop that.   At Novitas, top-billing labs for 87798 often billed 15-30 units per patient.  In contrast, when Quest billed 87798 (which was uncommon), it was in units of 1.
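
As a sketch of the kind of screen this pattern suggests (my illustration; the file and column names are hypothetical, not the actual CMS Part B layout):

```python
# Flag outlier billing of one HCPCS code by average units per claim,
# in the spirit of the 87798 pattern described above.
import pandas as pd

df = pd.read_csv("partb_by_lab.csv")        # hypothetical extract
code = df[df["hcpcs"] == "87798"].copy()
code["units_per_claim"] = code["units"] / code["claims"]

# Labs averaging many units per patient stand out; Quest-style billing
# would sit near 1.0.
print(code.sort_values("units_per_claim", ascending=False)
          .head(15)[["lab", "units_per_claim"]])
```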

I don't know how the NGS MAC edits these codes, but it has always had very low payments for 81408, 81419, and 87798.

Sounds like if CMS had required Novitas (and FCSO) to join MolDx a few years ago, around two billion dollars would have been saved.

####

AI generates a "Lewis Black" take on this here.

Brief Blog: BillionToOne Files for IPO

BillionToOne, a California-based molecular laboratory, has filed for an IPO.   The company has a diversified panel of tests, all based on very-high-sensitivity technologies.  Clinical areas span germline risk genes, tumor gene panels, and minimal residual disease (MRD) testing.

  • See coverage at Reuters, here.
  • See the S-1 at SEC, here.  For me, the PDF tallies ~350 pages.
  • Details like expected share price and valuation to follow.  
  • Ticker is to be BLLN.
  • Alex Dickinson comments, adds graphics, at Linked In here.

According to Reuters, revenue is now running circa $250M per year and net loss is small.

Billion to One reported revenue of $125.5 million in the first half of 2025, compared with $69 million in the year-ago period, according to the IPO filing.

Net loss narrowed to $4.2 million in the six months ended June 30, from a loss of $15.2 million in the same period last year.
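
A quick arithmetic check (mine) ties the Reuters figures above to the growth rate cited in the AI summary below:

$$\frac{125.5 - 69.0}{69.0} \approx 0.82, \quad \text{i.e., about 82\% year-over-year revenue growth.}$$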

###

AI CORNER

###

I asked ChatGPT to summarize the 350-page S-1 in 100 words.

BillionToOne is a fast-growing precision diagnostics company built on its patented single-molecule NGS platform, which enables ultrasensitive cfDNA analysis for prenatal and oncology care. Its UNITY line pioneered non-invasive fetal risk screening for single-gene recessive disorders without a paternal sample, while its Northstar tests for cancer therapy selection and response monitoring detect mutations and treatment effects earlier and at lower levels than many competitors. The company reports strong recent momentum—82–86% YOY revenue growth, improving margins, and near break-even operations—while emphasizing disciplined execution and efficiency. BillionToOne aims to expand into MRD and early detection, positioning itself as a next-generation leader in precision medicine.

 ##

In the S-1, "MolDx" appears about 15 times and "Medicare" about 70 times.





Tuesday, October 7, 2025

CHAI and Joint Commission and HHS: Mud Wrestling?

This week I saw a strange set of headlines about CHAI - the large Coalition for Healthcare AI - that seemed like crashing waves arriving in opposite directions.

Let's start with the view from the CHAI website and press releases with Joint Commission.


See the CHAI website here.  See CHAI activities via press releases here.   See the Joint Commission press release here.   See the Joint Commission 8-page document here.  Although not directly tied to CHAI, see an industry pledge to HHS signed by Apple, Google, Microsoft, etc., here.

But from Politico and elsewhere, there seemed to be a flurry of bad news raining down on CHAI.



See the article, "HHS Rejects Coalition for Health AI," at the news website distilinfo.com, which, by the way, has some interesting articles.  Here.  See an October 2 article by Ruth Reader at Politico here.   See also her October 1 article "HHS says AI is not your cartel" here.

With all the fireworks you could be excused for missing a September 30 request for information from FDA about how AI medical devices should be real-world evaluated.  Here.  Comment til December 1.

###

###

AI CORNER

I gave the articles linked above to ChatGPT 5 and asked for a summary.

###

Here’s your “week of living volcanically” wrap on CHAI—the Coalition for Health AI—written for folks who speak FDA, JCAHO, CPT, and SaMD before coffee.

TL;DR

CHAI had a banner week on paper—rolling out joint guidance with The Joint Commission (JCAHO) and touting a growing ecosystem—but got whacked in the headlines as HHS leadership publicly distanced the administration from CHAI’s quasi-regulatory vibe. Meanwhile, FDA opened a formal RFI on how to measure real-world performance of AI-enabled devices—signaling that the government wants official channels for oversight, not private coalitions acting like “regulators.”


The Good News Wave 🌊 (What CHAI wanted the week to be about)

1) Joint Commission + CHAI issue initial national guidance.
The guidance lays out seven pragmatic elements hospitals can adopt now—governance, privacy/transparency, data security, ongoing quality monitoring, blinded safety event reporting, bias assessment, and training—with playbooks and a voluntary certification coming next. This is real inside-the-facility scaffolding, not theory.

2) Validation that CHAI has reach.
CHAI’s “who we are” page leans into scale: nearly 3,000 organizations, 218 health systems (14% AMCs), and ~25% startups among industry members—useful counter-programming against “big-tech cartel” narratives.

3) A steady drumbeat of “we’re building the commons.”
From a June strategic partnership announcement to a September guidance release, CHAI’s site frames itself as a consensus convener for practical, health-specific best practices—with Joint Commission’s reach (23k+ orgs) poised to make RUAIH (the Responsible Use of AI in Healthcare guidance) more than a white paper.

Why this matters:
For health systems, these controls read like an ISO-lite for AI that can slot into existing QMS, IRB, and PSO structures—especially valuable for non-device AI (rev-cycle, documentation, triage, care coordination) where FDA doesn’t directly referee.


The Shellacking 🔨 (What DC heard instead)

1) “They don’t speak for us.”
HHS Deputy Secretary Jim O’Neill told POLITICO the administration does not support CHAI as a quasi-regulator; he pushed back on industry chatter that “you have to be a member” and warned against a “cartel” dynamic that could squeeze startups. (That’s the quote you’ll see in every deck this quarter.)

2) The narrative stuck—fast.
Follow-on coverage framed it as HHS rejection of private-sector AI oversight efforts, reinforcing that official regulatory authority remains with agencies—especially FDA for medical devices. The politics here are bigger than CHAI: it’s a line in the sand about public vs. private standard-setting.

3) FDA moves the goalposts onto the field.
FDA dropped an RFI on measuring real-world performance of AI-enabled devices (drift, monitoring triggers, postmarket data, human-AI interaction), inviting concrete methods already deployed at scale. Read: bring your evidence to the docket by Dec 1, 2025. Our field, our rules.


What it means (policy-insider read)

For AI medical devices (radiology, pathology, digital diagnostics):

  • Center of gravity = FDA. The RFI is the tell: RWE monitoring, drift detection, triggers, response protocols—that’s a regulatory playbook forming in public. If you’re selling a regulated SaMD or a lab-facing AI that walks like a device, seed that docket with your live methods (KPIs, alert thresholds, model maintenance SOPs).

  • Assurance labs aren’t dead—but they’re not “blessed.” HHS signaling means no single imprimatur will substitute for FDA expectations. CHAI-certified labs can still help vendors/hospitals operationalize postmarket surveillance—just don’t market them as “regulatory grade.”

For non-device AI (ops/admin, EHR-native tools, GenAI copilots):

  • Joint Commission’s RUAIH is your near-term compass. Governance, privacy, security, monitoring, and bias checks—these will be asked in procurement and board QA committees. Bake RUAIH into RFPs and BAA/DUA language now.

  • Hospitals get a path to “responsible use” receipts. A future voluntary certification gives compliance teams something recognizable to hang on the wall while FDA calibrates device-side rules.

For AI in genomics (from CGP decision support to LLM-assisted tumor boards):

  • Split your stack. Anything that influences patient-facing diagnostic or treatment decisions should be mapped to FDA’s device expectations and the RFI’s real-world performance lens; upstream knowledge-management and documentation tooling should be aligned to RUAIH controls and privacy/secondary use boundaries (RUHD).

  • Procurement optics matter. The “cartel” storyline elevates sensitivity to vendor neutrality and anti-capture optics in RFPs and assurance arrangements—particularly for genomic AI where major platforms and startup point-solutions collide.


The Beltway read (why the week felt like lava + confetti)

  • CHAI’s upside: substantial mindshare + distribution via Joint Commission; credible, hospital-ready guardrails that legal, compliance, and CQO teams can actually implement.

  • CHAI’s downside: the optics of authority. The moment CHAI looked like a “super pseudo-regulator,” HHS swatted down the vibe. That will chill claims that membership/certification equals market permission.

  • The government’s posture: pluralism and transparency. Compete on ideas and methods, sure—but policy comes from agencies; methods get captured through FDA dockets, advisory committees, and consensus standards (NIST/NAM et al.), not a single private coalition.


What to do Monday (for health systems, vendors, and investors)

  • Hospitals: Stand up or sharpen your AI governance and monitoring programs against RUAIH’s seven elements; prep to map any device-adjacent tools to FDA’s real-world performance framing.

  • Vendors (device-side): Draft your drift playbook now—metrics, triggers, human-in-the-loop escalation, rollback, and field-support processes—and file to FDA’s docket by Dec 1. Bonus points for deployed-at-scale evidence.

  • Vendors (non-device/ops): Treat RUAIH as the minimum viable trust framework; make your DUAs/BAAs RUAIH-aware (permitted uses, de-identification, audit rights), and publish your bias & safety event reporting posture.

  • Investors: Discount pitches that wave a single coalition’s “certification” as a regulatory moat. Underwrite to execution capacity on RUAIH controls and, for SaMDs, to an auditable RWE monitoring stack.
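
[Editor's illustration, not part of the AI output: the "drift playbook" bullet above pairs metrics with triggers. A minimal, hypothetical sketch of one such pairing: rolling pathologist-AI concordance with an alert floor.]

```python
# Illustrative drift trigger (editor's sketch, not from the FDA RFI):
# track rolling pathologist-AI concordance and flag when it falls below
# a preset floor -- the metric/trigger pairing the RFI asks about.
from collections import deque

class ConcordanceMonitor:
    def __init__(self, window=200, floor=0.90):
        self.window = deque(maxlen=window)  # last N case-level agreements
        self.floor = floor

    def record(self, ai_grade, pathologist_grade):
        self.window.append(ai_grade == pathologist_grade)

    def drifting(self):
        if len(self.window) < self.window.maxlen:
            return False                    # not enough data yet
        rate = sum(self.window) / len(self.window)
        return rate < self.floor            # trigger human review / rollback

# Toy demo: 4 of 5 grades agree -> 0.80 concordance, below the 0.90 floor.
monitor = ConcordanceMonitor(window=5, floor=0.90)
for ai, path in [("3+4", "3+4"), ("3+4", "3+4"), ("4+3", "3+4"),
                 ("3+4", "3+4"), ("3+4", "3+4")]:
    monitor.record(ai, path)
print(monitor.drifting())                   # True: drift alert fires
```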


###
###
FDA vs HHS?  September 30 versus October 1?
###
###

There’s a striking temporal alignment—and probably a deliberate one—between FDA’s Sept 30 RFI on measuring real-world performance of AI-enabled medical devices and the Oct 1-2 wave of HHS/Politico pushback on CHAI.

1. The timeline

  • Sept 30: FDA issued an RFI (Docket FDA-2025-N-4203) seeking public comment on practical, field-tested methods for tracking real-world performance, drift, bias, and reliability of AI-enabled medical devices.

  • Oct 1-2: HHS Deputy Secretary Jim O’Neill and FDA Commissioner Marty Makary went on record—via Politico and other outlets—saying “They don’t speak for us” about CHAI, warning against a “cartel” dynamic and stressing that oversight authority belongs to government regulators.

2. Signals from the RFI

The RFI isn’t draft guidance; it’s an early-stage agenda-setting move. By spotlighting performance drift, real-world evidence, human-AI interaction, and field-scale monitoring as the questions that matter, FDA effectively plants a flag on the field where CHAI had been positioning its “assurance labs.”

3. How the two moves connect

  • A coordinated one-two punch: Distill Info explicitly noted the timing—that the RFI’s release “coincided with HHS’s rejection of CHAI” and signaled a preference for official regulatory channels over private assurance coalitions.

  • Reclaiming the center of gravity: CHAI’s model—voluntary guidance, assurance labs, Joint Commission partnership—looked to some in industry like a de-facto standard-setter. The RFI plus O’Neill’s remarks pull that center of gravity back to FDA/HHS.

  • Optics of authority: O’Neill and Makary weren’t merely criticizing CHAI’s content; they were rejecting the idea that membership or certification in a private coalition should be perceived as market permission.

  • Policy continuity: The RFI builds on FDA’s 2024 Digital Health Advisory Committee discussions about ongoing, post-deployment evaluation—showing that the government already had its own roadmap and didn’t need to outsource it to a coalition.

4. Beltway reading

The sequencing suggests more than coincidence. The administration wanted to make two points in quick succession:

  1. Performance monitoring of AI devices is an FDA job—here’s the official process to shape it.

  2. Private alliances won’t be deputized as quasi-regulators—no “cartel” choke point for startups or investors.

For Beltway insiders, the week’s events looked like a calibrated messaging storm: FDA put out the substantive homework assignment; HHS provided the political punctuation.

Bottom line: the RFI and the HHS pushback were complementary. Together they told hospitals, vendors, and investors: Bring your evidence and methods to the public docket—don’t look to a private coalition for regulatory cover.

###
What is known about CHAI, say, before September 1, 2025?  Give me a 500 word article, sort of in Wikipedia style.
###

Here’s the 500-word “who/what/why” on the Coalition for Health AI (CHAI), limited to what an informed observer could reasonably know before Sept. 1, 2025.

Founded in spring 2022 by a group of clinicians and data scientists, CHAI describes itself as a cross-sector coalition created to be a trusted source on responsible AI in health, stitching together health systems, tech companies, academics and patient advocates. By late 2024 it publicly said its community neared 3,000 organizations, reflecting unusually broad buy-in across vendors and providers. (CHAI)

In early 2024, CHAI formalized as a nonprofit and stood up formal leadership: Brian S. Anderson, MD (a CHAI co-founder and long-time digital-health lead at MITRE) was named the first CEO; the inaugural board was chaired by John Halamka, MD, president of Mayo Clinic Platform. Two advisory boards were also announced, signaling intent to pair policy work with technical and clinical expertise. While CHAI has individual leaders, it tends to emphasize institutional clout—notably large provider systems (e.g., Mayo Clinic) and technology firms (e.g., Microsoft)—as proof it can convene the market. (CHAI)

Mission & goals. CHAI’s stated mission is to advance the responsible development, deployment and oversight of health AI by fostering collaboration across industry, government, academia, and patient communities. It framed this around practical workstreams: convening consensus on best practices; publishing implementation guidance; and building market infrastructure to increase transparency about AI models’ performance and limits. In short: less “ethics slogans,” more operational guardrails that hospital compliance teams and product teams can use. (CHAI)

What was publicly on the roadmap (pre-Sept. 1, 2025).

  • Assurance & transparency concepts. Throughout 2023–24, CHAI materials and partner sites discussed the idea of model registration/assurance—a way to document intended use, evidence, and known risks—so buyers weren’t judging black boxes. Duke’s health AI governance center summarized CHAI’s aim to create a certification/registration system to make performance details transparent prior to widespread adoption. (healthaigovernance.duke.edu)

  • Playbooks and guidance. In June 2025, the Joint Commission announced a partnership with CHAI to co-develop a series of AI playbooks, tools, and a certification program—essentially an on-ramp for hospitals to adopt AI with safety, equity, and accountability controls. The Joint Commission flagged that first deliverables were coming (which indeed arrived mid-September, just after your cutoff). (American Hospital Association)

  • Responsible Use framework (RUAIH). Even before the September publication, there was clear signaling that the Joint Commission–CHAI guidance would spell out core elements (e.g., governance, risk management, monitoring, bias assessment, transparency, and workforce training) to harmonize hospital policies and procurement. (Joint Commission Digital Assets)

Who’s in the tent (corporate emphasis). Coverage and press materials repeatedly associated CHAI with Mayo Clinic (Platform), Duke Health, and major tech companies, especially Microsoft, alongside a long tail of startups and community organizations. That mix—big platforms plus delivery systems—was central to CHAI’s claim that it could align what vendors build with what hospitals can safely buy and deploy. (Becker's Hospital Review)

How it positioned itself, functionally. Before September 2025, the public narrative cast CHAI as a consensus convener and standards catalyst rather than a regulator: a home for best practices and implementation tooling (e.g., registries, playbooks, voluntary labeling/certification) to give health systems practical levers for risk management and to give developers clearer expectations about evidence, monitoring, and post-deployment drift handling. Observers following the space would have expected CHAI to keep partnering with accrediting bodies and professional societies to mainstream those practices into purchasing, QA, and clinical governance. (CHAI)

Bottom line (pre-Sept. 1, 2025): CHAI was widely seen as the leading private-sector coalition trying to turn abstract AI “principles” into operational guardrails—with heavyweight provider and tech participation, a newly formalized nonprofit structure, and a Joint Commission partnership poised to translate its playbooks into hospital-grade practice. (CHAI)

###
###
What if I've got a whole slide imaging reference laboratory and $70M in venture capital to develop AI-enhanced diagnostics such as better prostate and breast cancer detection and grading? Do I want to be in the CHAI camp? The FDA-HHS camp? What should I be drawing from all this?
###

Here’s how to think about it as the founder/CEO (or chief medical/science officer) of a venture-funded WSI reference lab aiming to commercialize AI-enhanced prostate and breast cancer detection and grading.


1. Big-picture signal from the “volcano week”

  • FDA and HHS just told the market: “AI in diagnostics = medical devices; oversight is ours.”
    That’s the meaning of the Sept 30 FDA RFI on real-world performance and the Oct 1-2 HHS pushback on CHAI.

  • CHAI’s Joint Commission work (RUAIH) still matters for hospital procurement, governance, and trust—but it is not a regulatory pathway and HHS just made sure investors and health-system buyers understand that.

In other words:
👉 Regulatory license to sell = FDA
👉 “Social license” to be adopted in hospitals = RUAIH-style controls (e.g., bias testing, safety reporting, procurement-friendly documentation)


2. For a WSI-based diagnostic lab

Your product archetype

  • WSI + AI for primary diagnosis / grading of prostate and breast cancer.

  • If the AI’s output affects the diagnostic report or feeds into a treatment-impacting decision, FDA will treat it as SaMD (software as a medical device) or as a high-complexity test under CLIA with device-like claims.

What that means

  • The governing framework is FDA’s device rules—validation under 21 CFR §820, QMS, clinical validation, then post-market real-world performance monitoring as highlighted in the RFI.

  • You cannot outsource that to CHAI’s assurance labs. CHAI may give you some optics or benchmarking but not legal clearance.


3. How to position yourself

Need Who really provides it What you should do
Regulatory clearance / market entry FDA • Build your design-control file & QMS from day 1. • Map your validation and drift-monitoring plans to FDA’s Sept 30 RFI questions (metrics, triggers, RWE data flows). • Engage early (Q-Sub, advisory committee) and file comments to the docket to show you’re a contributor.
Adoption by hospitals / payers Hospital QA/Procurement, Joint Commission, MolDx, NCCN, etc. • Align your governance, bias-testing, safety reporting, and transparency language with RUAIH’s 7 elements. • Make it easy for pathology departments to check your box on “responsible AI” in RFPs.
Market perception / trust Investors, clinical leaders, boards • Publicly emphasize your adherence to FDA’s performance-monitoring expectations and your compliance with RUAIH-type good practices. • Avoid implying that CHAI membership is a regulatory advantage—HHS just called that out as “cartel” optics.

4. Practical take-aways for a $70 M growth plan

  1. Lead with FDA engagement.

    • Budget early for regulatory science, statistical validation, and RWE infrastructure (model drift dashboards, reference-set curation, site-to-site variation studies).

    • Use the current RFI docket to highlight the kinds of tissue-based performance metrics you already track (e.g., slide quality variation, scanner/model compatibility, pathologist-AI concordance).

  2. Use RUAIH as procurement lubricant, not as license.

    • Embed RUAIH-style bias checks, safety event reporting, and training provisions in your hospital-facing materials.

    • Participate in CHAI workgroups if it helps with signal of good citizenship and hospital readiness, but do not count on CHAI certification as market permission.

  3. Integrate payor/reimbursement thinking early.

    • Tie your clinical-validation endpoints (e.g., improved grading consistency, downstream treatment choice, reduced repeat biopsies) to health-economic value for MolDx/Medicare and commercial payers.

  4. Avoid the “cartel” trap.

    • Keep your posture as evidence-driven and pro-competition—offer your metrics and methods in the FDA docket and to standards groups (CAP, DICOM-WG26, HL7).

    • Investors and acquirers will discount companies that appear to rely on a private club for market access.


5. The short Beltway sound-bite for your board

“Our regulatory passport is FDA; our social passport is RUAIH.
We’ll comment into the FDA docket to shape the real-world-performance framework and we’ll align our hospital-facing materials with Joint Commission/CHAI governance principles.
That way we’re credible in Washington and in procurement committees.”


Bottom line:
You want to be in the FDA/HHS camp for authority and compliance, while borrowing the RUAIH playbook for hospital trust and procurement optics.
That hybrid stance lets you ride the post-Sept 30 policy wave rather than getting caught between the “cartel” critique and future FDA rule-making.

###

CHAI lobbying runs a modest ~$250K per year in 2024/2025.