Tuesday, October 7, 2025

CHAI and Joint Commission and HHS: Mud Wrestling?

This week I saw a strange set of headlines about CHAI - the large Coalition for Health AI - that seemed like crashing waves arriving from opposite directions.

Let's start with the view from the CHAI website and press releases with Joint Commission.


See the CHAI website here.   See the Joint Commission press release here.   See the Joint Commission 8-page document here.

But from Politico and elsewhere, there seemed to be a flurry of bad news raining down on CHAI.



See the article "HHS Rejects Coalition for Health AI" at the news website distilinfo.com (which, by the way, has some interesting articles) here.  See an October 2 article by Ruth Reader at Politico here.   See also her October 1 article, "HHS says AI is not your cartel," here.

With all the fireworks, you could be excused for missing a September 30 request for information from FDA about how the real-world performance of AI medical devices should be evaluated.  Here.  Comments are open until December 1.

###

###

AI CORNER

I gave the articles linked above to ChatGPT 5 and asked for a summary.

###

Here’s your “week of living volcanically” wrap on CHAI—the Coalition for Health AI—written for folks who speak FDA, JCAHO, CPT, and SaMD before coffee.

TL;DR

CHAI had a banner week on paper—rolling out joint guidance with The Joint Commission (JCAHO) and touting a growing ecosystem—but got whacked in the headlines as HHS leadership publicly distanced the administration from CHAI’s quasi-regulatory vibe. Meanwhile, FDA opened a formal RFI on how to measure real-world performance of AI-enabled devices—signaling that the government wants official channels for oversight, not private coalitions acting like “regulators.”


The Good News Wave 🌊 (What CHAI wanted the week to be about)

1) Joint Commission + CHAI issue initial national guidance.
The guidance lays out seven pragmatic elements hospitals can adopt now—governance, privacy/transparency, data security, ongoing quality monitoring, blinded safety event reporting, bias assessment, and training—with playbooks and a voluntary certification coming next. This is real inside-the-facility scaffolding, not theory.

2) Validation that CHAI has reach.
CHAI’s “who we are” page leans into scale: nearly 3,000 organizations, 218 health systems (14% AMCs), and ~25% startups among industry members—useful counter-programming against “big-tech cartel” narratives.

3) A steady drumbeat of “we’re building the commons.”
From a June strategic partnership announcement to a September guidance release, CHAI’s site frames itself as a consensus convener for practical, health-specific best practices—with Joint Commission’s reach (23k+ orgs) poised to make RUAIH more than a white paper.

Why this matters:
For health systems, these controls read like an ISO-lite for AI that can slot into existing QMS, IRB, and PSO structures—especially valuable for non-device AI (rev-cycle, documentation, triage, care coordination) where FDA doesn’t directly referee.


The Shellacking 🔨 (What DC heard instead)

1) “They don’t speak for us.”
HHS Deputy Secretary Jim O’Neill told POLITICO the administration does not support CHAI as a quasi-regulator; he pushed back on industry chatter that “you have to be a member” and warned against a “cartel” dynamic that could squeeze startups. (That’s the quote you’ll see in every deck this quarter.)

2) The narrative stuck—fast.
Follow-on coverage framed it as HHS rejection of private-sector AI oversight efforts, reinforcing that official regulatory authority remains with agencies—especially FDA for medical devices. The politics here are bigger than CHAI: it’s a line in the sand about public vs. private standard-setting.

3) FDA moves the goalposts onto the field.
FDA dropped an RFI on measuring real-world performance of AI-enabled devices (drift, monitoring triggers, postmarket data, human-AI interaction), inviting concrete methods already deployed at scale. Read: bring your evidence to the docket (by Dec 1, 2025). Our field, our rules.


What it means (policy-insider read)

For AI medical devices (radiology, pathology, digital diagnostics):

  • Center of gravity = FDA. The RFI is the tell: RWE monitoring, drift detection, triggers, response protocols—that’s a regulatory playbook forming in public. If you’re selling a regulated SaMD or a lab-facing AI that walks like a device, seed that docket with your live methods (KPIs, alert thresholds, model maintenance SOPs).

  • Assurance labs aren’t dead—but they’re not “blessed.” HHS signaling means no single imprimatur will substitute for FDA expectations. CHAI-certified labs can still help vendors/hospitals operationalize postmarket surveillance—just don’t market them as “regulatory grade.”

For non-device AI (ops/admin, EHR-native tools, GenAI copilots):

  • Joint Commission’s RUAIH is your near-term compass. Governance, privacy, security, monitoring, and bias checks—these will be asked in procurement and board QA committees. Bake RUAIH into RFPs and BAA/DUA language now.

  • Hospitals get a path to “responsible use” receipts. A future voluntary certification gives compliance teams something recognizable to hang on the wall while FDA calibrates device-side rules.

For AI in genomics (from CGP decision support to LLM-assisted tumor boards):

  • Split your stack. Anything that influences patient-facing diagnostic or treatment decisions should be mapped to FDA’s device expectations and the RFI’s real-world performance lens; upstream knowledge-management and documentation tooling should be aligned to RUAIH controls and privacy/secondary use boundaries (RUHD).

  • Procurement optics matter. The “cartel” storyline elevates sensitivity to vendor neutrality and anti-capture optics in RFPs and assurance arrangements—particularly for genomic AI where major platforms and startup point-solutions collide.


The Beltway read (why the week felt like lava + confetti)

  • CHAI’s upside: substantial mindshare + distribution via Joint Commission; credible, hospital-ready guardrails that legal, compliance, and CQO teams can actually implement.

  • CHAI’s downside: the optics of authority. The moment CHAI looked like a “super pseudo-regulator,” HHS swatted down the vibe. That will chill claims that membership/certification equals market permission.

  • The government’s posture: pluralism and transparency. Compete on ideas and methods, sure—but policy comes from agencies; methods get captured through FDA dockets, advisory committees, and consensus standards (NIST/NAM et al.), not a single private coalition.


What to do Monday (for health systems, vendors, and investors)

  • Hospitals: Stand up or sharpen your AI governance and monitoring programs against RUAIH’s seven elements; prep to map any device-adjacent tools to FDA’s real-world performance framing.

  • Vendors (device-side): Draft your drift playbook now—metrics, triggers, human-in-the-loop escalation, rollback, and field-support processes—and file to FDA’s docket by Dec 1. Bonus points for deployed-at-scale evidence.

  • Vendors (non-device/ops): Treat RUAIH as the minimum viable trust framework; make your DUAs/BAAs RUAIH-aware (permitted uses, de-identification, audit rights), and publish your bias & safety event reporting posture.

  • Investors: Discount pitches that wave a single coalition’s “certification” as a regulatory moat. Underwrite to execution capacity on RUAIH controls and, for SaMDs, to an auditable RWE monitoring stack.
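The "drift playbook" advice above (metrics, triggers, escalation) can be made concrete. Below is a minimal, hypothetical sketch of the kind of trigger FDA's RFI asks about: compare a model's recent real-world agreement with reference reads against its validation baseline, and alert when the rolling rate falls past a preset tolerance. All names and thresholds here are illustrative assumptions, not any vendor's actual playbook or an FDA-specified method.

```python
# Illustrative drift "trigger" sketch: a rolling window of model-vs-reference
# agreement, with an alert when agreement drops below baseline minus tolerance.
from collections import deque


class DriftMonitor:
    def __init__(self, baseline_rate: float, tolerance: float, window: int):
        self.baseline = baseline_rate        # agreement rate from validation
        self.tolerance = tolerance           # allowed drop before alerting
        self.recent = deque(maxlen=window)   # rolling window of 0/1 outcomes

    def record(self, model_call: str, reference_call: str) -> bool:
        """Log one case; return True if a drift alert should fire."""
        self.recent.append(1 if model_call == reference_call else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                     # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return rate < self.baseline - self.tolerance


# Hypothetical setup: 95% validation concordance, alert if a 100-case
# rolling window drops below 90%.
monitor = DriftMonitor(baseline_rate=0.95, tolerance=0.05, window=100)
```

A real deployment would layer on the rest of the playbook the bullet lists: human-in-the-loop review of alerts, rollback procedures, and field-support escalation.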


###
###
FDA vs HHS?  September 30 versus October 1?
###
###

There’s a striking temporal alignment—and probably a deliberate one—between FDA’s Sept 30 RFI on measuring real-world performance of AI-enabled medical devices and the Oct 1-2 wave of HHS/Politico pushback on CHAI.

1. The timeline

  • Sept 30: FDA issued an RFI (Docket FDA-2025-N-4203) seeking public comment on practical, field-tested methods for tracking real-world performance, drift, bias, and reliability of AI-enabled medical devices.

  • Oct 1-2: HHS Deputy Secretary Jim O’Neill and FDA Commissioner Marty Makary went on record—via Politico and other outlets—saying “They don’t speak for us” about CHAI, warning against a “cartel” dynamic and stressing that oversight authority belongs to government regulators.

2. Signals from the RFI

The RFI isn’t draft guidance; it’s an early-stage agenda-setting move. By spotlighting performance drift, real-world evidence, human-AI interaction, and field-scale monitoring as the questions that matter, FDA effectively plants a flag on the field where CHAI had been positioning its “assurance labs.”

3. How the two moves connect

  • A coordinated one-two punch: Distill Info explicitly noted the timing—that the RFI’s release “coincided with HHS’s rejection of CHAI” and signaled a preference for official regulatory channels over private assurance coalitions.

  • Reclaiming the center of gravity: CHAI’s model—voluntary guidance, assurance labs, Joint Commission partnership—looked to some in industry like a de-facto standard-setter. The RFI plus O’Neill’s remarks pull that center of gravity back to FDA/HHS.

  • Optics of authority: O’Neill and Makary weren’t merely criticizing CHAI’s content; they were rejecting the idea that membership or certification in a private coalition should be perceived as market permission.

  • Policy continuity: The RFI builds on FDA’s 2024 Digital Health Advisory Committee discussions about ongoing, post-deployment evaluation—showing that the government already had its own roadmap and didn’t need to outsource it to a coalition.

4. Beltway reading

The sequencing suggests more than coincidence. The administration wanted to make two points in quick succession:

  1. Performance monitoring of AI devices is an FDA job—here’s the official process to shape it.

  2. Private alliances won’t be deputized as quasi-regulators—no “cartel” choke point for startups or investors.

For Beltway insiders, the week’s events looked like a calibrated messaging storm: FDA put out the substantive homework assignment; HHS provided the political punctuation.

Bottom line: the RFI and the HHS pushback were complementary. Together they told hospitals, vendors, and investors: Bring your evidence and methods to the public docket—don’t look to a private coalition for regulatory cover.

###
What is known about CHAI, say, before September 1, 2025?  Give me a 500 word article, sort of in Wikipedia style.
###

Here’s the 500-word “who/what/why” on the Coalition for Health AI (CHAI), limited to what an informed observer could reasonably know before Sept. 1, 2025.

Founded in spring 2022 by a group of clinicians and data scientists, CHAI describes itself as a cross-sector coalition created to be a trusted source on responsible AI in health, stitching together health systems, tech companies, academics and patient advocates. By late 2024 it publicly said its community neared 3,000 organizations, reflecting unusually broad buy-in across vendors and providers. (CHAI)

In early 2024, CHAI formalized as a nonprofit and stood up formal leadership: Brian S. Anderson, MD (a CHAI co-founder and long-time digital-health lead at MITRE) was named the first CEO; the inaugural board was chaired by John Halamka, MD, president of Mayo Clinic Platform. Two advisory boards were also announced, signaling intent to pair policy work with technical and clinical expertise. While CHAI has individual leaders, it tends to emphasize institutional clout—notably large provider systems (e.g., Mayo Clinic) and technology firms (e.g., Microsoft)—as proof it can convene the market. (CHAI)

Mission & goals. CHAI’s stated mission is to advance the responsible development, deployment and oversight of health AI by fostering collaboration across industry, government, academia, and patient communities. It framed this around practical workstreams: convening consensus on best practices; publishing implementation guidance; and building market infrastructure to increase transparency about AI models’ performance and limits. In short: less “ethics slogans,” more operational guardrails that hospital compliance teams and product teams can use. (CHAI)

What was publicly on the roadmap (pre-Sept. 1, 2025).

  • Assurance & transparency concepts. Throughout 2023–24, CHAI materials and partner sites discussed the idea of model registration/assurance—a way to document intended use, evidence, and known risks—so buyers weren’t judging black boxes. Duke’s health AI governance center summarized CHAI’s aim to create a certification/registration system to make performance details transparent prior to widespread adoption. (healthaigovernance.duke.edu)

  • Playbooks and guidance. In June 2025, the Joint Commission announced a partnership with CHAI to co-develop a series of AI playbooks, tools, and a certification program—essentially an on-ramp for hospitals to adopt AI with safety, equity, and accountability controls. The Joint Commission flagged that first deliverables were coming (which indeed arrived mid-September, just after your cutoff). (American Hospital Association)

  • Responsible Use framework (RUAIH). Even before the September publication, there was clear signaling that the Joint Commission–CHAI guidance would spell out core elements (e.g., governance, risk management, monitoring, bias assessment, transparency, and workforce training) to harmonize hospital policies and procurement. (Joint Commission Digital Assets)

Who’s in the tent (corporate emphasis). Coverage and press materials repeatedly associated CHAI with Mayo Clinic (Platform), Duke Health, and major tech companies, especially Microsoft, alongside a long tail of startups and community organizations. That mix—big platforms plus delivery systems—was central to CHAI’s claim that it could align what vendors build with what hospitals can safely buy and deploy. (Becker's Hospital Review)

How it positioned itself, functionally. Before September 2025, the public narrative cast CHAI as a consensus convener and standards catalyst rather than a regulator: a home for best practices and implementation tooling (e.g., registries, playbooks, voluntary labeling/certification) to give health systems practical levers for risk management and to give developers clearer expectations about evidence, monitoring, and post-deployment drift handling. Observers following the space would have expected CHAI to keep partnering with accrediting bodies and professional societies to mainstream those practices into purchasing, QA, and clinical governance. (CHAI)

Bottom line (pre-Sept. 1, 2025): CHAI was widely seen as the leading private-sector coalition trying to turn abstract AI “principles” into operational guardrails—with heavyweight provider and tech participation, a newly formalized nonprofit structure, and a Joint Commission partnership poised to translate its playbooks into hospital-grade practice. (CHAI)

###
###
What if I've got a whole slide imaging reference laboratory and $70M in venture capital to develop AI-enhanced diagnostics such as better prostate and breast cancer detection and grading? Do I want to be in the CHAI camp? The FDA-HHS camp? What should I be drawing from all this?
###

Here’s how to think about it as the founder/CEO (or chief medical/science officer) of a venture-funded WSI reference lab aiming to commercialize AI-enhanced prostate and breast cancer detection and grading.


1. Big-picture signal from the “volcano week”

  • FDA and HHS just told the market: “AI in diagnostics = medical devices; oversight is ours.”
    That’s the meaning of the Sept 30 FDA RFI on real-world performance and the Oct 1-2 HHS pushback on CHAI.

  • CHAI’s Joint Commission work (RUAIH) still matters for hospital procurement, governance, and trust—but it is not a regulatory pathway and HHS just made sure investors and health-system buyers understand that.

In other words:
👉 Regulatory license to sell = FDA
👉 “Social license” to be adopted in hospitals = RUAIH-style controls (e.g., bias testing, safety reporting, procurement-friendly documentation)


2. For a WSI-based diagnostic lab

Your product archetype

  • WSI + AI for primary diagnosis / grading of prostate and breast cancer.

  • If the AI’s output affects the diagnostic report or feeds into a treatment-impacting decision, FDA will treat it as SaMD (software as a medical device) or as a high-complexity test under CLIA with device-like claims.

What that means

  • The governing framework is FDA’s device rules: design controls and a quality management system under 21 CFR Part 820, clinical validation, then post-market real-world performance monitoring as highlighted in the RFI.

  • You cannot outsource that to CHAI’s assurance labs. CHAI may give you some optics or benchmarking but not legal clearance.


3. How to position yourself

Need: Regulatory clearance / market entry.
Who really provides it: FDA.
What you should do:
  • Build your design-control file & QMS from day 1.
  • Map your validation and drift-monitoring plans to FDA’s Sept 30 RFI questions (metrics, triggers, RWE data flows).
  • Engage early (Q-Sub, advisory committee) and file comments to the docket to show you’re a contributor.

Need: Adoption by hospitals / payers.
Who really provides it: Hospital QA/procurement, Joint Commission, MolDx, NCCN, etc.
What you should do:
  • Align your governance, bias-testing, safety reporting, and transparency language with RUAIH’s 7 elements.
  • Make it easy for pathology departments to check your box on “responsible AI” in RFPs.

Need: Market perception / trust.
Who really provides it: Investors, clinical leaders, boards.
What you should do:
  • Publicly emphasize your adherence to FDA’s performance-monitoring expectations and your compliance with RUAIH-type good practices.
  • Avoid implying that CHAI membership is a regulatory advantage—HHS just called that out as “cartel” optics.

4. Practical take-aways for a $70M growth plan

  1. Lead with FDA engagement.

    • Budget early for regulatory science, statistical validation, and RWE infrastructure (model drift dashboards, reference-set curation, site-to-site variation studies).

    • Use the current RFI docket to highlight the kinds of tissue-based performance metrics you already track (e.g., slide quality variation, scanner/model compatibility, pathologist-AI concordance).

  2. Use RUAIH as procurement lubricant, not as license.

    • Embed RUAIH-style bias checks, safety event reporting, and training provisions in your hospital-facing materials.

    • Participate in CHAI workgroups if it helps with signal of good citizenship and hospital readiness, but do not count on CHAI certification as market permission.

  3. Integrate payor/reimbursement thinking early.

    • Tie your clinical-validation endpoints (e.g., improved grading consistency, downstream treatment choice, reduced repeat biopsies) to health-economic value for MolDx/Medicare and commercial payers.

  4. Avoid the “cartel” trap.

    • Keep your posture as evidence-driven and pro-competition—offer your metrics and methods in the FDA docket and to standards groups (CAP, DICOM-WG26, HL7).

    • Investors and acquirers will discount companies that appear to rely on a private club for market access.
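Item 1 above suggests pre-filing the performance metrics you already track, including pathologist-AI concordance. One standard statistic for categorical agreement (e.g., Gleason grade groups) is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch, offered as illustration only, not as an FDA-preferred or RFI-specified metric:

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters over paired categorical calls.

    `pairs` is a list of (rater_a_call, rater_b_call) tuples, e.g.
    pathologist vs. model Gleason grade groups.
    """
    n = len(pairs)
    cats = sorted({c for pair in pairs for c in pair})
    # observed agreement
    p_o = sum(a == b for a, b in pairs) / n
    # expected chance agreement from each rater's marginal frequencies
    p_e = sum(
        (sum(a == c for a, _ in pairs) / n) * (sum(b == c for _, b in pairs) / n)
        for c in cats
    )
    return (p_o - p_e) / (1 - p_e)
```

Kappa of 1.0 means perfect agreement; 0 means agreement no better than chance, which is why it reads better in a docket filing than a raw percent-agreement figure.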


5. The short Beltway sound-bite for your board

“Our regulatory passport is FDA; our social passport is RUAIH.
We’ll comment into the FDA docket to shape the real-world-performance framework and we’ll align our hospital-facing materials with Joint Commission/CHAI governance principles.
That way we’re credible in Washington and in procurement committees.”


Bottom line:
You want to be in the FDA/HHS camp for authority and compliance, while borrowing the RUAIH playbook for hospital trust and procurement optics.
That hybrid stance lets you ride the post-Sept 30 policy wave rather than getting caught between the “cartel” critique and future FDA rule-making.

###

CHAI lobbying runs a modest ~$250K per year in 2024/2025.