In October 2025, Rubrum Advising, on behalf of Harrison.ai, submitted a 34-page citizen petition to FDA asking for exemption from premarket notification (510(k) review) for certain medical AI software. Find it here.
In a December 29 Federal Register notice, FDA requested public comment on the petition; the comment period runs for 60 days. Find it here.
On LinkedIn, J. David Giese discusses the petition here.
Below, ChatGPT weighs in.
###
Sidebar: New article by David Horgan et al. on medical AI in Europe - here.
###
AI CORNER
##
SUMMARY. In late December, FDA published a little-noticed Federal Register notice that could have major implications for clinical AI: the Agency has opened public comment on a citizen petition requesting partial exemption from 510(k) requirements for certain radiology AI devices. The petition, filed by Rubrum Advising on behalf of Harrison.ai, does not seek deregulation. Instead, it proposes a conditional model in which manufacturers that have already obtained at least one 510(k) clearance could deploy additional AI tools under the same regulation without filing a new 510(k), provided robust post-market surveillance, transparency, and training obligations remain in place.
The petition argues that the current U.S. regulatory framework unintentionally restricts AI functionality compared to Europe and Asia, contributes to an innovation gap, and drives up costs—without commensurate safety benefits. FDA has not endorsed the proposal, but its decision to solicit public comment signals that the idea is being taken seriously. For companies already FDA-cleared, and for policymakers grappling with AI at scale, this is a development worth close attention.
FDA Opens the Door—Again—to Partial 510(k) Exemptions for Radiology AI
Late in December 2025, FDA quietly published a Federal Register notice that may turn out to be one of the most consequential regulatory signals for clinical AI in years: the Agency has formally accepted for comment a citizen petition requesting partial exemption from 510(k) requirements for certain radiology AI devices.
If this sounds familiar, it should. Versions of this idea have surfaced before—most notably during the first Trump administration, and later through FDA’s ill-fated Software Pre-Certification pilot. But this time, the proposal is narrower, more legally grounded, and far more explicit about guardrails.
At the center of the current action is a 34-page citizen petition filed by Rubrum Advising on behalf of Harrison.ai, an Australia-based imaging AI company with broad international deployment and a modest but growing U.S. footprint.
What FDA Actually Did (and Did Not Do)
First, a reality check. FDA has not approved anything. The December 29 notice simply acknowledges receipt of the petition and opens a public comment period through February 27, 2026.
That said, publication in the Federal Register is not automatic. It signals that FDA views the request as legally cognizable under section 510(m)(2) of the FD&C Act—the provision that allows FDA to exempt certain Class II devices from premarket notification if safety and effectiveness can be reasonably assured through other controls.
In other words, FDA is saying: this is serious enough to ask the field what it thinks.
The Core Proposal: A “Once-Cleared, Then Expand” Model
The Rubrum petition does not ask for blanket deregulation of AI. Instead, it proposes a conditional, partial exemption for a defined set of radiology CAD, CADx, and CADt device types, identified by existing regulations and product codes (including POK, MYN, QDQ, QAS, QFM).
The key idea is simple:
Once a manufacturer has obtained at least one 510(k) clearance within a given regulation, subsequent devices under the same regulation would no longer require a new 510(k)—provided robust post-market, transparency, and training obligations are met.
Importantly, nothing else goes away:
Class II status remains
Special controls remain
Quality System Regulation applies
Establishment registration and listing apply
FDA retains enforcement authority
What changes is where the regulatory burden sits—less ex ante paperwork, more post-market accountability.
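For readers who think in code, the proposed model reduces to a simple conditional rule. Here is a minimal sketch of that logic in Python. It paraphrases the petition's concept as summarized above, not any actual regulatory text; the function name, parameters, and flags are illustrative assumptions, though the in-scope product codes are the ones the petition cites.

```python
# Schematic sketch of the petition's proposed "once-cleared, then expand" logic.
# The function and its parameters are illustrative assumptions, not petition text.

IN_SCOPE_CODES = {"POK", "MYN", "QDQ", "QAS", "QFM"}  # product codes cited in the petition
# Some codes (e.g., QIH) are deliberately left outside the petition's scope.

def new_510k_required(product_code: str,
                      has_prior_clearance_in_regulation: bool,
                      postmarket_obligations_met: bool) -> bool:
    """Would a new 510(k) be required under the petition's proposed model?"""
    if product_code not in IN_SCOPE_CODES:
        return True   # outside the petition's scope: current rules apply unchanged
    if not has_prior_clearance_in_regulation:
        return True   # eligibility is earned: at least one clearance comes first
    if not postmarket_obligations_met:
        return True   # exemption is conditional on surveillance and transparency
    return False      # follow-on device under the same regulation: no new 510(k)
```

Everything in the list above (Class II status, special controls, quality system requirements, registration and listing, enforcement authority) would sit outside this check, unchanged.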
Why Harrison.ai Frames This as an “Innovation Gap”
Much of the petition is devoted to documenting what it calls a U.S. AI innovation gap, particularly in radiology.
The argument runs as follows:
U.S. radiology AI clearances are typically narrow, often covering only one or two findings per submission.
In contrast, EU, UK, and Asia-Pacific regulators routinely clear multi-finding systems, sometimes covering 100+ findings in a single review cycle.
As a result, U.S. versions of AI tools are often functionally constrained compared to their non-U.S. counterparts—even when built on the same underlying platform.
The petition backs this with comparative data (including figures originally presented to FDA’s Digital Health Advisory Committee) showing dramatically broader diagnostic coverage outside the U.S., especially for chest imaging and emergency findings.
The claim is not that FDA is careless, but that 510(k) mechanics—especially reader-study expectations and finding-by-finding submissions—are poorly matched to how modern AI systems actually work.
The Economic Subtext FDA Rarely Says Out Loud
One of the more striking sections of the petition is its cost arithmetic.
Under current practice, clearing a comprehensive radiology AI system in the U.S. could require:
Dozens of submissions
Years of sequential review cycles
Millions of dollars in study and submission costs
The petition contrasts this with single-cycle, multi-finding approvals abroad, arguing that FDA’s current approach unintentionally incentivizes:
Narrow claims
Triage-only indications
Higher prices to recover regulatory costs
This matters because, absent reimbursement, AI pricing is implicitly capped by professional interpretation fees—often single-digit dollars per exam.
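To see why that cap bites, a deliberately rough back-of-envelope sketch helps. Every number below is an illustrative placeholder, not a figure from the petition; only the structure of the arithmetic (many narrow submissions, capped per-exam pricing) comes from the argument above.

```python
# Back-of-envelope sketch of the petition's cost logic.
# ALL numbers are illustrative assumptions, not figures from the petition.

findings_target = 100            # a "comprehensive" multi-finding system
findings_per_submission = 2      # U.S. clearances often cover one or two findings
cost_per_submission = 1_000_000  # assumed study + submission cost, USD (placeholder)
margin_per_exam = 3.0            # assumed per-exam revenue, near interpretation fees (placeholder)

submissions_needed = findings_target / findings_per_submission  # 50 sequential 510(k)s
regulatory_cost = submissions_needed * cost_per_submission      # $50M under these assumptions
breakeven_exams = regulatory_cost / margin_per_exam             # ~16.7M exams just to recoup

print(f"Submissions: {submissions_needed:.0f}, "
      f"regulatory cost: ${regulatory_cost:,.0f}, "
      f"break-even exams: {breakeven_exams:,.0f}")
```

Swap in different placeholders and the direction of the conclusion holds: narrow, sequential clearances combined with capped per-exam pricing make a comprehensive U.S. deployment hard to finance, which is exactly the incentive problem the petition describes.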
Why This Time Might Be Different
David Giese’s widely shared LinkedIn post captured why this petition is drawing attention now.
Several things distinguish it from earlier efforts:
It is partial, not absolute. No one is arguing that AI should be unregulated.
Eligibility is earned, not automatic. A manufacturer must already have demonstrated FDA clearance capability.
It implicitly solves the “predetermined change control plan” problem for certain product codes by replacing it with a portfolio-level concept.
It advantages incumbents, which is politically uncomfortable—but legally realistic.
Giese also notes what is missing: some product codes (e.g., QIH) are excluded, suggesting deliberate scoping rather than overreach.
What Happens Next
Between now and late February, FDA will receive public comments—from manufacturers, radiologists, patient safety advocates, and likely critics concerned about self-regulation.
After that, FDA has options:
Deny the petition
Grant it in full
Grant it in modified form
Or do what FDA often does: use it as a policy trial balloon, informing future guidance or rulemaking rather than immediate action
Regardless of outcome, the signal is unmistakable. FDA is again wrestling with how to regulate AI platforms rather than single algorithms, and it is doing so under statutory authority—not pilot programs.
For companies already inside the 510(k) tent, this is a moment worth watching closely. For those still on the outside, the subtext of the debate is even clearer:
If a portfolio-based future is coming, being cleared at least once may matter more than ever.
##
Didn't FDA do something like this early in COVID???
##
What FDA Did During Early COVID (2020–2021)
In early COVID, FDA did experiment with a “once-cleared, then expand” concept, but not as a durable regulatory pathway. It appeared in two related places:
1. Digital Health Enforcement Discretion (COVID PHE)
FDA exercised enforcement discretion for certain low-risk digital health and behavioral health SaMD, especially tools addressing:
- anxiety, depression, insomnia
- stress, PTSD, substance use support
If a company already had one fully reviewed, compliant product, FDA signaled it would not actively enforce premarket requirements for closely related follow-on products during the Public Health Emergency.
2. Software Pre-Cert Program (Pre-Cert)
Around the same time, FDA’s Software Pre-Certification pilot explicitly tested the idea that a trusted developer, once vetted, could bring multiple software products to market with reduced premarket friction.
Behavioral health SaMD was a primary sandbox for this idea.
Why It Didn’t Stick
- These COVID policies were temporary enforcement discretion, not statutory exemptions.
- Pre-Cert never converted into binding law or regulation.
- FDA ultimately reverted to traditional pathways post-PHE.
Why the Harrison/Rubrum Petition Is Different
The current petition:
- uses explicit statutory authority (FD&C Act §510(m)(2))
- targets Class II radiology AI, not low-risk wellness software
- proposes permanent partial exemption, not emergency discretion
So yes — FDA has tried this logic before. What’s new is the legal framing and permanence.
##