On November 12, 2025, NEJM published an important new essay by Vinay Prasad MD and Martin Makary MD, respectively the director of the FDA's biologics center (CBER) and the FDA commissioner.
The essay, titled "FDA's New Plausible Mechanism Pathway," states that new therapies "challenge traditional models of drug and biologic development" at FDA and require new thinking. The article is getting a fair bit of attention.
Let's ask: are there ideas in the Prasad-Makary essay that apply to genomics and other diagnostic tests? If so, what are the points of application, and where is there a mismatch?
See coverage at Fierce Biotech, Politico Pro, Biopharma Dive, and Stat+.
See also an excellent 7-page review on LinkedIn by AgencyIQ about Makary's vision for "continuous trials." Here.
##
AI CORNER - ChatGPT 5
##
SUMMARY
The new Prasad & Makary article in NEJM proposes a strikingly different regulatory model for certain high-need therapies, one that relies on tightly anchored biological mechanisms rather than large randomized trials. Their “plausible mechanism pathway” is designed for diseases where the causal lesion is known, the natural history is grim, and the therapeutic intervention directly targets the underlying abnormality—as in the recent case of base-editing for a lethal metabolic disorder.
Instead of demanding traditional efficacy trials, the FDA could rely on strong mechanistic logic, demonstration of target engagement, and small-N or even N-of-1 experiences evaluated against well-established natural history. Over time, these individual cases could accumulate into evidence for a broader “platform approval,” with postmarketing obligations providing the long-term safety net.
When we apply this philosophy to diagnostics, some principles translate well while others do not.
Diagnostics already operate on a spectrum from tightly mechanistic (pathogenic variants in a monogenic disease) to diffuse, correlative signatures (proteomic panels, polygenic risk, and black-box AI).
A mechanistic, small-N approach could be appropriate for the former—tests that directly measure the causal lesion, are used in high-severity diseases, and where natural history makes the benefits of early or accurate detection obvious. In such cases, FDA could conceivably streamline approval by relying on analytic validity plus strong biological plausibility, supplemented by limited case-based evidence and real-world follow-up.
But the analogy breaks down for diagnostics whose relationship to biology is indirect. Unlike drugs, diagnostic tests have no intrinsic clinical effect—they improve decisions, not disease trajectories directly—and this complicates the use of natural-history reasoning. Furthermore, diagnostics can scale rapidly across large populations, magnifying even subtle calibration errors, and the now-defeated FDA LDT regulatory framework had been moving toward more formal evidence for diagnostics, not less.
Still, the Prasad–Makary logic points to a future where platform-based diagnostic systems—sequencing platforms, MRD platforms, digital pathology/AI platforms—could receive more modular approvals that lean on analytic rigor, causal anchoring, and postmarket surveillance rather than enormous premarket trials.
The key takeaway is not that diagnostics should adopt the plausible mechanism pathway wholesale, but that FDA might carve out a similar streamlined track for a narrow class of tests: those with clear biological grounding, strong analytical certainty, high-severity indications, and mechanistic connections that make the clinical value self-evident even with small premarket datasets.
##
[BQ - I suggested at an August 2025 conference that Medicare MACs might produce "fit for purpose" and faster LCDs for discrete categories, such as new diagnostic tests that are clearly better, and less expensive, than tests that Medicare already pays for. (Already-pays-for meaning that the use case and clinical utility were already established.) Tests in this niche category should not have to wait 3-4 years in the queue for an LCD. I envisioned this as a pathway for favorable LCDs; unfavorable ones would probably demand more proof-of-reasoning on the part of the MAC.]
###
###
FULL AI ANALYSIS
ChatGPT 5 writes:
The article is short but actually quite radical once you unpack it, and I think it maps surprisingly well to diagnostics—but with some important asymmetries.
1. Key points of the Prasad & Makary “Plausible Mechanism Pathway”
Prasad and Makary describe what is essentially a formalized, mechanistic shortcut to licensure for bespoke therapies where a traditional randomized trial is not feasible, using the base-editing case (“Baby K.J.”) as the paradigm.
The core elements:
1.1. Tight biological anchoring
- The pathway is reserved for diseases where the biologic cause is known and specific—e.g., genetic conditions where a particular mutation cleanly explains the phenotype.
- It explicitly excludes diseases defined by loose clinical criteria or many weak GWAS hits; the goal is to avoid mechanistic hand-waving applied to heterogeneous syndromes.
1.2. Therapy must target the proximate lesion
- The product must directly address the underlying or proximate abnormality (e.g., base-editing the causal CPS1 mutation).
- "Blunt" agents like corticosteroids don't qualify, because the relationship between drug and pathophysiology is too distant.
1.3. Natural history as the comparison
- The pathway leans heavily on a well-characterized natural history for the untreated disease—progressive neurological damage and early death in CPS1 deficiency, for example.
- Patients can, in effect, act as their own control; the question becomes: is the observed trajectory fundamentally incompatible with what we know about untreated disease?
1.4. Evidence of "target engagement"
- There should be confirmation that the target has been successfully edited or drugged, at least in a subset of patients, through animal models, biopsies, or other direct readouts, recognizing this won't always be feasible (e.g., retina, CNS).
1.5. Clinically meaningful improvement
- The FDA looks for:
  - sustained improvement in progressive diseases, or
  - prolonged remission in relapsing conditions.
- The data must be robust enough to rule out regression to the mean (a toy simulation below shows why this matters for small-N evidence).
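To see why regression to the mean is the central statistical trap here, consider a minimal simulation in Python (purely hypothetical numbers, not from the essay): patients enrolled because of an extreme baseline measurement appear to improve at follow-up even when there is no treatment effect at all.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each patient has a stable "true" disease severity,
# but any single measurement adds noise (e.g., fluctuating ammonia levels).
n_patients = 10_000
true_severity = rng.normal(loc=100, scale=10, size=n_patients)
baseline = true_severity + rng.normal(scale=10, size=n_patients)

# Enroll only patients whose baseline looks extreme (worst 5%),
# as small-N compassionate-use series often implicitly do.
enrolled = baseline > np.quantile(baseline, 0.95)

# Follow-up measurement with NO treatment effect at all.
followup = true_severity + rng.normal(scale=10, size=n_patients)

print(f"Enrolled baseline mean:  {baseline[enrolled].mean():.1f}")
print(f"Enrolled follow-up mean: {followup[enrolled].mean():.1f}")
# The follow-up mean is several points "better" purely from selection
# plus noise, which is exactly the artifact a small-N pathway must rule out.
```

The point of the sketch: in a five-patient series selected at crisis, some apparent improvement is guaranteed by the selection itself, so the observed trajectory must be incompatible with natural history by a wide margin.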
1.6. From N=1 to a platform
- A crucial conceptual move: a single-patient expanded-access IND is not just "compassionate use" but a data-generating experiment.
- After several consecutive successes in different patients, the agency can move toward approval of:
  - a platform (e.g., a base-editing system) that is then adapted to many mutations with similar biology; or
  - a series of bespoke therapies supported by platform-level evidence.
1.7. Heavy postmarketing commitments
- Approval comes with robust real-world evidence (RWE) obligations:
  - monitoring for off-target edits,
  - long-term growth and developmental outcomes,
  - unexpected safety signals.
- Feasibility matters: e.g., it's easier to study off-target editing in hematopoietic cells than in the CNS.
1.8. Scope beyond ultra-rare diseases
- The pathway prioritizes ultra-rare, high-severity conditions, but the authors explicitly foresee use in:
  - common diseases with large unmet need, or
  - diseases with many distinct mutations requiring "150 different therapies" sharing a common functional effect.
- They note it could extend to non-biologic drugs and even small molecules deployed in individualized fashion over time.
Overall, this is a small-N, mechanistically anchored, platform-centric regulatory strategy, with RWE as the backstop.
2. How far do these principles carry over to diagnostics?
Diagnostics and drugs differ in obvious ways (no direct physiological “effect,” massive differences in volume and risk). But conceptually, the plausible mechanism pathway lines up with several tensions we already see in FDA diagnostics:
- Mechanistic plausibility vs. randomized outcomes trials
- Platform approvals vs. bespoke analytes
- N-of-1 or ultra-rare use vs. high-volume population tests
Let’s map the core tenets, one by one.
3. Mapping the key tenets to diagnostic tests
3.1. “Known biologic cause” → well-specified analyte and disease biology
Analogy for diagnostics:
- A diagnostic test is strongest when it measures an analyte directly tied to a causal pathway:
  - pathogenic variant in a monogenic disorder,
  - BCR-ABL fusion in CML,
  - EGFR exon 19 deletion predicting response to EGFR inhibitors.
- In a "diagnostic plausible mechanism pathway," the FDA could reserve fast-tracked or small-N pathways for tests that:
  - interrogate a clearly causal target (e.g., a canonical loss-of-function variant in a known disease gene), and
  - are used in indications with high unmet need and a clear natural history.
- By contrast, tests that are weakly anchored to biology—complex proteomic signatures, polygenic risk scores, black-box AI scores—would not be natural candidates for this kind of streamlined, mechanism-based approach. They're more like the corticosteroids in Prasad & Makary's essay: partially effective, perhaps, but mechanistically diffuse.
3.2. “Targets the proximate lesion” → measures the proximate biology, not just correlation
In therapeutics, the pathway favors interventions that act right at the molecular lesion. For diagnostics, the parallel is:
- Tests that directly read out the causal lesion:
  - identifying a specific pathogenic variant,
  - measuring absence of a critical enzyme,
  - detecting a resistance gene directly responsible for antibiotic failure.
- Versus tests that are indirect correlates:
  - nonspecific inflammatory markers,
  - vague imaging features,
  - AI scores that correlate with risk but don't map onto a specific pathophysiologic step.
Mechanistic logic would argue: if your assay directly measures the thing that matters, and we trust the biology, the evidentiary bar for clinical utility might be lowered, especially in rare, severe conditions.
3.3. Natural history as "control arm" for diagnostics
Drugs: use natural history to judge whether an individual's post-therapy course is radically better than expected.
Diagnostics: the analogues are:
- Using natural history to show that without the test, missed or late diagnoses have grim outcomes.
- Then demonstrating that the test identifies patients earlier and leads to earlier, guideline-concordant interventions.
In ultra-rare settings, where randomized diagnostic utility trials are impossible, FDA could envision within-patient or historical-control reasoning. For example, in a urea cycle disorder where delays in diagnosis are typically catastrophic, a highly specific genetic test used early might be accepted based on:
- analytical validity,
- mechanistic plausibility (correct gene, correct mutation), and
- a few compelling real-world cases showing dramatic change in trajectory.
But here, a key difference emerges:
For drugs, the effect is direct (therapy → biology → outcome). For diagnostics, the pathway is indirect (test → physician decision → therapy → outcome). That makes natural-history-based reasoning more fragile, because clinical behavior is a mediating variable.
3.4. “Confirmation that the target was successfully drugged/edited” → analytic and biological verification
For diagnostics, we already have an extensive tradition of:
- Analytical validation (accuracy, precision, LoD, cross-reactivity).
- And sometimes biological verification:
  - variant confirmation with orthogonal methods,
  - concordance with known phenotypes,
  - segregation in families for genetic tests.
A diagnostic version of the plausible mechanism pathway would likely require more stringent and transparent linking between:
- Analytical signal (e.g., the NGS read or mass-spec peak),
- Biologic meaning (e.g., this variant abolishes enzyme function), and
- Clinical consequence (e.g., leads to phenotype X, which needs intervention Y).
In other words, the FDA could be more willing to approve rapid, small-N, strongly mechanistic diagnostic platforms if the analytic and biologic chain is tight—even in the absence of large, classical clinical utility trials.
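To make that "tight chain" concrete, here is a minimal sketch of how such a claim might be encoded so that each link is an explicit, auditable assertion rather than an implicit leap. This is a thought experiment, not an FDA or ClinVar schema; every class and field name below is invented.

```python
from dataclasses import dataclass

# Hypothetical structure for one mechanism-anchored test claim; each link
# in the analytic -> biologic -> clinical chain carries its own evidence.

@dataclass
class AnalyticSignal:
    assay: str                    # e.g., "NGS panel v2"
    target: str                   # e.g., "CPS1 c.1234C>T" (invented variant)
    orthogonal_confirmation: str  # e.g., "Sanger sequencing"

@dataclass
class BiologicMeaning:
    functional_effect: str        # e.g., "abolishes enzyme activity"
    evidence: list[str]           # functional studies, family segregation

@dataclass
class ClinicalConsequence:
    phenotype: str                # e.g., "neonatal hyperammonemia"
    management_action: str        # the decision the result is tied to

@dataclass
class MechanismAnchoredClaim:
    signal: AnalyticSignal
    meaning: BiologicMeaning
    consequence: ClinicalConsequence
```

The design point is simply that a reviewer could interrogate each link, and its supporting evidence, independently, rather than accepting the chain as a whole on plausibility alone.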
3.5. “Improvement in clinical course” vs. “improvement in decision quality”
This is where the analogy strains.
- Drugs: you can see a direct improvement in seizure frequency, ammonia levels, tumor shrinkage, survival.
- Diagnostics: the test doesn't improve anything by itself; it improves decisions (earlier transplant listing, optimized drug choice, fewer adverse reactions).
To port the plausible mechanism idea into diagnostics, you'd have to accept that:
- Surrogate outcomes for diagnostics are inherently more remote:
  - time to correct diagnosis,
  - time to initiation of effective therapy,
  - reduction in the diagnostic odyssey.
- FDA would have to be comfortable approving tests based mainly on:
  - analytical performance,
  - strong mechanistic reasoning, and
  - plausible, but not fully quantified, improvements in decision quality—supported by postmarket RWE.
That's not wholly alien to how FDA already treats some rare-disease genetic tests, but it would be a more explicit, principled extension of that logic.
3.6. From "single-patient INDs" to "N-of-1 or micro-cohort diagnostics"
The drug pathway imagines N-of-1 base-editing experiments aggregating into platform-level approval. For diagnostics, analogues exist:
- NGS platforms: the sequencer + pipeline is the "platform"; each new variant interpretation is like a bespoke "edit."
- Tumor NGS / MRD: a similar analytic pipeline, many individualized mutation signatures.
- AI platforms: a digital pathology or radiology algorithm adapted or fine-tuned to specific diseases/inputs.
A "diagnostic plausible mechanism pathway" might look like:
- Approve the platform (sequencer + chemistry + bioinformatics) in a tightly defined, deeply studied indication.
- Allow iterative expansion to additional genes, additional indications, and additional AI decision tasks, using a mixture of:
  - analytic bridging,
  - mechanistic plausibility, and
  - focused, low-N clinical studies + RWE.
Conceptually, this is already how some CGP platforms and AI tools are being handled, but the Prasad/Makary logic would make the platform-first, small-N, RWE-backstopped approach explicit.
4. Where the analogy doesn’t carry cleanly
4.1. Diffuse biomarkers vs. sharp genetic lesions
The plausible mechanism framework works best when biology is simple and tight: mutation → loss of function → disease.
But many diagnostic tests (especially in oncology, cardiology, psychiatry, and population risk) are:
- multivariate signatures,
- with modest effect sizes,
- influenced by many unmeasured variables,
- sitting on shifting clinical practice patterns.
For these, a "plausible mechanism" argument is inherently more tenuous. In fact, one could argue that over-reliance on mechanistic plausibility is exactly what burned us with many past biomarkers, and this is especially true for diagnostics: lots of pretty ROC curves, little real-world impact.
So any "plausible mechanism pathway" for diagnostics should probably be limited to:
- high-severity diseases,
- tight causal anchors, and
- highly specific analytes or readouts.
4.2. Volume and systemic risk
- A base-editing therapy given to five children, with heavy follow-up, is a small population experiment.
- A diagnostic test deployed across tens of thousands of patients, even with low per-patient risk, can cause large-system misclassification harms if calibration is off (the arithmetic sketch after this list makes that concrete).
So regulators will rightly be more cautious about:
- Using N-of-1-style reasoning to underpin a test that will be quickly scaled to broad use.
- Allowing self-referential "patient as own control" logic to justify tests where subtle biases in sample handling, population drift, and reader behavior can meaningfully shift performance.
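A back-of-envelope calculation, with invented numbers, shows how a small specificity shortfall scales across a screened population: at low prevalence, even a one-point slip roughly doubles the false positives.

```python
def screening_outcomes(n: int, prevalence: float,
                       sensitivity: float, specificity: float):
    """Expected confusion-matrix counts for a test applied to n people."""
    diseased = n * prevalence
    healthy = n - diseased
    tp = sensitivity * diseased          # true positives
    fp = (1 - specificity) * healthy     # false positives
    ppv = tp / (tp + fp)                 # positive predictive value
    return tp, fp, ppv

# Hypothetical: 100,000 people screened, 0.5% prevalence, 99% sensitivity.
for spec in (0.99, 0.98):
    tp, fp, ppv = screening_outcomes(100_000, 0.005, 0.99, spec)
    print(f"specificity={spec:.2f}: ~{tp:.0f} true positives, "
          f"~{fp:.0f} false positives, PPV={ppv:.1%}")
# 0.99 -> ~995 false positives (PPV ~33%); 0.98 -> ~1,990 (PPV ~20%).
# A one-point calibration slip doubles the false positives: a scale of
# harm that a five-patient therapeutic experiment never has to face.
```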
4.3. The CLIA / LDT overlay
In practice, a lot of US diagnostics already operate in what you could call a de facto plausible mechanism world:
- Many LDTs are adopted based on:
  - analytic validity,
  - biologic plausibility, and
  - retrospective or observational data.
If FDA were again to move toward stricter LDT regulation, you might expect pullback from that quasi-mechanistic model toward more formal clinical-utility evidence. That goes in the opposite direction of the Prasad-Makary deregulation argument.
So for diagnostics, the question isn't just "Can we create a plausible mechanism pathway?" but how you reconcile that with:
- the fate of the (now-vacated) LDT rule,
- the desire to remove "shadow" test development, and
- payer insistence (MolDx, etc.) on prospective utility data.
5. What a “Plausible Mechanism Diagnostic Pathway” might look like
If the FDA wanted to consciously import this philosophy into diagnostics, a reasonable, bounded version might look like:
Eligibility criteria
- Disease:
  - high-severity, high-mortality, or devastating morbidity;
  - well-characterized natural history;
  - large unmet need, especially where diagnostic delay is lethal or disabling.
- Biology / test:
  - clearly causal and proximate pathophysiology (e.g., monogenic variants with established function, canonical resistance genes);
  - very high analytical specificity and sensitivity;
  - standardized, GMP-ready platform.
- Intended use:
  - test result is tied to a clearly defined, evidence-based management action (e.g., immediate transplant listing, use of a specific targeted drug, prophylactic intervention).
Evidence package (pre-approval)
- Analytical validation at the usual depth (LoD, reproducibility, interference, etc.); a toy LoD estimation appears after this list.
- Mechanistic argument connecting analyte → pathway → clinical phenotype.
- Natural-history-anchored reasoning: small series (or even N-of-very-few) where the test:
  - makes an earlier or otherwise unattainable diagnosis, and
  - leads to a clinically plausible change in trajectory relative to what natural history predicts.
- Synthetic or historical controls in lieu of full randomized utility trials, when feasibility is genuinely lacking.
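For the analytical-validation bullet above, here is a minimal sketch of one conventional LoD approach: a probit-style fit of detection rate against a dilution series. The concentrations and hit rates are invented for illustration.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Hypothetical dilution-series data: analyte concentration vs. the
# fraction of replicates detected at each level (e.g., 20 replicates each).
conc = np.array([1.0, 2.0, 4.0, 8.0, 16.0])         # e.g., copies/mL
hit_rate = np.array([0.10, 0.35, 0.75, 0.95, 1.0])

# Probit model: P(detect) = Phi(a + b * log10(conc))
def probit(logc, a, b):
    return norm.cdf(a + b * logc)

(a, b), _ = curve_fit(probit, np.log10(conc), hit_rate, p0=(0.0, 1.0))

# LoD is conventionally the concentration giving 95% detection probability.
lod = 10 ** ((norm.ppf(0.95) - a) / b)
print(f"Estimated LoD (95% detection): {lod:.1f} copies/mL")
```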
Postmarketing
- RWE-based obligations to:
  - track real-world sensitivity/specificity and calibration,
  - link test-driven management changes to patient outcomes, and
  - adjust intended use, warnings, or decision thresholds as needed.
In other words: you’d accept more mechanistic inference and less classical clinical-outcomes evidence upfront, but you’d demand much more ongoing surveillance and adaptive correction.
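What might "ongoing surveillance and adaptive correction" mean operationally? One minimal sketch, with hypothetical claim values and thresholds: recompute real-world sensitivity and specificity each reporting window and flag drift beyond a pre-agreed margin.

```python
import numpy as np

def window_performance(test_pos: np.ndarray, truth_pos: np.ndarray):
    """Sensitivity/specificity for one postmarket reporting window
    (boolean arrays: test result and adjudicated disease status)."""
    tp = np.sum(test_pos & truth_pos)
    fn = np.sum(~test_pos & truth_pos)
    tn = np.sum(~test_pos & ~truth_pos)
    fp = np.sum(test_pos & ~truth_pos)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical premarket claims and an alert margin agreed at approval.
CLAIMED_SENS, CLAIMED_SPEC, MARGIN = 0.97, 0.99, 0.02

def check_window(test_pos: np.ndarray, truth_pos: np.ndarray):
    sens, spec = window_performance(test_pos, truth_pos)
    drift = (sens < CLAIMED_SENS - MARGIN) or (spec < CLAIMED_SPEC - MARGIN)
    # drift=True would trigger review, relabeling, or threshold adjustment.
    return sens, spec, drift
```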
6. Practical implications for diagnostic strategy (your world)
For companies and labs working in genomics, digital pathology, AI-enhanced diagnostics, etc., the takeaways are:
- Anchor your claims as close to causal biology as possible. That's where a plausible mechanism–style argument is operationally strongest and most likely to resonate at FDA.
- Design platforms, not one-offs. The regulatory logic is much friendlier if you can say: "This is the base-editing of diagnostics—one robust platform, many bespoke readouts."
- Pre-bake natural-history and RWE thinking. Even if trials are small, plan from day one:
  - which registries you'll build,
  - which endpoints you'll track, and
  - how you'll document that test-driven management really moves patients off their natural course.
- Be honest about where plausible mechanism doesn't save you. For diffuse, modest-effect, high-volume risk scores and signatures, you still need the boring stuff:
  - prospective studies,
  - adjudicated endpoints, and
  - comparative effectiveness.
In short, Prasad & Makary are articulating, for drugs, a framework that diagnostics have partly lived in informally (via LDTs and mechanistic reasoning) but without the same explicit, principled structure. A future-facing FDA diagnostics framework could borrow the best of that logic—for tightly anchored, high-severity, platform-based tests with strong mechanistic plausibility—while still insisting on more formal evidence for the large universe of softer, multivariate biomarkers where “plausible mechanism” has historically been more mirage than map.