Tuesday, February 17, 2026

Chat GPT Explains What MolDx Is (in the Agentic AI Age)

For several years, I've occasionally asked Chat GPT "what MolDx is."  Here's another version, showing the growing complexity of how AI replies, updated to February 2026.

Chat GPT used extensive self-guided research, which I put on a SIDEBAR.

####

"Explain in detail what MolDx is, its history, what it does, how its viewed, and any trends. I am a senior NYT health journalist."

####

What MolDx is (in plain terms)

MolDx (“Molecular Diagnostic Services Program”) is a coverage, coding, and payment-control program for molecular diagnostics that is run by Palmetto GBA, a Medicare Administrative Contractor (MAC). It functions as an overlay on standard Medicare claims processing: instead of Medicare paying a genetic/molecular test claim based only on a CPT/HCPCS code, MolDx requires test-level identification and often a pre-coverage evidentiary review before claims are paid. (Medicare)

At its core, MolDx tries to solve a classic molecular diagnostics problem: CPT codes often describe a method or analyte class, not a specific commercial test, while different labs’ LDTs can be meaningfully different (workflow, algorithm, intended use, validation, etc.). MolDx’s solution is to require a DEX Z-Code identifier that points to a specific test listing in a registry, enabling test-by-test adjudication rather than code-by-code adjudication.

Noridian Posts All Materials for September 2025 Advisory Panel on Esophageal Cancer Biomarkers

 MolDx has a non-coverage LCD for esophageal biomarkers to aid in detection of Barrett's esophagus and its progression.  On September 4, 2025, they held a multi-jurisdictional advisory panel on the topic (aka Contractor Advisory Committee, CAC).

At this point, all materials are posted: the agenda (trivial), the panel questions (#17), and the bibliography (#23), as they were available for the meeting.  And they've added a YouTube archive video of the meeting and an official 29-page transcript.

Find the materials all here:

https://med.noridianmedicare.com/web/jeb/policies/lcd/cac#upcomingmeetings

###

AI Corner

###

I asked Chat GPT to review the meeting transcript.  Note that back in September, before the full materials were available online, I also fed some of the materials into Chat GPT for its "insights."  For example, then, I asked it to compare the 2021 CAC questions with the newest 2025 CAC questions.

###

I'm printing the Chat GPT reading as it came out, but I think it's a little too conservative.  

Therefore, further below, I'm including a prompt asking Chat GPT to look at fact patterns and interpretations that are ultimately more favorable.   This created a second AI dialog.

The two dialogs would be of most use to a reader who knows the field well and can use the two Chat GPT outputs to judge, for himself/herself, which AI viewpoint is more convincing.

Monday, February 16, 2026

Can Chat GPT Find Medicaid Genetic Test Prices? (81420 trisomy, 81519 oncotype)

It's easy to look up Medicare prices for genomic tests - there is one national Clinical Laboratory Fee Schedule (CLFS).

It's hard to look up Medicaid prices - there are 50 states, plus, Medicaid transactions are often walled off inside HMO plans.

However, I asked Chat GPT to look up Medicaid prices for 81420 - maternal blood trisomy testing - and it did a pretty impressive job, working on its own for up to a couple of minutes (so-called "agentic AI").

In the first pass, I just asked it, "Please use agentic research to learn Medicaid prices for CPT code 81420."  It came back with several reasonable paragraphs.

In the second pass, I refined the request to, "Please research the eight largest states, you already have California. Please determine what the largest state states are, and then search their files as you mentioned above."

This time it thought for 1 minute 43 seconds (and, by the way, on request [click] it shows you every step of its thinking, which can be VERY long).

It came back with this footnoted table.  Pretty good for a few seconds of work on MY part.

click to enlarge


Here's the same for Oncotype Dx Breast 81519
click to enlarge


##
In a separate experiment, I gave Chat GPT all 94,000 PAMA reported payments for BRCA 81211 in 2017.  The CMS fee schedule before PAMA was $2195 and after PAMA it rose to $2395.  

However, there was a wide dispersion of pricing from the lowest decile ($1104) to the highest decile ($3191), basically from half the CMS price to 50% higher than the CMS price.
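For readers who want to replicate this kind of dispersion analysis, here is a minimal sketch of computing a median and decile bounds from a column of reported payments. The payment figures below are synthetic illustrations, not the actual 2017 PAMA data for 81211.

```python
# Sketch: median and decile bounds for one CPT code's reported payments.
# Synthetic data for illustration only - not actual PAMA submissions.
import numpy as np

def pama_summary(payments):
    """Return (median, 10th percentile, 90th percentile) of reported payments."""
    p = np.asarray(payments, dtype=float)
    return (np.median(p), np.percentile(p, 10), np.percentile(p, 90))

# Toy example: widely dispersed payments around a ~$2,200 center.
rng = np.random.default_rng(0)
payments = rng.normal(2200, 600, size=1000).clip(500, 4000)
median, p10, p90 = pama_summary(payments)
print(f"median ${median:,.0f}, decile range ${p10:,.0f}-${p90:,.0f}")
```

With 94,000 real payment lines, the same three numbers would reproduce the kind of spread described above.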




##

This may work as an open link to the dialog:

https://chatgpt.com/share/6993682b-7cec-8005-9909-a79f531f935a

This is my private, protected author link, for me to get back to it if I need to:

https://chatgpt.com/c/69914d2a-c3ac-8331-a102-b5dadbe72618


HHS Releases Massive Medicaid Database: Fighting Fraud, Other Uses

CMS has long released provider-level, CPT-coded utilization or payment data.  (See exactly which CPT codes your doctor was paid for in 2023...)    Medicaid has usually been a beast unto itself, in 50 silos in the 50 states.  But now, CMS releases a multi-gigabyte trove of Medicaid data.  One banner application is to let the public find fraud.

However, there may be a lot of commercial uses for the data.  You might be able to find who were the most active prostatectomy surgeons in Medicaid, or the most active at giving a particular IV cancer drug.

See FOOTNOTE for Medicaid and agentic AI.

See coverage at Axios here.  See coverage at Newsday here.  See some download sites at HHS here, here.  The granularity at the latter is listed as Medicaid, January 2018-December 2024, Provider NPI x HCPCS/CPT code x Month.  The format is 10 gigabyte files, so special data management skills are needed to use it.   (Medicare doctor-CPT-HCPCS data is offered in a cloud database here.)

Two Thoughts (1) Fraud Is Often Blatant.   (2) Commercial uses?

First, going back to when I was a Medicare medical director (2004-2008), fraud is often blatant.   (At the time, billing for ten $200-$500 medical diagnostic tests on the same day was an instant fraud flag and a major form of fraud.)   In 2019-2025, there have been billions of dollars of genetic test fraud, but it was all blatantly obvious, like billing $8,000 for whole-gene sequencing of six long, rare genes in a 90-year-old nursing home patient, and billing that same panel on 15,000 Medicare patients.   Recently, a lab was caught by a MAC billing insane multi-thousand-dollar codes like 81408 in one year, and the SAME insanely behaving lab was allowed to bill ten million dollars the NEXT year on a different, essentially ADJACENT, set of gene codes.  Here.  The point is, it didn't require super AI detection to find this stuff; a ten-year-old could find it in 2 minutes.  Another example was massively unbelievable billing of urinary catheters in incredible volumes.  Here, here.  While CMS did eventually pick that up (link previous), not mentioned is that they got away with zillions of dollars, enough to upend ACO economics, before being picked up.
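To make the "a ten-year-old could find it" point concrete, here is a minimal sketch of the kind of trivial screen that catches same-day test stacking. The claims records and field names (npi, patient, date) are hypothetical; real screens run on MAC claims data.

```python
# Sketch: flagging blatant same-day test stacking in hypothetical claims.
from collections import Counter

def flag_same_day_stacking(claims, max_tests_per_day=10):
    """Flag (provider, patient, date) combos with suspiciously many tests billed."""
    counts = Counter((c["npi"], c["patient"], c["date"]) for c in claims)
    return [key for key, n in counts.items() if n >= max_tests_per_day]

claims = [{"npi": "123", "patient": "A", "date": "2024-01-05"}] * 12
claims += [{"npi": "456", "patient": "B", "date": "2024-01-05"}] * 2
print(flag_same_day_stacking(claims))  # only the NPI-123 combo is flagged
```

Real program-integrity screens are more elaborate, but the blatant cases described above fall out of counting logic this simple.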

Second, I don't have the data skills to work with a 10 GB database, but this could have commercial uses as well, such as tracking top providers of HCPCS drug codes or specialty surgical procedures for free in Medicaid data.
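For readers who do want to try, a 10 GB file doesn't actually require special infrastructure - pandas can stream it in chunks. Here is a hedged sketch of tallying top providers for one HCPCS code; the column names (npi, hcpcs, paid) are assumptions, so check the actual HHS file layout before running this against the real download.

```python
# Sketch: tallying top providers per HCPCS code from a multi-gigabyte CSV
# without loading it all into memory, via pandas chunked reads.
# Column names (npi, hcpcs, paid) are hypothetical - verify against the file.
import io
import pandas as pd

def top_providers(csv_source, hcpcs_code, chunksize=1_000_000, n=10):
    totals = {}
    for chunk in pd.read_csv(csv_source, chunksize=chunksize):
        sub = chunk[chunk["hcpcs"] == hcpcs_code]
        for npi, paid in sub.groupby("npi")["paid"].sum().items():
            totals[npi] = totals.get(npi, 0) + paid
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]

# Tiny in-memory demo standing in for the real 10 GB file.
demo = io.StringIO("npi,hcpcs,paid\n111,55866,9000\n222,55866,4000\n111,55866,1000\n")
print(top_providers(demo, 55866, chunksize=2))
```

Swapping the demo buffer for the real file path (and the real column names) gives, say, the most active prostatectomy billers under code 55866.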


###
Footnote
Recently I was curious about Medicaid prices for 81420 (trisomy, maternal blood).   I asked Chat GPT to do agentic internet research, and for the 10 largest states, look up Medicaid prices for 81420.   

Of course it figured out the 10 largest states right away.  It percolated away for a minute or two and came back with referenced Medicaid prices for 81420 in about 6 states.   So that required about 6 seconds of work for me and saved an hour or two of my own hunting and pecking (even with the result a necessarily incomplete answer, remembering that sometimes the data just ain't online).




https://chatgpt.com/share/6993682b-7cec-8005-9909-a79f531f935a

https://chatgpt.com/c/69914d2a-c3ac-8331-a102-b5dadbe72618

First pass

Please use agentic research to learn Medicaid prices for cpt code 81420.

(answer)

Second pass

Please research the eight largest states, you already have California. Please determine what the largest state states are, and then search their files as you mentioned above.

(answer included the table above)

Breaking the Biomarker Bottlenecks: Why CDx and Digital Pathology Just Got Much More Strategic

Below, Chat GPT 5.2 summarizes a pair of articles in The Pathologist.  The experts are Jorge Reis-Filho (AstraZeneca), Rob Monroe (Danaher-Leica) and Karan Arora (Leica).   For policy background, see this blog back on Feb 10 on AMA CPT struggles to code whole slide imaging AI diagnostics (here). As advanced CDx shift more and more to digital pathology, it gives the lab one more way to collide dysfunctionally with the CMS 14 Day Rule.   (See an interview with Reis-Filho alone, a month earlier, here.)


CDx and Digital Pathology:
Infrastructure, Not Ornament

Companion diagnostics (CDx) and digital pathology are no longer niche technical topics. They are moving toward the center of oncology strategy. That shift matters even more given recent AMA CPT developments around the codability of whole slide imaging (WSI) and AI-based pathology services. As I discussed on February 10, the AMA CPT Editorial Panel appears prepared to route WSI/AI services into Category III, after years of ambiguity and effective moratoria on codability [DIHP Blog 2-10].

That coding posture does not guarantee payment. But it removes a structural barrier. And when you read recent industry discussions from AstraZeneca, Danaher, and Leica, it becomes clear that CDx and digital pathology are being repositioned as therapy infrastructure—not optional lab enhancements.

The timing is not coincidental.


CDx Is Becoming Computational Biology

Historically, companion diagnostics were binary gatekeepers. One biomarker, one cutoff, one therapy.

What is now being described is very different. CDx is evolving into a computational modeling layer that attempts to quantify tumor biology continuously, not categorically. AI-based quantitative scoring systems measure target expression, membrane localization, spatial relationships, and biologically relevant correlations. The example of AI-quantified Trop-2 expression for ADC selection is emblematic: the biomarker is no longer simply “present or absent,” but a mechanistically informed, quantitatively normalized signal intended to predict drug internalization and response.

That is a shift in kind, not degree.

Instead of:

  • “Does the patient meet threshold X?”

The emerging model asks:

  • “What is the biological state of this tumor, and how does that map to therapeutic response?”

This transforms CDx from a lab service into a decision model.

Saturday, February 14, 2026

Get It Here: Another Detailed RFK Jr Interview with Podcast Host

On February 1, I published a detailed review of a long interview between RFK Jr and podcast host Katie Miller (here).  Regular news covered the interview only for one or two soundbites.

On February 12, RFK Jr appears in a 1 hr 15 min interview with Theo Von, who interviews politicians, athletes, and celebrities.  I'm again including a detailed Chat GPT 5.2 review of the interview.

I provide below - Chat GPT 5.2's "one pager" of "policy takeaways."   Followed by a slower, bullet-by-bullet recap of the interview.  

Then, Chat GPT assesses similarities and differences in the two interviews.  

Then, Chat GPT assesses the building of, or reactions by, stakeholder coalitions who hear these communications.


RFK Jr. on Theo Von: 

Policy Takeaways for a Health Policy Audience

Medical Journals, Evidence Credibility, and Open Science

RFK argues leading journals function as compromised channels influenced by pharma advertising and reprint distribution, undermining trust in published findings. His remedy is creation/endorsement of open-source publishing models where peer reviews are public and, when feasible, data are accessible, so outsiders can evaluate study quality and replicability. The operational thesis is that transparency and public critique better approximate “science as debate” than closed peer review plus paywalled data.

NIH, Research Integrity, and Replication

RFK argues NIH should be reoriented toward root causes of disease—“why we are so sick”—and away from what he describes as siloed, non-reproducible research culture. He claims too little NIH funding supports replication, creating incentives to publish positive results and “cheat” because null findings harm careers and are less publishable. He cites the amyloid/Alzheimer’s research arc as a cautionary tale of path dependency. He asserts a target on the order of ~20% of research spend for replication/verification to restore credibility and reduce dead-end investment.

FDA and Food Policy

RFK frames HHS’s central mission as reducing chronic disease by shifting the U.S. diet away from ultra-processed foods and reducing exposure to certain food additives. He points to Tennessee as a model for SNAP waivers that restrict purchase of soda/candy and adds thresholds tied to sugar and corn-syrup content, portraying this as a scalable lever for diet change in low-income populations. On additives, he says FDA is driving an industry transition away from nine synthetic food dyes, claiming the “worst four” are already banned and the remaining five are targeted for elimination by year-end, while FDA “rapidly approved” four vegetable-based dye alternatives to support reformulation. He emphasizes voluntary cooperation by major manufacturers as a key implementation strategy.

Fluoride Policy

RFK reiterates an anti-fluoridation stance, arguing water fluoridation reduces IQ in a dose-related fashion and has systemic harms (bone/thyroid), while any dental benefit is mainly topical and can be achieved through toothpaste/mouthwash. He endorses state actions like Tennessee’s prohibition on adding fluoride compounds, framing this as a public-health modernization based on contemporary exposure context (ubiquitous topical fluoride availability).

Agriculture, Herbicides, Pesticides, and “Off-Ramps”

Drawing on his Roundup litigation history, RFK uses glyphosate as an example of alleged regulatory capture and science suppression. His policy position is not immediate prohibition but creating an “off-ramp” to avoid destabilizing farm economics; he claims an outright ban would disrupt a large fraction of U.S. farms. He promotes emerging alternatives (e.g., laser weeding/robotic approaches) as the kind of technology HHS should help validate and accelerate—implicitly positioning HHS as part of a cross-agency transition strategy, not solely a consumer protection regulator.

CMS Program Integrity and Fraud Detection

RFK claims very large annual losses in Medicare/Medicaid from fraud, emphasizing growth of benefit categories beyond physician services (transportation, home care, administrative intermediaries) that create fraud surfaces. He describes shifting from “pay-and-chase” (pay claims, then attempt clawback) to pre-payment prevention, enabled by AI that flags suspect billing, prior fraud history, and provider anomalies. He distinguishes Medicare as more directly controllable federally, while Medicaid requires state “rail” cooperation; he suggests uneven state cooperation will affect implementation speed.

Health IT: FOIA, Records Access, and Interoperability

RFK presents a broad “transparency + AI” agenda. First, he claims HHS is using AI to accelerate FOIA responses to near-real-time, constrained mainly by reliable privacy redaction and statutory exemptions. Second, he says HHS convened major tech stakeholders to reduce information blocking so patients can access their medical records easily—ideally in a mobile-friendly format—supporting continuity of care and consumer control.

Payment Frictions: Prior Authorization

He says HHS has secured commitment from a large share of insurers (citing ~80%) to streamline or eliminate “unnecessary” prior authorization and provide point-of-care decisions—meaning patients and clinicians know coverage status before leaving the visit. He frames this as reducing delay, administrative burden, and patient uncertainty even if medical necessity standards remain.

Price Transparency Enforcement

RFK emphasizes hospital price transparency as a core consumer-empowerment policy. He claims prior transparency requirements were weakly enforced and says new regulations will impose substantial penalties, driving compliance by year-end. He describes a consumer-usable interface (procedure-level price comparison across hospitals) to create real market dynamics and constrain price dispersion.

Addiction Policy

He briefly describes addiction reform as an accountability problem—fragmented programs with misaligned incentives. He supports multi-agency pilots integrating outreach, treatment, housing, employment, and longitudinal follow-up under a single accountable trajectory manager. (The interview also contains frequent recovery/12-step references not summarized here.)


 

Friday, February 13, 2026

CMS Updates PAMA Reporting Page - Timelines are Short

 PAMA lab price reporting, for CMS median rate-setting, has been frozen since 2019.   Based on section 6226 of the February 3 federal budget, that is changing fast.

Labs will report 1H2025 commercial payment levels in May-June-July 2026.   CMS will publish consolidated median prices in the fall, which will be active on January 1, 2027 for three years.  

When a new PAMA price is lower than the current 2026 price, the 2026 price can fall by no more than 15% per year (until it matches the PAMA price sooner or later).
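The 15% annual phase-in cap can be sketched in a few lines. The prices below are illustrative, not official CMS rates.

```python
# Sketch of the PAMA phase-in cap: when the new PAMA median is lower than
# the current fee, the fee drops by at most 15% per year until it reaches
# the median. Illustrative numbers only - not official CMS rates.
def phase_in(current_price, pama_median, cap=0.15, years=4):
    prices = []
    p = current_price
    for _ in range(years):
        p = max(pama_median, p * (1 - cap))  # floor at the PAMA median
        prices.append(round(p, 2))
    return prices

# A $1,000 fee with a $500 PAMA median steps down 15% per year.
print(phase_in(1000.00, 500.00))
# A $1,000 fee with a $900 PAMA median reaches the median in year one.
print(phase_in(1000.00, 900.00))
```

So even a steep PAMA cut takes several January 1 cycles to fully land.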

Track CMS instructions here:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting


Nerd Note

Some insights gleaned from the last cycle, 2017 data, short blog here.

Thursday, February 12, 2026

Podcast Scan: Owkin's Podcast with Jorge Reis-Filho

Owkin is a French company on the cutting edge of genomic/cellular/biomarker/AI discovery. They have 63 videos on YouTube, many of them videopodcasts. 

In this one, February 2026, CEO Thomas Clozel talks with Jorge Reis-Filho, who is Chief in the enterprise AI unit of AstraZeneca.

SEE ALSO an article about a panel on the future of digital pathology  including Jorge Reis-Filho here, leading to Pt 1 here; Pt 2 here. My blog on it here.

#####

Chat GPT 5.2

The episode’s core idea is that time should be treated as a first-class biomedical variable: models should learn from trajectories, not just snapshots. The guest argues that integrating multimodal data (omics, spatial/pathology, clinical) through a semantic layer, plus temporal embeddings, enables more informative predictions—especially for oncology regimens where sequence and timing matter. Spatial biology’s promise remains under-realized mainly due to insufficient scale and diversity of datasets, motivating consortia. 

The conversation stays practical: AI won’t bypass validation, near-term gains are likely in clinical development, and long-term breakthroughs require trials, culture change, and shared infrastructure.

CMS Gapfill: Spring 2026 Update & What Will Happen

Each fall, CMS determines which of 100-odd new lab codes are crosswalked and which are "gapfilled," meaning priced by MACs - for us, in 1Q2026.  

See Change Request CR14312, T13514, December 5, 2025.  By my count, in that public document, 40 codes are for contractor gapfill right now.


How They Do It - Novitas/FCSO MAC and NGS MAC 

NGS MAC and Novitas/FCSO MACs generally post a webpage notifying providers of the numbers of the new gapfill codes, and requesting information.  Novitas in recent years has used a "SurveyMonkey" (or similar) online web form for data entry.   MACs ask for CMS-specified gapfill information, such as test purpose, charges, prices of other payors, resources required, etc.  

I believe the NGS MAC posts the list of PLA codes and provides an email address for receipt of your materials.

How They Do It - MolDx MACs (Palmetto, Noridian, CGS, WPS)

At least in past years, nothing is posted publicly at MolDx.  One of the medical directors there has told me that these gapfills could be finished with their internal expertise as MolDx lab experts, and they'd reach out to a lab about its code only if and when MolDx found it necessary.

In the past several years, all the MolDx MACs have offered the same price for every code being priced (molecular or not), and the NGS MAC has matched the MolDx price as well.  In recent years the Novitas-FCSO gapfill prices have tracked close to the MolDx ones, sometimes identical, sometimes 20% higher or lower.

How to Gapfill (Bruce's View)

Amal Thommil Updates Decibio MRD Coverage (February 2026)

 Amal Thommil at DeciBio updates us on coverage for minimal residual disease testing (MRD).

See his chart here.

click to enlarge



AI VERSION: My Heart Skipped a Beat, CMS Proposes X1-X2 Modifiers for NCD 90.2 (NGS in Cancer)

This blog issued as an AI version (below) and my original human-written version.
###

Header:  CMS proposes X1/X2 modifiers to encode cancer stage on NGS claims—framing it as ICD-10 “clarity,” but potentially creating more complexity than it resolves.

###

My Heart Skipped a Beat: CMS Proposes X1/X2 Modifiers for NCD 90.2 (NGS in Cancer)

In 2017, CMS issued NCD 90.2 for next-generation sequencing (NGS) in cancer. It was not an accidental document. Senior HHS leadership got involved (the Trump 1 administration).  The policy was designed to do something specific and modern: create a clean, predictable incentive for FDA approval of comprehensive genomic profiling (CGP) tests.

  • The Trump HHS made the lab industry a bargain. 
  • If an NGS test received FDA approval as a companion diagnostic, it would be covered nationally and immediately—on the day of FDA approval.
    • If the label expanded (new gene, new drug), coverage expanded the same day. No MAC-by-MAC delay. 

No prolonged local review cycles. A national coverage rule designed to match the pace of precision oncology.

That was the theory.

In practice, implementation has been anything but frictionless. The 14-day rule delays inpatient surgical specimens from being tested promptly. If testing waits for a later oncology visit, even more weeks can pass before genomic results guide therapy. Patients sometimes start urgent but nonspecific chemotherapy during these delays. And while the NCD promises its readers coverage “from the day of FDA approval,” operational reality often looks far worse. [See my blog about erroneous MAC delays in implementing new CMS NCD coverage.]

So one might reasonably ask: if implementation is already uneven, how do we improve it?

At the December 2025 HCPCS meeting (Agenda Item 41), CMS proposed creating two new Level II modifiers to “implement” NCD 90.2:

  • X1 – NGS for early-stage cancer (Stage I–II)

  • X2 – NGS for advanced (Stage III–IV), recurrent, relapsed, refractory, or metastatic cancer

CMS’s summary explains the rationale. NCD 90.2 does not cover early-stage cancers, while ICD-10 diagnosis codes do not distinguish stage. The proposed modifiers would “ensure greater clarity,” align submission with coverage requirements, and “maintain data integrity for oversight and claims analysis.”

That seems at first like a coherent administrative theory.  But it deserves scrutiny.  I'd argue it improves NONE of the real problems, and introduces a half-dozen new ones.

https://www.cms.gov/files/document/b2-2025-public-meeting-agenda-december-17-18-2025.pdf

click to enlarge

The decision isn't out yet, but could come out any week, with other non-drug HCPCS narrative decisions (see here, scroll down, right column).

HUMAN VERSION: My Heart Skipped a Beat: CMS Proposed X1, X2 Modifiers for NCD 90.2 NGS in Cancer

This blog issued as my human-written version here, and an AI-written version adjacent.

##

CMS has an NCD, 90.2, which was designed in 2017 (with help from senior policy folk at HHS) to provide an incentive for FDA approval of then-new NGS testing.   

  • FDA-approved NGS CDx tests were covered from the day of FDA approval, per the NCD
    •  And any updates (new gene, new drug) were ALSO covered from the day of approval.

But CMS has a hodge-podge of barriers.   The 14-day rule delays inpatient surgical cases from getting comprehensive genomic profiling (CGP) results for weeks.  If the oncologist has to order it, when he sees the patient well after the surgery event, that adds additional weeks.   Too many patients get urgent but nonspecific chemo during the undue lags.    Even worse, MACs may not start new coverage on "the day of FDA approval" until a year or more later - another "OMG" moment.   (See here).

How could we make this mess worse?

At the December HCPCS meeting, CMS proposed new modifiers X1 and X2, to "implement" the NCD.  I only see new problems with a solution that solves no existing problems.

Details follow.

This May Save Your (Reimbursement Nerd) Life Someday: CMS Defines CR Implementation Date vs Effective Date!

Header:  CMS issues MAC instructions (e.g., for NCDs) as "Change Request" instructions, which are also public.  Every new requirement or coverage has an "effective date" (e.g., May 1) and an "implementation date" (e.g., July 1).   Contractors can code to the effective date as soon as they can, but MUST code to that instruction by the "implementation date."

Today, a client got a MAC email that the MAC understood the effective date, but could not observe it until the far-future implementation date.  

This is just wrong, but it's easy to get confused.  There's a published rule.  See a CMS instruction (from 2011, still on the books in Manual #01, Ch 07, Part 50) that is 100% clear. Use it whenever you need to.

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf





###

Chat GPT provided the correct reading of CMS rules, and upon a second query, found and provided the exact internet document PDF.

###

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf

https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf

##

Chat GPT Answer

##

CMS explicitly distinguishes Effective Date from Implementation Date in its transmittal instructions. In CMS Manual System Rev. 66 (CR Definitions), CMS states that the implementation date “is the date by which Medicare fee-for-service contractors and shared system maintainers shall apply all changes” and when all necessary updates must be “completed and operational,” making clear that it is a deadline for systems readiness, not a permission trigger

 (https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf). By contrast, CMS defines the effective date as the date the new policy becomes active and from which contractors must apply the new rules to claims (often specified as the date of service). This same distinction is reflected in the active Internet-Only Manual structure governing contractor operations (Pub. 100-04, Chapter 7 framework and related transmittal usage), where effective dates govern claims applicability and implementation dates govern contractor system updates

 (https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf). In short, CMS policy architecture supports your long-standing interpretation: the effective date controls when the rule applies; the implementation date is the last date by which the contractor’s edits and processing systems must be fully in place.

##

I worked as a medical director 2004-2008, before this particular 2011 publication, but it was the same rule in 2004.

##

This should be a public link to my Chat GPT dialog:

https://chatgpt.com/share/698dff2e-1d2c-8005-aef9-8a46b4ad06cd



Tuesday, February 10, 2026

Excellent MedTech Podcast: From R&D to FDA to Commercialization (Dr Doug Kelly, FDA)

Heard an excellent podcast today on MedtechTalk, featuring Douglas Kelly MD MBA, a physician and venture capitalist who served 2020-2025 as Deputy Center Director for Science at CDRH.

Find the podcast online at this link (recorded March 2025):

https://medtechmvp.com/media/medtech-talk-podcast/douglas-kelly-on-the-value-of-innovation-and-breakthrough-designation


See Dr Kelly's LinkedIn page here.

And see his recent re-post of Niko Hems' and Kathryn Bowsher's LinkedIn articles, on why so many AI-based startups crash, sometimes after pretty massive funding.

(And for more on his theme of the value of public service by working places like FDA, see Michael Lewis's book "Who is Government? The Untold Story of Public Service.")

##

Chat GPT 5.2 summarizing the one-hour podcast:

##

In this candid and substantive Med Tech Talk episode, Justin Klein, MD (Vensana Capital) interviews Douglas Kelly, MD, former Deputy Center Director for Science at FDA's CDRH, for a behind-the-scenes tour of how device regulation actually works—and why so many promising technologies never reach widespread patient access.

Kelly traces his path from venture investing to FDA, recounting the moment Jeff Shuren essentially challenged him to “stop complaining and come help fix it,” and uses analogies to explain how outsiders routinely misunderstand what FDA needs and how decisions get made. 

He then maps the internal structure of CDRH (the offices of health technology, review staff, policy, science/technology, emergency preparedness), emphasizing that the range of devices—spanning software, AI, implantables, diagnostics, and more—requires CDRH to be “intellectually promiscuous,” with staffing and communication challenges that don’t resemble standard pharma-style review. 

The core of the conversation is Kelly’s strategic rationale for the TAP program (Total Product Life Cycle Advisory Program): not as an “FDA fast lane,” but as a deliberate attempt to replace the industry’s serial, disconnected failures (FDA authorization → CMS coverage → CPT code → payment → adoption) with an earlier, parallel, relationship-driven process that aligns evidence expectations across FDA, payers, and professional societies—so companies can fund the right studies once and investors can price risk realistically.  He actually took people to watch events like CPT and RUC meetings to get a downstream view of the commercialization of devices.

Along the way, Kelly offers a striking metric—only about 1.5% of breakthrough-designated efforts, in his interim analysis, achieve durable commercial success or broad adoption—arguing that the system burns enormous reviewer time on products that never see daylight, and that the next era needs clearer priority-setting and better FDA–CMS alignment rather than rigid “stay in your lane” silos. 

The discussion closes on today’s pressures—hiring constraints, the fragility of mission-driven expertise, and the risk that indiscriminate staffing cuts and return-to-office mandates will undercut precisely the industry-savvy talent needed to modernize review (including AI)—but ends on an optimistic call to action: Kelly urges more private-sector leaders to do public service at FDA, because the long-term fix is shared understanding, durable relationships, and evidence pathways that get genuinely valuable innovations to patients without compromising safety.

Big News for Digital Pathology: AMA Appears Set to Use CATEGORY III for Whole Slide Imaging / AI

It's no secret that AMA CPT PLA made a half-dozen codes for whole-slide imaging 3 to 4 years ago, and then seemingly imposed a "moratorium" on more PLA codes based on WSI.  There it sat for the past 2 years, since Summer 2024.

It's also no secret that last fall (public agendas and reports), the AMA CPT considered a MAAA application for WSI, and rejected that, too.

This just couldn't continue, as exciting things are happening with WSI and AI, even with "plain" H&E slides.  See the new Nature Medicine paper by Li et al on predictions in lung cancer, here.

Last Week's AMA:  H&E, WSI, AI, & Cat III

At last week's AMA CPT meeting, AMA had a long discussion and seemed prepared to accept WSI-AI services in the Category III category.   Getting to that place was kind of dizzying, involving detours around whether H&E AI-based services were "codifiable," what AI services do and don't require physician work, and how evolving use of AI in digital pathology would intersect with a moving target, revisions to how CPT as a whole handles different types of AI services.

So the take home is:  Look for AI & WSI distinct diagnostic services to land in Category III coding rules, for an indefinite time period.

###

AMA Updates Confidentiality Interpretations

At last week's meeting, an hour was devoted to new AMA rules, including confidentiality/nondisclosure.  

Some things are verboten:  

You can't record the meeting.   You can't take photos (or screenshots) of CPT text or other slides.   You can't type and replicate CPT codes shown on slides.   You can't quote anybody's statements by name.  You can't say whether any particular tabs (items) passed or failed the committee vote - although you probably wouldn't know, since those are secret, silent electronic ballots.

However, AMA also discussed what is allowable.  

You can repeat statements or positions voiced, as long as they are not identifiable to anyone.   You can make comments based on your opinion (example: "I thought the discussion of Tab 105 went well.")   Based on the examples on slides and the AMA's discussion, I feel it was well within the stated boundaries to type my remarks above - letting us know that after a lot of debate and detours, the category of WSI AI seems to now fall in the domain of Category III.