Thursday, February 26, 2026

Register for April CPT in Boston; Lab Codes Posted (February 26)

AMA CPT has opened the registration page for the next Editorial Panel meeting, April 29-May 2, 2026, in Boston. That's at the Westin Copley Place.

Find it here:

https://www.ama-assn.org/membership/events/cpt-editorial-panel-meeting

##

The full public agenda for all codes will be released on March 6 (link above). Because they travel through several serial subcommittees, Lab Codes are released earlier for comment, on February 26 (link below).

####

LAB CODES here:  https://www.ama-assn.org/system/files/may-2026-lab-mopath-agenda.pdf

You can request code packets to comment on starting NOW and must submit comments back by March 12.

There are a LOT of Category I & Category III codes, far more than typical.  This is the last cycle of new codes that will still enter the summer 2026 pricing process.

  • Cat 1, "POLE" gene
  • MAAA, multiple sclerosis
  • MAAA, CNS methylation
  • MAAA, revise text 81518 re breast cancer
  • MAAA, revise text 81542, prostate, remove "microarray"
  • Cat 1, testing for GFAP (glial protein)
  • Cat 1, pTau217/Abeta1-42 ratio
  • Cat 1, candida fungi PCR
  • Cat 1, 87800, 87801, add "single result" (pathogen)
  • Cat 1, rubeola by PCR
Digital pathology codes:
  • Cat 1, PAP test, Digital algorithmic classification
  • Cat 1, revise 88363 select specimen for [bladder prognostic testing] [sic]
  • Cat III, histomorphology augmentative software [AI], pancreatic cancer
  • Cat III, ex-vivo profiling of fresh tissue
  • Cat III, AI image analysis of breast cancer HER2-HR+
  • Cat III, AI image analysis prostate cancer
Re the 88363 revision: this is a code for review of sections and blocks for sendout for molecular studies (e.g., DNA from FFPE). It's not specific to bladder cancer now, and I suspect bladder prognostics is just one application.

Re the "fresh tissue" code, see this from Chat GPT:
    This might refer to technology such as ex vivo confocal microscopy (EVCM) or closely related platforms (e.g., reflectance confocal microscopy of freshly excised tissue). These systems image fresh, unfixed tissue without freezing, sectioning, or routine H&E processing. In dermatology and surgical oncology, they are being positioned—carefully—as a potential alternative or adjunct to frozen section histology, especially in the context of Mohs surgery. 
    A  freshly excised tissue specimen is placed on a microscope stage, sometimes stained with a rapid fluorescent dye (e.g., acridine orange), and imaged optically using confocal principles. The system generates high-resolution images that approximate histologic sections—often digitally rendered to resemble H&E.

Re the POLE gene code, also Chat GPT:
It’s striking, though historically consistent, that POLE is only now surfacing in an application for a discrete CPT sequencing code at the American Medical Association (AMA CPT Panel). In the arc of colorectal cancer genetics, POLE is indeed a relative latecomer compared to classical genes like APC, which was identified in 1991 as the tumor suppressor underlying Familial Adenomatous Polyposis (FAP) and became foundational to the adenoma–carcinoma model. By contrast, POLE’s clinical importance emerged in the post–next-generation sequencing era around 2012, when large-scale genomic analyses revealed that mutations in its exonuclease proofreading domain produce ultramutated tumors with extremely high tumor mutational burden and distinct immunologic behavior. Germline mutations define Polymerase Proofreading–Associated Polyposis (PPAP), typically with fewer polyps than FAP but meaningful colorectal and endometrial cancer risk.

For a decade, however, POLE has largely lived inside broad NGS panels rather than as a stand-alone clinical assay. A CPT application signals something different: that clinicians may now view POLE as sufficiently actionable to order explicitly, that payors might be expected to track and reimburse it directly, and that its immunotherapy-relevant biology has matured from genomic curiosity to structured clinical decision variable.

In that sense, the coding move reflects a broader pattern in molecular oncology—discovery first, routine panel inclusion second, and only later formal recognition in coding and reimbursement once guideline integration and clinical demand reach critical mass.

####

You can register for in-person or Zoom attendance.

In-person attendance usually sells out (450 seats for this upcoming session). Even if it doesn't sell out sooner, AMA lists April 13 as the deadline for in-person registration. You have to set up an email logon at AMA itself (free) to get into the reservation system for CPT meetings.





Wednesday, February 25, 2026

On a Lighter Note: JAMA Interviews Chatbot on History of Chatbots

JAMA consolidates AI-related articles across its journal family - here.  There's also a twice-a-month podcast, open access on podcast managers (Apple podcasts here).

For January 29, 2026, the topic is:   Chatting With a Chatbot: The History of the First Clinical Chatbots, Straight From an LLM.

https://edhub.ama-assn.org/jn-learning/audio-player/19034472

The 12-minute interview with ChatGPT 4o tracks AI in medicine back to the 1960s. There's a transcript.

Fun fact: When I was in med school in the 1980s (Stanford), I took a one-hour-a-week elective on AI in medicine taught by Ted Shortliffe.  

Here's recent JAMA on over- and under-regulation of AI, link.  And, "The death of the consult note," link.  JAMA AI is edited by MGH psychiatrist Roy Perlis, heard on the interview above.

###

Chat GPT 5.2 summarizes the transcript.

###

JAMA+ AI Conversations: Chatting With a Chatbot — The Nearly Forgotten Origins of Clinical Chatbots

In a recent 12-minute episode of JAMA+ AI Conversations, Roy Perlis interviews not a historian, but a large language model, to revisit the early—and often oversimplified—history of clinical chatbots. The takeaway is that today’s debates about AI in mental health are not new. They trace directly back to the 1960s, when two pioneers, working in parallel, reached sharply different conclusions about whether computers should ever function as therapists.

ELIZA: The Famous Beginning

Most histories begin with ELIZA, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum. ELIZA used simple pattern matching to simulate a Rogerian psychotherapist. It reflected users’ statements back as questions, creating the illusion of empathy. Despite its technical simplicity, users often experienced it as surprisingly human.

But the podcast argues that the story shouldn’t stop there.

Kenneth Colby: The Overlooked Pioneer

Before building his more famous chatbot, psychiatrist and computer scientist Kenneth Colby had already published a 1966 paper titled A Computer Method of Psychotherapy. In it, he described a scripted therapy program designed to guide structured therapeutic dialogue. Crucially, Colby envisioned these systems running on time-sharing computers—meaning a single machine could support multiple simultaneous “therapy-like” interactions. Even in 1966, he was thinking about scalability.

Colby did not argue that computers should replace therapists. Rather, he framed them as tools—adjunctive supports that might extend access or handle structured components of care.

PARRY: Modeling Psychopathology

In the early 1970s, Colby developed PARRY, designed to simulate a person with paranoid schizophrenia. Unlike ELIZA’s neutral therapeutic stance, PARRY incorporated rule-based representations of suspicion, persecutory beliefs, and guarded conversational patterns.

In informal Turing-style tests, psychiatrists were asked to distinguish PARRY from real patients with paranoid schizophrenia. Some struggled to tell the difference. For the era, this was remarkable.

In one of AI history’s more whimsical moments, ELIZA and PARRY were made to “talk” to each other. The result was absurd: ELIZA calmly reflecting statements, PARRY responding with paranoid suspicion. It was humorous—but also revealing. Even early chatbots could generate the illusion of personality through rules and framing alone.

The Philosophical Divide

The real tension lies not in the code, but in the ethics.

Weizenbaum later became sharply critical of applying computers to psychotherapy. In his book Computer Power and Human Reason, he warned against delegating deeply human emotional work to machines. He worried about misplaced trust, emotional attachment, and moral outsourcing.

Colby, by contrast, saw potential—if used thoughtfully and as augmentation rather than replacement.

Today’s debates about AI therapy apps, large language models in mental health, and scaling behavioral support echo this exact divide. The 2020s are replaying a 1960s argument—with far more powerful tools.

Lessons for Today’s Builders

The podcast closes with advice to psychiatry residents and computer science students who want to build the next generation of health chatbots:

  • Understand the ethical stakes. Weizenbaum’s cautions about over-reliance and loss of human connection remain relevant.

  • Think about augmentation, not replacement. Colby’s framing still provides a workable model.

  • Design for scale responsibly. The original promise—time-sharing therapy at scale—has now become global LLM deployment.

  • Preserve transparency. Early systems were rule-based and understandable; today’s black-box models demand even greater ethical rigor.

Bottom Line

Clinical chatbots did not begin with generative AI. They began with psychiatrists and computer scientists asking whether machines could structure, simulate, or extend therapeutic interaction. The core questions—empathy, trust, replacement vs. augmentation, scale vs. humanity—were already on the table in 1966.

What has changed is not the argument.

It is the power of the technology.

Trump State of Union, Healthcare Edition: Drug Prices High, Price Transparency Good, Insurers Bad

We gave Chat GPT 5.2 a link to the full transcript of the State of the Union (SotU) speech.

###

Top-Line Summary

The 2026 State of the Union devoted meaningful time to healthcare, but almost entirely through the lens of prescription drug pricing and insurance affordability rhetoric, not medical technology or innovation policy. The President highlighted “most-favored nation” drug pricing reforms and criticized insurer profits, while pledging to protect Medicare and Medicaid. However, there was no discussion of diagnostics, digital pathology, FDA modernization, CMS payment reform, laboratory reimbursement, precision medicine, or AI in clinical care. Artificial intelligence appeared only in the context of energy infrastructure and youth education, not healthcare delivery. For medtech and genomics stakeholders, the speech offered little directional signal on regulatory reform or reimbursement modernization. In short, it was a pricing-focused address, not an innovation-focused one.

###

Evolving Importance of State Biomarker Laws: Guidance from AZ, GA, LA, OK

One of the major developments in genomics policy over the past five years has been the introduction of state mandates for biomarker test coverage, now active in about 20 states. Typically, these bills peg private insurance coverage to Medicare coverage and guideline endorsement. (Private insurance is state-regulated and generally excludes large-employer self-insured plans, aka ERISA plans.)

See background at American Cancer Society here.  (Their map below).  See a 2024 article in JAMA by Lin, Coffman, Phillips here.  See a viewpoint from Avalon here.   See a recent press release re: Mississippi from ACS here.

ACS CAN


Several of the circa 20 states have issued some guidance to insurers about the laws:

AZ: Regulatory Bulletin 2026-01_Coverage for Biomarker Testing.pdf 

GA: https://oci.georgia.gov/press-releases/bulletins 

LA: bul2025-05-final.pdf

OK: BULLETIN NO. 2025-06 - Oklahoma Insurance Department


###

AI CORNER
###

I asked Chat GPT to review the four guidance documents.


CGP: An Evolving Definition

I've been at several meetings recently where the topic turned to "defining CGP." Is it 50 genes? 500 genes? Sure, it's indels and fusions, but does it require germline comparison? If it's LBx CGP, how important is WBC analysis for clonal hematopoiesis? Ought we include a transcriptome? If so, how much do we do with it? (See "The actionable transcriptome" here.)

The February issue of Precision Medicine Online addresses the topic directly.  Browse the February 2026 issue here.

##

AI CORNER

##

“Comprehensive Developments: Comprehensive genomic profiling plays an increasingly central role in personalized cancer treatment,” by Chris Anderson. Precision Medicine Online, February 2026.

In this February 2026 feature, Anderson argues that comprehensive genomic profiling (CGP) has evolved from broad gene panels into a foundational clinical strategy for biology-driven oncology, integrating large DNA/RNA hybrid panels, composite biomarkers such as TMB and HRD, and increasingly tumor-informed MRD applications. 

Drawing on data from Providence Health’s early, pathologist-directed 523-gene testing program, the article presents evidence that ordering CGP at diagnosis substantially increases actionable findings (67% vs. 33% with smaller panels), shifts patients toward targeted and immunotherapies, and may improve survival, while also highlighting persistent barriers in reimbursement and community adoption. For genomics stakeholders, the message is clear: CGP is less about a specific gene count and more about delivering comprehensive molecular context early enough to change therapeutic trajectories.

### DEEP DIVE

What exactly is “comprehensive genomic profiling” (CGP), and how far does it extend? From organ-based oncology to biology-driven care.


From Single Markers to Biology-Driven Oncology

Anderson opens by describing CGP as a natural progression from early single-gene tests to today’s multi-hundred gene hybrid panels. As next-generation sequencing (NGS) became cheaper and more powerful, and as actionable drivers expanded, panels grew accordingly. Modern CGP panels typically include:

  • DNA sequencing across hundreds of genes

  • RNA sequencing for fusion and splice detection

  • Complex biomarkers such as microsatellite instability (MSI) and tumor mutational burden (TMB)

  • Composite signatures like homologous recombination deficiency (HRD)

Rick Baehner (Exact Sciences) emphasizes that CGP is foundational to the shift from organ-based treatment decisions to biology-based treatment decisions. Common driver mutations—TP53, EGFR, KRAS, PIK3CA, APC—span tumor types. The logic of CGP is to match molecular profile to therapy rather than anatomical origin.

The article underscores a key principle: precision therapeutics are only as effective as precision diagnostics. Targeted drugs without robust profiling are, as Luca Quagliata (Thermo Fisher Scientific) notes, a missed opportunity.


Expanding Scope: Beyond SNVs

CGP has evolved beyond single nucleotide variants. Over the past decade, hybrid panels now capture:

  • Insertions/deletions (indels)

  • Copy number variants

  • Structural variants

  • RNA fusions and splice alterations

  • MSI and TMB

  • HRD and genomic-scar signatures

RNA sequencing is highlighted as particularly impactful. Baehner notes that adding RNA increases fusion detection by 15–20% and provides a more direct readout of tumor biology: “DNA tells you what could happen. RNA tells you what is happening.”

Ezra Cohen (Tempus) argues that combining tumor DNA, RNA, and germline testing improves diagnostic accuracy. Notably, Tempus data suggest approximately 9% of alterations are detectable only in ctDNA, supporting simultaneous tumor and blood testing.

For genomics readers, this section reinforces a critical theme: CGP is increasingly multi-analyte and multi-compartment. Tissue-only approaches may miss clinically relevant events.


Composite Biomarkers and Therapeutic Expansion

The article places special emphasis on HRD and PARP inhibitor expansion. Initially confined to ovarian cancer, PARP inhibitors are now used in breast and prostate cancer and are expanding further. HRD status—not just BRCA1/2 mutations—predicts benefit, meaning CGP must capture broader homologous recombination repair pathways (PALB2, ATM, genomic scars).

This reflects a broader evolution: integration of single genes into composite molecular signatures. The field is moving from binary mutation calls toward systems-level interpretation.


Early CGP: A Timing Paradigm Shift

The most compelling section for clinical policy readers is the Providence Health study.

Providence implemented pathologist-directed, upfront CGP at diagnosis using a 523-gene DNA/RNA hybrid panel, compared to a conventional 50-gene panel ordered later in the care pathway.

Key findings (first two years, 3,216 patients with advanced solid tumors):

  • 67% actionable alterations detected with broad CGP vs 33% with the 50-gene panel

  • TMB-high identified in 22% (not captured by smaller panels)

  • 52% received biomarker-informed therapy

  • Median overall survival:

    • 25 months (CGP-guided targeted therapy)

    • 17 months (chemotherapy)

  • In NSCLC:

    • 16 months median survival (CGP) vs 7 months (small panel)

Importantly, results were available 12 days before initial oncology consultation, meaning therapeutic decisions were made with genomic data in hand.

Carlo Bifulco frames this as correcting a timing flaw in oncology: genomic testing traditionally ordered too late, after treatment trajectories are set.

For genomics policy experts, this is significant. It reframes CGP not merely as a technical enhancement, but as a workflow redesign. It also strengthens arguments for early reimbursement parity.


Barriers: Reimbursement and Community Adoption

Despite compelling data, reimbursement remains the central obstacle. In the Providence study, testing was free. Real-world adoption hinges on payer coverage.

Community oncology presents additional friction:

  • 80% of cancer patients are treated outside academic centers.

  • Oncologists struggle to keep pace with rapidly evolving biomarkers.

  • Out-of-pocket cost concerns deter adoption.

NeoGenomics’ Warren Stone highlights peer-driven education as critical for broader uptake.

For reimbursement strategists, this section reinforces the familiar tension: clinical utility evidence is accumulating, but payer consensus lags behind guideline evolution.


CGP as Foundation for MRD

The article closes by linking early CGP to tumor-informed minimal residual disease (MRD) testing.

Broad initial profiling provides the mutation set required to design personalized ctDNA assays. Detectable ctDNA post-treatment signals high recurrence risk. This transforms CGP into the foundational “first pass” for chronic disease management rather than a one-time diagnostic.

This integration of baseline CGP and longitudinal liquid biopsy moves oncology toward:

  • Earlier detection of recurrence

  • Less invasive surveillance

  • Chronic disease framing of cancer care

For genomics readers tracking MRD reimbursement battles, this positioning is strategic: CGP is no longer just about initial therapy matching—it seeds future monitoring.


So What Is “CGP”?

Although the article does not provide a numeric definition (50 vs 500 genes), it implicitly defines CGP functionally:

CGP is:

  • Broad, multi-gene DNA profiling

  • Increasingly paired with RNA

  • Capable of composite biomarker signatures

  • Often integrated with germline and ctDNA

  • Delivered early in the care pathway

  • Linked to targeted therapy and immunotherapy decisions

  • Foundational for tumor-informed MRD

It is not simply a large panel; it is a clinical strategy centered on comprehensive molecular context.


Final Takeaway for Genomics Stakeholders

Anderson’s piece reflects where the field now stands:

  1. Technically mature hybrid panels exist.

  2. Clinical evidence for early use is accumulating.

  3. MRD integration strengthens longitudinal value.

  4. Reimbursement and community adoption remain bottlenecks.

  5. The definition of CGP is expanding toward multiomic integration.

For those of us debating whether CGP requires RNA, germline comparison, ctDNA, HRD signatures, or composite biomarkers—the article suggests the answer is evolutionary rather than binary. CGP is becoming less about panel size and more about biological completeness at clinically actionable depth.

In that sense, the “definition” of CGP may continue to expand as oncology shifts from mutation detection to full molecular state characterization.

Monday, February 23, 2026

Multiple Articles: FDA Launches Plans for "One Pivotal Trial, Not Two"

FDA drug studies have traditionally required two pivotal randomized controlled trials.   This makes the chance of a false positive very tiny.  (I also thought it might be because statute requires approval after well-controlled trialS, with an S).

We now have a packet of 5 FDA documents on scientific evaluations based on plausibility and drug approvals based on single trials.  

  • See the Feb 23 rare disease press release here.
  • See the Feb 23 rare disease draft guidance document here. (Comment to ~ Apr 23.)
    • See also a 2025 NEJM article by FDA on the general "plausible evidence" theme and how it might be applied in rare diseases here.

  • See the brand-new NEJM article on 1 trial approvals, here.  (Prasad & Makary Feb 18).
  • See a trade journal article regarding the new 1-trial NEJM article here.
  • Implications for Dx? See a GeneDx article on the "plausible" framework by Trish Brown here.
  • Coverage at Genomeweb here.

Making policy decisions often relies on a framework of facts.  (See also, how policy decisions are made for some diagnostics.)

Relevance to Diagnostics:

[Chat GPT] 
This evidentiary shift also has relevance for molecular diagnostics, where randomized controlled trials are often neither feasible nor ethically appropriate. In genomic testing, clinical utility frequently rests on strong biologic rationale, risk stratification accuracy, and concordance with outcomes — not on forcing artificial randomization. For example, it would be ethically untenable today to randomize women with a very low recurrence score on Oncotype DX to receive chemotherapy, or to deny chemotherapy to women with a very high score, simply to generate RCT purity. 

Instead, validation relied on retrospective-prospective analyses, biologic coherence, and concordance with clinical outcomes. In that sense, molecular diagnostics have long operated within a “plausible mechanism plus confirmatory evidence” paradigm. The FDA’s broader embrace of mechanism-anchored inference and one-trial flexibility (NEJM 2025; NEJM 2026) brings drug regulation somewhat closer to the evidentiary norms that diagnostics have navigated for years — particularly in oncology, where risk stratification tools guide treatment decisions without mandating ethically problematic randomization.

See also Nature on AI for rare diseases here


####
AI Corner  
Chat GPT 5.2 follows
####

Friday, February 20, 2026

AMA CPT Publishes "Summary of Panel Actions" for February Palm Springs Meeting

The most recent AMA CPT Editorial Panel meeting was February 5-6 in Palm Springs, and on day 14, February 20, they've released the Summary of Panel Actions.

Dozens of PLA codes were voted on, but for PLA, specific results and final text edits won't appear until April 1.

Here is the home page for Panel Actions:

https://www.ama-assn.org/about/cpt-editorial-panel/summary-panel-actions

And here is the February 2026 edition:

https://www.ama-assn.org/system/files/feb-2026-summary-of-panel-actions.pdf

####

DEEP DIVE [Chat GPT 5.2]

CPT February 2026: AI Is Still Getting Through — Carefully

The AMA CPT Editorial Panel met February 5–6 in Palm Springs, and the Summary of Actions is now public. The agenda included 69 items. By my count, 27 were withdrawn, 5 postponed, and 4 rejected outright.

That distribution is typical of a mature coding environment. Withdrawals are not defeats; they are usually strategic pauses. Applicants often pull proposals to redesign descriptors, align specialty societies, clarify units of service, or reconsider whether PLA, Category III, or Category I is the correct pathway.  

Whole Slide Imaging AI: Category III Accepted

The most consequential action for digital pathology is Tab 45. Valar Labs received two new Category III codes (X568T, X569T) for AI-based prediction of immunotherapy response in non-muscle invasive bladder cancer.

This is a meaningful development. Over the past couple of years, digital pathology AI proposals have been rejected from both the PLA and MAAA categories. Some observers worried whether CPT would find a way to codify slide-based algorithmic services at all. This action demonstrates forward motion.

Category III status does not confer coverage, pricing, or payment certainty. But it does provide several critical foundations: a defined reportable service, a recognized unit of service, and a pathway to accumulate utilization and outcomes evidence. 

There is historical precedent for slide-based or analytic services being placed on the CLFS (about 8 PLA codes for digital pathology WSI), and I believe the same precedent ought to apply to the same kind of service, just placed as Cat III codes. We'll see how CMS handles these codes in the summer lab fee schedule "crosswalk-gapfill" meetings.

AI in Radiology: PET Heterogeneity Survives Debate

For different reasons, Tab 64 is equally instructive. Code X567T, describing AI analysis of heterogeneity of treatment response in metastatic cancer using serial PET imaging, was accepted after prolonged discussion.

The FDA-approved software uses AI to perform algorithmic tracking of multiple metastatic lesions across time, quantifying growth, shrinkage, and metabolic activity longitudinally. Radiologists do routinely compare scans to prior imaging. But they do not typically track dozens of lesions with systematic quantitative uniformity across timepoints. That distinction likely helped establish this service as separately reportable rather than merely an automated enhancement of existing physician work.  Potentially, this level of AI analysis makes the PET scans themselves more valuable to clinicians. 

Pattern Behind the Withdrawals?

Several AI-related proposals were withdrawn during this cycle. That clustering suggests recurring friction points in the CPT process. Possible issues could include:

  • Unclear unit of service (per slide, per case, per lesion, per episode?)

  • Ambiguity about who performs or reports the service

  • Insufficient differentiation from existing physician interpretation

  • Framing the value proposition around improved sensitivity rather than decision impact

The services that succeeded this cycle were tightly framed around defined analytic outputs with specific clinical contexts. That appears to be the emerging template.

Some of these issues are now gathered under a topic called "codafiability." Use it in a sentence: "I don't like this service, so it lacks codafiability."

PLA Activity Continues

The Q1 PLA consent agenda added 29 PLA codes and revised or deleted several others. PLA remains active, but algorithmic or software-driven analytic services may start showing up in Cat III.

Accepted Category III codes in this cycle are scheduled for July 1, 2026 release and January 1, 2027 effectiveness. That timing will affect any effort to coordinate with CMS annual pricing cycles.

Strategic Implications

Watch for more activity in upcoming AMA meetings on "Appendix S" - the AI appendix, which is getting intensive re-writing month by month, and track the possibility of a new code category "CMAA" for software-intensive services.   I've argued that services like genomics + AI should just remain on the CLFS as they have been for several years.  

Tuesday, February 17, 2026

Chat GPT Explains What MolDx Is (in the Agentic AI Age)

For several years, I've occasionally asked Chat GPT "what MolDx is." Here's another version, showing the growing complexity of how AI replies, updated to February 2026.

Chat GPT used extensive self-guided research, which I put on a SIDEBAR.

####

"Explain in detail what MolDx is, its history, what it does, how its viewed, and any trends. I am a senior NYT health journalist."

####

What MolDx is (in plain terms)

MolDx (“Molecular Diagnostic Services Program”) is a coverage, coding, and payment-control program for molecular diagnostics that is run by Palmetto GBA, a Medicare Administrative Contractor (MAC). It functions as an overlay on standard Medicare claims processing: instead of Medicare paying a genetic/molecular test claim based only on a CPT/HCPCS code, MolDx requires test-level identification and often a pre-coverage evidentiary review before claims are paid. (Medicare)

At its core, MolDx tries to solve a classic molecular diagnostics problem: CPT codes often describe a method or analyte class, not a specific commercial test, while different labs’ LDTs can be meaningfully different (workflow, algorithm, intended use, validation, etc.). MolDx’s solution is to require a DEX Z-Code identifier that points to a specific test listing in a registry, enabling test-by-test adjudication rather than code-by-code adjudication.
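The registry mechanism described above can be pictured as a toy lookup: one CPT code maps to many distinct lab-developed tests, and the Z-Code disambiguates them at adjudication time. This is a sketch only; the Z-codes, test names, and coverage flags below are invented for illustration and are not MolDx's actual systems.

```python
# Toy illustration of code-by-code vs. test-by-test adjudication.
# All Z-codes, test names, and coverage decisions here are hypothetical.
registry = {
    # (CPT code, DEX Z-Code) -> a specific registry listing with a coverage decision
    ("81479", "ZA001"): {"test": "Lab A tumor panel", "covered": True},
    ("81479", "ZB002"): {"test": "Lab B tumor panel", "covered": False},
}

def adjudicate(cpt, z_code=None):
    """Without a Z-code, one CPT code can describe many different LDTs,
    so the claim cannot be resolved; with a Z-code, the claim resolves
    to a single registry listing and its coverage decision."""
    if z_code is None:
        return "pend: CPT %s alone does not identify a specific test" % cpt
    entry = registry.get((cpt, z_code))
    if entry is None:
        return "deny: unknown test"
    return "pay" if entry["covered"] else "deny"
```

The point of the sketch is the key: adjudication happens on the (CPT, Z-Code) pair, not on the CPT code alone.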

Noridian Posts All Materials for September 2025 Advisory Panel on Esophageal Cancer Biomarkers

MolDx has a non-coverage LCD for esophageal biomarkers to aid in detection of Barrett's esophagus and its progression. On September 4, 2025, they held a multi-jurisdictional advisory panel on the topic (aka Contractor Advisory Committee, CAC).

At this point, all materials are posted: the agenda (trivial), the panel questions (#17), and the bibliography (#23), as they were available for the meeting. And they've added a YouTube archive video of the meeting and an official 29-page transcript.

Find the materials all here:

https://med.noridianmedicare.com/web/jeb/policies/lcd/cac#upcomingmeetings

###

AI Corner

###

I asked Chat GPT to review the meeting transcript. Note that I also fed some of the materials into Chat GPT for its "insights" back in September, before the full materials were available online. For example, then, I asked it to compare the 2021 CAC questions with the newest 2025 CAC questions.

###

I'm printing the Chat GPT reading as it came out, but I think it's a little too conservative.  

Therefore, further below, I'm including a prompt asking Chat GPT to look at fact patterns and interpretations that are ultimately more favorable.   This created a second AI dialog.

The two dialogs would be of most use to a reader who knows the field well and can use the two Chat GPT outputs to judge, for himself/herself, which AI viewpoint is more convincing.

Monday, February 16, 2026

Can Chat GPT Find Medicaid Genetic Test Prices? (81420 trisomy, 81519 oncotype)

It's easy to look up Medicare prices for genomic tests - there is one national Clinical Laboratory Fee Schedule (CLFS).

It's hard to look up Medicaid prices - there are 50 states, and Medicaid transactions are often walled off inside HMO plans.

However, I asked Chat GPT to look up Medicaid prices for 81420 - maternal blood trisomy testing - and it did a pretty impressive job, working on its own for up to a couple of minutes (so-called "agentic AI").

In the first pass, I just asked, "Please use agentic research to learn Medicaid prices for CPT code 81420." It came back with several reasonable paragraphs.

In the second pass, I refined the request to, "Please research the eight largest states; you already have California. Please determine what the largest states are, and then search their files as you mentioned above."

This time it thought for 1 minute 43 seconds (and, by the way, on request [click] it gives you every step of its thinking, which can be VERY long). So we got a huge amount of diverse Chat GPT work out of a short, simple two-sentence prompt.

It came up with this footnoted table.  Pretty good for a few seconds of work on MY part.



Here's the same for Oncotype Dx Breast 81519


##
In a separate experiment, I gave Chat GPT all 94,000 PAMA-reported payments for BRCA 81211 in 2017.  The CMS fee schedule before PAMA was $2,195, and after PAMA it rose to $2,395.  

However, there was a wide dispersion of pricing from the lowest decile ($1104) to the highest decile ($3191), basically from half the CMS price to 50% higher than the CMS price.

That was by decile of claims (9,490 claims per decile, averaging the price within each decile).  Here is the same data in dollar bins of $250, from $0 to $4,000.
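For readers who want to replicate that kind of summary on their own claims data, here is a minimal Python sketch of the two cuts described above - deciles of sorted claims, and fixed $250 dollar bins. The function names and the in-memory list of payments are my own hypothetical illustration, not the code actually used for the charts.

```python
def decile_averages(payments):
    """Sort payments, split into 10 equal deciles, average each decile."""
    s = sorted(payments)
    n = len(s) // 10  # claims per decile
    return [sum(s[i * n:(i + 1) * n]) / n for i in range(10)]

def dollar_bins(payments, width=250, top=4000):
    """Count claims in fixed-width dollar bins, e.g., $250 bins from $0 to $4,000."""
    counts = {lo: 0 for lo in range(0, top, width)}
    for p in payments:
        if 0 <= p < top:
            counts[(int(p) // width) * width] += 1
    return counts
```

With real data, `payments` would be read from the PAMA file; here it is assumed to already be a flat list of dollar amounts.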

#

This may work as an open link to the dialog:

https://chatgpt.com/share/6993682b-7cec-8005-9909-a79f531f935a

This is my private, protected author link, for me to get back to the dialog if I need to:

https://chatgpt.com/c/69914d2a-c3ac-8331-a102-b5dadbe72618


HHS Releases Massive Medicaid Database: Fighting Fraud, Other Uses

CMS has long released provider-level, CPT-coded utilization and payment data.  (See exactly which CPT codes your doctor was paid for in 2023...)    Medicaid has usually been a beast unto itself, in 50 silos in the 50 states.  But now, CMS has released a multi-gigabyte trove of Medicaid data.  One banner application is to let the public find fraud.

However, there may be many commercial uses for the data.  You might be able to find the most active prostatectomy surgeons in Medicaid, or the providers most active at giving a particular IV cancer drug.

See FOOTNOTE for Medicaid and agentic AI.

See coverage at Axios here.  See coverage at Newsday here.  See some download sites at HHS here, here.  The granularity at the latter is listed as Medicaid, January 2018-December 2024, Provider NPI x HCPCS/CPT code x Month.  The format is 10-gigabyte files, so special data management skills are needed to use it.   (Medicare doctor-CPT-HCPCS data is offered in a cloud database here.)

Two Thoughts (1) Fraud Is Often Blatant.   (2) Commercial uses?

First, going back to my time as a Medicare medical director (2004-2008), fraud is often blatant.   (At the time, billing for ten $200-500 medical diagnostic tests on the same day was an instant fraud flag and a major form of fraud.)   In recent years (2019-2025), there have been billions of dollars of genetic test fraud, but it was all blatantly obvious - like billing $8,000 for whole sequencing of six long, rare genes in a 90-year-old in a nursing home, and then billing that same panel on 15,000 Medicare patients.   Recently, labs were caught by a MAC billing insane multi-thousand-dollar codes like 81408 in one year, and the SAME insanely behaving lab was allowed to bill ten million dollars the NEXT year on a different, essentially ADJACENT, set of gene codes.  Here.  The point is, it didn't require super AI detection to find this stuff; a ten-year-old could find it in two minutes.  Another example was massively unbelievable billing of urinary catheters in incredible volumes.  Here, here.  While CMS did eventually pick that up (link previous), not mentioned is that the perpetrators got away with zillions of dollars, enough to upend ACO economics, before being caught.

Second, I don't have the data skills to work with a 10 GB database, but it could have commercial uses as well, such as tracking, for free, the top Medicaid providers of HCPCS drug codes or of specialty surgical procedures.
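As a sketch of how one might handle files of that size without loading them into memory, pandas can stream a large CSV in chunks. The file path and column names here (`npi`, `hcpcs`, `claims`) are my assumptions for illustration, not the actual HHS field names - check the download's data dictionary before adapting this.

```python
import pandas as pd

def top_providers(path, code, n=25, chunksize=1_000_000):
    """Stream a huge CSV in chunks; total claims per NPI for one HCPCS code."""
    totals = {}
    for chunk in pd.read_csv(path, usecols=["npi", "hcpcs", "claims"],
                             chunksize=chunksize):
        hits = chunk[chunk["hcpcs"] == code]
        # Accumulate per-NPI claim counts across chunks
        for npi, claims in hits.groupby("npi")["claims"].sum().items():
            totals[npi] = totals.get(npi, 0) + claims
    return sorted(totals.items(), key=lambda kv: -kv[1])[:n]
```

Because only one chunk is in memory at a time, a 10 GB file can be processed on an ordinary laptop, just slowly.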


###
Footnote
Recently I was curious about Medicaid prices for 81420 (trisomy, maternal blood).   I asked Chat GPT to do agentic internet research, and for the 10 largest states, look up Medicaid prices for 81420.   

Of course it figured out the 10 largest states right away.  It percolated away for a minute or two, and came back with referenced Medicaid prices for 81420 in about six states.   So that required about six seconds of work for me and saved an hour or two of my own hunting and pecking (even with the result a necessarily incomplete answer, remembering that sometimes the data just ain't online).  




https://chatgpt.com/share/6993682b-7cec-8005-9909-a79f531f935a

https://chatgpt.com/c/69914d2a-c3ac-8331-a102-b5dadbe72618

First pass

Please use agentic research to learn Medicaid prices for cpt code 81420.

(answer)

Second pass

Please research the eight largest states, you already have California. Please determine what the largest state states are, and then search their files as you mentioned above.

(answer included the table above)

Breaking the Biomarker Bottlenecks: Why CDx and Digital Pathology Just Got Much More Strategic


Below, Chat GPT 5.2 summarizes a pair of articles in The Pathologist.  

The experts are Jorge Reis-Filho (AstraZeneca), Rob Monroe (Danaher-Leica), and Karan Arora (Leica).   For policy background, see this blog from February 10 on AMA CPT's struggles to code whole slide imaging AI diagnostics (here). As advanced CDx shift more and more to digital pathology, the lab gains one more way to collide dysfunctionally with the CMS 14 Day Rule.   (See an interview with Reis-Filho alone, a month earlier, here.)


CDx and Digital Pathology:
Infrastructure, Not Ornament

Companion diagnostics (CDx) and digital pathology are no longer niche technical topics. They are moving toward the center of oncology strategy. That shift matters even more given recent AMA CPT developments around the codability of whole slide imaging (WSI) and AI-based pathology services. As I discussed on February 10, the AMA CPT Editorial Panel appears prepared to route WSI/AI services into Category III, after years of ambiguity and effective moratoria on codability [DIHP Blog 2-10].

That coding posture does not guarantee payment. But it removes a structural barrier. And when you read recent industry discussions from AstraZeneca, Danaher, and Leica, it becomes clear that CDx and digital pathology are being repositioned as therapy infrastructure—not optional lab enhancements.

The timing is not coincidental.


CDx Is Becoming Computational Biology

Historically, companion diagnostics were binary gatekeepers. One biomarker, one cutoff, one therapy.

What is now being described is very different. CDx is evolving into a computational modeling layer that attempts to quantify tumor biology continuously, not categorically. AI-based quantitative scoring systems measure target expression, membrane localization, spatial relationships, and biologically relevant correlations. The example of AI-quantified Trop-2 expression for ADC selection is emblematic: the biomarker is no longer simply “present or absent,” but a mechanistically informed, quantitatively normalized signal intended to predict drug internalization and response.

That is a shift in kind, not degree.

Instead of:

  • “Does the patient meet threshold X?”

The emerging model asks:

  • “What is the biological state of this tumor, and how does that map to therapeutic response?”

This transforms CDx from a lab service into a decision model.

Saturday, February 14, 2026

Get It Here: Another Detailed RFK Jr Interview with Podcast Host

On February 1, I published a detailed review of a long interview between RFK Jr and podcast host Katie Miller (here).  Regular news covered the interview only for one or two soundbites.

On February 12, RFK Jr appeared in a 1-hour-15-minute interview with Theo Von, who interviews politicians, athletes, and celebrities.  I'm again including a detailed Chat GPT 5.2 review of the interview.

I provide below Chat GPT 5.2's "one-pager" of policy takeaways, followed by a slower, bullet-by-bullet recap of the interview.  

Then, Chat GPT assesses similarities and differences in the two interviews.  

Then, Chat GPT assesses the building of, or reactions by, stakeholder coalitions who hear these communications.


RFK Jr. on Theo Von: 

Policy Takeaways for a Health Policy Audience

Medical Journals, Evidence Credibility, and Open Science

RFK argues leading journals function as compromised channels influenced by pharma advertising and reprint distribution, undermining trust in published findings. His remedy is creation/endorsement of open-source publishing models where peer reviews are public and, when feasible, data are accessible, so outsiders can evaluate study quality and replicability. The operational thesis is that transparency and public critique better approximate “science as debate” than closed peer review plus paywalled data.

NIH, Research Integrity, and Replication

RFK argues NIH should be reoriented toward root causes of disease—“why we are so sick”—and away from what he describes as siloed, non-reproducible research culture. He claims too little NIH funding supports replication, creating incentives to publish positive results and “cheat” because null findings harm careers and are less publishable. He cites the amyloid/Alzheimer’s research arc as a cautionary tale of path dependency. He asserts a target on the order of ~20% of research spend for replication/verification to restore credibility and reduce dead-end investment.

FDA and Food Policy

RFK frames HHS’s central mission as reducing chronic disease by shifting the U.S. diet away from ultra-processed foods and reducing exposure to certain food additives. He points to Tennessee as a model for SNAP waivers that restrict purchase of soda/candy and adds thresholds tied to sugar and corn-syrup content, portraying this as a scalable lever for diet change in low-income populations. On additives, he says FDA is driving an industry transition away from nine synthetic food dyes, claiming the “worst four” are already banned and the remaining five are targeted for elimination by year-end, while FDA “rapidly approved” four vegetable-based dye alternatives to support reformulation. He emphasizes voluntary cooperation by major manufacturers as a key implementation strategy.

Fluoride Policy

RFK reiterates an anti-fluoridation stance, arguing water fluoridation reduces IQ in a dose-related fashion and has systemic harms (bone/thyroid), while any dental benefit is mainly topical and can be achieved through toothpaste/mouthwash. He endorses state actions like Tennessee’s prohibition on adding fluoride compounds, framing this as a public-health modernization based on contemporary exposure context (ubiquitous topical fluoride availability).

Agriculture, Herbicides, Pesticides, and “Off-Ramps”

Drawing on his Roundup litigation history, RFK uses glyphosate as an example of alleged regulatory capture and science suppression. His policy position is not immediate prohibition but creating an “off-ramp” to avoid destabilizing farm economics; he claims an outright ban would disrupt a large fraction of U.S. farms. He promotes emerging alternatives (e.g., laser weeding/robotic approaches) as the kind of technology HHS should help validate and accelerate—implicitly positioning HHS as part of a cross-agency transition strategy, not solely a consumer protection regulator.

CMS Program Integrity and Fraud Detection

RFK claims very large annual losses in Medicare/Medicaid from fraud, emphasizing growth of benefit categories beyond physician services (transportation, home care, administrative intermediaries) that create fraud surfaces. He describes shifting from “pay-and-chase” (pay claims, then attempt clawback) to pre-payment prevention, enabled by AI that flags suspect billing, prior fraud history, and provider anomalies. He distinguishes Medicare as more directly controllable federally, while Medicaid requires state “rail” cooperation; he suggests uneven state cooperation will affect implementation speed.

Health IT: FOIA, Records Access, and Interoperability

RFK presents a broad “transparency + AI” agenda. First, he claims HHS is using AI to accelerate FOIA responses to near-real-time, constrained mainly by reliable privacy redaction and statutory exemptions. Second, he says HHS convened major tech stakeholders to reduce information blocking so patients can access their medical records easily—ideally in a mobile-friendly format—supporting continuity of care and consumer control.

Payment Frictions: Prior Authorization

He says HHS has secured commitment from a large share of insurers (citing ~80%) to streamline or eliminate “unnecessary” prior authorization and provide point-of-care decisions—meaning patients and clinicians know coverage status before leaving the visit. He frames this as reducing delay, administrative burden, and patient uncertainty even if medical necessity standards remain.

Price Transparency Enforcement

RFK emphasizes hospital price transparency as a core consumer-empowerment policy. He claims prior transparency requirements were weakly enforced and says new regulations will impose substantial penalties, driving compliance by year-end. He describes a consumer-usable interface (procedure-level price comparison across hospitals) to create real market dynamics and constrain price dispersion.

Addiction Policy

He briefly describes addiction reform as an accountability problem—fragmented programs with misaligned incentives. He supports multi-agency pilots integrating outreach, treatment, housing, employment, and longitudinal follow-up under a single accountable trajectory manager. (The interview also contains frequent recovery/12-step references not summarized here.)


 

Friday, February 13, 2026

CMS Updates PAMA Reporting Page - Timelines are Short

PAMA lab price reporting, for CMS median rate-setting, has been frozen since 2019.   Based on section 6226 of the February 3 federal budget, that is changing fast.

Labs will report 1H2025 commercial payment levels in May-June-July 2026.   CMS will publish consolidated median prices in the fall, which will be active on January 1, 2027 for three years.  

When a new PAMA price is lower than the current 2026 price, the 2026 price can fall by no more than 15% per year (until it matches the PAMA price sooner or later).
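As a worked sketch of that 15% annual cap - my own illustration, since exact CMS rounding and schedule rules may differ:

```python
def phase_in(current, pama, cap=0.15):
    """Yield yearly prices as the current rate steps down toward the PAMA median,
    falling no more than `cap` (15%) per year."""
    price = current
    while price > pama:
        # Drop by at most 15%, but never below the PAMA median
        price = max(pama, round(price * (1 - cap), 2))
        yield price
```

For example, a $100 code with a new $70 PAMA median would step down to $85.00, then $72.25, then land at $70.00 in year three.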

Track CMS instructions here:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting


Nerd Note

Some insights gleaned from the last cycle, 2017 data, short blog here.

Thursday, February 12, 2026

Podcast Scan: Owkin's Podcast with Jorge Reis-Filho

Owkin is a French company on the cutting edge of genomic/cellular/biomarker/AI discovery. They have 63 videos on YouTube, many of them videopodcasts. 

In this one, from February 2026, CEO Thomas Clozel talks with Jorge Reis-Filho, who is a chief in the enterprise AI unit at AstraZeneca.

SEE ALSO an article about a panel on the future of digital pathology, including Jorge Reis-Filho, here, leading to Pt 1 here and Pt 2 here. My blog on it here.

#####

Chat GPT 5.2

The episode’s core idea is that time should be treated as a first-class biomedical variable: models should learn from trajectories, not just snapshots. The guest argues that integrating multimodal data (omics, spatial/pathology, clinical) through a semantic layer, plus temporal embeddings, enables more informative predictions—especially for oncology regimens where sequence and timing matter. Spatial biology’s promise remains under-realized mainly due to insufficient scale and diversity of datasets, motivating consortia. 

The conversation stays practical: AI won’t bypass validation, near-term gains are likely in clinical development, and long-term breakthroughs require trials, culture change, and shared infrastructure.