Friday, February 13, 2026

CMS Updates PAMA Reporting Information - Timelines are Short

PAMA lab price reporting, for CMS median rate-setting, has been frozen since 2019.  Based on Section 6226 of the February 3 federal budget, that is changing fast.

Labs will report 1H2025 commercial payment levels in May-June-July 2026.  CMS will publish consolidated median prices in the fall, which will take effect on January 1, 2027, for three years.
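For readers who want the mechanics: PAMA rate-setting is built on a volume-weighted median of reported private-payor rates. Here's a minimal sketch of that idea - simplified, with hypothetical figures, and omitting the statute's tie-breaking and exclusion details:

```python
def weighted_median(rate_volume_pairs):
    """Volume-weighted median: the payment rate at which half of all
    reported test volume falls at or below.  Simplified illustration
    of a PAMA-style calculation (real rules have more detail)."""
    pairs = sorted(rate_volume_pairs)            # sort by payment rate
    total_volume = sum(v for _, v in pairs)
    cumulative = 0
    for rate, volume in pairs:
        cumulative += volume
        if cumulative >= total_volume / 2:
            return rate

# Hypothetical example: three payors, rates in dollars, volumes in tests.
# The big-volume low payer dominates the median.
print(weighted_median([(10.00, 500), (12.00, 300), (40.00, 200)]))  # -> 10.0
```

The point of the sketch: because the median is volume-weighted, a handful of high-volume contracts (often the lowest-priced) can pull the published CLFS rate down, regardless of what smaller payors pay.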

When a new PAMA price is lower than the current 2026 price, the 2026 price can fall by no more than 15% per year (until it eventually matches the PAMA price).
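The 15% cap works like a stepped phase-in. A minimal sketch, with hypothetical prices, assuming the cap applies each year to the prior year's rate:

```python
def phase_in(current_price, pama_price, years=3, annual_cap=0.15):
    """Each year the fee can drop at most 15% from the prior year's fee,
    but never below the new (lower) PAMA median.  Hypothetical figures."""
    schedule = []
    price = current_price
    for _ in range(years):
        price = max(pama_price, price * (1 - annual_cap))
        schedule.append(round(price, 2))
    return schedule

# A $100 test with a new $60 PAMA median steps down:
print(phase_in(100.00, 60.00))  # -> [85.0, 72.25, 61.41]
```

So a large cut gets spread over several January 1 updates rather than landing all at once; a small cut (under 15%) simply takes effect in year one.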

Track CMS instructions here:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting


Nerd Note

For some insights gleaned from the last cycle (2017 data), see my short blog here.

Thursday, February 12, 2026

Podcast Scan: Owkin's Podcast with Jorge Reis-Filho

Owkin is a French company on the cutting edge of genomic/cellular/biomarker/AI discovery. They have 63 videos on YouTube, many of them videopodcasts. 

In this one, from February 2026, CEO Thomas Clozel talks with Jorge Reis-Filho, who heads the enterprise AI unit at AstraZeneca.

#####

Chat GPT 5.2

The episode’s core idea is that time should be treated as a first-class biomedical variable: models should learn from trajectories, not just snapshots. The guest argues that integrating multimodal data (omics, spatial/pathology, clinical) through a semantic layer, plus temporal embeddings, enables more informative predictions—especially for oncology regimens where sequence and timing matter. Spatial biology’s promise remains under-realized mainly due to insufficient scale and diversity of datasets, motivating consortia. 

The conversation stays practical: AI won’t bypass validation, near-term gains are likely in clinical development, and long-term breakthroughs require trials, culture change, and shared infrastructure.

CMS Gapfill: Spring 2026 Update

Each fall, CMS determines which of 100-odd new lab codes are crosswalked and which are "gapfilled," meaning priced by MACs - for us, in 1Q2026.  

See Change Request CR14312, T13514, December 5, 2025.  By my count, in that public document, 40 codes are for contractor gapfill right now.


How They Do It - Novitas/FCSO MAC and NGS MAC 

The NGS MAC and the Novitas/FCSO MACs generally post a webpage notifying providers of the new gapfill code numbers and requesting information.  Novitas in recent years has used a SurveyMonkey (or similar) online web form for data entry.  MACs ask for CMS-specified gapfill information, such as test purpose, charges, other payors' prices, resources required, etc.

I believe the NGS MAC posts the list of PLA codes and provides an email address for receipt of your materials.

How They Do It - MolDx MACs (Palmetto, Noridian, CGS, WPS)

At least in past years, nothing has been posted publicly at MolDx.  One of the medical directors there has told me that these gapfills could be finished with their internal expertise as MolDx lab experts, and they'd reach out to a lab about its code only if and when MolDx found it necessary.

In the past several years, all the MolDx MACs have offered the same price for every code being priced (molecular or not), and the NGS MAC has matched the MolDx price for every code.  In recent years, the Novitas/FCSO gapfill prices have tracked close to the MolDx ones, sometimes identical, sometimes 20% higher or lower.

How to Gapfill (Bruce's View)

Amal Thommil Updates DeciBio MRD Coverage (February 2026)

Amal Thommil at DeciBio updates us on coverage for minimal residual disease (MRD) testing.

See his chart here.




AI VERSION: My Heart Skipped a Beat, CMS Proposes X1-X2 Modifiers for NCD 90.2 (NGS in Cancer)

Below, Chat GPT 5.2 rewrites a blog I wrote "by hand."
###

Header:  CMS proposes X1/X2 modifiers to encode cancer stage on NGS claims—framing it as ICD-10 “clarity,” but potentially creating more complexity than it resolves.

###

My Heart Skipped a Beat: CMS Proposes X1/X2 Modifiers for NCD 90.2 (NGS in Cancer)

In 2017, CMS issued NCD 90.2 for next-generation sequencing (NGS) in cancer. It was not an accidental document. Senior HHS leadership got involved (the Trump 1 administration).  The policy was designed to do something specific and modern: create a clean, predictable incentive for FDA approval of comprehensive genomic profiling (CGP) tests.

  • The Trump HHS made the lab industry a bargain. 
    • If an NGS test received FDA approval as a companion diagnostic, it would be covered nationally and immediately—on the day of FDA approval.
    • If the label expanded (new gene, new drug), coverage expanded the same day. No MAC-by-MAC delay. 

No prolonged local review cycles. A national coverage rule designed to match the pace of precision oncology.

That was the theory.

In practice, implementation has been anything but frictionless. The 14-day rule delays inpatient surgical specimens from being tested promptly. If testing waits for a later oncology visit, even more weeks can pass before genomic results guide therapy. Patients sometimes start urgent but nonspecific chemotherapy during these delays. And while the NCD promises its readers coverage “from the day of FDA approval,” operational reality often looks far worse. [See my blog about erroneous MAC delays in implementing new CMS NCD coverage.]

So one might reasonably ask: if implementation is already uneven, how do we improve it?

At the December 2025 HCPCS meeting (Agenda Item 41), CMS proposed creating two new Level II modifiers to “implement” NCD 90.2:

  • X1 – NGS for early-stage cancer (Stage I–II)

  • X2 – NGS for advanced (Stage III–IV), recurrent, relapsed, refractory, or metastatic cancer

CMS’s summary explains the rationale. NCD 90.2 does not cover early-stage cancers, while ICD-10 diagnosis codes do not distinguish stage. The proposed modifiers would “ensure greater clarity,” align submission with coverage requirements, and “maintain data integrity for oversight and claims analysis.”

That seems at first like a coherent administrative theory.  But it deserves scrutiny.  I'd argue it improves NONE of the real problems, and introduces a half-dozen new ones.

https://www.cms.gov/files/document/b2-2025-public-meeting-agenda-december-17-18-2025.pdf


The decision isn't out yet, but could come out any week, with other non-drug HCPCS narrative decisions (see here, scroll down, right column).

My Heart Skipped a Beat: CMS Proposed X1, X2 Modifiers for NCD 90.2 NGS in Cancer

CMS has an NCD, 90.2, which was designed in 2017 (with help from senior policy folk at HHS) to provide an incentive for FDA approval of then-new NGS testing.   

  • FDA-approved NGS CDx tests were covered from the day of FDA approval, per the NCD
    •  And any updates (new gene, new drug) were ALSO covered from the day of approval.

But CMS has a hodge-podge of barriers.  The 14-day rule delays inpatient surgical cases from getting comprehensive genomic profiling (CGP) results for weeks.  If the oncologist has to order it when he sees the patient well after the surgery event, that adds additional weeks.  Too many patients get urgent but nonspecific chemo during the undue lags.  Even worse, MACs may not start new coverage on "the day of FDA approval" until a year or more later - another "OMG" moment.  (See here).

How could we make this mess worse?

At the December HCPCS meeting, CMS proposed new modifiers X1 and X2, to "implement" the NCD.  I only see new problems with a solution that solves no existing problems.

Details follow.

This May Save Your (Reimbursement Nerd) Life Someday: CMS Defines CR Implementation Date vs Effective Date!

Header:  CMS issues MAC instructions (e.g. for NCDs) via "Change Request" (CR) transmittals, which are also public.  Every new requirement or coverage has an "effective date" (e.g. May 1) and an "implementation date" (e.g. July 1).  Contractors can code to the effective date as soon as they can, but MUST code to the instruction by the "implementation date."

Today, a client got a MAC email saying the MAC understood the effective date but could not observe it until the far-future implementation date.

This is just wrong, but it's easy to get confused.  There's a published rule.  See a CMS instruction (from 2011, still on the books in Manual #01, Ch 07, Part 50) that is 100% clear.  Use it whenever you need to.
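To make the two dates concrete, here is a minimal sketch (hypothetical dates; the function names are mine) of how the rule reads: the effective date governs which claims the policy applies to, while the implementation date is only a deadline for the contractor's systems.

```python
from datetime import date

EFFECTIVE_DATE = date(2026, 5, 1)        # policy governs claims from here
IMPLEMENTATION_DATE = date(2026, 7, 1)   # deadline for contractor systems

def policy_applies(date_of_service: date) -> bool:
    """A claim falls under the new policy based on its date of service
    versus the EFFECTIVE date - not the implementation date."""
    return date_of_service >= EFFECTIVE_DATE

def systems_deadline_passed(today: date) -> bool:
    """The implementation date is the last day by which the contractor's
    edits must be coded and operational."""
    return today > IMPLEMENTATION_DATE

# A May 15 claim is governed by the new policy, even if the MAC's
# systems are not updated until the July 1 implementation deadline.
print(policy_applies(date(2026, 5, 15)))  # -> True
```

In other words, in this reading a MAC that says "we can't observe the effective date until the implementation date" has the two concepts backwards: the implementation date is a ceiling on systems work, not a floor on when the policy applies.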

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf





###

Chat GPT provided the correct reading of CMS rules, and upon a second query, found and provided the exact internet document PDF.

###

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf

https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf

##

Chat GPT Answer

##

CMS explicitly distinguishes Effective Date from Implementation Date in its transmittal instructions. In CMS Manual System Rev. 66 (CR Definitions), CMS states that the implementation date “is the date by which Medicare fee-for-service contractors and shared system maintainers shall apply all changes” and when all necessary updates must be “completed and operational,” making clear that it is a deadline for systems readiness, not a permission trigger (https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf). By contrast, CMS defines the effective date as the date the new policy becomes active and from which contractors must apply the new rules to claims (often specified as the date of service). This same distinction is reflected in the active Internet-Only Manual structure governing contractor operations (Pub. 100-04, Chapter 7 framework and related transmittal usage), where effective dates govern claims applicability and implementation dates govern contractor system updates (https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf). In short, CMS policy architecture supports your long-standing interpretation: the effective date controls when the rule applies; the implementation date is the last date by which the contractor’s edits and processing systems must be fully in place.

##

I worked as a medical director 2004-2008, before this particular 2011 publication, but it was the same rule in 2004.

##

This should be a public link to my Chat GPT dialog:

https://chatgpt.com/share/698dff2e-1d2c-8005-aef9-8a46b4ad06cd



Tuesday, February 10, 2026

Excellent MedTech Podcast: From R&D to FDA to Commercialization (Dr Doug Kelly, FDA)

Heard an excellent podcast today on MedtechTalk, featuring Douglas Kelly MD MBA, a physician and venture capitalist who served 2020-2025 as Deputy Center Director for Science at CDRH.

Find the podcast online at this link (recorded March 2025):

https://medtechmvp.com/media/medtech-talk-podcast/douglas-kelly-on-the-value-of-innovation-and-breakthrough-designation


See Dr Kelly's LinkedIn page here.

And see his recent re-post of Niko Hems' and Kathryn Bowsher's LinkedIn articles, on why so many AI-based startups crash, sometimes after pretty massive funding.

(And for more on his theme of the value of public service by working places like FDA, see Michael Lewis's book "Who is Government? The Untold Story of Public Service.")

##

Chat GPT 5.2 summarizing the one-hour podcast:

##

In this candid and substantive Med Tech Talk episode, Justin Klein, MD (Vensana Capital) interviews Douglas Kelly, MD, former Deputy Center Director for Science at FDA’s CDRH, for a behind-the-scenes tour of how device regulation actually works—and why so many promising technologies never reach widespread patient access.

Kelly traces his path from venture investing to FDA, recounting the moment Jeff Shuren essentially challenged him to “stop complaining and come help fix it,” and uses analogies to explain how outsiders routinely misunderstand what FDA needs and how decisions get made. 

He then maps the internal structure of CDRH (the offices of health technology, review staff, policy, science/technology, emergency preparedness), emphasizing that the range of devices—spanning software, AI, implantables, diagnostics, and more—requires CDRH to be “intellectually promiscuous,” with staffing and communication challenges that don’t resemble standard pharma-style review. 

The core of the conversation is Kelly’s strategic rationale for the TAP program (Total Product Life Cycle Advisory Program): not as an “FDA fast lane,” but as a deliberate attempt to replace the industry’s serial, disconnected failures (FDA authorization → CMS coverage → CPT code → payment → adoption) with an earlier, parallel, relationship-driven process that aligns evidence expectations across FDA, payers, and professional societies—so companies can fund the right studies once and investors can price risk realistically.  He actually took people to watch events like CPT and RUC meetings to get a downstream view of the commercialization of devices.

Along the way, Kelly offers a striking metric—only about 1.5% of breakthrough-designated efforts, in his interim analysis, achieve durable commercial success or broad adoption—arguing that the system burns enormous reviewer time on products that never see daylight, and that the next era needs clearer priority-setting and better FDA–CMS alignment rather than rigid “stay in your lane” silos. 

The discussion closes on today’s pressures—hiring constraints, the fragility of mission-driven expertise, and the risk that indiscriminate staffing cuts and return-to-office mandates will undercut precisely the industry-savvy talent needed to modernize review (including AI)—but ends on an optimistic call to action: Kelly urges more private-sector leaders to do public service at FDA, because the long-term fix is shared understanding, durable relationships, and evidence pathways that get genuinely valuable innovations to patients without compromising safety.

Big News for Digital Pathology: AMA Appears Set to Use CATEGORY III for Whole Slide Imaging / AI

It's no secret that AMA CPT PLA made a half-dozen codes for whole-slide imaging 3 to 4 years ago, and then seemingly imposed a "moratorium" on more PLA codes based on WSI.  There it sat for the past 2 years, since Summer 2024.

It's also no secret that last fall (public agendas and reports), the AMA CPT considered a MAAA application for WSI, and rejected that, too.

This just couldn't continue, as exciting things are happening with WSI and AI, even with "plain" H&E slides.  See the new Nature Medicine paper by Li et al on predictions in lung cancer, here.

Last Week's AMA:  H&E, WSI, AI, & Cat III

At last week's AMA CPT meeting, AMA had a long discussion and seemed prepared to accept WSI-AI services in the Category III category.   Getting to that place was kind of dizzying, involving detours around whether H&E AI-based services were "codifiable," what AI services do and don't require physician work, and how evolving use of AI in digital pathology would intersect with a moving target, revisions to how CPT as a whole handles different types of AI services.

So the take home is:  Look for AI & WSI distinct diagnostic services to land in Category III coding rules, for an indefinite time period.

###

AMA Updates Confidentiality Interpretations

At last week's meeting, an hour was devoted to new AMA rules, including confidentiality/nondisclosure.  

Some things are verboten:  

You can't record the meeting.  You can't take photos (or screenshots) of CPT text or other slides.  You can't type and replicate CPT codes shown on slides.  You can't quote anybody's statements by name.  You can't say whether any particular tabs (items) passed or failed the committee vote - although you probably wouldn't know, since those are secret silent electronic ballots.

However, AMA also discussed what is allowable.  

You can repeat statements or positions voiced, as long as they are not identifiable to anyone.  You can make comments based on your opinion (example: "I thought the discussion of Tab 105 went well.")  Based on examples on slides and the AMA's discussion, I feel it was well within the stated boundaries to type my remarks above - letting us know that after a lot of debate and detours, the category of WSI AI seems now to fall in the domain of Category III.


Monday, February 9, 2026

Medical History: Specialties Draw Opposite Conclusions from the Same Data

I just came back from the AMA CPT meeting, where dozens of topics are debated "pro and con" with opposing stakeholders.

I was reminded that I attended the 2008 CMS MEDCAC on whether to cover screening CT colonography (aka "virtual colonoscopy").  My memory is that the numerous radiology speakers were strongly in favor and felt the service had big public health benefits.  And the numerous gastroenterology speakers were highly concerned that the data were premature, that many risks were too poorly understood, and that coverage would be a bad, bad idea.

Find the 2008 MEDCAC webpage here.  The 2009 NCD is here.

  • Follow up.  CT colonography became a covered screening benefit in January 2025, based on regulatory changes in 2024 rulemaking for PFS CY2025 - here.

Below, I fed the transcript and speaker list to Chat GPT and asked for a review.  It produced the review below in less than 30 seconds.

After the read-out from Chat GPT 5.2, I asked it to write an essay about the day as a case study.  I print the case study first, and the (earlier) AI analysis second.

###

 

A Case Study in Interpretive Pluralism: 

CT Colonography, Medicare, and the Social Life of Evidence

When historians of health policy look back at Medicare’s long and circuitous path toward covering CT colonography (CTC) for colorectal cancer screening, they may be tempted to frame it as a simple story of “evidence catching up with policy.” That narrative would be comforting—and largely wrong. The more instructive reading is that this episode reveals how the same empirical record can support sharply divergent conclusions, depending on professional identity, institutional incentives, and rhetorical framing. CT colonography is not merely a story about imaging technology; it is a case study in how evidence-based medicine operates in the real world, where evidence is interpreted, not discovered, and where policy decisions are forged through persuasion as much as data.

From the standpoint of raw evidence, the CT colonography literature circa the late 2000s was neither thin nor obviously defective. Large multicenter trials existed. Sensitivity for large adenomas and cancers approached that of optical colonoscopy under controlled conditions. Complication rates were demonstrably lower. Modeling studies suggested population-level benefits if screening adherence increased. Yet these same facts were marshaled to argue for coverage, non-coverage, and coverage-with-evidence-development, depending on who was speaking. This divergence was not accidental, nor was it merely cynical. It reflected different professional priors about what constitutes “sufficient” evidence, what risks matter most, and which uncertainties are tolerable in public programs.

Radiology-aligned stakeholders interpreted the evidence through a population-health and access lens. To them, CT colonography was a screening test, not a therapeutic intervention, and its value lay less in per-lesion perfection than in its ability to bring unscreened patients into the system. Missed small adenomas were contextualized against the known miss rates of optical colonoscopy itself. Radiation exposure was reframed as negligible in an older population. Extracolonic findings—often criticized as a source of downstream cost—were rhetorically repositioned as serendipitous early diagnoses. In this telling, the evidence demonstrated that CTC was “good enough,” safe enough, and mature enough to merit coverage now, with quality standards layered on over time.

Gastroenterology and endoscopy stakeholders looked at the same studies and reached nearly opposite conclusions. Their interpretive frame was clinical pathway integrity. From this perspective, uncertainty about the natural history of small polyps was not a minor gap but a central flaw. A screening modality that deferred or fragmented definitive intervention was seen as destabilizing rather than enabling. Referral thresholds, follow-up adherence, and the risk of false reassurance loomed large. Even extracolonic findings, celebrated by radiologists, were cast as potential sources of overdiagnosis and patient harm. Importantly, many GI leaders did not deny the technical performance of CT colonography; rather, they argued that the evidence base was insufficient to justify routine coverage in a national program without further real-world data. Hence the appeal of Coverage with Evidence Development—not as an endorsement, but as a brake.

What makes this episode especially instructive is that both sides were practicing evidence-based medicine, at least as they understood it. Each group emphasized different endpoints, different harms, and different definitions of benefit. Neither side fabricated data. Neither side ignored uncertainty. Instead, they weighted uncertainty differently. This should unsettle any naïve belief that evidence, once generated, speaks for itself. In practice, evidence is always filtered through professional norms about acceptable risk, clinical responsibility, and the moral obligations of a specialty to its patients.

The eventual shift in Medicare policy—decades after the original MEDCAC debate—did not occur because one side was finally “proven wrong.” Rather, it reflected changes in context: improved technology, broader acceptance of non-invasive screening, political attention to preventive care, and sustained rhetorical work by advocates who reframed CTC as complementary rather than competitive. Congress did not suddenly resolve the epistemic dispute; CMS reinterpreted the same general evidentiary landscape under new policy priorities.

The lesson here is not that all interpretations are equally valid, nor that expertise is illusory. It is that no single stakeholder group has a monopoly on correct interpretation, especially in areas where evidence is probabilistic, outcomes are long-term, and implementation matters as much as efficacy. Health policy is not physics. Facts and numbers do not announce their meaning independently of human judgment. They acquire meaning through argument, narrative, and institutional context.

For students of Medicare policy and evidence-based medicine, CT colonography offers a durable warning against epistemic arrogance. Advisory committees are not courts of law where evidence is weighed once and for all; they are arenas where professional cultures collide. Rhetoric—far from being the enemy of science—is the medium through which scientific claims are made legible to policymakers. The challenge is not to eliminate rhetoric, but to recognize it, interrogate it, and understand whose values it encodes.

In the end, the CT colonography story teaches us intellectual humility. Policy decisions emerge not from the triumph of facts over politics, but from negotiated interpretations of uncertain evidence. To take any one side too seriously—to believe that the numbers alone compel a single conclusion—is to misunderstand how medicine, policy, and human judgment actually work.



###

###

Chat GPT analysis of the transcript (it did this first, and wrote the essay above second).


Executive takeaway: Colonography Transcript as Data

Radiology-aligned speakers and imaging manufacturers were uniformly supportive of Medicare coverage for CT colonography, while most gastroenterology and endoscopy society representatives opposed routine coverage or sought to sharply constrain it—often proposing Coverage with Evidence Development (CED) as a limiting compromise.

This MEDCAC is one of the clearest specialty-split advisory records CMS ever produced.


Categorization by speaker

Radiology & Imaging Industry — Supportive of Coverage

These speakers explicitly urged CMS to cover CT colonography or framed it as a mature, mainstream screening alternative.

Dr. Mark Klein (Radiology)
Strongly pro-coverage. He reframed extracolonic findings as a benefit rather than a harm, dismissed radiation risk as irrelevant in a Medicare population, and emphasized training and quality standards rather than restriction.

Dr. Charles (Dan) Johnson (Mayo Clinic / ACR)
Pro-coverage. Presented ACRIN trial results as validation of CT colonography, stressed reader training and technical maturity, and spoke explicitly on behalf of the American College of Radiology.

Dr. J.G. Fletcher (Mayo Clinic / ACR)
Pro-coverage. Focused on increased screening adherence and argued that CT colonography could improve population-level effectiveness.

Dr. Beth McFarland (ACR)
Pro-coverage. Supported accreditation, standards, and inclusion of CT colonography within Medicare screening options.

Dr. Donald Rucker (Siemens)
Strongly pro-coverage. Actively minimized radiation risk and argued that the benefit–risk balance overwhelmingly favored CT colonography in Medicare beneficiaries.

Dr. Robert Honinberg (GE Healthcare)
Strongly pro-coverage. Explicitly urged CMS to include CT colonography as a covered screening option, emphasizing technological maturity and NIH-validated evidence.


Gastroenterology & Endoscopy — Opposed or Restrictive

No gastroenterology society speaker endorsed straightforward national coverage.

Dr. Jason Dominitz (ASGE)
Opposed to routine coverage. Emphasized uncertainty in the management of small polyps, raised concerns about follow-up adherence and downstream harms, and did not endorse CT colonography as a screening substitute.

Dr. Douglas Rex (ACG)
Opposed or highly skeptical. Focused on false positives, downstream colonoscopy burden, and concerns about missed lesions relative to the endoscopic standard of care.

Dr. Brooks Cash (Navy / AGA)
Restrictive and skeptical. Highlighted uncertainty in clinical pathways and emphasized GI-centric quality and performance concerns.

Dr. Joel Brill (American Gastroenterological Association)
Conditional and restrictive. This is the key nuance. The AGA did not support routine coverage and instead proposed Coverage with Evidence Development, framing CT colonography as promising but not ready for unrestricted Medicare screening. In practical CMS terms, this functioned as a containment strategy rather than endorsement.

Dr. Amy Patrick (Colon Health Center)
Skeptical. Focused on unresolved clinical uncertainties and downstream consequences, without advocating coverage.

Dr. Robert Smith (American Cancer Society)
Cautious and non-endorsing. Reflected ACS’s neutral-to-skeptical position at the time and did not press CMS for coverage.


The pattern you remembered — confirmed

The MEDCAC record shows a stark and consistent divide:

  • Radiology framed CT colonography as a technologically mature, patient-friendly screening modality that could expand access and compliance, minimized harms, and portrayed extracolonic findings as an added benefit.

  • Gastroenterology framed CT colonography as clinically destabilizing, insufficiently validated in real-world practice, and threatening to established endoscopic pathways—using uncertainty and calls for CED as structural brakes on adoption.

This was not a subtle disagreement. It was a classic specialty turf conflict, expressed through evidentiary and patient-safety rhetoric.


Why this MEDCAC still matters

This meeting remains a canonical example inside CMS of how:

  • “Reasonable and necessary” determinations are shaped by specialty politics as much as evidence

  • Advisory panels surface professional boundary disputes under the guise of uncertainty

  • Coverage with Evidence Development can operate as de facto non-coverage

Your recollection aligns extremely closely with the documentary record.

###

###

Bonus

###

How would you explain this course of events, to a third-grader?

###

How Two Kinds of Doctors Look at the Same Facts and Disagree

Imagine two people looking at the same set of facts, like a picture or a chart.

One says,

“This shows a good way to help more people.”

The other says,

“This shows something we shouldn’t use yet.”

The facts didn’t change.
What changed is what each person cares about most.

That’s what happened with these two kinds of doctors.


The Two Groups

Radiology doctors use imaging machines to find problems.
Gastroenterology doctors use scopes to look directly inside the colon and fix problems right away.

Both want to prevent cancer.
They just do different jobs, so they focus on different risks.


The Same Data, Different Questions

The data said:

  • The test finds many cancers

  • It’s safer and easier for patients

  • It might miss very small problems

  • It doesn’t treat anything directly

Those facts don’t tell you what to do.
They wait for someone to decide which of them matters most.


How Each Group Interprets the Data

Radiology doctors ask:

“Will this help more people get screened at all?”

They worry most about people who never get tested.
To them, a test that is “good enough” and easier to accept can save more lives.

Gastroenterology doctors ask:

“Will this ever delay or miss something important?”

They worry most about certainty and follow-through.
To them, a test that doesn’t diagnose and treat in one step feels risky.


Why This Isn’t About Right vs. Wrong

No one changed the facts.
No one ignored the evidence.

Each group:

  • Weighted the risks differently

  • Focused on different harms

  • Defined “good medicine” in different ways

That’s why the same data led to opposite conclusions.


What We Learn From This

This case teaches a simple but uncomfortable lesson:

Facts and numbers don’t speak for themselves. People speak for them.

Professional roles, daily experience, and values shape how evidence is understood.
That doesn’t mean anyone is acting in bad faith — it means medicine is human.

If we remember that, we’re less likely to assume one side has a monopoly on truth, and more likely to understand why disagreement can be sincere, informed, and inevitable.


 

AMA CPT: Important Clarifications to Confidentiality Policy

At the February 5-6, 2026 AMA CPT meeting, the first couple of hours were devoted to "new news" rather than code review.

There was an important discussion of new rules (or interpretations) of the confidentiality document.  I had noticed some revisions but didn't grasp the full meaning.

Basically, and I'm neither a lawyer nor a stenographer, these are my take-aways.   

  • NO NO NO:  Recording and/or pictures or screen shots are strictly off limits.   
  • But it is NOT restricted to convey one's opinions of what was said at the meeting.  
    • "The discussion of Tab 101 seemed to be favorable." 
Here are some more points:
  • It IS restricted to quote anyone who spoke by name.
  • It IS restricted to convey any screenshots of any AMA CPT copyright text shown at the meeting.   
  • It IS restricted to announce how the panel voted.
    • However, this is generally not possible because the panel votes silently and electronically on code measures.    
    • It IS allowed to say "Panel discussion seemed favorable [unfavorable]" based on one's opinion as an attendee.   
  • Some specific mini case studies were presented as examples of the above.
  • These topics were discussed in the first and administrative session on Thursday.
A summary of panel actions is posted a couple of weeks after each editorial panel meeting.

Sunday, February 8, 2026

OIG Report for Lab Spending 2024 Omits "81479" As Line Item

Header:  OIG Report on 2024 Lab Spending Omits 81479, the Biggest MoPath Code

In January, the OIG released a required annual report about year-to-year changes in laboratory test spending at Medicare - entry point here.  The annual report helps Congress monitor the impact of major CLFS payment changes which Congress legislated in 2014, affecting payments in 2018 and later.

  • Like recent past reports, the just-released report for CY2024 omits 81479 as a specific line-item code.    
  • The justification seems to be that 81479 is not a code priced on the CLFS.  (It rates $0 on the CLFS and is locally priced when paid.)  If you know what you're looking for, you can glean this from the text.

###

Here I compare top codes in the lab industry in the Fall release of 2024 data, and the OIG analysis just published.  The OIG analysis includes hospital labs, but since nearly all of the top codes are proprietary, they are not billed by general hospitals.   

The one code that could be billed by hospitals, 87798 (other pathogen, $442M), had very little hospital billing.  In the table below, that's $442M in the OIG report minus $437M in the earlier Part-B-only report, a delta of $5M.  That means about 1% of 87798 was billed by hospitals, and 99% of the $442M was billed by a few private labs in Florida and Texas.
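The hospital-versus-independent split is simple subtraction; as a minimal Python sketch, using the dollar figures cited in this post (in millions), the arithmetic looks like this:

```python
# Back-of-the-envelope split of 87798 spending between hospital and
# independent labs, using the two totals cited above (dollars in millions).
oig_total_all_labs = 442      # OIG report: hospital + independent labs
part_b_independent = 437      # earlier Part-B-only report

hospital_delta = oig_total_all_labs - part_b_independent
hospital_share = hospital_delta / oig_total_all_labs

print(f"Hospital delta: ${hospital_delta}M")    # $5M
print(f"Hospital share: {hospital_share:.1%}")  # about 1.1%
```

The same subtraction applies to any code reported in both data sets; a delta near zero implies essentially all billing came from independent labs.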

Another dubious code, 81419, epilepsy panel, was billed only by labs in Florida and Texas in Part B data, and OIG's data covering both hospital and independent lab billing confirms that 0% of the 81419 billing was from the nation's hospital genetic labs.


Reminder - Among Non-Proprietary codes in 2024, SEVERAL were dubious

First, here's what I got in fall 2025 for CY2024, when I looked at the data set of 812nn, 813nn, 814nn, and 815nn codes (omitting PLA codes and omitting microbiology codes, which include giant code 87798).


For the moment ignoring 87798 and PLA codes, 81479 was 42% of spending - nearly half - on non-proprietary genomic codes.

Of the other 7 codes shown above, 3 are flaky as hell in the elderly Medicare population - 81419 epilepsy, 81440 mitochondria, and 81443 rare inherited conditions.

In OIG's table of top-25 lab industry codes, OIG shows only one of these non-proprietary genetic codes, 81419, for which OIG reported $73M in billing from hospitals and independent labs together, whereas I reported $73M in billing from independent labs alone (thus we infer 0% from hospitals).

So we knew last fall that the dubious codes seemed to be billed only by labs in Texas and Florida.  Adding in nationwide hospital billing, as OIG does, contributes only negligibly or not at all to the dollar volume of the dubious codes.

Lab-Level Spending on 81479 Would Have Rated a Row in OIG Report

While 81479 billing is omitted by design in the OIG CLFS report, 81479 was substantial both in aggregate and when parsed down to specific recipients.  This is the breakdown for 2023 (the 2024 breakdown has not been released yet).

81479 Was Omitted from 2024 OIG Report (Jan 2026); Here, 2023 Data


Thursday, February 5, 2026

Legislation Delays PAMA, Sets New Base Year

Header:  PAMA Cycle Delayed to 2027, With New Base Year 2025

Background

PAMA is legislation passed in 2014, one part of which planned for the Clinical Laboratory Fee Schedule to be repriced every 3 years, based on surveys of payor prices in the insured marketplace.  However, the triennial process only happened once - payments from 2016 were surveyed in 2017 to set a new fee schedule in 2018.  The survey has been delayed annually since 2020, which means the 2019 base-year data was frozen out of the 2020 reporting cycle, then the 2021 cycle, then 2022, and so on.

What's New

On February 3, 2026, the new budget bill set new PAMA benchmarks (Section 6226).  The next reporting period will be May-June-July 2026, reporting data from 1H2025, to reset a new triennial schedule for 2027, 2028, and 2029.

The base year has been updated from 2019 to 2025.  This means hundreds of "new" PLA codes (from 2019 to 2025) will get repriced in the "next" (rather than second-next) PAMA cycle.

Only labs paid over $12,500 by Medicare in 1H2025 need to report.

There are some quirks (see Nerd Notes below).

The New Future

The plan would be for data from 2028 to be reported in 2029, setting a new schedule for 2030, 2031, and 2032.

For More

See news at ACLA.  ACLA continues to seek a larger-scale legislative fix, currently as legislation titled "RESULTS."

Nerd Notes - Take a Deep Breath

By March 2026, CMS will release a list of reportable lab codes, which will be all lab codes active during 1H2025 (for example, codes becoming active on or after July 1, 2025, will not be surveyed).

The first PAMA survey in 2017 included about 5 administrative MAAA codes and about 15 Category I MAAA codes (81501 etc), but no PLA codes.

It's massively different today.  This survey will include over 500 PLA codes, up to but not including those starting on or after July 1, 2025.

Only labs with over $12,500 in Medicare payments in 1H2025 are reporting labs.  

In the weeds.  If a lab had a PLA code in 1H2025, but it received less than $12,500 in Medicare payments (possibly $0), such a code will fall outside the PAMA rate-setting survey.  So, by regulation, CMS is supposed to crosswalk or gapfill it.  CMS would do that "crosswalk or gapfill on a code lacking PAMA data" in the rapid time window between the close of the reporting period (July 31, 2026) and the start of 2027.   

The easiest way for CMS to manage this period would be to take the (potentially several hundred) PLA codes with no PAMA data and simply crosswalk them, by fiat, to their current prices (set by crosswalk or gapfill between 2018 and 2025).  However, CMS could instead review the several hundred codes one by one via gapfill, or use some novel crosswalk (which seems infeasible).   

0551U was the last code made effective on April 1, 2025, and therefore "effective" during 1H2025 (at least during one quarter, Q2, of 1H2025).  0530U was the last code made effective on January 1, 2025, and thus active during the entire half-year of 1H2025.
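The effective-date cutoffs above reduce to a simple rule.  Here's a minimal, hypothetical sketch (the helper name and the cutoff logic are my own reading of the post, not CMS language) of how a code's effective date determines whether it falls inside the 1H2025 survey window:

```python
from datetime import date

# Hypothetical helper: was a code "active during 1H2025" for PAMA survey
# purposes?  Per the post, codes effective on or after July 1, 2025 fall
# outside the window; 0551U (effective April 1, 2025) squeaks in for Q2.
H1_2025_END = date(2025, 6, 30)

def in_pama_survey(effective: date) -> bool:
    """True if the code was effective at some point during 1H2025."""
    return effective <= H1_2025_END

print(in_pama_survey(date(2025, 1, 1)))   # 0530U: True (full half-year)
print(in_pama_survey(date(2025, 4, 1)))   # 0551U: True (Q2 only)
print(in_pama_survey(date(2025, 7, 1)))   # later codes: False
```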

##

81479

Remember that PAMA does not set prices for 81479, so the survey won't collect data on this code.  But 81479 ran $600M in CY2024, a substantial part of all MoPath spending.  Here's the most recent breakdown, for CY2023 (CY2024 data should release around May-June 2026).



##

One criterion for being a reporting lab is being paid more than $12,500 in 1H2025.  While exact data isn't public, for a back-of-the-envelope estimate you can look up lab payments for CY2023 and count how many labs were paid more than $25,000 in Part B CLFS across all of CY2023, which would approximate the result of the $12,500 threshold for 1H2025.
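As a sketch of that back-of-the-envelope approach, assuming you had a table of full-year CY2023 CLFS payments per lab (the lab names and dollar amounts below are invented for illustration):

```python
# Proxy for the PAMA "reporting lab" threshold: $12,500 in Medicare CLFS
# payments in 1H2025 is roughly $25,000 over a full year, so filter
# full-year CY2023 payments at the annualized amount.
cy2023_clfs_payments = {
    "Lab A": 1_250_000,
    "Lab B": 30_000,
    "Lab C": 18_000,   # below the annualized threshold
}

ANNUALIZED_THRESHOLD = 2 * 12_500   # $25,000 for a full year

likely_reporters = [lab for lab, paid in cy2023_clfs_payments.items()
                    if paid > ANNUALIZED_THRESHOLD]
print(likely_reporters)   # ['Lab A', 'Lab B']
```

This is only an approximation: it assumes a lab's Medicare payments are roughly even across the two halves of a year.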




Wednesday, February 4, 2026

Google & ICER (Humor)

I was on a call where someone mentioned the ICER health econ organization.  I hadn't heard of them for a few months, so I googled ICER.   

The first sponsored hit was an organization that helps you "fight" ICER health appraisals: "Challenge ICER Reports!"



Department of Justice and $50M Florida Genetics Case (Rolls Royce edition)

Header:  Florida Laboratory Owner Pleads Guilty to $52M Medicare Fraud Scheme Involving Genetic Tests

Read all about it:

https://www.justice.gov/usao-sdfl/pr/florida-laboratory-owner-pleads-guilty-52m-medicare-fraud-scheme-involving-genetic