Friday, February 13, 2026

CMS Updates PAMA Reporting Information - Timelines are Short

 PAMA lab price reporting, for CMS median rate-setting, has been frozen since 2019.   Based on section 6226 of the February 3 federal budget, that is changing fast.

Labs will report 1H2025 commercial payment levels in May-June-July 2026.   CMS will publish consolidated median prices in the fall, which will be active on January 1, 2027 for three years.  

When a new PAMA price is lower than the current 2026 price, the 2026 price can fall by no more than 15% per year (until it eventually reaches the new PAMA price).
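For intuition, here's a minimal sketch of that phase-down arithmetic, assuming the 15% annual cap described above; the dollar figures and the function are hypothetical, not CMS's actual rate-setting logic.

```python
# Illustrative sketch only: step a current CLFS rate down toward a lower
# PAMA-derived median, cutting at most 15% per year.

def phase_down(current_rate: float, pama_rate: float, years: int, cap: float = 0.15):
    """Return the payment rate for each year, reduced by at most `cap` annually."""
    rate = current_rate
    schedule = []
    for _ in range(years):
        rate = max(pama_rate, rate * (1 - cap))  # never drop below the PAMA median
        schedule.append(round(rate, 2))
    return schedule

# Hypothetical example: a $100.00 test with a new $60.00 PAMA median
print(phase_down(100.00, 60.00, years=4))   # [85.0, 72.25, 61.41, 60.0]
```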

Track CMS instructions here:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting


Nerd Note

Some insights gleaned from the last cycle, 2017 data, short blog here.

Thursday, February 12, 2026

Podcast Scan: Owkin's Podcast with Jorge Reis-Filho

Owkin is a French company on the cutting edge of genomic/cellular/biomarker/AI discovery. They have 63 videos on YouTube, many of them video podcasts. 

In this one, from February 2026, CEO Thomas Clozel talks with Jorge Reis-Filho, who leads the enterprise AI unit at AstraZeneca.

#####

Chat GPT 5.2

The episode’s core idea is that time should be treated as a first-class biomedical variable: models should learn from trajectories, not just snapshots. The guest argues that integrating multimodal data (omics, spatial/pathology, clinical) through a semantic layer, plus temporal embeddings, enables more informative predictions—especially for oncology regimens where sequence and timing matter. Spatial biology’s promise remains under-realized mainly due to insufficient scale and diversity of datasets, motivating consortia.
The conversation stays practical: AI won’t bypass validation, near-term gains are likely in clinical development, and long-term breakthroughs require trials, culture change, and shared infrastructure.

#####

Owkin’s Podcast, Episode Notes:
When “Time” Becomes a First-Class Biomarker

The episode opens the way a lot of real conversations now start: a jet-lagged hello in San Francisco, an offhand confession about fasting, and a quick detour into a very American object lesson—Function Health and its borderline-comic volume of blood draw. It’s an oddly perfect cold open for what follows, because the subtext is modern biomedicine in miniature: lots of measurements, imperfectly interpreted, chasing something that feels like truth.

From there, the conversation snaps into focus around a theme that’s both technical and strangely intuitive: we’ve spent decades making “static” predictions in biology, and we may be approaching the point where that looks as antique as measuring a movie by a single still frame. The guest’s core claim is that the next leap won’t come from “more data” in the generic sense, but from integrating modalities through a semantic layer and embedding time as a native dimension—so the models can reason not just about what is, but where it’s going.

Directionality beats the “right number”

One of the most grounded moments is the wearables example. The guest doesn’t pretend Whoop (or VO₂ max estimates) are perfectly accurate; he basically says the opposite. The point is that in wellness and (eventually) medicine, directionality and time series can matter more than absolute values. If your wearable’s VO₂ max is “wrong,” but the trajectory is consistently improving or deteriorating, that trend can still be meaningful—especially once models treat time as more than a footnote.

That framing is useful because it politely calls out a familiar failure mode in biopharma analytics: we love single timepoint snapshots because they’re tidy and publishable, but living systems are rarely tidy. The episode argues—without hype—that temporal embeddings change what can be learned, because biology isn’t just state; it’s dynamics.

Multimodal + temporal + semantic: the three-legged stool

The guest comes back repeatedly to a three-part architecture:

(1) Multimodal integration (omics, pathology/spatial, clinical, outcomes, etc.)
(2) Temporal embeddings (timepoints, sequences, directionality)
(3) A semantic layer that makes the modalities interoperable so the models can “read” across them

It’s a serious point disguised as a conversational riff: without a semantic layer, you can assemble an impressive warehouse of data and still end up with models that behave like tourists—lots of photos, little comprehension.

What’s notable here is the ambition level. They’re not describing a modest dashboard upgrade; they’re describing a shift in what counts as an “input” to biological inference. The “semantic layer” idea is really about translation—turning incompatible datasets into something a reasoning system can navigate, query, and generalize from.

Static targets vs rational regimens

Where the conversation gets especially relevant for oncology strategy is the argument that future breakthroughs may be less about finding one perfect target and more about building combination regimens with rational sequencing.

The guest makes a practical observation: historically, the industry has been constrained by the need for large enough populations to justify development, and target selection has leaned on what was measurable or conveniently “expressed.” But if the future is combination therapy (and it already is, in many areas), then sequence becomes part of the biology. Two drugs need not be given simultaneously; the order and the timing may matter as much as the pairing. That’s where time series thinking stops being a modeling flourish and becomes a clinical development design principle.

Spatial biology: “technology of the year”… still waiting for its moment

A particularly interesting stretch is the guest’s take on spatial biology. He acknowledges the field’s status—celebrated, funded, lionized—and then lands a critique that will resonate with anyone who has watched a hot platform plateau: the unrealized potential is partly a scale problem.

Spatial data are granular; granular data require robust feature extraction; robust feature extraction increasingly points toward foundation-model-like approaches. But foundation models demand volume and diversity, and (right now) spatial datasets often don’t have it—at least not at the “one or two orders of magnitude more” level the guest argues is needed.

That’s where the episode’s collaborative instinct shows up. Instead of “my platform will win,” the guest leans into consortia and shared infrastructure—including mention of a multi-center effort (“MOSAIC”) that’s already trying to push scale. The thesis is blunt: if we want models that generalize, we need data that generalize.

The near-term promise: not science fiction, but clinical development plumbing

The conversation also avoids a common trap: pretending the biggest value is always the most glamorous. The guest draws a line between:

  • Near-term (2–3 years): measurable impact in clinical development, especially patient selection and multimodal integration

  • Longer-term: novel target discovery and deeper translation, because—even if you automate everything—you still have to run trials and validate hypotheses

That distinction matters. It treats AI less like a magic wand and more like an engineering discipline, where certain bottlenecks yield first and others remain stubbornly physical, regulatory, and time-bound.

“No poetic license to bypass validation”

One of the cleanest, most quote-worthy assertions in the episode is that using cutting-edge AI does not grant permission to skip validation. The guest cites biomarker guidance (he mentions ESMO) to underline a pragmatic hierarchy: explainability is desirable; independent validation is essential.

That’s a subtle but important stance. It signals to a scientifically literate audience that the speaker isn’t selling vibes. He’s describing a world where models may detect nonlinear patterns “above and beyond” classic causality hunting—but where translation still lives or dies on reproducibility, benchmarking, and external datasets.

Pharma’s strategic tension: quarterly gravity vs foundation-building

The interviewer pushes on a real executive dilemma: public-company time horizons reward visible pipeline value quickly, while genuine AI infrastructure and data ecosystems take longer. The guest’s answer is essentially: do both—but be honest about which parts pay back when.

What’s refreshing is that the conversation names a missing piece in many pharma AI narratives: not “AI will change everything,” but AI-enabled innovation that creates new biology and new pipeline value in the next 2–4 years—not just speedups of existing workflows. That’s a higher bar than “we automated literature review,” and it implicitly challenges leaders to demand examples that unlock something biologically non-obvious.

Who wins in an era of “democratized intelligence”?

The guest frames the current moment as an inflection point: intelligence used to be scarce; now it’s increasingly democratized. From that premise, he proposes a winner’s formula that feels almost annoyingly sensible:

  • Domain knowledge

  • Technical expertise

  • Infrastructure (compute, models/agents)

  • And, most importantly: data

He also adds the part that’s easiest to say and hardest to do: success requires solving people, process, and culture, not just technology. That line will land with anyone who has watched “AI transformation” fail because the org chart didn’t transform.

A physician’s urgency is not a slogan

Late in the episode, the tone shifts in a way I appreciated. The interviewer asks whether being a physician changes how the guest leads. The guest answers that the urgency is different: patients don’t care if the drug was “developed by AI”; they care that it arrives faster, works better, and combines efficacy without multiplying toxicity. He contrasts big-number biostatistics with the irreducible fact that each patient has a story.

It’s not sentimental, and it’s not performative. It’s a reminder that “outcomes” are not an abstraction—especially for clinicians who’ve had to look patients and families in the eye while medicine did what it could and then… didn’t.

Recruiting for the mission—and for “thinking through AI”

There’s also a practical leadership segment about hiring. The guest says, plainly, that their oncology ambition is to eliminate cancer as a cause of death (a huge statement, but presented as a recruiting filter rather than a marketing tagline). More interestingly, he distinguishes between people who “use AI” and people who can reimagine processes through AI—“thinking through AI, not with AI.”

That phrasing is a bit gnomic, but the intent is clear: AI isn’t a bolt-on tool; it’s a lens that changes what workflows should look like in the first place.

What he’s betting on

When pressed for “next breakthroughs,” the guest offers two bets:

  1. In oncology: data-driven rational regimens with markedly better response rates and durability—especially through better understanding of immune memory, fueled by spatial data plus temporal assessments (not just spatial snapshots).

  2. Outside oncology: lifestyle therapeutics (weight management and related agents) as genuinely transformational—impacting inflammation, disease prevalence curves, and what population health looks like by 2035.

The interviewer adds an important counterweight: the future may improve in some dimensions while worsening in others (environmental pressures, microplastics, shifting cancer epidemiology). The exchange avoids both techno-utopianism and doom; it’s more like: the prevalence table will look different—and we should plan accordingly.

The quiet takeaway

If I had to distill the episode into one unflashy, high-consequence point, it’s this: time is becoming a first-class citizen in biomedical AI, not an afterthought. Once you embed temporality—across modalities, across spatial context, across patient trajectories—you’re no longer asking “What is this?” but “What is this becoming?”

That’s a different kind of question. And it demands a different kind of data strategy, partnership strategy, and validation discipline than the industry has typically shown.

Owkin’s series is at its best when it holds two ideas at once: ambition about what’s possible and sobriety about what it takes—data scale, semantic interoperability, external validation, and the slow, stubborn reality check of clinical trials. This episode sits squarely in that lane.

CMS Gapfill: Spring 2026 Update

Each fall, CMS determines which of the 100-odd new lab codes are crosswalked and which are "gapfilled," meaning priced by the MACs - for this cycle, in 1Q2026.  

See Change Request CR14312 (Transmittal 13514), December 5, 2025.  By my count, in that public document, 40 codes are slated for contractor gapfill right now.


How They Do It - Novitas/FCSO MAC and NGS MAC 

The NGS MAC and the Novitas/FCSO MACs generally post a webpage notifying providers of the new gapfill codes and requesting information.  Novitas in recent years has used a "SurveyMonkey" (or similar) online web form for data entry.  The MACs ask for CMS-specified gapfill information, such as test purpose, charges, other payers' prices, resources required, etc.  

I believe the NGS MAC posts the list of PLA codes and provides an email address for receipt of your materials.

How They Do It - MolDx MACs (Palmetto, Noridian, CSG, WPS)

At least in past years, nothing has been posted publicly at MolDx.  One of the medical directors there has told me that these gapfills could be completed with their internal expertise as MolDx lab experts, and that they would reach out to a lab about its code only if and when MolDx found it necessary.   

In the past several years, all the MolDx MACs have offered the same price for every code being priced (molecular or not), and the NGS MAC has matched the MolDx price for every code.  In recent years, the Novitas/FCSO gapfill prices have tracked close to the MolDx ones - sometimes identical, sometimes 20% higher or lower.

How to Gapfill (Bruce's View)

At a recent meeting, I heard a CMS medical director comment that gapfill pricing was very difficult, and that it was hard to get the necessary information.  I can see that.   For what it's worth, leveraging my decades of experience, here are some ideas.

1.  Low-hanging fruit.  Some of the gapfill codes will be very similar to an existing priced code, and a price can be matched.   (Gapfill rules do allow use of similar-test prices in this way at the gapfill stage, even if the codes were not crosswalked the prior autumn.)

2.  Take what comes.  Some companies know what they're doing.  At least some companies will know the MACs are gapfilling and will submit materials.  Use that.

_______

3.  Check whether the lab presented at the June public comment meeting.  (Agenda is online at the CMS CLFS page.)

4.  July Advisors.  Check the CMS website for the July advisory panel meeting of the prior summer.  See what crosswalks the advisors recommended.  (If they recommended gapfill, it doesn't help you, but at least you checked.)

_______

5.  Reach out to the lab.   I see two angles:
(A) Email an information box or contact box.   (B) Call and ask for the CEO.   (This assumes the missing code is probably from a small company, not Roche or Quest).   

Log the above by date.  If you haven't heard back in 7 days, go to #6.

6.  Look at the lab's website and see if it lists publications; if so, pull down one or two and use the methods section.  Work from that.  

7.  Similar to #6, but layer on AI.  Use Chat GPT and ask it to use the internet to find the methods used for [test name] [company name].  However, you're responsible for checking what comes back against the original source, and place no reliance on the AI.   It's just pointing you to a public source you didn't come across on Google, and it's up to you from there. 

_____

Obviously you can put 6&7 before 5 if you prefer. 

_____

8.  If NONE of the above work, then you are stuck with the test name and code text.   

You have to pick a reasonable price from that.   Remember, you may hear more in the summer 2026 public comment period after you release your draft price.  My advice: your result won't be any more accurate at this stage whether you work for 10 minutes or 60 or 120.  So give it 10.

___

For a blog this week, I needed a link to where CMS defines terms like "effective date" and "implementation date."   I had Chat GPT look for it rather than hunting myself.  It did a perfect job of finding something ultra-obscure in a few seconds.  I was basically using it as a nuclear-powered Google.  Of course, I had to verify that the obscure document it found was right.   



Amal Thommil Updates Decibio MRD Coverage (February 2026)

 Amal Thommil at DeciBio updates us on coverage for minimal residual disease testing (MRD).

See his chart here.




AI VERSION: My Heart Skipped a Beat, CMS Proposes X1-X2 Modifiers for NCD 90.2 (NGS in Cancer)

Below, Chat GPT 5.2 rewrites a blog I wrote "by hand."
###

Header:  CMS proposes X1/X2 modifiers to encode cancer stage on NGS claims—framing it as ICD-10 “clarity,” but potentially creating more complexity than it resolves.

###

My Heart Skipped a Beat: CMS Proposes X1/X2 Modifiers for NCD 90.2 (NGS in Cancer)

In 2017-2018, CMS developed and issued NCD 90.2 for next-generation sequencing (NGS) in cancer. It was not an accidental document. Senior HHS leadership got involved (the Trump 1 administration).  The policy was designed to do something specific and modern: create a clean, predictable incentive for FDA approval of comprehensive genomic profiling (CGP) tests.

  • The Trump HHS made the lab industry a bargain. 
    • If an NGS test received FDA approval as a companion diagnostic, it would be covered nationally and immediately—on the day of FDA approval
    • If the label expanded (new gene, new drug), coverage expanded the same day. No MAC-by-MAC delay. 

No prolonged local review cycles. A national coverage rule designed to match the pace of precision oncology.

That was the theory.

In practice, implementation has been anything but frictionless. The 14-day rule delays inpatient surgical specimens from being tested promptly. If testing waits for a later oncology visit, even more weeks can pass before genomic results guide therapy. Patients sometimes start urgent but nonspecific chemotherapy during these delays. And while the NCD promises its readers coverage “from the day of FDA approval,” operational reality often looks far worse. [See my blog about erroneous MAC delays in implementing new CMS NCD coverage.]

So one might reasonably ask: if implementation is already uneven, how do we improve it?

At the December 2025 HCPCS meeting (Agenda Item 41), CMS proposed creating two new Level II modifiers to “implement” NCD 90.2:

  • X1 – NGS for early-stage cancer (Stage I–II)

  • X2 – NGS for advanced (Stage III–IV), recurrent, relapsed, refractory, or metastatic cancer

CMS’s summary explains the rationale. NCD 90.2 does not cover early-stage cancers, while ICD-10 diagnosis codes do not distinguish stage. The proposed modifiers would “ensure greater clarity,” align submission with coverage requirements, and “maintain data integrity for oversight and claims analysis.”

That seems at first like a coherent administrative theory.  But it deserves scrutiny.  I'd argue it improves NONE of the real problems, and introduces a half-dozen new ones.

https://www.cms.gov/files/document/b2-2025-public-meeting-agenda-december-17-18-2025.pdf


The decision isn't out yet, but could come out any week, with other non-drug HCPCS narrative decisions (see here, scroll down, right column).


CMS’s Framed Problem: ICD-10 Lacks Stage

CMS’s core claim is that ICD-10 does not reliably encode cancer stage, creating ambiguity in claims submission. That is true. ICD-10 codes generally don't specify Stage I vs Stage IV.

But this has been true across oncology for decades. And yet:

  • Chemotherapy billing does not require a Stage III modifier.
  • Radiation therapy billing does not require a Stage IV modifier.
  • Surgical oncology claims do not append stage-specific modifiers.

Why is one lab test uniquely in need of a stage-specific HCPCS architecture?

NCD 90.2 has been in effect for eight years. CMS has not tried to assert that ICD-10 stage ambiguity has produced systematic payment error under this NCD. MACs already adjudicate based on diagnosis codes, documentation, coverage articles, and other clinical indicators. The absence of stage coding in ICD-10 is not a new discovery.  Deniable claims for Stage I-II cancers are already supposed to carry a modifier, GZ.

If the problem were widespread miscoding or improper payment, CMS would typically cite the data. The proposal cites none.


Structural and Practical Problems

Even if one accepts CMS’s framing, the proposed solution raises multiple operational and policy concerns.

1. Staging does not map cleanly across all cancers.
Leukemias, lymphomas, and many brain tumors do not follow AJCC Stage I–IV schemas. How are those claims coded? These problems have been elevated to CMS since 2018 with no change.

2. Adjuvant Stage II disease is not simple.
Under MolDx policies, certain Stage II colon and breast cancers qualify for adjuvant chemotherapy decision-making. Clinically, per MolDx, these cases behave as “advanced” for therapeutic purposes. Do they receive X1 (for stage 1 or 2) or X2 (for MolDx-defined "advanced")?

3. Recurrent disease straddles both modifiers.
A recurrent Stage II melanoma may still technically be Stage II but clinically advanced due to "recurrence."  X1 or X2? Both?  The modifier language introduces interpretive ambiguity rather than eliminating it.

4. Modifier stacking confusion (X1/X2 with GA/GZ).
Most Stage I–II cancers are non-covered under NCD 90.2. Today, such claims would typically involve:

  • GZ (expected denial, no ABN), or

  • GA (ABN on file).

If X1 is appended, does that replace GZ?
Are both required?
Is X1 informational only?

The proposal does not clarify interaction with established waiver-of-liability logic.

5. RNA sequencing remains excluded under the NCD.
NCD 90.2 excludes RNA sequencing. Yet RNA-based profiling is increasingly central to oncology practice. X1/X2 do nothing to modernize this scientific mismatch when NGS is used for RNA-SEQ.

6. MRD asymmetry.
NGS-based minimal residual disease (MRD) testing would fall within this modifier framework. Increasingly common digital PCR MRD would not. Same clinical intent, different molecular platform, different coding logic.

7. No demonstrated adjudication crisis.
NCD 90.2 has functioned nationally since 2018. CMS has not shown systemic payment errors attributable to stage ambiguity in claims.

8. Administrative burden without therapeutic benefit.
Every new modifier introduces:

  • Education requirements

  • Audit exposure

  • Software updates

  • Claim edit revisions

  • Mistakes and confusion

None of these accelerate patient access to precision therapy.


A Curious Coding Note

A scan of existing HCPCS modifier tables suggests that “X1” and “X2” have appeared in other contexts (e.g., “broad” and “narrow” service distinctions in MIPS). Repurposing modifier codes can generate unintended confusion across systems and historical data.


The Larger Policy Context

NCD 90.2 was designed as a major new pro-innovation lever. FDA approval would trigger immediate national coverage. That principle aligned regulatory science with reimbursement policy.

If implementation delays undermine that principle, the solution lies in:

  • Addressing MAC lag and other causes of delayed coverage,

  • Revisiting the 14-day rule’s impact on genomic testing,

  • Updating the NCD to reflect RNA-based oncology and other staging systems,

  • Clarifying recurrence and adjuvant logic within coverage text.

Just appending X1 and X2 to claims does not accelerate coverage. It does not modernize the NCD. It does not resolve the NCD's lag behind scientific evolution.  And presumably 99% of the claims would carry X2, anyway.

It encodes cancer stage at the claim line level for one specific technology while leaving the broader oncology billing ecosystem untouched.

That asymmetry and the unintended consequences are what made my heart skip a beat.

If the goal is clarity, CMS should articulate the measurable problem it seeks to solve. If the goal is simply data capture for its own sake, say so. But if the goal is improving patient access to precision oncology, and solving large real problems, this proposal appears to miss the target entirely.

 

 


My Heart Skipped a Beat: CMS Proposed X1, X2 Modifiers for NCD 90.2 NGS in Cancer

CMS has an NCD, 90.2, which was designed in 2017 (with help from senior policy folk at HHS) to provide an incentive for FDA approval of then-new NGS testing.   

  • FDA-approved NGS CDx tests were covered from the day of FDA approval, per the NCD
    •  And any updates (new gene, new drug) were ALSO covered from the day of approval.

But CMS has a hodge-podge of barriers.   The 14-day rule delays inpatient surgical cases from getting comprehensive genomic profiling (CGP) results for weeks.  If the oncologist has to order the test when he or she sees the patient, well after the surgery, that adds additional weeks.   Too many patients get urgent but nonspecific chemo during the undue lags.    Even worse, MACs may not start new coverage on "the day of FDA approval" until a year or more later - another "OMG" moment.   (See here.)

How could we make this mess worse?

At the December HCPCS meeting, CMS proposed new modifiers X1 and X2, to "implement" the NCD.  I only see new problems with a solution that solves no existing problems.

Details follow.

##

Here's the PDF of the December HCPCS meeting:

https://www.cms.gov/files/document/b2-2025-public-meeting-agenda-december-17-18-2025.pdf

Here's page 96, agenda 41, the item that impacts NCD 90.2:


Decision Expected January 2026

Basically, CMS proposes to create a modifier X1, for Stage 1-2 cancers, and a modifier X2, for Stage 3-4 cancers.   At the December meeting, HCPCS staff said they plan to go ahead with this, expecting to release a decision in January 2026 and implement the coding on April 1.




But - Decision Not Out Yet

The promised January 2026 decision isn't out yet.  Looking at past dates, it may be that non-drug decisions come out more slowly (several months) than drug decisions (several weeks).

As of 2/12/2026, I see the December 2025 final drug HCPCS narrative decisions, but nothing yet in the separate column for December 2025 non-drug decisions.  

Curiosity: X1, X2 Already In Use?

On the complete tables of all existing modifiers, I did notice that X1 and X2 seemed to be already in use for something called "broad" and "narrow" services.  Here.

PROBLEMS WITH X1-X2 as PROPOSED

  • Nothing Good: This accomplishes nothing, and the problem it targets is not known to exist.
  • Staging Glitches:  The X1-X2 staging language doesn't apply to whole ranges of cancers (leukemias, lymphomas, and brain cancers, to name just three examples).
    • These glitches apply to both the NCD language and the X1-X2 language; groups have complained to CMS for a decade.
  • Adjuvant Stage 2 Well Known:  MolDx covers NGS for adjuvant therapy decisions in select Stage 2 cases of colon and breast cancer (to name just two examples), on the logic that if a cancer needs adjuvant chemotherapy, it is "advanced."
  • Fit Both Codes: Some cancers will fit both codes (a recurrent skin cancer may be Stage 1 or 2, not Stage 3 or 4: X1 = Stage 2, X2 = recurrent).
  • RNA SEQ: The NCD excludes NGS used for RNA-seq, which is increasingly common.  The modifiers make no such allowance.
  • ddPCR:  MRD cases using NGS fall under these modifiers, but digital PCR MRD, which is increasingly common, doesn't.
  • Like the Useless KX Modifier: This accomplishes nothing.  It's like the KX modifier, often used in DME, which says nothing except to affirm that the claim complies with rules.  (General instructions ALREADY require that a submitted service comply with rules.)   A fraudulent provider would use -KX as easily as a legit one would.
  • X1 and GZ: Almost any claim for Stage 1 or 2 is nonpayable under the NCD, so it should already carry a GZ modifier (this service or claim is not reasonable and necessary).  Does CMS need both GZ and X1?  Does X1 mean GZ isn't needed?  Where does it say that?
  • Side note:  it looked to me like modifiers X1 and X2 were already in use for something else.


This May Save Your (Reimbursement Nerd) Life Someday: CMS Defines CR Implementation Date vs Effective Date!

Header:  CMS issues MAC instructions (e.g. for NCDs) through "Change Request" (CR) transmittals, which are also public.  Every new requirement or coverage has an "effective date" (e.g. May 1) and an "implementation date" (e.g. July 1).   Contractors can code to the new instruction as soon as they are able, but they MUST code to it by the "implementation date" - and the policy applies to claims as of the effective date.

Today, a client got a MAC email saying the MAC understood the effective date, but could not observe it until the far-future implementation date.  

This is just wrong, but it's easy to get confused.  There's a published rule.  See a CMS instruction (from 2011, still on the books in Manual #01, Ch 07, Part 50) that is 100% clear.  Use it whenever you need to.

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf
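As a toy illustration of the distinction (my sketch, not CMS logic; the dates are hypothetical), the effective date governs which dates of service the new policy applies to, while the implementation date is only a systems-readiness deadline for the contractor:

```python
from datetime import date

# Hypothetical dates echoing the header example above
EFFECTIVE_DATE = date(2026, 5, 1)        # policy applies to dates of service on/after this
IMPLEMENTATION_DATE = date(2026, 7, 1)   # deadline for contractor systems to be updated

def new_policy_applies(date_of_service: date) -> bool:
    """Whether a claim falls under the new policy depends only on the effective date."""
    return date_of_service >= EFFECTIVE_DATE

def contractor_out_of_compliance(systems_ready: date) -> bool:
    """The implementation date is a deadline for readiness, not a start date for the policy."""
    return systems_ready > IMPLEMENTATION_DATE

print(new_policy_applies(date(2026, 5, 15)))           # True - a May 15 service falls under the new rule
print(contractor_out_of_compliance(date(2026, 6, 30))) # False - edits installed June 30 are on time
```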





###

Chat GPT provided the correct reading of CMS rules, and upon a second query, found and provided the exact internet document PDF.

###

https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf

https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf

##

Chat GPT Answer

##

CMS explicitly distinguishes Effective Date from Implementation Date in its transmittal instructions. In CMS Manual System Rev. 66 (CR Definitions), CMS states that the implementation date “is the date by which Medicare fee-for-service contractors and shared system maintainers shall apply all changes” and when all necessary updates must be “completed and operational,” making clear that it is a deadline for systems readiness, not a permission trigger

 (https://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/downloads/R66GI.pdf). By contrast, CMS defines the effective date as the date the new policy becomes active and from which contractors must apply the new rules to claims (often specified as the date of service). This same distinction is reflected in the active Internet-Only Manual structure governing contractor operations (Pub. 100-04, Chapter 7 framework and related transmittal usage), where effective dates govern claims applicability and implementation dates govern contractor system updates

 (https://www.cms.gov/regulations-and-guidance/guidance/manuals/downloads/ge101c07.pdf). In short, CMS policy architecture supports your long-standing interpretation: the effective date controls when the rule applies; the implementation date is the last date by which the contractor’s edits and processing systems must be fully in place.

##

I worked as a medical director 2004-2008, before this particular 2011 publication, but it was the same rule in 2004.

##

This should be a public link to my Chat GPT dialogue:

https://chatgpt.com/share/698dff2e-1d2c-8005-aef9-8a46b4ad06cd



Tuesday, February 10, 2026

Excellent MedTech Podcast: From R&D to FDA to Commercialization (Dr Doug Kelly, FDA)

Heard an excellent podcast today on MedTech Talk, featuring Douglas Kelly MD MBA, a physician and venture capitalist who served from 2020 to 2025 as Deputy Center Director for Science at CDRH.

Find the podcast online at this link (recorded March 2025):

https://medtechmvp.com/media/medtech-talk-podcast/douglas-kelly-on-the-value-of-innovation-and-breakthrough-designation


See Dr Kelly's Linked In page here.

And see his recent re-post of Niko Hems' and Kathryn Bowsher's Linked In articles, on why so many AI-based startups crash, sometimes after pretty massive funding.

(And for more on his theme of the value of public service by working places like FDA, see Michael Lewis's book "Who is Government? The Untold Story of Public Service.")

##

Chat GPT 5.2 summarizing the one-hour podcast:

##

In this candid and substantive Med Tech Talk episode, Justin Klein, MD (Vensana Capital) interviews Douglas Kelly, MD, former Deputy Center Director for Science at FDA’s CDRH, for a behind-the-scenes tour of how device regulation actually works—and why so many promising technologies never reach widespread patient access.

Kelly traces his path from venture investing to FDA, recounting the moment Jeff Shuren essentially challenged him to “stop complaining and come help fix it,” and uses analogies to explain how outsiders routinely misunderstand what FDA needs and how decisions get made. 

He then maps the internal structure of CDRH (the offices of health technology, review staff, policy, science/technology, emergency preparedness), emphasizing that the range of devices—spanning software, AI, implantables, diagnostics, and more—requires CDRH to be “intellectually promiscuous,” with staffing and communication challenges that don’t resemble standard pharma-style review. 

The core of the conversation is Kelly’s strategic rationale for the TAP program (Total Product Life Cycle Advisory Program): not as an “FDA fast lane,” but as a deliberate attempt to replace the industry’s serial, disconnected failures (FDA authorization → CMS coverage → CPT code → payment → adoption) with an earlier, parallel, relationship-driven process that aligns evidence expectations across FDA, payers, and professional societies—so companies can fund the right studies once and investors can price risk realistically.  He actually took people to watch events like CPT and RUC meetings to get a downstream view of the commercialization of devices.

Along the way, Kelly offers a striking metric—only about 1.5% of breakthrough-designated efforts, in his interim analysis, achieve durable commercial success or broad adoption—arguing that the system burns enormous reviewer time on products that never see daylight, and that the next era needs clearer priority-setting and better FDA–CMS alignment rather than rigid “stay in your lane” silos. 

The discussion closes on today’s pressures—hiring constraints, the fragility of mission-driven expertise, and the risk that indiscriminate staffing cuts and return-to-office mandates will undercut precisely the industry-savvy talent needed to modernize review (including AI)—but ends on an optimistic call to action: Kelly urges more private-sector leaders to do public service at FDA, because the long-term fix is shared understanding, durable relationships, and evidence pathways that get genuinely valuable innovations to patients without compromising safety.

Big News for Digital Pathology: AMA Appears Set to Use CATEGORY III for Whole Slide Imaging / AI

It's no secret that AMA CPT PLA made a half-dozen codes for whole-slide imaging (WSI) 3 to 4 years ago, and then seemingly imposed a "moratorium" on more PLA codes based on WSI.  There it has sat since Summer 2024.

It's also no secret that last fall (public agendas and reports), the AMA CPT considered a MAAA application for WSI, and rejected that, too.

This just couldn't continue, as exciting things are happening with WSI and AI, even with "plain" H&E slides.  See the new Nature Medicine paper by Li et al on predictions in lung cancer, here.

Last Week's AMA:  H&E, WSI, AI, & Cat III

At last week's AMA CPT meeting, AMA had a long discussion and seemed prepared to accept WSI-AI services in the Category III category.   Getting to that place was kind of dizzying, involving detours around whether H&E AI-based services were "codifiable," what AI services do and don't require physician work, and how evolving use of AI in digital pathology would intersect with a moving target: revisions to how CPT as a whole handles different types of AI services.

So the take-home is:  look for distinct AI-WSI diagnostic services to land under Category III coding rules, for an indefinite time period.

###

AMA Updates Confidentiality Interpretations

At last week's meeting, an hour was devoted to new AMA rules, including confidentiality/nondisclosure.  

Some things are verboten:  

You can't record the meeting.   You can't take photos (or screenshots) of CPT text or other slides.   You can't type out and replicate CPT codes shown on slides.   You can't quote anybody's statements by name.  You can't say whether any particular tabs (items) passed or failed the committee vote - although you probably wouldn't know, since those are secret, silent electronic ballots.

However, AMA also discussed what is allowable.  

You can repeat statements or positions voiced, as long as they are not attributable to any individual.   You can make comments based on your own opinion (example: "I thought the discussion of Tab 105 went well.")   Based on the examples on slides and the AMA's discussion, I feel it was well within those boundaries to type my remarks above - letting us know that after a lot of debate and detours, the category of WSI-AI seems now to fall in the domain of Category III.


Monday, February 9, 2026

Medical History: Specialties Draw Opposite Conclusions from the Same Data

I just came back from the AMA CPT meeting, where dozens of topics are debated "pro and con" by opposing stakeholders.

I was reminded that I attended the 2008 CMS MEDCAC on whether to cover screening CT colonography (aka "virtual colonoscopy").  My memory is that the numerous radiology speakers were strongly in favor, arguing the service had big public health benefits, while the numerous gastroenterology speakers were highly concerned that the data were premature, that many risks were too poorly understood, and that coverage would be a bad, bad idea.

Find the 2008 MEDCAC webpage here.  The 2009 NCD is here.

  • Follow up.  CT colonography became a covered screening benefit in January 2025, based on regulatory changes in 2024 rulemaking for PFS CY2025 - here.

Below, I fed the transcript and speaker list to Chat GPT and asked for a review.  It produced the review below in less than 30 seconds.

After the read-out from Chat GPT 5.2, I asked it to write an essay about the day as a case study.  I print the case study first, and the (earlier) AI analysis  second.

###

 

A Case Study in Interpretive Pluralism: 

CT Colonography, Medicare, and the Social Life of Evidence

When historians of health policy look back at Medicare’s long and circuitous path toward covering CT colonography (CTC) for colorectal cancer screening, they may be tempted to frame it as a simple story of “evidence catching up with policy.” That narrative would be comforting—and largely wrong. The more instructive reading is that this episode reveals how the same empirical record can support sharply divergent conclusions, depending on professional identity, institutional incentives, and rhetorical framing. CT colonography is not merely a story about imaging technology; it is a case study in how evidence-based medicine operates in the real world, where evidence is interpreted, not discovered, and where policy decisions are forged through persuasion as much as data.

From the standpoint of raw evidence, the CT colonography literature circa the late 2000s was neither thin nor obviously defective. Large multicenter trials existed. Sensitivity for large adenomas and cancers approached that of optical colonoscopy under controlled conditions. Complication rates were demonstrably lower. Modeling studies suggested population-level benefits if screening adherence increased. Yet these same facts were marshaled to argue for coverage, non-coverage, and coverage-with-evidence-development, depending on who was speaking. This divergence was not accidental, nor was it merely cynical. It reflected different professional priors about what constitutes “sufficient” evidence, what risks matter most, and which uncertainties are tolerable in public programs.

Radiology-aligned stakeholders interpreted the evidence through a population-health and access lens. To them, CT colonography was a screening test, not a therapeutic intervention, and its value lay less in per-lesion perfection than in its ability to bring unscreened patients into the system. Missed small adenomas were contextualized against the known miss rates of optical colonoscopy itself. Radiation exposure was reframed as negligible in an older population. Extracolonic findings—often criticized as a source of downstream cost—were rhetorically repositioned as serendipitous early diagnoses. In this telling, the evidence demonstrated that CTC was “good enough,” safe enough, and mature enough to merit coverage now, with quality standards layered on over time.

Gastroenterology and endoscopy stakeholders looked at the same studies and reached nearly opposite conclusions. Their interpretive frame was clinical pathway integrity. From this perspective, uncertainty about the natural history of small polyps was not a minor gap but a central flaw. A screening modality that deferred or fragmented definitive intervention was seen as destabilizing rather than enabling. Referral thresholds, follow-up adherence, and the risk of false reassurance loomed large. Even extracolonic findings, celebrated by radiologists, were cast as potential sources of overdiagnosis and patient harm. Importantly, many GI leaders did not deny the technical performance of CT colonography; rather, they argued that the evidence base was insufficient to justify routine coverage in a national program without further real-world data. Hence the appeal of Coverage with Evidence Development—not as an endorsement, but as a brake.

What makes this episode especially instructive is that both sides were practicing evidence-based medicine, at least as they understood it. Each group emphasized different endpoints, different harms, and different definitions of benefit. Neither side fabricated data. Neither side ignored uncertainty. Instead, they weighted uncertainty differently. This should unsettle any naïve belief that evidence, once generated, speaks for itself. In practice, evidence is always filtered through professional norms about acceptable risk, clinical responsibility, and the moral obligations of a specialty to its patients.

The eventual shift in Medicare policy—nearly two decades after the original MEDCAC debate—did not occur because one side was finally “proven wrong.” Rather, it reflected changes in context: improved technology, broader acceptance of non-invasive screening, political attention to preventive care, and sustained rhetorical work by advocates who reframed CTC as complementary rather than competitive. Congress did not suddenly resolve the epistemic dispute; CMS reinterpreted the same general evidentiary landscape under new policy priorities.

The lesson here is not that all interpretations are equally valid, nor that expertise is illusory. It is that no single stakeholder group has a monopoly on correct interpretation, especially in areas where evidence is probabilistic, outcomes are long-term, and implementation matters as much as efficacy. Health policy is not physics. Facts and numbers do not announce their meaning independently of human judgment. They acquire meaning through argument, narrative, and institutional context.

For students of Medicare policy and evidence-based medicine, CT colonography offers a durable warning against epistemic arrogance. Advisory committees are not courts of law where evidence is weighed once and for all; they are arenas where professional cultures collide. Rhetoric—far from being the enemy of science—is the medium through which scientific claims are made legible to policymakers. The challenge is not to eliminate rhetoric, but to recognize it, interrogate it, and understand whose values it encodes.

In the end, the CT colonography story teaches us intellectual humility. Policy decisions emerge not from the triumph of facts over politics, but from negotiated interpretations of uncertain evidence. To take any one side too seriously—to believe that the numbers alone compel a single conclusion—is to misunderstand how medicine, policy, and human judgment actually work.



###

###

Chat GPT analysis of transcript (it did this first, and wrote the essay above second.)


Executive takeaway: Colonography Transcript as Data

Radiology-aligned speakers and imaging manufacturers were uniformly supportive of Medicare coverage for CT colonography, while most gastroenterology and endoscopy society representatives opposed routine coverage or sought to sharply constrain it—often proposing Coverage with Evidence Development (CED) as a limiting compromise.

This MEDCAC is one of the clearest specialty-split advisory records CMS ever produced.


Categorization by speaker

Radiology & Imaging Industry — Supportive of Coverage

These speakers explicitly urged CMS to cover CT colonography or framed it as a mature, mainstream screening alternative.

Dr. Mark Klein (Radiology)
Strongly pro-coverage. He reframed extracolonic findings as a benefit rather than a harm, dismissed radiation risk as irrelevant in a Medicare population, and emphasized training and quality standards rather than restriction.

Dr. Charles (Dan) Johnson (Mayo Clinic / ACR)
Pro-coverage. Presented ACRIN trial results as validation of CT colonography, stressed reader training and technical maturity, and spoke explicitly on behalf of the American College of Radiology.

Dr. J.G. Fletcher (Mayo Clinic / ACR)
Pro-coverage. Focused on increased screening adherence and argued that CT colonography could improve population-level effectiveness.

Dr. Beth McFarland (ACR)
Pro-coverage. Supported accreditation, standards, and inclusion of CT colonography within Medicare screening options.

Dr. Donald Rucker (Siemens)
Strongly pro-coverage. Actively minimized radiation risk and argued that the benefit–risk balance overwhelmingly favored CT colonography in Medicare beneficiaries.

Dr. Robert Honinberg (GE Healthcare)
Strongly pro-coverage. Explicitly urged CMS to include CT colonography as a covered screening option, emphasizing technological maturity and NIH-validated evidence.


Gastroenterology & Endoscopy — Opposed or Restrictive

No gastroenterology society speaker endorsed straightforward national coverage.

Dr. Jason Dominitz (ASGE)
Opposed to routine coverage. Emphasized uncertainty in the management of small polyps, raised concerns about follow-up adherence and downstream harms, and did not endorse CT colonography as a screening substitute.

Dr. Douglas Rex (ACG)
Opposed or highly skeptical. Focused on false positives, downstream colonoscopy burden, and concerns about missed lesions relative to the endoscopic standard of care.

Dr. Brooks Cash (Navy / AGA)
Restrictive and skeptical. Highlighted uncertainty in clinical pathways and emphasized GI-centric quality and performance concerns.

Dr. Joel Brill (American Gastroenterological Association)
Conditional and restrictive. This is the key nuance. The AGA did not support routine coverage and instead proposed Coverage with Evidence Development, framing CT colonography as promising but not ready for unrestricted Medicare screening. In practical CMS terms, this functioned as a containment strategy rather than endorsement.

Dr. Amy Patrick (Colon Health Center)
Skeptical. Focused on unresolved clinical uncertainties and downstream consequences, without advocating coverage.

Dr. Robert Smith (American Cancer Society)
Cautious and non-endorsing. Reflected ACS’s neutral-to-skeptical position at the time and did not press CMS for coverage.


The pattern you remembered — confirmed

The MEDCAC record shows a stark and consistent divide:

  • Radiology framed CT colonography as a technologically mature, patient-friendly screening modality that could expand access and compliance, minimized harms, and portrayed extracolonic findings as an added benefit.

  • Gastroenterology framed CT colonography as clinically destabilizing, insufficiently validated in real-world practice, and threatening to established endoscopic pathways—using uncertainty and calls for CED as structural brakes on adoption.

This was not a subtle disagreement. It was a classic specialty turf conflict, expressed through evidentiary and patient-safety rhetoric.


Why this MEDCAC still matters

This meeting remains a canonical example inside CMS of how:

  • “Reasonable and necessary” determinations are shaped by specialty politics as much as evidence

  • Advisory panels surface professional boundary disputes under the guise of uncertainty

  • Coverage with Evidence Development can operate as de facto non-coverage

Your recollection aligns extremely closely with the documentary record.

###

###

Bonus

###

How would you explain this course of events, to a third-grader?

###

How Two Kinds of Doctors Look at the Same Facts and Disagree

Imagine two people looking at the same set of facts, like a picture or a chart.

One says,

“This shows a good way to help more people.”

The other says,

“This shows something we shouldn’t use yet.”

The facts didn’t change.
What changed is what each person cares about most.

That’s what happened with these two kinds of doctors.


The Two Groups

Radiology doctors use imaging machines to find problems.
Gastroenterology doctors use scopes to look directly inside the colon and fix problems right away.

Both want to prevent cancer.
They just do different jobs, so they focus on different risks.


The Same Data, Different Questions

The data said:

  • The test finds many cancers

  • It’s safer and easier for patients

  • It might miss very small problems

  • It doesn’t treat anything directly

Those facts don’t tell you what to do.
They wait for someone to decide which of them matters most.


How Each Group Interprets the Data

Radiology doctors ask:

“Will this help more people get screened at all?”

They worry most about people who never get tested.
To them, a test that is “good enough” and easier to accept can save more lives.

Gastroenterology doctors ask:

“Will this ever delay or miss something important?”

They worry most about certainty and follow-through.
To them, a test that doesn’t diagnose and treat in one step feels risky.


Why This Isn’t About Right vs. Wrong

No one changed the facts.
No one ignored the evidence.

Each group:

  • Weighted the risks differently

  • Focused on different harms

  • Defined “good medicine” in different ways

That’s why the same data led to opposite conclusions.


What We Learn From This

This case teaches a simple but uncomfortable lesson:

Facts and numbers don’t speak for themselves. People speak for them.

Professional roles, daily experience, and values shape how evidence is understood.
That doesn’t mean anyone is acting in bad faith — it means medicine is human.

If we remember that, we’re less likely to assume one side has a monopoly on truth, and more likely to understand why disagreement can be sincere, informed, and inevitable.


 

AMA CPT: Important Clarifications to Confidentiality Policy

At the February 5-6, 2026 AMA CPT meeting, the first couple of hours were devoted to "new news" rather than code review.

There was an important discussion of new rules (or interpretations) of the confidentiality document.  I had noticed some revisions but didn't grasp the full meaning.

Basically, and I'm neither a lawyer nor a stenographer, these are my take-aways.   

  • NO NO NO:  Recording and/or pictures or screen shots are strictly off limits.   
  • But it is NOT  restricted to convey one's opinions of what was said at the meeting.  
    • "The discussion of Tab 101 seemed to be favorable." 
Here are some more points:
  • It IS restricted to quote anyone who spoke by name.
  • It IS restricted to convey any screenshots of any AMA CPT copyright text shown at the meeting.   
  • It IS restricted to announce how the panel voted.
    • However, this is generally not possible because the panel votes silently and electronically on code measures.    
    • It IS allowed to say "Panel discussion seemed favorable [unfavorable]" based on one's opinion as an attendee.   
  • Some specific mini case studies were presented as examples of the above.
  • These topics were discussed in the first and administrative session on Thursday.
Summary of panel actions is posted a couple weeks after each editorial panel meeting.

Sunday, February 8, 2026

OIG Report for Lab Spending 2024 Omits "81479" As Line Item

Header:  OIG Report on 2024 Lab Spending Omits 81479, the Biggest MoPath Code

In January, the OIG released a required annual report about year-to-year changes in laboratory test spending at Medicare - entry point here.  The annual report helps Congress monitor the impact of major CLFS payment changes which Congress legislated in 2014, affecting payments in 2018 and later.

  • Like recent past reports, the just-released report for CY2024 omits 81479 as a specific line-item code.    
  • The justification seems to be that 81479 is not a code priced on the CLFS.  (It rates $0 on the CLFS and is locally priced when paid.)  If you know what you're looking for, you can glean this from the text.

###

Here I compare top codes in the lab industry in the Fall release of 2024 data, and the OIG analysis just published.  The OIG analysis includes hospital labs, but since nearly all of the top codes are proprietary, they are not billed by general hospitals.   

The one code that could be billed by hospitals, 87798 (other pathogen, $442M), had very little hospital billing.  (In the table below, it's $442M in the OIG report minus $437M in the earlier Part B-only report, a delta of $5M.   That means about 1% of 87798 was billed by hospitals, and 99% of the $442M was billed by a few private labs in Florida and Texas.)  

Another dubious code, 81419, epilepsy panel, was billed only by labs in Florida and Texas in the Part B data, and OIG's data on both hospital and independent lab billing confirms that 0% of the 81419 billing was from the nation's hospital genetic labs.
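Here is a minimal sketch of that back-of-the-envelope subtraction, using the approximate figures quoted above (this is my illustration, not OIG's methodology):

```python
# Infer hospital lab dollars for a code by subtracting the Part B
# (independent lab) total from OIG's all-lab total.

def hospital_share(oig_total_m: float, part_b_only_m: float):
    """Return (hospital dollars in $M, hospital share of the OIG total)."""
    delta_m = oig_total_m - part_b_only_m
    return delta_m, delta_m / oig_total_m

print(hospital_share(442, 437))   # 87798: (5, ~0.01) - roughly 1% hospital billing
print(hospital_share(73, 73))     # 81419: (0, 0.0)  - essentially no hospital billing
```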


Reminder - Among Non Proprietary codes in 2024 SEVERAL were dubious

First, here's what I got in fall 2025 for CY2024, when I looked at the data set of 812nn, 813nn, 814nn, and 815nn codes (omitting PLA codes and omitting microbiology codes, which include giant code 87798).


For the moment ignoring 87798 and PLA, 81479 was 42% of spending - nearly half - of non proprietary genomic codes.

Of the other 7 codes shown above, 3 are flaky as hell in the elderly Medicare population - 81419 (epilepsy), 81440 (mitochondria), and 81443 (rare inherited conditions).

In OIG's table of top-25 lab industry codes, OIG only shows one of these non proprietary genetic codes, 81419, where OIG reported $73M billing in hospitals and independent labs together, whereas I reported $73M billing from independent labs alone (thus we infer 0% from hospitals).

So we knew last fall dubious codes seemed to be billed only by labs in Texas and Florida.  If we add in nationwide hospital billing, as OIG does, it adds only negligibly or not at all to the dollar volume of the dubious codes.

Lab-Level Spending on 81479 Would Have Rated a Row in OIG Report

While 81479 billing is omitted by design in the OIG CLFS report, 81479 was substantial both in aggregate and when parsed down to specific recipients. This is the breakdown for 2023 (the 2024 breakdown has not been released yet).

81479 Was Omitted from 2024 OIG Report (Jan 2026); Here, 2023 Data


Thursday, February 5, 2026

Legislation Delays PAMA, Sets New Base Year

Header:  PAMA Cycle Delayed to 2027, With New Base Year 2025

Background

PAMA is legislation passed in 2014, one part of which planned for the Clinical Laboratory Fee Schedule to be repriced every 3 years, based on surveys of payer prices in the insured marketplace.  However, the triennial process only happened once - when payments from 2016 were surveyed in 2017 to set a new fee schedule in 2018.   The survey has been delayed annually since 2020.   This means the planned reporting of 2019 data was pushed out of 2020 reporting, then out of 2021 reporting, 2022 reporting, and so on.  

What's New

On February 3, 2026, the new budget bill set new PAMA benchmarks (Section 6226).   The next reporting period will be May-June-July 2026, reporting data from 1H2025, to reset a new triennial schedule for 2027, 2028, and 2029.

The base year has been updated from 2019 to 2025.  This means hundreds of "new" PLA codes (from 2019 to 2025) will get repriced in the "next" (rather than second-next) PAMA cycle.

Only labs paid over $12,500 by Medicare in 1H2025 need to report.

There are some quirks (see Nerd Notes below).

The New Future

The plan would be for data from 2028 to be reported in 2029, setting a new schedule for 2030, 2031, and 2032.

For More

See news at ACLA.  ACLA continues to seek a larger-scale legislative fix, currently as legislation titled "RESULTS."

Nerd Notes - Take a Deep Breath

By March 2026, CMS will release a list of reportable lab codes, which will be all lab codes active during 1H2025 (for example, codes becoming active on or after July 1, 2025, will not be surveyed).

The first PAMA survey in 2017 included about 5 administrative MAAA codes and about 15 Category I MAAA codes (the 81500 series), but no PLA codes.

It's massively different today.  This survey will include over 500 PLA codes, up to but not including those effective on or after July 1, 2025.

Only labs with more than $12,500 in Medicare payments in 1H2025 are reporting labs.  

In the weeds.  If a lab had a PLA code in 1H2025, but was paid less than $12,500 by Medicare (possibly $0), such a code will fall outside the PAMA rate-setting survey.  So regulation says that CMS is supposed to crosswalk or gapfill it.  CMS would do that "crosswalk or gapfill on a code lacking PAMA data" in the narrow window between the end of the reporting period (July 31, 2026) and the start of 2027.   

The easiest way for CMS to manage this period would be to have the (potentially several hundred) PLA codes with no PAMA reporting simply crosswalked by fiat to their current prices (set by crosswalk or gapfill between 2018 and 2025).  However, CMS could instead review the several hundred codes with no PAMA data on a one-by-one gapfill basis, or through some novel crosswalk (which seems infeasible).   

0551U was the last code to be made effective on April 1, 2025, and therefore "effective" during 1H2025 (at least during one quarter (Q2) of 1H2025).   0530U was the last code to be made effective on January 1, 2025, and thus active during the entire half-year of 1H2025.

##

81479

Remember that PAMA does not set prices for 81479, so CMS won't collect data on this code.  But 81479 was about $600M in CY2024, a substantial part of all MoPath spending.   Here's the most recent breakdown, for CY2023 (CY2024 should release around May-June 2026).



##

One criterion for being a reporting lab is being paid more than $12,500 by Medicare in 1H2025.  While exact data aren't public, for a back-of-the-envelope estimate you can look up lab payments for CY2023 and count how many labs were paid more than $25,000 in Part B CLFS across all of CY2023, which approximates the half-year $12,500 threshold for 1H2025.
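A hedged sketch of that proxy count, assuming a hypothetical CY2023 payment file with one row per lab (the file name and column names are invented for illustration):

```python
import pandas as pd

# Hypothetical file of CY2023 Part B CLFS payments aggregated per lab
payments = pd.read_csv("clfs_part_b_2023_by_lab.csv")      # assumed columns: npi, total_paid

HALF_YEAR_THRESHOLD = 12_500                 # PAMA reporting threshold for 1H2025
FULL_YEAR_PROXY = 2 * HALF_YEAR_THRESHOLD    # $25,000 over a full year as a rough stand-in

likely_reporters = payments[payments["total_paid"] > FULL_YEAR_PROXY]
print(f"Approximate count of PAMA-reporting labs: {len(likely_reporters)}")
```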




Wednesday, February 4, 2026

Google & ICER (Humor)

I was on a call where someone mentioned the ICER health econ organization.  I hadn't heard about them in a few months.  I Googled ICER.   

The first sponsored hit was an organization that helps you "fight" ICER health appraisals: "Challenge ICER Reports!"



Department of Justice and $50M Florida Genetics Case (Rolls Royce edition)

Header:  Florida Laboratory Owner Pleads Guilty to $52M Medicare Fraud Scheme Involving Genetic Tests

Read all about it:

https://www.justice.gov/usao-sdfl/pr/florida-laboratory-owner-pleads-guilty-52m-medicare-fraud-scheme-involving-genetic