Wednesday, April 1, 2026

Quick Access to Top Providers of any "Popular" Part B CPT Code. OpenMedicare.US

The CMS data portal, Data.CMS.Gov, has an elaborate page for payments to providers by CPT code.  It's a little tricky to use but powerful.  Find it here:

https://data.cms.gov/provider-summary-by-type-of-service/medicare-physician-other-practitioners/medicare-physician-other-practitioners-by-provider-and-service

There's a nonprofit website, OpenMedicare.US, that lets you look up the providers of any CPT code in a simple way.

https://www.openmedicare.us/procedures/81479

You can go here:

https://www.openmedicare.us/procedures

and then scroll down to "search by code or description."  Note, however, that the site only covers the "top 500 codes," and that's among all procedures, not just lab procedures.

For example, 81479 (unlisted code) makes the top 500, as does 88342 (IHC), while 81162 (BRCA) does not.  Even when you can't get a granular listing, the site still offers you the national spend.


Here's top data for 88342 IHC:




AMA CPT Publishes Latest Batch of Quarterly PLA Codes

On April 1, 2026, AMA CPT released its latest batch of quarterly PLA codes.  These codes were applied for around December 10, published today, and become effective July 1.

AMA summarizes that there were 2 revisions, 4 deletions, and 29 new codes (0631U-0659U).

https://www.ama-assn.org/system/files/cpt-pla-codes-long.pdf

Note that AMA includes a number of important instructions about acceptable or disallowed PLA codes in the two-page prologue to the code list.  For example, "PLA codes do not have a physician work component," a point the PLA committee reviews rigorously.

##

The next PLA date is April 14, when AMA will post new code applications for comment, covering codes applied for around March 10.  Those will be hustled through the system from the April 21 public comment date to the April 30 CPT editorial voting date.  The codes should then appear in the June and July CMS pricing meetings.





Will CMS Nationalize MOLDX? Coverage at 360DX

A month ago, CMS announced a major new anti-fraud initiative, called CRUSH.  It had two main targets: DME fraud and genomics fraud.  The comment period closed on March 30, and journalists are sorting through the comments received.


See coverage by Adam Bonislawski here (subscription).   (See my CMS comment here).

I see a Regulations.gov posting that 768 comments were received but I haven't found the "search them" link yet.  

Here's the 18-page comment from ACLA - the fact that it runs 18 pages alone suggests they are taking this very seriously.

Here's a 120-word AI overview of the 2,000 pages of full comments.

  • Stakeholder comments on the CMS fraud RFI show a divided but nuanced response to possible nationwide MolDX expansion. Lab groups and consultants generally agree that fraud in molecular testing is a real problem and that clearer front-end controls could help. However, their comments emphasize that MolDX also brings slower coverage timelines, heavier documentation demands, and uncertainty for new test launches. 
  • ACLA stressed delays and stalled coverage requests; NILA argued CMS should focus more on inappropriate ordering than on labs alone; consultants noted MolDX can improve predictability once coverage is secured, but at the cost of greater upfront burden. 
  • The overall tone of stakeholder comment was not anti-oversight, but cautionary: many support stronger anti-fraud tools, yet want CMS to avoid replacing one problem—improper payments—with another—bureaucratic delay and reduced patient access.
And here's a longer summary of the open-access ACLA comment:

ACLA’s March 23, 2026 comments on the CMS CRUSH RFI take a careful middle position: the association supports stronger efforts against fraud, waste, and abuse, but argues that CMS must avoid treating the growth of molecular and genetic testing as if it were itself evidence of fraud. ACLA emphasizes that lab spending remains a very small share of total Medicare spending and that increased use of molecular testing also reflects real scientific progress, broader guideline support, and expanding clinical integration of NGS, PCR, MRD, companion diagnostics, germline testing, and neurodiagnostics. In ACLA’s framing, CMS should distinguish sham labs and false claims from legitimate laboratories furnishing medically appropriate tests ordered by clinicians.

On anti-fraud tools more broadly, ACLA proposes several alternatives or complements to MolDX expansion. These include requiring accreditation for certain higher-complexity labs, using data analytics to detect suspicious ordering and billing patterns, making it easier for clinicians to transmit documentation supporting medical necessity, promoting more specific coding, reforming overused Tier 2 molecular pathology codes, recognizing PLA codes more consistently, and strengthening Medicare enrollment screening. ACLA’s overall message is that CMS should target genuinely high-risk actors rather than use broad-brush tools that create administrative burdens for compliant laboratories.

MolDX is where ACLA becomes especially nuanced. ACLA does not reject MolDX outright, but it pushes back on the idea that mere registration in MolDX or DEX is, by itself, a strong anti-fraud solution. The comments note that the DEX Diagnostics Exchange Registry mainly supports coverage, coding, and pricing for molecular diagnostics. It includes basic lab information and test-level details such as description, FDA status, specimen type, and performance site. But ACLA argues that much of the basic laboratory information in DEX overlaps with information already available in the CLIA database, so DEX is not “broadly useful” as a standalone fraud-fighting tool.

At the same time, ACLA acknowledges an important pro-MolDX point: when MolDX eventually issues LCDs, those policies tend to contain specific medical necessity requirements, and that specificity can improve appropriate claims filing. By contrast, ACLA says some non-MolDX jurisdictions, notably First Coast and Novitas, sometimes lack equally specific medical necessity requirements. In that respect, ACLA suggests that the real value is not registration alone, but the existence of clear coverage policies with explicit medical-necessity rules across all MAC jurisdictions. That, in ACLA’s view, may help reduce fraudulent claims.

But ACLA’s central criticism of MolDX is speed and access. It says the program is often challenged to give timely attention to coverage requests and foundational LCDs, with some requests stalled for more than two years. Even more significantly, under MolDX a test is effectively non-covered until an LCD exists, leaving no claim-by-claim reimbursement pathway in the interim and creating serious access problems for new tests. ACLA also ties this delay to revenue-cycle consequences: in another section, it notes that MolDX technical assessments can take six to twelve months, and a shortened claims-filing deadline could prevent submission of those pending claims altogether.

So ACLA’s bottom line is not “anti-MolDX,” but rather: MolDX is imperfect as an anti-fraud tool, useful when it yields clear LCDs, and problematic when delay turns new tests into non-covered services for long periods. CMS, in ACLA’s view, should borrow MolDX’s clarity on medical necessity without assuming that DEX registration alone solves fraud.

##

Other viewpoints include those of the AAMC and the AHA.

Tuesday, March 31, 2026

New to Me: AMA Comment Deadlines are NOON CENTRAL on the Day

 New to me, so I'm just flagging this.

For the last few weeks, comments have been accepted for the April 30 - May 1 AMA CPT meeting.  One important topic is revisions to Appendix S, a major ongoing issue that affects AMA CPT policy for software-intensive services (e.g., AI).

The comment deadline is March 31, 2026; I had that right.

I, for one, had not noticed that the specific deadline is 11:59 am Central Time (i.e., noon on the deadline day).

Something to keep track of.  This applies to pathology comments, non-pathology comments, PLA comments, etc.






Monday, March 30, 2026

Horizons in Diagnostics Value: Case Study: Rethinking Value for Infection Diagnostics

Here's a paper that is worth discussion, and potentially applicable to many areas of diagnostics, not just infection.

In a 2025 paper in Open Forum Infectious Diseases, some excellent thought capital is created by Claeys, Prinzi, and Timbrook.  Here.



It's also a great example of a good abstract - I can't do better than quoting it.

  • Evaluating the clinical impact of in vitro diagnostic tests (IVDs) for infectious diseases is complex given their effectiveness depends on context, implementation, and provider behavior. 
  • Traditional methodologies for therapy interventions do not adequately capture this complexity, necessitating novel analytical approaches and study designs. 
  • This review highlights methodological considerations for improving evidence generation for infectious diseases IVDs. 
    • Design and analysis challenges leading to bias and related solutions are reviewed such as the target trial framework. 
    • Moreover, novel frameworks such as Benefit–Risk Evaluation of Diagnostics: A Framework, Desirability of Outcome Ranking Management of Antimicrobial Therapy, and Desirability of Outcome Ranking and study designs such as hybrid effectiveness–implementation designs are discussed which allow for holistic ways to assess real-world outcomes.
  •  By evaluating IVDs with practical, real-world evidence, tests can better inform clinical decision making, policy, and ultimately patient outcomes.

###

I had an amateur experiment in this type of thinking in 2025.  Over the years I'd heard on and off about "Shannon Information Theory," and how it is fundamental to modern communications.  I asked Chat GPT to think hard about Shannon Information Theory, and then explore how it might provide new insights into the value and valuation of Point of Care Tests.   I thought the results were, at least, thought-provoking, and an interesting use of AI to explore the horizon of ideas.

Here's some of what I wrote last fall:

Within one blog, I asked Chat GPT AI to debug a Windows problem - it did.  And to think about Shannon-Meets-Dx - it did.  See the blog here and focus on the second half:

https://www.discoveriesinhealthpolicy.com/2025/12/two-fascinating-ai-tasks-windows-repair.html

More detail about the Shannon Project:

https://bqwebpage.blogspot.com/2025/11/ai-poct-and-shannon-info-theory-in.html

###

I asked Chat GPT to compare the two projects (Claeys and Shannon).

###

Both Claeys et al. and the November 2025 Shannon essay try to rethink diagnostics from first principles. 

Claeys argues that diagnostics should not be evaluated like drugs, because their value depends on implementation, stewardship, workflow, and provider behavior, not accuracy alone. 

The Shannon essay makes a parallel conceptual move, arguing that POCT changes the information architecture of care by reducing delay, noise, memory loss, and failed follow-up.

 Together, the two pieces are synergistic: Claeys offers the modern methods for proving diagnostic value in real-world settings, while Shannon explains more deeply why rapid, well-embedded diagnostics can create more usable clinical value.

###

Claeys et al. move the diagnostics-value discussion beyond accuracy. Their central argument is that infectious disease diagnostics should not be judged the way drugs are judged. A drug acts directly; a diagnostic acts indirectly, through clinician interpretation, implementation, stewardship, workflow, and local practice patterns. That means a test with excellent analytical performance may still show weak or inconsistent clinical impact if the surrounding care system is poorly designed. In their framing, the real object of study is not just the assay, but the assay embedded in a care pathway.

That is highly relevant to readers focused on value. Claeys et al. are effectively saying that value is produced by a chain: test result, interpretation, treatment change, timing, downstream outcomes, and local implementation. They explicitly argue that accuracy alone is not enough, and that reimbursement, guideline adoption, and market access require evidence about patient outcomes and real-world use. They also emphasize diagnostic stewardship and implementation science as integral, not decorative, parts of the evidence package.

Methodologically, the paper is sophisticated and unusually practical. It urges baseline local data before launching outcomes studies, because a test cannot show much benefit if the clinical opportunity for improvement is already small. It recommends explicit PICOTS framing, avoiding subjective adjudicated primary outcomes when reliability is poor, and using causal tools such as DAGs rather than loose, stepwise model-building. It also stresses the target trial framework for observational studies, in part to reduce familiar biases like immortal time bias and conditioning on future events. That is a very modern message: diagnostics studies should stop being casual before-after exercises and start behaving like careful causal inference.

  • PICOTS: Population, Intervention, Comparison, Outcome, Timing, and Setting.  DAG: Directed Acyclic Graph, a causal diagram with boxes for variables and arrows showing which things may cause which other things.

Claeys also makes a subtle but important point about heterogeneity. A diagnostic RCT does not settle the matter once and for all, because the effect of the test varies by center, prescribing culture, epidemiology, business-hours coverage, stewardship maturity, and user trust. Their discussion of the ADEQUATE trial is revealing: overall benefit may appear modest, yet center-level effects can range from strong benefit to no benefit to paradoxical worsening. For diagnostics, that is not a nuisance variable. It is part of the biology of value creation.

The paper’s alternative frameworks are especially important for value-oriented readers. Claeys et al. discuss BED-FRAME, DOOR-MAT, and DOOR because conventional endpoints often miss what diagnostics actually do. A panel may have similar positive percent agreement to a comparator but produce materially different antimicrobial decisions; DOOR-MAT is meant to capture that downstream therapeutic desirability. DOOR then broadens to patient-level ranked outcomes. In other words, the field is trying to measure not just whether the test is “right,” but whether it drives better management in context.

Your November 2025 Shannon essay attacks the same problem from a different angle. It argues that POCT changes the information architecture of care. The classic central-lab pathway is described as delayed, noisy, and erasure-prone: the clinician’s memory of the original encounter degrades, the patient may no longer be reachable in a high-bandwidth way, and much of the potential value leaks out between result release and successful action. 

POCT, by contrast, turns testing into a real-time, feedback-enabled dialogue in which the result can immediately reshape questioning, examination, explanation, and next-step action.

This is where the two publications are genuinely synergistic. Claeys gives the methodological and evidentiary scaffolding; Shannon gives the deeper theory of why those methods matter. Claeys says clinical impact depends on context, implementation, and provider behavior. Shannon explains that this is because the diagnostic is part of a communication-and-control system, not a stand-alone object. The test is valuable insofar as it increases usable information at the right moment, reduces transmission loss, and changes decisions before biological thresholds are crossed. Your essay therefore supplies a conceptual physics for the empirical observations that Claeys catalogs.

One powerful overlap is the idea of stewardship as channel management. Claeys emphasizes diagnostic stewardship and antimicrobial stewardship because a result only matters if used by the right clinician, in the right patient, at the right point in the pathway. Shannon reframes this elegantly: stewardship is the design of an improved, lower-noise, lower-erasure channel from assay output to clinical action. That is a more fundamental statement than “stewardship improves adoption.” It says stewardship is part of the information yield of the test itself.

A second overlap is sequentiality. Claeys criticizes simplistic diagnostic studies and points toward designs that respect timing, care pathways, and real-world decisions. Your Shannon paper says POCT converts diagnosis from a one-pass process into an adaptive experiment, where one result prompts new questions, focused examination, or second-line testing. That suggests a next-generation value framework: diagnostics should sometimes be valued not only by the information content of the first result, but by how well the result orchestrates the next decision node. That is very close to Claeys’ broader concern with pathways and downstream management, but Shannon sharpens it by showing why same-result/same-accuracy can still mean different total information harvested from the encounter.

A third overlap is timing relative to disease kinetics. Claeys repeatedly treats timing in the care pathway as central. Your paper goes further and says the same bit of information has different control value depending on whether it arrives before or after an irreversible biological threshold. In infectious disease, that is especially potent: hours matter in sepsis, escalation matters in resistant pathogens, and early de-escalation matters for stewardship harms. Claeys provides the outcomes-research toolkit; Shannon explains why a same-day result can have disproportionate value rather than merely incremental convenience.

A fourth overlap is system entropy. Claeys hints that local care systems, mixed-methods factors, and implementation conditions shape outcomes. Your Shannon essay makes the systems claim explicit: delayed clarification increases downstream uncertainty, congestion, misrouting, and rework, while same-visit clarification reduces the entropy of care pathways. 

For hospital leaders, this is gold. It links the familiar operational language of length of stay, isolation decisions, unnecessary admissions, and routing errors to a deeper theory of information flow. That could make a diagnostics value story more legible to administrators and payers, not just ID specialists.

So the combined message is stronger than either paper alone. Claeys et al. say: study diagnostics as complex interventions whose impact depends on implementation, stewardship, causal design, and downstream management. Your Shannon piece says: yes, because diagnostics create value by changing the fidelity, timing, bandwidth, and feedback properties of the clinical information channel. Put together, they imply that the “value of diagnostics” is not merely sensitivity plus specificity plus turnaround time. It is the amount of clinically usable information successfully transmitted into action, at the right moment, within a real care network.

If I were turning this into a DIHP-style synthesis, the headline would be something like this: Claeys provides the methods for proving that diagnostics matter in the real world; Shannon provides the theory for understanding why rapid, well-embedded diagnostics can extract more value from the same raw assay performance. The synergy is that one paper tells us how to measure value beyond accuracy, and the other tells us what value actually is.

###

Both Claeys and Shannon are examples of, as Apple said, "Think Different."

###

Exactly.

Both pieces are trying to escape the default mental model. The default model says a diagnostic is basically a little machine for generating an answer, and then we judge it mainly by accuracy, maybe with some secondary nod to speed. Both of these works push back on that.

Claeys et al. think different methodologically. They are saying: stop treating diagnostics like miniature drugs. A diagnostic does not “work” in isolation. It works only through behavior, workflow, timing, stewardship, and implementation. So the study design has to think at the level of the care system, not just the test cartridge.

Your Shannon piece thinks different conceptually. It says: stop treating a diagnostic as a detached fact-generator. Think of it as part of an information channel inside a live clinical encounter. The issue is not just whether the result is correct, but whether the result arrives with high enough fidelity, low enough noise, and early enough control value to actually change what happens.

So yes—same spirit. Claeys breaks out of the therapeutic-trial mindset. Shannon breaks out of the lab-benchmark mindset. One reimagines how we prove value; the other reimagines what value is.

That is why they feel so compatible. They are each saying, in different dialects, that diagnostics are being undervalued because we are asking the wrong first question.

A nice one-line formulation might be:

  • Claeys says diagnostics should be studied as complex interventions. 
  • Shannon says diagnostics should be understood as complex information events.

That is very much a “Think Different” move.



Saturday, March 28, 2026

Chris Klomp, Health Policy Expert for CMS and HHS - Some Notes on His Experience

Over the last few weeks, shake-ups at HHS have brought Chris Klomp to the #2 position next to Secretary Kennedy.  See news reports here; see an annotated one-hour interview with Klomp here.

I asked Chat GPT to discuss his educational background and professional experience through the lens of his current top-level health policy roles.

Endpoints discusses Klomp on AI, Klomp on biotech/China, Klomp on TrumpRx.

Friday, March 27, 2026

Korie et al. 2026: What Drives Next Gen Sequencing Denials at Yale Pathology?

Header:  A Yale pathology study presented at USCAP 2026 shows that NGS reimbursement denials are less about overuse and more about administrative failure—especially ICD-10 miscoding. Only 20% of cases were denied (275/1,392), and most denials occurred despite guideline-concordant testing. The authors conclude that the fix is operational, not clinical.




Reimbursement Denials for NGS:
A Systems Problem, Not a Clinical One

[By Chat GPT 5.4]

At the March 2026 USCAP meeting, Korie et al. (Yale Pathology) presented a timely analysis of reimbursement denials for next-generation sequencing (NGS) in solid tumors:

Link (abstract PDF):
https://www.laboratoryinvestigation.org/action/showPdf?pii=S0023-6837%2825%2901936-1

The study evaluated 1,392 NGS tests performed in 2022-2023 at a large academic center. Of these, 275 cases (20%) were denied—a meaningful but not overwhelming fraction. That denominator matters: the system is not broadly failing, but the failures are highly patterned and correctable.

Register for AMA Meeting on Coding & AI: "Appendix S Revisions" - April 16

 AMA has big, big plans for changing how it handles AI services (potentially affecting digital pathology and genomics) in terms of policy and coding, possibly even with whole new classes of codes.  

These come under the headline of "Revising Appendix S," which has been a topic for several AMA CPT meetings in a row.   

You can register with AMA to view and comment on Appendix S plans, under the heading "Tab 67" of the next AMA CPT meeting.  Instructions here.  

New News: April 16:

AMA has just announced a special public meeting on Thursday, April 16, from 4:30-6:00 pm Central Time (5:30-7:00 ET, 2:30-4:00 PT).

Here's the AMA text and links.   Further below, I give you a very short AI summary of Appendix S.

See an essay from AMA policy participant Richard Frank MD - here

Thursday, March 26, 2026

AI, Advanced Software, and AMA CPT Policy: Deadline March 31: Appendix S for Upcoming CPT Meeting

 For several quarters, AMA CPT has been debating major amendments to the AMA CPT "Appendix S," which may have enormous implications for how AI- or software-dominant healthcare services are reimbursed.

At the upcoming AMA CPT meeting in Chicago [virtual registration still available], a new round of revisions to Appendix S will be debated.  You can sign up now to read the current revisions and make public comment.  Debate was vigorous at the AMA CPT last September and this past February.  

Revisions to Appendix S may be followed by creating a new coding section called "CMAA," Clinically Meaningful Algorithmic Analyses.   

The deadline to comment is Tuesday, March 31.

My main concern is that they'll bring in policies adapted for radiology, cardiology, and the like, and these may be a poor fit for genomics, a field that makes universal, heavy use of extremely sophisticated software (including AI) and already does not require "physician work" as its main input.

Here's how to comment:

First, go to the online PDF agenda for the April CPT meeting:

https://www.ama-assn.org/system/files/cpt-panel-may-2026-agenda.pdf

Click on the boldface link for INTERESTED PARTY COMMENT.  This should take you to the AMA website here; click on the PDF's link if the one below doesn't work.

https://cptsmartapp.ama-assn.org/ipdashboard

You may need to register with AMA (via email) to access AMA functions like this comment dashboard.

When you get to the AMA CPT Smart App, be sure to click the tab near the top for the "INTERESTED PARTY" view.  Scroll down to the bottom.


Note that for Tab 67, Appendix S, the site sends you to the "Ballot" option (far-right column), which is where you find the actual markup version of the new Appendix S.

Use the progress button near the bottom to scroll ahead to Tab 67 (Appendix S).


So you've tapped the Interested Party Portal and advanced to where you find Tab 67.   There are four columns:

  • IP Interested Party Access (to CPT application and supporting documents like publications)
  • IP Comment (you get a fixed form on which to write your comments)
  • View Comments
  • BALLOT (in the case of Appendix S, you gotta get this: the actual 4-page appendix)
Appendix S is damn hard to read - it's nearly entirely fields of struck-out text and inserted text from beginning to end.   But it's important.

See snapshot of the heavy edits throughout:





Comment on CRUSH, CMS Policy, Genomic Testing: Due Monday, March 30

On February 27, 2026, CMS announced a vigorous plan for anti-waste, anti-fraud measures in Medicare, with strong highlighting given to two areas: (1) durable medical equipment (DME), and (2) genomic testing.   The initiative, abbreviated CRUSH ("crush fraud"), is open for public comment until Monday, March 30, 2026.

See my blog and links here:

https://www.discoveriesinhealthpolicy.com/2026/02/cms-issues-rfi-on-fraud-highlighting.html

Genomics fraud includes highly improper billing of hundreds of millions of dollars for genetic testing that is impossible or pointless in a Medicare population.   (These occurrences were vastly dominated by Florida and Texas, where Medicare payment controls were amazingly weak for years.)  An example of a $52M genetic test scheme is here.

There have been 183 comments to date, but comments often pour in on the final day.   See the policy discussion here and find a "submit a public comment" checkbox.  Anti-fraud options discussed include nationalizing MolDx.

https://www.federalregister.gov/documents/2026/02/27/2026-03968/request-for-information-rfi-related-to-comprehensive-regulations-to-uncover-suspicious-healthcare



Tuesday, March 24, 2026

Nerd Note: 2017 PAMA Raw Data File is Still Posted

Header:  CMS still stores publicly available cloud data on lab test pricing surveyed in 2017 and representing CY2016.

###

Congress and CMS are re-activating the PAMA reporting process.  Reporting laboratories (those with more than $12,500 in Medicare payments in 1H2025) will report data on all claims paid by commercial payors in 1H2025, with reporting due in May-June-July 2026.  See websites and announcements at CMS.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting

The prior survey covered 1H2016, with data reported and posted in 2017.  This set a new fee schedule from 2018 forward.

See the 2016/2017 Cloud Data 

At the time, CMS published a gigantic cloud database of reported prices.  I thought that was no longer available, but it seems it is.   The data can be pretty interesting.

On this page:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs

see "CLFS Applicable Raw Data File from 2017 Reporting."


That sends you here:

https://data.cms.gov/provider-characteristics/hospitals-and-other-facilities/medicare-clinical-laboratory-fee-schedule-private-payer-rates-and-volumes


For example, if you search 81211 (a popular BRCA code in 2016), you get 374 rows of data.   

I wasn't sure how that squares with a contemporaneous 2017 data file I have for PAMA 81211, which has 2,364 rows of pricing data.  From 2,364 rows (my file) to 374 (the online file today), about 85% of the rows are missing.  This is because the current cloud data leaves out all price reporting lines with fewer than 10 claims, while my old 2,364-line file includes many lines with only 1 or 2 payments at a given price.  The overall shape of the data would be the same, just scaled down.
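To make the suppression rule concrete, here's a minimal Python sketch. The price lines below are invented for illustration, not the actual 81211 PAMA rows; the point is simply how a "fewer than 10 claims" filter can delete most rows while leaving the high-volume peaks intact.

```python
# Hypothetical illustration only: these price lines are made up,
# not real PAMA data. The public cloud file suppresses any
# price line reported on fewer than 10 claims.

raw_lines = [
    (2900, 120),  # high-volume price levels survive the filter
    (1800, 95),
    (2200, 40),
    (450, 2),     # low-volume lines are suppressed in the public file
    (95, 1),
    (6100, 1),
    (3050, 4),
]

# Apply the same >=10-claims rule the public file uses
public_lines = [(price, n) for price, n in raw_lines if n >= 10]

print(len(raw_lines), "raw rows ->", len(public_lines), "public rows")
# Most ROWS vanish, but most CLAIMS remain, so the overall shape
# of the price distribution is preserved, just scaled down.
```

In this toy case, 4 of 7 rows disappear but those 4 rows carry only 8 of 263 claims, which is why the surviving distribution still looks right.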

For example, the 2016 data I have (with 2,364 rows) for BRCA 81211 shows a large price peak around $2,900, which I assume reflects Myriad payments (this was not long after the BRCA Supreme Court case); relatively few commercial payments at the CMS rate of the time (around $2,200); and another peak of payments around $1,800, which was 85% of the CLFS at the time.   So I inferred (this is just armchair guessing) that Myriad was cruising along with numerous legacy contracts in the $3,000 range for BRCA 81211, while new entrants were entering the newly opened BRCA market, often at 85% of the CLFS.



It was also notable that the thin tail of cheapest payments went below $100, and the thin top end had the rare payment over $6,000.

81455

AMA CPT created codes 81445 (5-50 tumor genes) and 81455 (51+ tumor genes) at the same time.  81445 was gapfill-priced to about $600, but 81455 was not priced by the MACs.   However, for 2016 PAMA claims reported in 2017, the current database shows 53 claims.  (Recall that it excludes any line with 9 or fewer claims.)

For 81455, there were 14 claims at $9,900, 11 claims at $1,600, 18 claims at $3,579, and 10 claims at $4,500.   From this, you'd expect the median to be $3,579, but it was actually $2,916 when all claims (including prices used 9 or fewer times) were counted.   We learned from the BRCA 81211 claims that excluding price levels with 9 or fewer paid claims omitted over 80% of the total data.
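The median arithmetic above can be checked with a short script. This is a sketch using only the four 81455 price levels visible in today's suppressed public file (counts from the text); the true PAMA median ($2,916) differs because the suppressed low-volume lines are absent from this input.

```python
def weighted_median(price_counts):
    """Claim-level median: treat each (price, n_claims) line as
    n_claims individual claims at that price, then find the price
    at the middle claim position."""
    lines = sorted(price_counts)           # sort price levels ascending
    total = sum(n for _, n in lines)
    midpoint = (total + 1) / 2             # middle claim, 1-indexed
    cumulative = 0
    for price, n in lines:
        cumulative += n
        if cumulative >= midpoint:
            return price

# The four 81455 price levels visible in the public file (53 claims total)
visible = [(9900, 14), (1600, 11), (3579, 18), (4500, 10)]

print(weighted_median(visible))  # -> 3579
```

With 53 claims, the median is the 27th cheapest claim; 11 claims sit at $1,600 and the next 18 at $3,579, so the 27th claim lands at $3,579, matching the expectation in the text.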

Monday, March 23, 2026

AI Experiment: How Alex Dickinson Describes the CARIS MCED ACHIEVE Report

In March, Caris released top-line results of its ACHIEVE study, testing its MCED test in real cases.  Press release here.  Active LinkedIn author Alex Dickinson wrote a set of 5 articles about the results.  One, two, three, four, five.

Out of curiosity, I asked what Chat GPT could make of the six documents.



AI CORNER

###

Overview

Caris reports striking interim performance for its Detect MCED assay using deep whole-genome sequencing, with unexpectedly strong early-stage sensitivity in common cancers. However, enriched cohorts, limited follow-up, and incomplete blinded validation constrain interpretation. Dickinson’s analyses highlight a differentiated WGS multi-signal strategy with potential advantages over methylation-first approaches.


Consolidated Article (Caris + Dickinson)

Focusing first on the press release, the key point is that Caris reported an interim analysis, not a completed prospective screening validation. The Achieve 1 dataset includes 2,122 subjects (1,505 undiagnosed; 617 cancers), but the undiagnosed group is enriched, not general-population screening. 

Only 22.5% had ~1-year follow-up, with ~7% later diagnosed with cancer—again indicating high-risk enrichment. About 865 samples remain in blinded validation, so current results are signal-generating, not definitive.

The reported performance is notable. Stage-specific sensitivity was 56.8% (I), 70.1% (II), 77.1% (III), 99.1% (IV), with 61.3% for stage I–II. Early-stage sensitivity in key cancers included 53% breast, 78.9% prostate, 86.7% lung, and 62.2% colorectal. Specificity was 99.1% in a small asymptomatic subset (n=121) and 95.3% in the broader undiagnosed cohort. These are the central empirical results.

An Expert Discusses The Data

Dickinson’s posts provide useful context. He frames Caris as entering MCED from a position of scale and infrastructure—large tumor databases, clinical profiling, and sequencing capacity—suggesting Detect is an extension of an existing oncology data platform rather than a stand-alone assay.

Scientifically, Dickinson highlights the assay design: ~250x whole-genome sequencing of plasma with paired buffy coat sequencing to remove CHIP, extracting mutational, fragmentomic, and nucleosome-positioning signals for ML classification. This multi-signal WGS framework plausibly explains the observed sensitivity pattern.

His most provocative point concerns tumor-type performance, especially breast and prostate. He argues methylation-first MCED approaches may underperform in these high-incidence cancers, making Caris’s relatively strong early-stage sensitivity clinically meaningful. This reframes MCED evaluation toward incidence-weighted performance, not aggregate sensitivity alone.

Key Limitations

However, key limitations remain. The 99.1% specificity estimate rests on only 121 asymptomatic subjects. The broader 95.3% specificity is more relevant operationally but less reassuring for screening. The cohort is enriched, follow-up incomplete, and blinded validation pending—raising the risk of optimistic interim estimates.

Caris also claims superiority over methylation approaches, but provides no head-to-head comparison, making this an interpretive rather than evidentiary claim. Cross-study comparisons, including Dickinson’s, remain hypothesis-generating.

Strategically, Dickinson emphasizes Caris’s ability to pursue a sequencing-intensive design due to its scale. He notes current costs may be high (e.g., overlapping 2x150 reads), but could fall with alternative platforms. This underscores that economic feasibility will be as important as analytical performance.

Finally, Caris signals future expansion to whole transcriptome integration, reinforcing its view of MCED as a multimodal inference problem, though at the cost of added complexity.

Bottom line: Detect represents a biologically coherent and potentially important MCED approach with strong interim early-stage signals. But given cohort design, small screening subsets, and pending validation, the appropriate stance is serious interest with disciplined caution, not definitive conclusion.



CMS Reports on First Rural Health Summit under Tech Transformation

One of the lesser-known outcomes of last summer's budget bill was $50B for rural health transformation (RHT) focused on technology.  Last fall CMS established the Office of RHT and in December $50B of awards were announced.  Home page here.

On March 18, CMS convened leaders from all 50 states to discuss.  Find the online report here:

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-first-rural-health-transformation-summit-advance-state-led-innovation



AI CORNER

Here's a 100 word summary.

CMS convened its first Rural Health Transformation Summit on March 18, 2026, bringing leaders from all 50 states together to advance a $50 billion initiative created under recent federal legislation. The program aims to strengthen rural health systems through 

  • expanded access to care, 
  • workforce development, 
  • technology modernization, and 
  • innovative payment models. 

Discussions focused on five priorities: 

  • prevention and chronic disease, 
  • sustainable access, 
  • workforce pipelines, 
  • value-based care, and 
  • health IT. 
States shared early strategies such as telehealth, mobile units, and regional partnerships. CMS emphasized aligning efforts with Medicare and Medicaid financing and embedding long-term workforce and infrastructure solutions to ensure durable improvements in rural health outcomes.

Saturday, March 21, 2026

Journal Club: Value of WGS in Real-World Cancers (Van Putten, Nat Med)

 What's the value of going upscale to whole genome sequencing (WGS) in solid cancers?  Van Putten et al. assemble data from their experience with 888 solid cancers.  The work is from Hartwig Medical Foundation / Netherlands Cancer Institute.

Find the paper here, and a LinkedIn essay here by Joseph Steward.  And here by Alex Dickinson.  Dr. Cuppen, scientific director of the Hartwig Foundation, here.


Most samples in this study were frozen tissue (89% success rate), but they remark that when archived samples were used, they had the same success rate (90%).

Chat GPT Discusses the Paper:

Friday, March 20, 2026

Can AI Re-Think Health Policy? Example Using WSJ Policy Essay (& MolDx)

Can AI read an article and project its possible applications into a different field?  That's today's question. 

Starting point: WSJ runs an essay by Harvard economics professor and Manhattan Institute authority Roland Fryer.  Fryer here, essay here.   


While his article was on "regulating AI," it clearly had ramifications or applications in other policy domains.  I asked Chat GPT 5 to read the essay and discuss its projection onto healthcare policy such as CMS.   I deliberately left my main initial request vague.   

At the bottom, I ask it some Q&A, including how this applies to MolDx.

Here comes the initial response to my request, "apply Fryer's thinking to healthcare policy."

Thursday, March 19, 2026

NCCN Recommends NGS in All Stages of Pancreatic Cancer: Direct Conflict with Outdated Medicare NCD

Tuesday, I was in a webinar where stakeholders were discussing the badly-outdated Medicare NCD for NGS testing in cancer.  Thursday of the same week, more proof of the problem hit my inbox.

See the March 18, 2026, release of new pancreatic cancer management guidelines from the National Cancer Center Network - NCCN.   Here.   See coverage in Genomeweb, here.



Tuesday, March 17, 2026

Waiv (Former Owkin Dx) Brings In $33M For AI Precision Diagnostics

It's not every day you hear of a new $33M going into spatial diagnostics - but it happened this week.

WAIV Diagnostics, Paris (former Owkin Dx) is now a spinout from Owkin and pulling in its own investors and funding. 


https://wearewaiv.com/news/waiv-secures-33-million-to-scale-ai-precision-testing

See also a post from Meriem Sefta, WAIV CEO.

https://www.linkedin.com/posts/meriemsefta_waiv-waiv-secures-33-million-to-scale-share-7437851429651935232-WHmk/

"We exist to catalyze precision medicine with clinical-grade, AI-powered tests designed to detect biomarkers, predict outcomes, and better understand treatment response in oncology."

###

It's a fit with the Bloodpac webinar and white paper, "it's not just DNA mutations anymore" - here.

https://www.discoveriesinhealthpolicy.com/2026/03/bloodpac-releases-webinar-and-white.html


Illumina Publishes "Diagnostics Year in Review" CY2025 (48pp)

 Once again Illumina has published its Diagnostics Year in Review.   See CY2025 in a bird's-eye view, edited by Mara Aspinall, 48pp.

https://www.illuminaventures.com/wp-content/uploads/2026/02/Diagnostics-Year-in-Review-2025-Version-for-Distribution.pdf


##
AI CORNER
##

TL;DR: Should you read this 48-page “Diagnostics Year in Review”?

Yes—if you care about where genomics, AI diagnostics, and reimbursement policy are actually heading, not just headlines. This is one of the clearest “industry synthesis” decks of the year, and it quietly encodes several policy-relevant signals that CMS/FDA watchers will recognize immediately.

The report argues that 2025 marks a shift from post-COVID recovery to a true “diagnostics renaissance.” That’s not hype—it’s grounded in three converging forces:

  • Regulatory relief (LDT rule vacated → innovation unlocked)

  • Platform shift (tests → data/AI-driven clinical decision engines)

  • Capital re-consolidation (massive M&A + “Terrific Ten” winners emerging)


BLOODPAC Releases Webinar and White Paper: New Frontiers in Therapy Selection / Beyond DNA Mutations

BLOODPAC offered a multi-speaker seminar last fall on the topic, New Frontiers in Therapy Selection: Beyond DNA Mutations.  Find the online resources now.



Here's the home page: link.

Find the two-hour YouTube webinar here: https://www.youtube.com/watch?v=yuYdhbdVcpU

As you scroll the home page, you'll also reach the 37 page white paper.

###

Saturday, March 14, 2026

Chris Klomp Now Near Top of HHS; See His One-Hour Recent Interview

Wall Street Journal, Politico, Washington Post have all been covering the shake-up in senior management at HHS - here.    Chris Klomp rises to #2 at HHS.  Here's a profile of Klomp from several news articles.  Here's an interview with Klomp on China & Biotech.

Which gives extra importance to a one-hour interview that Paragon Institute posted just a few weeks ago.  

  • Find the text here
  • the YouTube archive here.  
  • He's interviewed by Brian Blase, President of Paragon Institute, and policymaker Demetrios Kouzoukas.

Here's an AI article based on the interview transcript. [Chat GPT 5.4]

###

Chris Klomp’s Policy Playbook: 

Markets, Incentives, and the Power to Convene at CMS

In a wide-ranging Paragon interview, new HHS deputy Chris Klomp outlines a Medicare strategy built on incentives, market signals, and stakeholder convening rather than regulation, offering insight into emerging federal health policy direction. (January 27, 2026).



The Strange Place of FIT Testing Between FDA Label and Medicare Screening

 Header:  CMS is reviewing its coverage standards for CRC screening biomarkers - stool, blood, etc.   But CMS explicitly will not fold FIT testing into the new system.   (FIT testing will remain untouched, as-is.)

There's quite a story there - FIT testing is FDA-regulated much differently than Cologuard, Shield, etc.  Here's an essay planned by me, but written by Chat GPT 4 in a few seconds.

###

FIT Testing: Why It Has a Special, Hands-Off Status in the Screening NCD

Chat GPT 5.4

CMS’s current proposal on non-invasive colorectal cancer biomarker tests is notable not only for what it addresses, but also for what it leaves untouched. CMS is proposing new evidence standards for emerging biomarker tests, but it is not reopening its longstanding coverage of FIT and guaiac FOBT under the colorectal cancer screening benefit. That omission is interesting—at least to CMS policy nerds—because FIT looks simple from a distance but becomes surprisingly complicated if one tries to revisit the benefit rigorously. CMS may have decided it is better not to stir that particular hornet’s nest. [1][2]

The key issue is regulatory. FIT is not regulated like Cologuard. 

WSJ: White House Shakes Up HHS Management

 Per the WSJ March 13 and 14, White House has multiple pathways for "shaking up HHS" at the top management level.

https://www.wsj.com/politics/policy/white-house-pushes-shake-up-at-hhs-ahead-of-midterms-6ad882a5      and also  https://www.wsj.com/politics/policy/trump-rfk-jr-hhs-midterm-elections-cef51179

See also WaPo and Politico.

Chris Klomp, November




  • White House installs Chris Klomp as HHS No. 2 under Kennedy (Jim O'Neill displaced; General Counsel Mike Stuart is out).

  • See my detailed article on a recent one-hour interview with Klomp - here.

  • Klomp to oversee operations, messaging, and management coordination.

  • Three senior counselors added across CMS and FDA leadership.

  • Shake-up aims to speed execution of “Make America Healthy Again.”

  • Leadership changes follow operational problems, including reversed grant cancellations.

  • Administration seeks disciplined messaging ahead of healthcare-focused midterm elections.

Friday, March 13, 2026

Mapping the Colorectal Cancer Screening Proposal: Why Use an Efficiency Frontier

CMS has a current NCD for biomarker CRC screening, using 74% sensitivity and 90% specificity as a benchmark.  This means you pick up about 3/4 of cancers (relative to colonoscopy) and you send about 10 patients per 100 to a false positive based colonoscopy.

Here I expand on a prior blog and show the two new CMS options graphically.

We can show the statistical space on a probability chart.  The vertical axis is specificity (and also shows "FP per 100").   The horizontal axis is the inverse of sensitivity.  It also shows "cancers missed per 100."   The IDEAL PLACE to be is the far upper left corner.


Since the required conditions are expressed as ≥, they look like an (x, y) point but define a rectangular solution space.  Any given clinical trial will represent a point with a cloud for SD (such as 90% spec ±2, 85% sens ±3).

AI History You Can Use: MACs, BCBS Plans, Corporate Structures in Review

There are many complex relationships among Medicare contractors, Blues plans, holding entities, novelty names (Elevance), and more.   Chat GPT works hard to sort it all out.

I've read it all and it's directionally correct and consistent with what I know - but don't guarantee every word is correct.  It's a for-example of what AI research and AI writing can create, as of 3/2026.

###

The Blue System, Medicare Contractors, and the Curious Case of MolDX

At first glance, entities such as Novitas, First Coast Service Options (FCSO), Palmetto GBA, CGS, and MolDX can look like a tangle of shells, aliases, and contractual masks. In reality, the structure is more intelligible than it first appears, though still sufficiently layered to invite confusion. The key is to distinguish among three different kinds of relationships: first, the relationship between the Blue Cross Blue Shield Association (BCBSA) and local or regional Blue plans; second, the relationship between those Blue plans and their government-services subsidiaries or affiliates; and third, the difference between a corporate entity and a programmatic framework such as MolDX. Once those distinctions are kept in view, the web of Novitas, FCSO, Palmetto, CGS, WPS, Noridian, Florida Blue, South Carolina Blues, Anthem, and Elevance becomes much easier to parse.[1][2] (Blue Cross Blue Shield Association)

Thursday, March 12, 2026

AI History You Can Use: Relive Amazing 2007-2010 Debates about FDA, LDT, CDX

In the past couple years, we've lived through FDA regulation of LDTs, court cases, and expanding capabilities of genomic diagnostics, many of them LDTs. 

However, it's worthwhile to recall the period 2007-2010, when a PGx test to predict rituximab responders led to pushback from Genentech against LDTs, a Citizen's Petition to FDA, and a National Academies review in 2010. The PGx FCGR rituximab test had largely sunk out of view by then, and later meta-analyses were negative. Here is a retelling of the whole story from Chat GPT 5.4.

The article below is written entirely by Chat GPT and as a side bar I provide a link to the whole Chat GPT dialog in its original form of prompts and answers:  Here.

It would have taken me hours to research and write this essay by hand.  With AI, it took a half hour from my first vague prompt about a half-remembered something.

###

 


Tuesday, March 10, 2026

CMS Posts New Idea for Colorectal Screening Biomarkers - But Should Use an Equivalence Frontier

Update - I walk through the old and new SENS-SPEC spaces, graphically, here.

###

On March 10, 2026, CMS released a new proposal for covering non-invasive CRC screening tests.   Currently, CMS uses a threshold of sensitivity 74% (picking up about 3/4 of all colon cancers relative to colonoscopy) and a specificity of 90% (sending about 1 in 10 patients to colonoscopy due to a false positive FP biomarker.)  CMS uses one NCD for DNA FIT testing (Cologuard) and another NCD for blood-based CRC screening.    

In its opening of the NCD revision six months ago, CMS proposed to change the title to "Non-Invasive Biomarker Tests," suggesting they could merge coverage of blood-based and stool-based tests.  That is what they are in fact attempting to do.  CMS expects to issue its final version June 8, 2026.

However, it looks like CMS is making a cognitive error.  Although they seem to understand there is a continuous tradeoff between SENS and SPEC (just by sliding the cut point up and down), they proposed to allow only two particular "bins" for coverage: Option 1, SENS 90 / SPEC 87; or Option 2, SENS 79 / SPEC 90.

(Pick up 90% of cancers while sending 13 patients to a false-positive (FP) colonoscopy; or pick up 79% of the cancers while sending only 10 patients to an FP colonoscopy.)

Clearly, you should be able to pick up 89% of cancers while sending only 11 patients to colonoscopy.  But that would fail.  You'd fail the 90% rule of Option 1, and you'd fail the 10-patient rule of Option 2.

The problem is, companies can get preliminary data, set predetermined cutpoints to meet one or the other bin, and then "miss" both bins, while actually having a more accurate test than the NCD requires.  That is, the test performs well against a continuous quality frontier.

This is not hard to express algebraically, and CMS could use a simple formula by which anyone could tell in 30 seconds if a test meets the true (frontier-based) performance or not.   This is also much less wasteful, since you don't have to discard super-costly trials that miss a "bin" while exceeding the implied accuracy frontier.
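As one illustration of such a formula (my own sketch, not CMS's actual rule): draw the straight line through CMS's two proposed bins, (sens 90, spec 87) and (sens 79, spec 90), and accept any test whose point lies on or above that line.

```python
def passes_frontier(sens, spec, bin1=(0.90, 0.87), bin2=(0.79, 0.90)):
    """Hypothetical 'equivalence frontier': the line through CMS's two
    proposed bins.  A test passes if its (sens, spec) point lies on or
    above the line, i.e., at least as accurate as the implied tradeoff."""
    (s1, p1), (s2, p2) = bin1, bin2
    slope = (p2 - p1) / (s2 - s1)            # spec traded per unit of sens
    required_spec = p1 + slope * (sens - s1)  # frontier at this sensitivity
    return spec >= required_spec

# The blog's example: 89% sensitivity, 89% specificity.
# It fails BOTH bins (sens < 90 for Option 1; spec < 90 for Option 2),
# yet it sits above the line connecting them:
print(passes_frontier(0.89, 0.89))  # True
```

Anyone could run such a 30-second check; a trial need not be discarded merely because its predetermined cutpoint landed between the two bins.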

I'll let Chat GPT explain it.

The CRUSH Initiative and Medicare's Bone-Headed Stupid Payments for 81408 and Other Insane Codes

In June 2023, OIG published that Medicare's highest-paid genomic test code, 81408, was likely unbelievable and fraudulent from day 1.   Here.  Practically a billion dollars had gone out from 2018 to 2022, when Medicare payments for 81408 were stopped.  The code was never billed in the NGS MAC and MolDx regions, and nearly all payments were in Texas (Novitas MAC) and Florida (FCSO MAC).

Payments look like this:


If you know that 81408 is medically unbelievable in a Medicare population, let's add that these labs billed and were paid 81408 in units of 2 per patient, 81407 in units of 1 per patient, and 81406 in units of 2 per patient.   So patients actually had not 1, but 5 or more unbelievable codes ON EACH CLAIM.

I first referred to 81408 as the "fraudomatic code" in the fall of 2020.  Here.  Over the next 5 years, I published about a dozen follow-up blogs.

Four more insights into the MAC insanity here:

NEW INSANITY #1

Some labs in Florida had huge payments under 81408 in 2022.  When that gusher of money stopped in 2023 (bar chart above), the SAME LABS just switched to other costly, unbelievable codes like 81419 (epilepsy gene panel).   Here.  OMG.

 NEW INSANITY #2

More on the codes that were switched to.

Despite seeing the massive risks of uncontrolled, costly genetic codes in Texas and Florida by 2022, and publishing on this in 2023, the same insane explosive growth continued in Texas and Florida in 2024, on the codes 81419 (epilepsy $2449), 81440 (mitochondrial $3324), and 81443 (Ashkenazi Panel $2449).


Whereas the natural (original) spending on these codes in Medicare should be, and is, close to zero, the 2024 spending was $161M.

Puzzle - which is worse, DME fraud or Genetics fraud?  I would argue DME fraud bills for implausible volumes of services, while this genetics fraud bills for impossible types of services, which should be easier to detect.  

NEW INSANITY #3

The older code 87798 (other pathogen, $35) already saw significant spending in 2019, at $100M.  But nothing can explain the skyrocketing value, adding $200M from 2022 to 2024.  The extra and sudden $200M was similar to booming rates for the inexplicable codes 81408, 81419, 81440, and 81443.


Billing by LabCorp and Quest was about nil.  But in Texas and Florida...watch out.   (MolDx largely cut off 87798 payments by around 2022.)

NEW INSANITY #4

This whole time, the program integrity people at CMS left the "Medically Unlikely Edit" (MUE) at N=2 for code 81408.   If it had been reset to 1 in 2019 or 2020, which would have taken five minutes, CMS would have saved $400M.

Even today, March 2026, the medically reviewed and passable units per claim on 81408 is ... TWO.  Here.  This is a supervised edit.  Someone had to look at this and decide the allowable edits were TWO.  

And even after it was a top fraud investigation - surely, by 2022, based on the 2023 OIG publication - nobody at OIG, or a MAC, or a UPIC, or the big CMS program integrity group, could be bothered to reset the MUE units to 0 or 1, saving hundreds of millions of dollars.  

In March 2026, it's still...medically allowable as TWO units.


So when I'm interviewed about CMS fraud, like the new CRUSH initiative, I say you don't need a supercomputer, and the idiocy is a mile deep.

###

Related:

Out of curiosity, I asked Chat GPT to write an essay about potential adverse events from CRUSH in the "legit" lab industry.



Monday, March 9, 2026

TriCon: Cutting Edge Conference in SF, May 4-5, 2026

The conference TRICON is in its 33rd year and will be held in San Francisco May 4-5, 2026.  The conference has three main tracks: "Diagnostics Innovation," "Artificial Intelligence," and "Precision Medicine."

(And it comes right on the heels of Dark Report Pathology War College in New Orleans, April 27-29, and AMA CPT in Chicago, April 30-May 1.)

Find the conference website here:

https://www.triconference.com/

I gave the agenda(s) to Chat GPT and asked for a write-up.



####

AI CORNER

####

Summary:
The 2026 TRI-CON Precision Medicine conference highlights the rapid convergence of AI, multi-omic diagnostics, and digital pathology. Across three coordinated tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the meeting reflects a field moving toward AI-enabled interpretation of complex biological data and decentralized deployment of advanced molecular testing. For molecular pathologists and precision medicine specialists, the program signals a transition from isolated diagnostic tests to integrated computational systems guiding clinical decision-making.

------------------

The 33rd Annual TRI-CON Precision Medicine conference, returning to San Francisco in May 2026, brings together leaders in biotechnology, diagnostics, and computational medicine to explore how emerging technologies are reshaping healthcare. Organized around three overlapping tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the program illustrates how the next generation of diagnostics will increasingly depend on the integration of genomics, pathology, imaging, and clinical data within AI-driven analytical frameworks.

A dominant theme across the conference is the emergence of AI-driven multimodal biomarkers. Sessions in the Artificial Intelligence track explore how machine learning models can combine histopathology images, genomic sequencing data, radiology signals, and real-world clinical outcomes to improve biomarker discovery and therapeutic targeting. Digital pathology and computational pathology play a central role in this transformation, with speakers describing how foundation models trained on histology data may enable new biomarker strategies and accelerate clinical trial design. Several presentations also emphasize “agentic AI” systems, in which multiple AI models coordinate across datasets and clinical guidelines to support oncologists and multidisciplinary teams in treatment selection and clinical trial enrollment.

The Diagnostics Innovation track highlights another major shift: the migration of testing from centralized laboratories toward point-of-care and at-home diagnostic environments. New molecular technologies—including CRISPR-based detection platforms and portable multiplex testing systems—are being developed to bring complex molecular assays closer to the patient. However, presenters note that reimbursement policy, site-of-service restrictions, and regulatory frameworks remain major barriers to broader adoption of decentralized molecular diagnostics.

Meanwhile, the Precision Medicine program focuses heavily on liquid biopsy technologies, particularly minimal residual disease (MRD) testing and multi-cancer early detection. These sessions emphasize the growing importance of multi-omic signals—DNA mutations, methylation patterns, RNA expression, and protein markers—interpreted through AI-enabled analytics to detect cancer earlier and monitor disease progression more precisely. While oncology remains the dominant application, the conference also explores precision approaches in metabolic disease, neurology, and population health.

Taken together, the TRI-CON agenda suggests that diagnostics are evolving toward AI-mediated, multimodal clinical intelligence systems, combining laboratory science, computational modeling, and decentralized testing to support more personalized and proactive healthcare.