Wednesday, April 1, 2026

Quick Access to Top Providers of any "Popular" Part B CPT Code. OpenMedicare.US

The CMS data group, Data.CMS.Gov, has an elaborate website for payments to providers by CPT code.  It's a little tricky to use but powerful.  Find it here:

https://data.cms.gov/provider-summary-by-type-of-service/medicare-physician-other-practitioners/medicare-physician-other-practitioners-by-provider-and-service

There's a nonprofit website, OpenMedicare.US, that lets you look up the providers of any CPT code in a simple way.

https://www.openmedicare.us/procedures/81479

You can go here:

https://www.openmedicare.us/procedures

and then scroll down to "search by code or description."  Note, however, that they only provide the "top 500 codes," and that's among all procedures, not just lab procedures.  

For example, 81479 (unlisted code) makes the top 500, as does 88342 (IHC), while 81162 (BRCA) does not.  Even when you can't get granular provider listings, the site still offers you the national spend.  


Here's top data for 88342 IHC:




AMA CPT Publishes Latest Batch of Quarterly PLA Codes

On April 1, 2026, AMA CPT released the latest batch of quarterly PLA codes.  These codes were applied for around December 10, were published today, and become active July 1.

AMA summarizes that there were 2 revisions, 4 deletions, and 29 new codes (0631U-0659U).

https://www.ama-assn.org/system/files/cpt-pla-codes-long.pdf

Note that AMA includes a number of important instructions about acceptable or disallowed PLA codes in the two-page prologue to the code list.  E.g., "PLA codes do not have a physician work component," which the PLA committee reviews rigorously.

##

The next PLA date is April 14, when they will post new code applications for comment (codes applied for around March 10).  Those will be hustled through the system from the April 21 public comment date to the April 30 CPT editorial voting date.  The codes should appear in the June and July CMS pricing meetings.





Will CMS Nationalize MOLDX? Coverage at 360DX

A month ago, CMS announced a major new anti-fraud initiative, called CRUSH.  It had two main targets: DME fraud and genomics fraud.  The comment period closed on March 30, and journalists are sorting through the comments received.


See coverage by Adam Bonislawski here (subscription).   (See my CMS comment here).

I see a Regulations.gov posting that 768 comments were received but I haven't found the "search them" link yet.  

Here's the 18-page comment from ACLA; the fact that it runs 18 pages alone suggests they are taking this very seriously.

Here's a 120-word AI overview of the 2,000 pages of full comments.

  • Stakeholder comments on the CMS fraud RFI show a divided but nuanced response to possible nationwide MolDX expansion. Lab groups and consultants generally agree that fraud in molecular testing is a real problem and that clearer front-end controls could help. However, their comments emphasize that MolDX also brings slower coverage timelines, heavier documentation demands, and uncertainty for new test launches. 
  • ACLA stressed delays and stalled coverage requests; NILA argued CMS should focus more on inappropriate ordering than on labs alone; consultants noted MolDX can improve predictability once coverage is secured, but at the cost of greater upfront burden. 
  • The overall tone of stakeholder comment was not anti-oversight, but cautionary: many support stronger anti-fraud tools, yet want CMS to avoid replacing one problem—improper payments—with another—bureaucratic delay and reduced patient access.
And here's a longer summary of the open-access ACLA comment:

Tuesday, March 31, 2026

New to Me: AMA Comment Deadlines are NOON CENTRAL on the Day

 New to me, so I'm just flagging this.

The last few weeks, comments have been accepted for the April 30 - May 1 AMA CPT meeting.  One important topic is revisions to Appendix S, a major ongoing issue that affects AMA CPT policy for software-intensive services (e.g. AI).

The comment deadline is March 31, 2026; I had that right.

I, for one, had not noticed that the specific deadline is 11:59 am Central Time (i.e., effectively NOON on the deadline day).

Something to keep track of.  This applies to pathology comments, non-pathology comments, PLA comments, etc.






Monday, March 30, 2026

Horizons in Diagnostics Value: Case Study: Rethinking Value for Infection Diagnostics

Here's a paper that is worth discussion, and potentially applicable to many areas of diagnostics, not just infection.

In a 2025 paper in Open Forum Infectious Diseases, Claeys, Prinzi, and Timbrook create some excellent thought capital.  Here.



It's also a great example of a good abstract - I can't do better than quoting it.

  • Evaluating the clinical impact of in vitro diagnostic tests (IVDs) for infectious diseases is complex given their effectiveness depends on context, implementation, and provider behavior. 
  • Traditional methodologies for therapy interventions do not adequately capture this complexity, necessitating novel analytical approaches and study designs. 
  • This review highlights methodological considerations for improving evidence generation for infectious diseases IVDs. 
    • Design and analysis challenges leading to bias and related solutions are reviewed such as the target trial framework. 
    • Moreover, novel frameworks such as Benefit–Risk Evaluation of Diagnostics: A Framework, Desirability of Outcome Ranking Management of Antimicrobial Therapy, and Desirability of Outcome Ranking and study designs such as hybrid effectiveness–implementation designs are discussed which allow for holistic ways to assess real-world outcomes.
  •  By evaluating IVDs with practical, real-world evidence, tests can better inform clinical decision making, policy, and ultimately patient outcomes.

###

Saturday, March 28, 2026

Chris Klomp, Health Policy Expert for CMS and HHS - Some Notes on His Experience

Over the last few weeks, shake-ups at HHS have brought Chris Klomp to the #2 position next to Secretary Kennedy.  See news reports here; see an annotated one-hour interview with Klomp here.

I asked Chat GPT to discuss his educational background and professional experience through the lens of his current top-level health policy roles.

Endpoints discusses Klomp on AI, Klomp on biotech/China, Klomp on TrumpRx.

Friday, March 27, 2026

Korie et al. 2026: What Drives Next Gen Sequencing Denials at Yale Pathology?

Header:  A Yale pathology study presented at USCAP 2026 shows that NGS reimbursement denials are less about overuse and more about administrative failure, especially ICD-10 miscoding. Only 20% of cases were denied (275/1,392), and most denials occurred despite guideline-concordant testing. The authors conclude that the fix is operational, not clinical.




Reimbursement Denials for NGS:
A Systems Problem, Not a Clinical One

[By Chat GPT 5.4]

At the March 2026 USCAP meeting, Korie et al. (Yale Pathology) presented a timely analysis of reimbursement denials for next-generation sequencing (NGS) in solid tumors:

Link (abstract PDF):
https://www.laboratoryinvestigation.org/action/showPdf?pii=S0023-6837%2825%2901936-1

The study evaluated 1,392 NGS tests performed between 2022–2023 at a large academic center. Of these, 275 cases (20%) were denied—a meaningful but not overwhelming fraction. That denominator matters: the system is not broadly failing, but the failures are highly patterned and correctable.
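The arithmetic behind that denominator point is easy to check. The counts (1,392 tests, 275 denials) are from the abstract; the Wilson 95% confidence interval below is my own illustrative addition, not something the authors report.

```python
# Denial rate from the Korie et al. abstract: 275 denials out of 1,392 NGS tests.
# The Wilson 95% interval is an illustrative add-on, not from the paper.
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

denials, tests = 275, 1392
rate = denials / tests
lo, hi = wilson_interval(denials, tests)
print(f"Denial rate: {rate:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

With n near 1,400, the interval is tight (roughly 18% to 22%), which supports the point that the 20% figure is a stable, patterned finding rather than noise.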

Register for AMA Meeting on Coding & AI: "Appendix S Revisions" - April 16

 AMA has big, big plans for changing how it handles AI services (potentially affecting digital pathology and genomics) in terms of policy and coding, possibly even with whole new classes of codes.  

These come under the headline of "Revising Appendix S," which has been a topic for several AMA CPT meetings in a row.   

You can register with AMA to view and comment on Appendix S plans, under the heading "Tab 67" of the next AMA CPT meeting.  Instructions here.  

New News: April 16:

AMA has just announced a special public meeting on Thursday, April 16, from 4:30-6:00 pm Central Time (5:30-7:00 ET, 2:30-4:00 PT).

Here's the AMA text and links.   Further below, I give you a very short AI summary of Appendix S.

See an essay from AMA policy participant Richard Frank, MD, here.

Thursday, March 26, 2026

AI, Advanced Software, and AMA CPT Policy: Deadline March 31: Appendix S for Upcoming CPT Meeting

 For several quarters, AMA CPT has been debating major amendments to the AMA CPT "Appendix S," which may have enormous implications for how AI- or software-dominant healthcare services are reimbursed.

At the upcoming AMA CPT meeting in Chicago [virtual registration still available], a new round of revisions to Appendix S will be debated.  You can sign up now to read the current revisions and make public comment.  Debate was vigorous at the AMA CPT last September and this past February.  

Revisions to Appendix S may be followed by creating a new coding section called "CMAA," Clinically Meaningful Algorithmic Analyses.   

The deadline to comment is Tuesday, March 31.

My main concern is that they'll bring in policies adapted for radiology, cardiology, etc., and these may be a poor fit for genomics, which makes universally heavy use of extremely sophisticated software (including AI) and which already does not require "physician work" as its main input.

Here's how to comment:

First, go to the online PDF agenda for the April CPT meeting:

https://www.ama-assn.org/system/files/cpt-panel-may-2026-agenda.pdf

Click on the boldface link for INTERESTED PARTY COMMENT.  This should take you to the AMA website here, but click on the PDF's link if the one below doesn't work.

https://cptsmartapp.ama-assn.org/ipdashboard

You may need to register with AMA (by email) to access AMA functions like this comment dashboard.

When you get to the AMA CPT Smart App, be sure to click the tab near the top for the "INTERESTED PARTY" view, then scroll down to the bottom.


Note that for Tab 67, Appendix S, the "Ballot" option (far right column) is where you find the actual markup version of the new Appendix S.

Use the progress button near the bottom to scroll ahead to Agenda Item 67 (Appendix S).


So you've tapped the Interested Party Portal and advanced to where you find Tab 67 (Appendix S).   There are four columns:

  • IP Interested Party Access (to CPT application and supporting documents like publications)
  • IP Comment (you get a fixed form on which to write your comments)
  • View Comments
  • BALLOT (in the case of Appendix S, you gotta get this: the actual 4-page appendix)
Appendix S is damn hard to read; it's nearly entirely fields of struck-out text and inserted text from beginning to end.   But it's important.

See snapshot of the heavy edits throughout:





Comment on CRUSH, CMS Policy, Genomic Testing: Due Monday, March 30

On February 27, 2026, CMS announced a vigorous plan for anti-waste, anti-fraud measures in Medicare, with strong highlighting given to two areas: (1) durable medical equipment (DME), and (2) genomic testing.   The initiative, abbreviated CRUSH ("crush fraud"), is open for public comment until Monday, March 30, 2026.

See my blog and links here:

https://www.discoveriesinhealthpolicy.com/2026/02/cms-issues-rfi-on-fraud-highlighting.html

Genomics fraud includes highly improper billing of hundreds of millions of dollars for genetic testing that is impossible or pointless in a Medicare population.   (These occurrences were vastly dominated by the states of Florida and Texas, where Medicare payment controls were amazingly weak for years.) An example of a $52M genetic test scheme is here.

There have been 183 comments to date, but comments often pour in on the final day.   See the policy discussion here and find a "submit a public comment" checkbox.  Anti-fraud options discussed include nationalizing MolDx.

https://www.federalregister.gov/documents/2026/02/27/2026-03968/request-for-information-rfi-related-to-comprehensive-regulations-to-uncover-suspicious-healthcare



Tuesday, March 24, 2026

Nerd Note: 2017 PAMA Raw Data File is Still Posted

Header:  CMS still stores publicly available cloud data on lab test pricing surveyed in 2017 and representing CY2016.

###

Congress and CMS are re-activating the PAMA reporting process.  Laboratories that had >$12,500 in Medicare payments in 1H2025 will report data on all claims paid by commercial payors in 1H2025, with reporting due May-June-July 2026.  See websites and announcements at CMS.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting

The prior survey covered 1H2016, with data reported and posted in 2017.  This set a new fee schedule for 2018 forward.

See the 2016/2017 Cloud Data 

Monday, March 23, 2026

AI Experiment: How Alex Dickinson Describes the Caris MCED ACHIEVE Report

In March, Caris released top-line results of its ACHIEVE study, testing its MCED test in real cases.  Press release here.  Active Linked In author Alex Dickinson wrote a set of 5 articles about the results.  One, two, three, four, five.

Out of curiosity, I asked what Chat GPT could make of the six documents.



AI CORNER

###

Overview

Caris reports striking interim performance for its Detect MCED assay using deep whole-genome sequencing, with unexpectedly strong early-stage sensitivity in common cancers. However, enriched cohorts, limited follow-up, and incomplete blinded validation constrain interpretation. Dickinson’s analyses highlight a differentiated WGS multi-signal strategy with potential advantages over methylation-first approaches.


Consolidated Article (Caris + Dickinson)

Focusing first on the press release, the key point is that Caris reported an interim analysis, not a completed prospective screening validation. The Achieve 1 dataset includes 2,122 subjects (1,505 undiagnosed; 617 cancers), but the undiagnosed group is enriched, not general-population screening. 

Only 22.5% had ~1-year follow-up, with ~7% later diagnosed with cancer—again indicating high-risk enrichment. About 865 samples remain in blinded validation, so current results are signal-generating, not definitive.

The reported performance is notable. Stage-specific sensitivity was 56.8% (I), 70.1% (II), 77.1% (III), 99.1% (IV), with 61.3% for stage I–II. Early-stage sensitivity in key cancers included 53% breast, 78.9% prostate, 86.7% lung, and 62.2% colorectal. Specificity was 99.1% in a small asymptomatic subset (n=121) and 95.3% in the broader undiagnosed cohort. These are the central empirical results.

An Expert Discusses The Data

Dickinson’s posts provide useful context. He frames Caris as entering MCED from a position of scale and infrastructure—large tumor databases, clinical profiling, and sequencing capacity—suggesting Detect is an extension of an existing oncology data platform rather than a stand-alone assay.

Scientifically, Dickinson highlights the assay design: ~250x whole-genome sequencing of plasma with paired buffy coat sequencing to remove CHIP, extracting mutational, fragmentomic, and nucleosome-positioning signals for ML classification. This multi-signal WGS framework plausibly explains the observed sensitivity pattern.

His most provocative point concerns tumor-type performance, especially breast and prostate. He argues methylation-first MCED approaches may underperform in these high-incidence cancers, making Caris’s relatively strong early-stage sensitivity clinically meaningful. This reframes MCED evaluation toward incidence-weighted performance, not aggregate sensitivity alone.
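Dickinson's incidence-weighting point can be made concrete with a short sketch. The per-cancer early-stage sensitivities are the Caris-reported figures quoted above; the relative incidence weights are hypothetical placeholders I've inserted for illustration, not actual registry figures.

```python
# Incidence-weighted early-stage sensitivity, in the spirit of Dickinson's framing.
# Sensitivities are the Caris-reported interim early-stage figures quoted above;
# the relative incidence weights are HYPOTHETICAL illustrative values.
early_stage_sensitivity = {
    "breast": 0.530,
    "prostate": 0.789,
    "lung": 0.867,
    "colorectal": 0.622,
}
incidence_weight = {  # hypothetical relative weights, normalized below
    "breast": 0.35,
    "prostate": 0.30,
    "lung": 0.20,
    "colorectal": 0.15,
}

total = sum(incidence_weight.values())
weighted = sum(
    early_stage_sensitivity[c] * incidence_weight[c] / total
    for c in early_stage_sensitivity
)
print(f"Incidence-weighted early-stage sensitivity: {weighted:.1%}")
```

The point of the exercise: if a test is relatively strong in the highest-incidence cancers (breast, prostate), its incidence-weighted sensitivity can exceed what an unweighted average across rare and common cancers would suggest.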

Key Limitations

However, key limitations remain. The 99.1% specificity estimate rests on only 121 asymptomatic subjects. The broader 95.3% specificity is more relevant operationally but less reassuring for screening. The cohort is enriched, follow-up incomplete, and blinded validation pending—raising the risk of optimistic interim estimates.
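Why the specificity estimate matters so much for screening can be shown with a back-of-envelope positive predictive value (PPV) calculation. The sensitivity and specificity figures are the reported interim values; the 1% prevalence is a hypothetical screening-population assumption of mine, purely for illustration.

```python
# PPV at a HYPOTHETICAL 1% cancer prevalence, using the reported interim
# figures (61.3% stage I-II sensitivity; 95.3% vs 99.1% specificity).
def ppv(sens: float, spec: float, prevalence: float) -> float:
    """Bayes' rule: fraction of positive screens that are true cancers."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

prev = 0.01  # hypothetical screening prevalence, illustration only
for spec in (0.953, 0.991):
    print(f"specificity {spec:.1%} -> PPV {ppv(0.613, spec, prev):.1%}")
```

Under these assumptions, moving specificity from 95.3% to 99.1% roughly triples the PPV (from about 12% to about 41%), which is why the small n=121 basis for the higher estimate is the key open question.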

Caris also claims superiority over methylation approaches, but provides no head-to-head comparison, making this an interpretive rather than evidentiary claim. Cross-study comparisons, including Dickinson’s, remain hypothesis-generating.

Strategically, Dickinson emphasizes Caris’s ability to pursue a sequencing-intensive design due to its scale. He notes current costs may be high (e.g., overlapping 2x150 reads), but could fall with alternative platforms. This underscores that economic feasibility will be as important as analytical performance.

Finally, Caris signals future expansion to whole transcriptome integration, reinforcing its view of MCED as a multimodal inference problem, though at the cost of added complexity.

Bottom line: Detect represents a biologically coherent and potentially important MCED approach with strong interim early-stage signals. But given cohort design, small screening subsets, and pending validation, the appropriate stance is serious interest with disciplined caution, not definitive conclusion.



CMS Reports on First Rural Health Summit under Tech Transformation

One of the lesser-known outcomes of last summer's budget bill was $50B for rural health transformation (RHT) focused on technology.  Last fall CMS established the Office of RHT and in December $50B of awards were announced.  Home page here.

On March 18, CMS convened leaders from all 50 states to discuss.  Find the online report here:

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-first-rural-health-transformation-summit-advance-state-led-innovation



AI CORNER

Here's a 100 word summary.

CMS convened its first Rural Health Transformation Summit on March 18, 2026, bringing leaders from all 50 states together to advance a $50 billion initiative created under recent federal legislation. The program aims to strengthen rural health systems through 

  • expanded access to care, 
  • workforce development, 
  • technology modernization, and 
  • innovative payment models. 

Discussions focused on five priorities: 

  • prevention and chronic disease, 
  • sustainable access, 
  • workforce pipelines, 
  • value-based care, and 
  • health IT. 
States shared early strategies such as telehealth, mobile units, and regional partnerships. CMS emphasized aligning efforts with Medicare and Medicaid financing and embedding long-term workforce and infrastructure solutions to ensure durable improvements in rural health outcomes.

Saturday, March 21, 2026

Journal Club: Value of WGS in Real-World Cancers (Van Putten, Nat Med)

What's the value of going upscale to whole genome sequencing (WGS) in solid cancers?  Van Putten et al. assemble data from their experience with 888 solid cancers.  The work is from the Hartwig Medical Foundation / Netherlands Cancer Institute.

Find the paper here, a Linked In essay here by Joseph Steward, and another here by Alex Dickinson.  Dr. Cuppen, scientific director of the Hartwig Foundation, comments here.


Most samples in this study were frozen tissue (89% success rate), but the authors remark that when archived samples were used, they had a comparable success rate (90%).

Chat GPT Discusses the Paper:

Friday, March 20, 2026

Can AI Re-Think Health Policy? Example Using WSJ Policy Essay (& MolDx)

Can AI read an article and project its possible applications into a different field?  That's today's question. 

Starting point: WSJ runs an essay by Harvard economics professor and Manhattan Institute authority Roland Fryer.  Fryer here, essay here.   


While his article was on "regulating AI," it clearly had ramifications or applications in other policy domains.  I asked Chat GPT 5 to read the essay and discuss its projection onto healthcare policy such as CMS.   I deliberately left my main initial request vague.   

At bottom, I ask it some Q&A, including how this applies to MolDx.

Here comes the initial response to my request, "apply Fryer's thinking to healthcare policy."