Monday, November 24, 2025

Fixing the Shrinking RVU: Insights Into the Debates

The real-dollar value of the Medicare RVU, on which all physician payments (and many outpatient technical services) are based, has been shrinking for years.  A new 12-page report from the Committee for a Responsible Federal Budget collates much of the history and argumentation - worth reading even if you disagree with its conclusions.

###

The real-dollar value of the Medicare RVU has been shrinking for years - see a one-page update from the AMA.  


The advisory body MedPAC will be discussing physician reimbursement adequacy at its December 4-5, 2025, meeting.

One recent major action by CMS was to reduce many technical valuations by a 2.5% "efficiency factor," effective January 2026, allowing funding to be redistributed toward primary care E&M claims. See, e.g., a summary at Holland & Knight here.
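The arithmetic of the cut itself is simple. Here's a minimal sketch (illustrative only - CMS applies the factor to specific RVU components of designated services under its own rulemaking methodology, and redistributes the savings via budget neutrality):

```python
# Illustrative sketch of an across-the-board "efficiency factor" cut.
# NOT CMS's actual methodology: real valuations have work/PE/MP components,
# and the 2.5% factor applies only to designated technical services.
EFFICIENCY_FACTOR = 0.025  # 2.5%

def apply_efficiency_cut(rvu: float) -> float:
    """Return the valuation after the 2.5% efficiency reduction."""
    return rvu * (1 - EFFICIENCY_FACTOR)

# A technical service valued at 4.00 RVUs drops to 3.90; under budget
# neutrality, the savings flow elsewhere (e.g., primary care E&M).
print(apply_efficiency_cut(4.00))  # 3.9
```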

###

An entity called the Committee for a Responsible Federal Budget has released a 12-page white paper in which it supports the "efficiency discount."  Find it here:

https://www.crfb.org/sites/default/files/media/documents/HSI%20PFS%20Final.pdf

While many readers will oppose that conclusion, I'm highlighting the 12-page white paper because it contains an extensively footnoted discussion of the history and present status of a range of issues - from the "efficiency" debate, to overall RVU deflation, to the way that growing numbers of Nurse Practitioners and other professionals are billed (direct vs. incident-to). 




  

###

The big beautiful bill (HR 1) of July 2025 will change whether and how certain health care graduate students can borrow education funds, from 2026 forward.   In a nutshell, a limited number of degrees will be considered "professional" (law, medicine, dentistry, a master's in theology for ministers).  Others, like nurse practitioner degrees, will not be.   (Degrees like the MBA or MPH will be officially classed as "non-professional," with lower federal loan caps.)   As a general policy matter, we want to encourage the production of N.P.s and other kinds of primary care clinicians - hopefully this will continue to be revisited.


Sunday, November 23, 2025

Head of CMMI - New Rules for CMMI

CMMI is Medicare's Center for Innovation, created by the 2010 Affordable Care Act. It has a mixed - probably weak - track record, given 15 years and billions of dollars of experience.

See a new article by Gita Deo, chief of staff of CMMI, and Abe Sutton, head of CMMI.  It's open access in Health Affairs:

https://www.healthaffairs.org/do/10.1377/hauthor20251114.865163/full/


To me, the most interesting thing about the short article was the tone.  It gets pretty close to, "Don't do X.  Only idiots do X."

##

CMMI's themes are to protect patients, protect providers, and protect taxpayers.   All models should include downside risk, because this is almost always a hallmark of previous successful models.   Downside risk means: don't just spread money on top of problems.

Also, be sure you have rock-solid outcome measures.  By law, for a CMMI project to succeed, it must save costs and improve quality (or save costs at the same quality).   That means you need a control group, whether randomized or constructed, against which outcomes can be quantified.

##

Gita Deo is chief of staff at CMMI; she previously earned a master's in public health and worked in a provider system and at McKinsey.   Sutton also worked at McKinsey, and in the Trump-01 White House, where he led some effective renal initiatives.   He used the interregnum (the Biden years) to pick up a law degree at Harvard.

##


Friday, November 21, 2025

LANCET Editor Tries to Put Medical AI Into Perspective; so AI Responds

Richard Horton has been editor of LANCET since 1995, when he was in his mid-thirties.   This fall, he writes a pair of op-eds on AI as, well, "a reservoir of illusions."  Is he really that negative?   Let's take a closer look.

  • Offline: A Reservoir of Illusions (Part 1)
    • Lancet editor Richard Horton on medical AI, with a focus on Marc Andreessen
    • Here.
  • Offline: A Reservoir of Illusions (Part 2)
    • Horton on medical AI, focus on Emily Bender's book, "The AI Con: How to Fight Big Tech..."
    • Here.
Before we look at Horton's articles - if you like this topic, see two articles in JAMA Internal Medicine this week.  We have Bressman et al., Software as a Medical Practitioner - Time to License Artificial Intelligence?   Here.  And also Steiner, Scientific Writing in the Age of Artificial Intelligence.  Here.  

Steiner has a case study in which AI writes, or shortens, medical journal abstracts, and he's not too convinced this meets his bar for quality or accuracy.   I'd just mention that AI can be primed to do this better (e.g., give it 500 well-written abstracts first, before asking it to write or edit), and that five human authors, writing or rewriting an abstract, would never come up with the same text, word choices, or edits.  Each human editor would pick or alter different things than his colleagues.

Working from Horton's Op Ed 2: Bender's book against AI (which I haven't read and know only from this op ed) argues that AI is just regurgitating text strings.   I see more to it than that.  

For example, recently FDA held a full-day workshop on digital medicine and psychotherapy apps.  (My blog here.)  FDA provided a 27-page pre-read, and FDA provided a several-page rapid post-meeting summary.   FDA also provided a link to a 7-hour archived video.   I recorded the video, got an auto-transcription from Otter.ai, and fed the three documents (the pre-read, the short summary, and the full transcript of 160 pp, 62,000 words) to Chat GPT 5.   I asked, for example, what you would discover in the full transcript that you wouldn't glean from the pre-read or meeting summary - without having to watch the 7-hour meeting myself.   I thought the result was interesting; you can decide; blog link at top of paragraph.
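For readers who want to replicate this trick, here's a minimal sketch of the workflow using the OpenAI Python client. The file names and model string are assumptions for illustration, not my exact setup, and a 62,000-word transcript may need a long-context model or chunking:

```python
# Sketch: ask one model a comparative question across three documents.
# Assumes the openai package is installed, OPENAI_API_KEY is set, and
# the three FDA documents have been saved locally as plain text files.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

docs = {
    "PRE-READ": Path("fda_preread.txt").read_text(),
    "MEETING SUMMARY": Path("fda_summary.txt").read_text(),
    "FULL TRANSCRIPT": Path("fda_transcript.txt").read_text(),  # ~62,000 words
}

question = (
    "What would I discover in the FULL TRANSCRIPT that I would not glean "
    "from the PRE-READ or the MEETING SUMMARY?"
)

# Label each document so the model can say which source it is drawing on.
prompt = "\n\n".join(f"=== {name} ===\n{text}" for name, text in docs.items())

response = client.chat.completions.create(
    model="gpt-5",  # hypothetical model string; substitute whatever you have access to
    messages=[{"role": "user", "content": prompt + "\n\n" + question}],
)
print(response.choices[0].message.content)
```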

###
Looking at Horton's articles, I asked Chat GPT 5 to review them for us.  The result - the "extruded text" [Bender] - follows.


###
AI CORNER
###

In Part 1 (“A reservoir of illusions, part 1”), Horton is mostly setting the stage. He contrasts Marc Andreessen’s techno-optimist manifesto with the current investment boom and breathless media narrative around AI. Andreessen is quoted as saying that AI will save lives, end environmental crisis, create material abundance, and that any deceleration of AI is tantamount to “a form of murder”. 

Horton treats this as Exhibit A of “cyber-futurism”: AI as universal problem solver, enemies named as the SDGs, precautionary principle, risk management, and social responsibility. He acknowledges genuine successes such as AlphaFold and the economic centrality of big tech, but the column’s punchline is that this optimism has become a “bubble”, and that critics who puncture it deserve much closer attention.

In Part 2 (“A reservoir of illusions, part 2”), Horton essentially hands the microphone to Emily Bender and Alex Hanna and endorses much of their thesis. He highlights their claim that “AI” is largely a marketing term: large models don’t think, don’t understand, aren’t sentient, can’t empathize, and can’t care. He repeats Bender/Hanna’s line that ChatGPT is a “souped-up autocomplete” or “text-extruding machine”, with no internal mechanism for source reliability, factual curation, or genuine evaluation. He stresses their view that such systems cannot replace human creativity or relationships, “add nothing to the human condition”, and are “all but irrelevant” to the Majority World, where political will and resources matter more than text synthesis.

Turning to health and science, Part 2 becomes more concrete and pointed. Horton concedes narrow, low-stakes utility—transcription, scheduling, simple triage—but frames the clinical and scientific hype as dangerous: automating diagnosis and care remains “unproven,” and using AI in peer review is a “serious abrogation of scholarly duty.” He foregrounds worries about AI-generated fraudulent manuscripts, collusion between editors and authors, and the risk that if editorial systems cannot cope, “the whole edifice of science” could be undermined. He then moves from Bender/Hanna’s critique to empirical examples: a radiology study where AI increased false positives without improving outcomes; experimental work showing AI systems more willing than humans to execute unethical instructions, lowering practical barriers to misconduct; and evidence that algorithms amplify age and gender bias, deepening inequities. He closes with the IBM line: a computer can’t be held accountable, so it mustn’t make management decisions—implicitly, nor life-and-death clinical ones.

Is Horton “too negative” or just prudently cautious?

For an expert audience in medical AI, I’d characterise his stance as deliberately counter-balancing the hype, but not methodically balanced.

Where he’s strongest / most defensible:

  • He is right that health-AI marketing routinely overruns its evidence base. Many products are launched on retrospective AUCs, reader-study surrogates, or workflow anecdotes rather than hard patient outcomes. Asking that AI be treated as a medical intervention—with explicit questions about information sources, reliability, effects on decisions, efficiency, quality, outcomes, and unintended harms—is exactly the posture most regulators and serious developers now espouse.

  • His concern about scholarly integrity is not exaggerated. Journal workflows, especially in high-volume/open-access environments, are already stressed; the combination of LLMs and paper mills is a genuine systemic risk. Highlighting fabricated AI-driven submissions and collusive editorial behaviour is a useful warning shot to the publishing community.

  • He is right to stress accountability and opacity: deploying opaque models in high-stakes decisions, without clear responsibility and post-deployment monitoring, is a governance failure even if the model’s ROC curve looks good. The IBM quote at the end is rhetorically old-fashioned but still a useful ethical anchor.

Where he overreaches or flattens the landscape:

  • By leaning heavily on Bender/Hanna’s line that these systems “add nothing to the human condition” and are “all but irrelevant” to the Majority World, he slides from valid scepticism into overstatement. There is a sizeable space between “AI will save the world” and “AI adds nothing”, and a lot of serious work in global health, imaging, and language accessibility sits in that middle space.

  • He tends to conflate LLM limitations with “AI” writ large. The critique of chatbots as text-extruders with no internal fact model is accurate for vanilla LLMs, but he doesn’t distinguish:

    • generative chat systems, from

    • narrow supervised models (e.g., risk scores, triage models, segmentation networks) that are trained and validated on labelled clinical data, sometimes with outcome endpoints and prospective evaluation.

    That conflation makes his critique feel, to experts, a bit like attacking a specific architecture while rhetorically indicting the entire field.

  • On the other side, he also uses extreme techno-optimism as “the AI side”—Andreessen’s claim that any slowing of AI is essentially murder, that AI will end environmental crisis and sustain 50 billion people. Those are real quotes, not invented straw men, but they are hardly representative of mainstream clinical AI research, where the discourse is about incremental gains, safety, and reimbursement. Framing the debate as Andreessen vs. Bender/Hanna obscures the much larger middle camp of “cautious, data-driven optimists.”

So I’d say: he isn’t fabricating straw men, but he is selectively choosing the loudest techno-optimist (Andreessen) and the loudest critics (Bender/Hanna) as his poles, and he lets their most sweeping lines stand without much nuance. That’s understandable for a short “Offline” column, but it means the pieces read more like rhetorical correction than a balanced policy analysis.

How should technical and policy readers read these pieces?

From a medical-AI/health-policy vantage point, Horton is useful as a mood indicator and a reminder of external perceptions, more than as a fine-grained map of the field.

  • For developers, these columns are a reminder that serious journals are deeply worried about fraud, bias, and hollow “AI-washed” claims. Any translational AI program that doesn’t bake in prospective evaluation, post-market monitoring, and clear lines of human accountability will simply reinforce the concerns Horton amplifies.

  • For policymakers, they crystallise a defensible regulatory attitude: treat AI as a medical intervention; require robust evidence tied to clinical endpoints; scrutinise claims about automation of judgment, empathy, or care; and do not let workflow efficiency gains substitute for patient-relevant benefit.

  • For both groups, the pieces probably under-represent the constructive middle ground—hybrid workflows, assistive decision support, carefully bounded automation, and AI used to widen access rather than replace clinicians—but that’s precisely the opening for your blog’s readers: to articulate what responsible, empirically grounded AI in medicine looks like, in contrast both to Andreessen-style manifestos and to “adds nothing to the human condition” nihilism.

In short: Horton is intentionally anti-hype, sometimes rhetorically sharp, occasionally over-generalizing, but not unserious. He’s not a technical critic of model architectures; he’s a cultural critic warning that we may be building an “edifice of illusions” on top of weak evidence and brittle institutions. For expert readers, the productive move is to accept the core cautions, note where he’s painting with too broad a brush, and then show—using real data and real deployments—how to do better than either pole of the Andreessen ↔ Bender/Hanna spectrum.  // Chat GPT 5


AMA to Hold All-Day Workshop on Digital Medicine Coding; Monday December 8, 2025

One of the major topics at AMA is how AMA CPT will handle coding and payment for software-dominant services.   The AMA held an hour-long call on this earlier in the year, and an hour-long session at the September AMA CPT meeting.

Update:  AMA will now hold an all-day workshop on Coding & Payment for Algorithmic Services on Monday, December 8, 2025 (9-3 Central Time).  On-site registration is sold out, but video registration is open.

https://www.ama-assn.org/member-benefits/events/coding-payment-algorithmic-services


Tidbit: AMA holds a more modest one-hour seminar on existing AI codes in CPT on December 4.

###

AMA meetings tend to have multi-layered confidentiality restrictions, but I was able to write a blog on the topic on September 10, because I attended a public genomics conference in August where an AMA speaker discussed the AMA's options - a public talk with no associated confidentiality contracts.

https://www.discoveriesinhealthpolicy.com/2025/08/digital-pathology-ai-and-coding-why.html

https://www.discoveriesinhealthpolicy.com/2025/09/update-on-ama-digital-and-ai-coding.html

You can also find non-confidential information via the AMA webpages for "Digital Medicine Payment Advisory Group" DMPAG and "Digital Medicine Coding Committee" DMCC.

https://www.ama-assn.org/practice-management/digital-health/digital-medicine-payment-advisory-group

One of the key ideas is to make a new section of AMA CPT - not PLA codes, and not Category I or III codes - that would be a sort of registry of SaMD services (in pathology, radiology, cardiology, etc.).   A lot of AMA attention goes to the fact that there are new software services that don't include physician work; such codes have historically been very rare.   

However, in pathology and laboratory medicine, probably far more than 90% of the codes don't involve "physician work" (even "whole genome" or "comprehensive genomic procedures" on 400 tumor genes).   And CMS has put all the PLA codes that involve whole slide imaging and AI on the Clinical Lab Fee Schedule, some even with up-close CMS review as ADLTs.   So my perspective is that CMS can ALREADY handle pathology AI within existing coding and regulation.

##

For interesting thought capital on AI reimbursement and adoption from Parikh, Warshawsky, and others, blog here.

For my ideas on why existing coding and regulations work (or could work) for AI pathology, blog here.

Wednesday, November 19, 2025

New AMA CPT Text Defines Rules for PLA Code Changes (In a Vague Way)

In a previous blog, I noted that updated online PLA code instructions seem to disallow "distributed testing models" that may be approved by CAP inspections. 

https://www.discoveriesinhealthpolicy.com/2025/11/update-cap-accreditation-allows.html

Here's another PLA item of interest.  

For the CY2026 code book, which just arrived, AMA added some text about when a new PLA code is needed versus when a code with the same number can be revised editorially.

I'm not sure the result of the new language is crystal clear.   But here it is.

###

2026 code book, page 1016.  The repeated use of "may" and squishy terms like "material change" seem as vague as ever to me.  But here you go:

A new PLA code is required whenever the existing PLA has been updated in a manner that materially alters the original code descriptor, clinical indications, performance claims, or result reporting.  Examples of changes that MAY trigger the need for a new PLA code include:

1) Additional nucleic acid (DNA, RNA) and/or protein analysis added to or removed from the current PLA test.

2) The algorithm (if used) has been materially changed so that it has expanded beyond its original intent, and

3) The name of the PLA test has changed in association with changes in test performance or test characteristics.

The addition or modification of the therapeutic application of the test requires the submission of a code change, BUT it MAY not require a new code number.

Refer to the introduction of the PLA section of the CPT for other details regarding the code set.

###



Tuesday, November 18, 2025

Hemoglobin A1c Testing, and the Truly Mysterious Case of Code 83037 and PAMA

If you go back to tables released when PAMA pricing was new in 2018, you'll find that the surveyed median price of 83036 - laboratory hemoglobin A1c - was $8.50 (the fee schedule was $13.32).  And the surveyed median price of 83037 - POCT hemoglobin A1c - was $22.50, almost three times as high.

Yet the price of either 83036 or 83037 is now $9.52.

What's up????

83036 83037 in PAMA 2018


###

Here's an AI-generated review of episodes and actions that I lived through years ago.

###

83037 was born as a higher-paid point-of-care A1c code in 2006–2007, but Section 113 of the 2007 Medicare/Medicaid/SCHIP Extension Act mandated that any FDA-home-use HbA1c test (83037) be paid at the same rate as standard HbA1c (83036), effective April 1, 2008. CMS implemented this via a one-time transmittal, and the parity rule remains in force.
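In code terms, the parity rule is a one-line override. Here's a minimal sketch (the fee-schedule dictionary is a hypothetical stand-in, seeded with the ~$9.52 rate discussed above):

```python
# Sketch of the Section 113 parity rule: any FDA home-use HbA1c test
# (83037, incl. 83037QW) is paid at the 83036 rate from 2008-04-01 onward.
from datetime import date

CLFS = {"83036": 9.52}  # hypothetical lookup; rate per the post

def hba1c_payment(code: str, date_of_service: date) -> float:
    if code in ("83037", "83037QW") and date_of_service >= date(2008, 4, 1):
        return CLFS["83036"]  # statutory parity with the lab-based code
    return CLFS.get(code, 0.0)

print(hba1c_payment("83037", date(2025, 6, 1)))  # 9.52 - no POC premium
```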

#####

In 2023 CMS Part B data, 83036 accounted for $132M in payments (roughly 14 million tests at the ~$9.52 rate), with LabCorp and Quest the top billing labs.   For 83037, there were only $738K in payments (about 70,000 tests).  

####

Here’s the back-story in “policy-wonk” form.  I thought it was a pretty amazing job of AI-driven automatic internet research and writing.


1. Why 83037 was created

CPT 83037 was introduced in 2006 as a new code for HbA1c testing performed by a device cleared by FDA for home use (i.e., a kit-type methodology distinct from conventional lab analyzers). [CMS]

CMS treated 83037 as a separate HCPCS code under the glycated hemoglobin NCD and opened a national coverage analysis specifically to add it to the HbA1c NCD, a distinction on which the AMA explicitly commented.

So from the start: 83036 = standard lab HbA1c; 83037 = POC / kit-based method using an FDA-cleared “home-use” device, but done in the office.


2. CMS initial pricing: 83037 ends up higher than 83036

When 83037 appeared on the lab fee schedule, contemporaneous reporting from AAFP noted that:

  • Many carriers had been paying around $13.56 for HbA1c;

  • After CMS’ 2007 move, payment for 83037 increased to a national average of about $21.06, and AAFP had lobbied for this higher rate to support point-of-care testing in primary care. [AAFP]

So by 2007 you had:

  • 83036 – conventional lab HbA1c, lower CLFS rate (~$13.56).

  • 83037 – POC / home-device HbA1c, higher CLFS rate (~$21).

From a lab-industry perspective, that reversed the usual hierarchy (kit method > analyzer method) and raised concerns that a relatively simple waived kit had been “over-rewarded” while high-volume reference labs were locked into lower payment.


3. The “Congressional fix”: Section 113 of the Medicare, Medicaid, and SCHIP Extension Act (P.L. 110-173)

Those concerns resulted in a statutory “correction.”

In late 2007, Congress enacted the Medicare, Medicaid, and SCHIP Extension Act of 2007 (MMSEA; P.L. 110-173). Section 113 specifically targets HbA1c test payment:

In the case of any diagnostic laboratory test for HbA1c that is labeled by FDA for home use and furnished on or after April 1, 2008, the payment rate for such test shall be the same as the payment rate for a glycated hemoglobin test identified by HCPCS 83036 (as of Oct 1, 2007). [CRS]

CMS implemented this in a One-Time Notification (Transmittal R331OTN), which:

  • Reviews the history (gapfill in 2006; crosswalk to 82985 in 2007).

  • States that, effective for tests furnished on or after April 1, 2008, the price for 83037 (and 83037QW) must be equal to the CLFS rate for 83036. [CMS]

An AAPC / MLN “News Flash” for the clinical lab fee schedule mirrors this: for tests on or after April 1, 2008, payment for 83037/83037QW equals the 83036 rate. [AAPC]

A Congressional Research Service report on P.L. 110-173 summarizes the same change for policy staff: HCPCS 83037 (home-use HbA1c) is now paid at the rate established for 83036. [CRS]

In other words: Congress explicitly eliminated any pricing premium for 83037 over 83036.


4. How the “slipped into SCHIP” story looks in the record

Contemporaneous physician-press coverage gives a nice window into the politics:

  • Jane Anderson, writing in Family Practice News / MDedge, reports that the reimbursement cut for fast in-office HbA1c tests (83037) was “mandated by a provision in the Medicare, Medicaid, and SCHIP Extension Act of 2007,” reversing CMS’ earlier 2006–2007 decision to pay more. [MDedge]

  • AAFP coding expert Cynthia Hughes is quoted as saying the provision was “slipped into SCHIP” and that it would take another act of Congress to undo it. [MDedge]

  • The same article states that the statutory language requires point-of-care HbA1c tests billed under 83037 to be paid at the same rate as HbA1c tests billed under 83036, i.e., at the lab-test rate. [MDedge]

AAFP’s line was that this would reduce payment for POC kits from ~$21 back down to ~$13.50, making in-office HbA1c less attractive for primary-care practices and potentially reducing patient access to same-visit counseling. [Hospitalist Blog]

The documentation doesn’t spell out “reference lab lobbying” in so many words, but the economic logic is plain:

  • Point-of-care kit (83037) had been paying significantly more per test than analyzer-based HbA1c (83036). [AAFP]

  • Section 113 eliminates that differential and produces Medicare savings, as CRS notes in its budget table. [CRS]

Industry accounts at the time generally describe this as larger stakeholders pushing to avoid a precedent where a waived kit method is permanently priced above the “gold standard” lab method.


5. Where that leaves coding today (for context)

Because of Section 113 and the CMS implementation:

  • 83036 remains the dominant HbA1c code for laboratory-based testing.

  • 83037 is still used for rapid, CLIA-waived point-of-care HbA1c in a physician’s office with an FDA-cleared device, but its fee schedule amount is locked to the 83036 rate under federal statute for any HbA1c test labeled for home use and furnished on or after April 1, 2008. [CMS]

That’s why, despite the coding distinction (lab vs POC), you do not see a durable national price premium for 83037 over 83036 on the CLFS.


6. Policy take-aways for experts

For a policy audience, this episode illustrates several recurring themes:

  • Coding vs. Payment vs. Methodology: Creating a separate CPT code (83037) for a device-specific or site-of-service-specific method can unintentionally create a payment wedge between similar clinical services (lab analyzer vs waived kit). CMS initially amplified this by crosswalking to a higher-priced code (82985). [CMS]

  • Stakeholder conflict and “precedent anxiety”: Physician groups (e.g., AAFP) lobbied for higher 83037 payment to support POC testing. Other stakeholders—most plausibly large reference labs and cost-conscious policymakers—were concerned about establishing a precedent where kit methods or POC settings draw systematically higher reimbursement than core lab tests. The legislative fix can be read as Congress siding with the “parity” position.

  • Use of narrow statutory riders: Section 113 is a classic micro-targeted provision—one paragraph in a large Medicare/SCHIP bill that permanently fixes the relationship between two lab codes. CRS explicitly flags this as a discrete “payment rate for certain diagnostic laboratory tests” provision affecting HCPCS 83036/83037. [CRS]

  • Durability of statutory corrections: As Hughes noted, once you’ve hard-wired the relationship into statute (“same rate as 83036”), reversing it requires another act of Congress, not just a CMS rule change. [MDedge]

Brief Blog: Winter CPT Code Applications Out for Comment (November 15-30)

 AMA's next CPT meeting is in Palm Springs, February 5-7, 2026.

https://www.discoveriesinhealthpolicy.com/2025/11/ama-opens-registration-for-palm-spring.html

They've posted Category I CPT applications for comment, on a tight window, November 15-30.  You have to request a review packet, receive it, and then comment, so move quickly.

I count about 13 agenda items.

https://www.ama-assn.org/system/files/february-2026-lab-mopath-agenda.pdf


Resources: Value in Health Offers Special Issue on AI in Health Economics (November 2025)

 Value in Health, the official journal of ISPOR, the international pharmacoeconomics/HEOR organization, focuses on AI in HEOR for November 2025.

I clip the article list below; many are open access.

I also clip an AI summary of the TOC below.

###

AI Corner

###

The November AI HEOR Special Issue

This special issue of Value in Health is devoted to the accelerating role of generative and analytical artificial intelligence in health economics and outcomes research (HEOR). Two ISPOR Working Group Reports anchor the issue. The first proposes a taxonomy of generative AI and large language models (LLMs) for HEOR, aiming to standardize terminology and clarify use cases across evidence synthesis, modeling, and decision frameworks. The companion report introduces ELEVATE-GenAI, a structured reporting guideline and checklist to improve transparency, reproducibility, and methodological rigor when LLMs are used in HEOR studies.

A themed section of research articles and editorials then explores how AI is reshaping HEOR practice. An introductory editorial highlights the rapid evolution of AI methods and the corresponding need for standards, validation, and governance. Another article recognizes early-career researchers whose work advances AI-enabled HEOR.

Several empirical studies examine AI’s performance in systematic reviews, traditionally one of the most labor-intensive components of HEOR. Validating “Loon Lens 1.0,” investigators demonstrate 99% recall with confidence-guided human-in-the-loop checks, reducing manual review requirements to <6%. Other teams show that LLMs can accurately extract CEA data, select statistical models, and even execute components of network meta-analyses. The A4SLR framework offers a formalized, agentic AI-supported workflow for systematic literature reviews and HTA evidence synthesis, while a large systematic review finds that generative AI is useful for question formulation and data extraction but still unreliable for literature search, study selection, and bias assessment.
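For readers newer to these pipelines, the confidence-guided human-in-the-loop pattern is easy to sketch. This is the generic pattern only - not Loon Lens's actual algorithm - and the threshold and model interface are assumptions:

```python
# Generic confidence-guided triage for abstract screening (illustrative).
# "model" is any callable returning (label, confidence), e.g. ("include", 0.99).
def triage(records, model, threshold=0.95):
    """Auto-decide high-confidence records; queue the rest for human review."""
    auto_decided, human_queue = [], []
    for record in records:
        label, confidence = model(record)
        if confidence >= threshold:
            auto_decided.append((record, label))  # accepted without review
        else:
            human_queue.append(record)            # the small % needing a human check
    return auto_decided, human_queue
```

A result like 99% recall with under 6% of records routed to humans depends entirely on how well the model's confidence scores are calibrated, which is why the validation paper emphasizes calibration.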

Beyond evidence synthesis, additional studies explore how AI may adapt Excel-based health economic models, generate technical reports, and create synthetic datasets to expand research accessibility and privacy protection. A final survey of public preferences in Australia finds that for AI-driven mobile health apps, accuracy remains the dominant factor, followed by how well clinicians and AI systems collaborate.

###

A Taxonomy of Generative Artificial Intelligence in Health Economics and Outcomes Research: An ISPOR Working Group Report
Rachael L. Fleurence, Xiaoyan Wang, Jiang Bian, Mitchell K. Higashi, Turgay Ayer, Hua Xu, Dalia Dawoud, and Jagpreet Chhatwal, on behalf of the ISPOR Working Group on Generative AI
Editor's Choice l Free
This report is about a taxonomy of generative artificial intelligence and large language models for health economics and outcomes research.

ELEVATE-GenAI: Reporting Guidelines for the Use of Large Language Models in HEOR: An ISPOR Working Group Report
Rachael L. Fleurence, Dalia Dawoud, Jiang Bian, Mitchell K. Higashi, Xiaoyan Wang, Hua Xu, Jagpreet Chhatwal, and Turgay Ayer, on behalf of the ISPOR Working Group on Generative AI
Editor's Choice l Free
The article introduces the ELEVATE-GenAI framework and checklist, providing structured guidance for reporting large-language-model-assisted research in health economics and outcomes research.
THEMED SECTION: ARTIFICIAL INTELLIGENCE IN HEALTH ECONOMICS AND OUTCOMES RESEARCH

Free
This editorial comments on the articles in this special section highlighting the rapid pace of development and the opportunities and challenges this presents for health economics and outcomes research.

Artificial Intelligence in Health Economics and Outcomes Research: Highlighting the Contributions of Early Career Researchers
Amy M. Miller and Emily Ortman
Free
The PhRMA Foundation awards 4 trainee authors with Challenge Awards for their outstanding articles on artificial intelligence in health economics and outcomes research.

Validating Loon Lens 1.0 for Autonomous Abstract Screening and Confidence-Guided Human-in-the-Loop Workflows in Systematic Reviews
Ghayath Janoudi, Mara Uzun, Tim Disher, Mia Jurdana, Ena Fuzul, Josip Ivkovic, and Brian Hutton
Validating an agentic artificial intelligence abstract screener across 8 reviews showed 99% recall, calibrated confidence scores, and a sub-6% human check, lifting precision to 90%.

Use of Large Language Models to Extract Cost-Effectiveness Analysis Data: A Case Study
Xujun Gu, Hanwen Zhang, Divya Patil, Zafar Zafari, Julia Slejko, and Eberechukwu Onukwugha
Open Access
Current data extraction for cost-effectiveness analysis (CEA) is time-consuming and error-prone. This study showed large language models match CEVR in extracting CEA data.

The “Artificial Intelligence Statistician”: Utilizing Generative Artificial Intelligence to Select an Appropriate Model and Execute Network Meta-Analyses
Tim Reason, Yunchou Wu, Cheryl Jones, Emma Benbow, Kasper Johannesen, and Bill Malcolm
Open Access
With the enhanced capabilities of large language models, their potential for selecting an appropriate statistical model and conducting a network meta-analysis was assessed.

A4SLR: An Agentic Artificial Intelligence-Assisted Systematic Literature Review Framework to Augment Evidence Synthesis for Health Economics and Outcomes Research and Health Technology Assessment
Kyeryoung Lee, Hunki Paek, Nneka Ofoegbu, Steven Rube, Mitchell K. Higashi, Dalia Dawoud, Hua Xu, Lizheng Shi, and Xiaoyan Wang
This study reports the development, implementation, and validation of A4SLR, an artificial intelligence-assisted framework for health economics and outcomes research and health technology assessments.

Role of Generative Artificial Intelligence in Assisting Systematic Review Process in Health Research: A Systematic Review
Muhammed Rashid, Cheng Su Yi, Thipsukhon Sathapanasiri, Sariya Udayachalerm, Kansak Boonpattharatthiti, Suppachai Insuk, Sajesh K. Veettil, Nai Ming Lai, Nathorn Chaiyakunapruk, and Teerapon Dhippayom, on behalf of the Generative AI for Navigating Systematic Reviews working group
Open Access
Generative artificial intelligence supports systematic reviews in question formulation and data extraction, but lacks reliability for literature search, study selection and risk of bias assessment.

Generative Artificial Intelligence to Automate the Adaptation of Excel Health Economic Models and Word Technical Reports
William Rawlinson, Siguroli Teitsson, Tim Reason, Bill Malcolm, Andy Gimblett, and Sven L. Klijn
Open Access
Large language models have the potential to perform routine adaptations of Excel-based health economic models and technical reports accurately and rapidly at a low cost.

Roles of Artificial Intelligence-Based Synthetic Data in Health Economics and Outcomes Research
Tim C. Lai and Surachat Ngorsuraches
Open Access
Artificial-intelligence/machine-learning-driven synthetic data have the potential to enhance data accessibility and facilitate more robust analyses in health economics and outcomes research.

Unravelling Public Preferences for the Use of Artificial Intelligence Mobile Health Applications in Australia
Vinh Vo, Maame E. Woode, Stacy M. Carter, Chris Degeling, and Gang Chen
Open Access
Artificial-intelligence-based mobile health apps for heart disease and depression reveal that artificial intelligence accuracy matters the most, followed by doctor and artificial intelligence interaction.

Update: CAP Accreditation Allows "Distributed Model" Testing - But AMA CPT PLA Will Not

In recent quarters, a number of PLA applications have been rejected either because they were "dry lab only" services (e.g., starting from a digital image of an H&E section) or because they used a "distributed model," for example, with sequencing in one location and bioinformatics in another.

CAP clearly allows certain kinds of distributed-model services.  I wrote a blog on this on November 13 - here.

That's CAP for CLIA.

However, AMA CPT has a PLA FAQ that specifically disallows this model as eligible for PLA coding.   I'm not sure when that text was last updated.   Find the FAQ and the text clipped below.


  • For requesting laboratories, all aspects of the analysis must be performed in a single (“sole-source,” including the same or different locations, owned and operated by the same entity) CLIA-certified or accredited clinical laboratory or licensed or marketed to multiple providing laboratories (e.g., cleared or approved by the Food and Drug Administration [FDA]).   

##
Here's some additional text from that FAQ you may want to be aware of.

  • Proprietary laboratory analyses (PLA) codes are alpha-numeric CPT® codes with a corresponding descriptor, for labs or manufacturers to more specifically identify their test. PLA codes can be provided either by a single (“sole-source” including the same or different locations, owned and operated by the same entity) laboratory or licensed or marketed to multiple providing laboratories (e.g., cleared or approved by the Food and Drug Administration [FDA]). Tests with PLA codes must be performed on human specimens, do not have a physician work component, and must be requested by the clinical laboratory or manufacturer that offers the test.

Summary

PLA Distributed Model
    Not allowed (quoted above).

Physician Work
    NO physician work is allowed, including required physician work to review and confirm data prior to a PhD's sign-out at the end.

Dry Lab
    I've seen the term "dry lab" used occasionally (e.g., digital-only analysis of an H&E image), but I haven't seen it excluded from PLA coding in writing.  However, I believe the PLA committee will, in practice, reject what it perceives as a "dry lab" application, regardless of possible past acceptances in earlier years.




Monday, November 17, 2025

Legislation: Where to Find It: Delay Lab Pricing 30 Days. Spend $50B for Rural Health Technology.

Two pieces of recent legislation and where to find them.

HR 5371 and Section 6209 PAMA

The bill that ended the Fall 2025 government shutdown was HR 5371.   Look to Section 6209 to find a 30-day delay in the next implementation of PAMA lab pricing cuts.  Find many Medicare payment rules by searching "1834," and find PAMA changes by searching "1834A."

https://www.congress.gov/bill/119th-congress/house-bill/5371/text


For comparison, Section 6206 tweaks ambulance payments and Section 6208 extends telehealth flexibility, the first to January 30 and the second to January 31.

Coverage at Genomeweb.

##

RURAL HEALTH TECHNOLOGY HR 1

This was the "Big Beautiful Bill," and it includes a big beautiful $50 billion for rural health technologies.

https://www.congress.gov/bill/119th-congress/house-bill/1

https://www.congress.gov/bill/119th-congress/house-bill/1/text

Government Summary

  1. (Sec. 71401) This section provides $10 billion per fiscal year for FY2026-FY2030 for a program that supports the provision of health care in rural areas. 
  2. Under the program, states may apply for financial allotments to improve the access and quality of care of services in rural areas, such as through enhanced technology, strategic partnerships, and workforce training. States must submit detailed rural health transformation plans and certify that no funds will be used to finance the non-federal share of Medicaid or CHIP. The CMS must approve or deny applications by December 31, 2025; states that receive approval do not need to reapply each year. States are not required to contribute any matching funds with respect to program allotments.
  3. The CMS must award allotments so that 50% of funds are awarded equally among all approved states. The remaining 50% of funds must be awarded based on certain considerations, including the proportion of rural health facilities in the state compared to the number of such facilities nationwide.
  4. The section additionally provides $200 million for FY2025 for the CMS to implement the program.
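To make the statutory 50/50 split concrete, here's a minimal sketch of the allotment math described in the summary above (the state names and weights are hypothetical, and a single composite weight stands in for the statute's "certain considerations," such as rural-facility share):

```python
# Sketch of the Sec. 71401 allotment split: half of each year's $10B pool is
# divided equally among approved states, half distributed by CMS-derived weights.
def allotments(approved_states, weights, annual_pool=10_000_000_000):
    equal_share = (annual_pool / 2) / len(approved_states)
    total_weight = sum(weights[s] for s in approved_states)
    return {
        s: equal_share + (annual_pool / 2) * weights[s] / total_weight
        for s in approved_states
    }

# Two hypothetical states: "A" holds 70% of the weighted rural-facility share.
print(allotments(["A", "B"], {"A": 0.7, "B": 0.3}))
# {'A': 6000000000.0, 'B': 4000000000.0}
```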

##

AI Summary (Chat GPT 5)

##

Below is a structured ~500-word paragraph-style summary of Section 71401 (Rural Health Transformation Program), integrating the statutory foundations with the implications for med-tech and health-IT suppliers, and concluding with what CMS is likely doing operationally in the first six months and into CY2026.


Section 71401 of the 2025 budget reconciliation law establishes the Rural Health Transformation Program, a $50 billion, five-year mandatory spending initiative aimed at restructuring and modernizing rural health care delivery in all 50 states. 

Between FY2026 and FY2030, CMS will allocate $10 billion per year to states that submit approved “rural health transformation plans.” These plans must improve access to rural hospitals and providers; enhance health outcomes; integrate new and emerging technologies; promote regional partnerships and economies of scale; strengthen rural clinical workforce supply; and present credible strategies for long-term hospital solvency. 

States may use the funds for at least three of a defined set of “health-related activities,” including evidence-based chronic disease interventions, limited provider payments, consumer-facing technology solutions, the adoption of advanced technologies such as remote monitoring, robotics, AI, and other innovations; workforce recruitment; major information technology upgrades; cybersecurity improvements; and reconfiguration of service lines across the full continuum of rural care. 

Only a small fraction of each state's allotment (10 percent for administrative costs, 15 percent for provider subsidies) may go to overhead or direct financial support, ensuring that the majority of funds flow into structural modernization rather than temporary relief.

The implications for medical technology, diagnostics, and health IT suppliers are significant, even though the statute avoids naming specific product categories. The law repeatedly emphasizes “new and emerging technologies,” “technology-driven solutions,” “technology-enabled solutions,” “software and hardware,” “information technology advances,” and “advanced technologies, including remote monitoring, robotics, artificial intelligence.” 

This creates a statutory foundation for a wide range of equipment and digital platforms, from connected diagnostic devices and point-of-care instruments to telehealth infrastructure, imaging and pathology informatics, remote monitoring systems, cybersecurity architecture, and cloud-based workflow systems. 

For med-tech suppliers, the opportunity lies not in selling individual devices but in positioning integrated solutions that help states satisfy the transformation plan requirements: improving quality, stabilizing rural access, reorganizing service lines, and modernizing workforce-limited settings. For health-IT suppliers, the strongest hooks are in the explicit authorization for software, hardware, interoperability, and cybersecurity upgrades—essentially giving states legal authority and federal dollars to modernize outdated rural hospital infrastructure, strengthen data systems, and deploy digital tools that enable new care models.

The View from CMS

In the first six months after passage (July–December 2025), CMS is likely occupied with program design, application review, and operational scaffolding. Statutory deadlines required states to submit applications by the end of 2025, meaning CMS staff were working intensively to publish guidance, answer technical questions, process fifty state applications, and build the internal analytics necessary to score transformation plans. 

CMS also needed to design the allocation formula that splits funds evenly across states and distributes the remainder based on rural population, facility mix, and other factors. 

Internally, CMS likely staffed or expanded a cross-agency team, activated its $200 million implementation budget, coordinated with OGC for legal interpretation, and began constructing mandatory reporting templates, oversight frameworks, and technical-assistance resources for states.

Looking into CY2026, CMS will shift from startup activity to execution: approving applications, issuing allotments, conducting early monitoring, and guiding states as they begin procurement and program rollout. CY2026 will be the year when states translate their plans into concrete vendor partnerships for IT modernization, diagnostic upgrades, workforce support technologies, telehealth expansion, and advanced digital tools. 

CMS will simultaneously face a growing oversight burden—tracking expenditure, ensuring compliance with allowable uses, and responding to inevitable questions from Congress, GAO, and state stakeholders about whether the $50 billion is producing measurable improvements in rural access and stability.

Siemens' Remarkable Comments to CMS on Software Reimbursement - the New "Must-Read?"

For several years, CMS has struggled with changing, vague, or counterproductive policies for how to pay for software-dominant services.   This year, the Senate introduced S.1399, which (if passed) would force CMS to develop coherent software reimbursement policies, at least in the OPPS setting.   Last July, CMS inserted a Request for Information into both the PFS and OPPS rules.  Numerous parties responded (you can find all rulemaking comments online), but so far CMS has responded only to the extent of telling stakeholders, "Thanks for getting back to us, we'll be reading that."

In this blog, I'll focus on the responses from Siemens to both the PFS and OPPS rules.  To help you, I've put a zip file in the cloud with four documents: the two RFIs and the two Siemens letters.

https://drive.google.com/file/d/1kKdDr14gczk-ATk8MrT3ovoL5WqirZUa/view?usp=sharing

I've read both letters but I can't write a better summary than Chat GPT 5 does, below.  Worth reading.

And here's an article at Genomeweb on Siemens' broader 2026 strategies.

###

AI CORNER

##

Overview of Siemens Comments to RFIs on SaaS

 Siemens Healthineers’ comments to CMS on the July 2025 PFS and OPPS RFIs reflect a highly coordinated, sophisticated strategy to shape the reimbursement landscape for artificial intelligence, software-driven clinical analytics, and broader digital health technologies.

Across both rulemaking venues, Siemens proposes a conceptual shift away from CMS’s narrow framing of “Software-as-a-Service (SaaS)” toward a broader and more inclusive category they call Algorithm-Based Healthcare Services (ABHS). This new term deliberately captures not only cloud-based software tools but also AI and machine-learning applications embedded[1] in imaging equipment, delivered via web or workstation, or provided by third-party analytic vendors. By urging CMS to adopt ABHS as the central regulatory category, Siemens is effectively attempting to define the vocabulary—and therefore the policy architecture—through which Medicare will evaluate and pay for clinical AI. In regulatory affairs, naming the category often determines how the category will be regulated, and Siemens is moving early to ensure that the definitions mirror its own wide-ranging digital portfolio.

In both letters, Siemens argues that AI-driven analytic services must receive separate, explicit payment, rather than being folded into packaged payments for imaging or other underlying procedures. They underline that packaging AI into base procedure codes would immediately suppress adoption and undercut the value proposition of these technologies.[2]  To prevent this, Siemens requests that CMS codify formal regulatory text guaranteeing separate payment for ABHS—even providing draft language for 42 CFR 419.2 in the OPPS comment letter. This is a significant move: Siemens is no longer asking CMS to “consider” separate payment, but is proposing the actual legal language that would lock separate payment into federal regulation. These comments also push CMS to explicitly shield ABHS add-on codes from OPPS packaging rules—an attempt to close every possible loophole through which CMS might inadvertently or intentionally eliminate separate payment in future rulemaking.

Another major pillar of Siemens’ strategy is the request that CMS automatically place all new ABHS CPT codes into New Technology APCs for a minimum of five years, using manufacturer-supplied cost data rather than unreliable early Medicare claims. Siemens argues that early claims for new AI services are frequently distorted by incorrect revenue code assignments, slow hospital adoption, and lack of clear billing pathways. By providing a five-year protected runway—parallel to the lifespan of Category III codes—CMS would ensure stable and predictable reimbursement while allowing enough time for claims data to mature. This approach mirrors the policy logic behind transitional pass-through payments and NTAP in the inpatient setting. Siemens also requests that these ABHS codes be exempt from the Universal Low-Volume APC policy, which otherwise risks downward payment bias for emerging technologies.

Siemens devotes substantial attention to the unique cost structure of AI technologies, emphasizing that ABHS products involve ongoing subscription or licensing fees, cloud computing costs, cybersecurity infrastructure, integration with EHRs, and ongoing staff training. These are not capital costs, and they do not resemble the equipment depreciation and clinical labor inputs that CMS traditionally uses to calculate reimbursement. Consequently, Siemens argues that CMS must modernize its resource-based methodology or risk chronically undervaluing digital health technologies. They also highlight that AI often generates additional clinical outputs, which increase cognitive workload for physicians rather than replacing it. Clinicians must review expanded information, validate algorithmic findings, integrate AI-derived insights into care planning, document the use of AI, and communicate results to patients. Siemens leverages this point to argue for meaningful work RVUs under the PFS, counteracting any CMS assumption that AI reduces professional effort.

Interwoven throughout these comments is Siemens’ broader strategic posture as both a policy thought leader and a legislative partner. Siemens explicitly supports Senate Bill 1399, the Health Tech Investment Act, which directs CMS to create clear reimbursement pathways for AI. Even though the bill is unfunded, Siemens recognizes that it carries substantial agenda-setting force; it can push CMS to create the structural payment models Siemens wants. By aligning itself with S.1399 and proposing fully formed regulatory text, Siemens positions itself not merely as a stakeholder responding to CMS, but as an architect offering CMS a ready-made blueprint for national AI reimbursement. Siemens also calls for CMS to convene a national public forum on ABHS, which would cement Siemens’ role as a central voice in federal policymaking and create a venue where Siemens can influence the conceptual and technical underpinnings of AI reimbursement.

Ultimately, Siemens’ comments reveal a comprehensive, proactive attempt to define how Medicare will recognize, categorize, and reimburse AI-enabled clinical services. Their strategy seeks harmonization across PFS and OPPS, codification of separate payment, early placement into New Technology APCs, explicit valuation of physician work, and modernization of cost inputs. Through legislative alignment, regulatory drafting, and conceptual reframing of AI as a clinical analytic service rather than as a software add-on, Siemens attempts to shape a durable federal policy framework that supports innovation while aligning closely with the structure and economics of its own product portfolio.



[1] I believe SaaS implies “cloud” and skips “embedded.”

[2] For published thinking on AI reimbursement policy, see Warshawsky and others. https://www.discoveriesinhealthpolicy.com/2025/11/center-for-medicare-innovation.html

###

See my earlier blog about Artera and PathAI comments specific to digital pathology:

https://www.discoveriesinhealthpolicy.com/2025/10/pathai-proposes-new-coding-system-for.html

And a general blog about the OPPS and PFS comment cycle:

https://www.discoveriesinhealthpolicy.com/2025/10/very-brief-blog-see-search-comments-on.html