Tuesday, November 18, 2025

Hemoglobin A1c testing, and the Truly Mysterious Case of Code 83037 and PAMA

If you go back to tables released when PAMA pricing was new in 2018, you'll find that the surveyed median price of 83036 - laboratory hemoglobin A1c - was $8.50 (the fee schedule was $13.32).  And the surveyed median price of 83037 - POCT hemoglobin A1c - was $22.50, almost three times as high.

Yet the price of either 83036 or 83037 is now $9.52.

What's up????

[Table: 83036 and 83037 in PAMA 2018]


###

Here's an AI-generated review of episodes and actions that I lived through years ago.

###

83037 was born as a higher-paid point-of-care A1c code in 2006–2007, but Section 113 of the 2007 Medicare/Medicaid/SCHIP Extension Act mandated that any FDA-home-use HbA1c test (83037) be paid at the same rate as standard HbA1c (83036), effective April 1, 2008. CMS implemented this via a one-time transmittal, and the parity rule remains in force.

#####

In 2023 CMS Part B data, 83036 accounted for $132M in payments, top billers being LabCorp and Quest; that's roughly 13 million tests. 83037 accounted for only $738K in payments (about 70,000 tests).
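A quick arithmetic sanity check on those volumes. This is a rough sketch: it assumes every claim paid at the ~$9.52 CLFS rate, which actual paid amounts only approximate, so treat the outputs as order-of-magnitude estimates.

```python
# Rough implied-volume check: total Medicare payments divided by the
# CLFS rate. The $9.52 figure is the national fee schedule amount;
# actual paid amounts vary by claim, so these are estimates only.
CLFS_RATE = 9.52

def implied_tests(total_payments: float, rate: float = CLFS_RATE) -> int:
    """Estimate test volume from total Medicare payments."""
    return round(total_payments / rate)

print(implied_tests(132_000_000))  # 83036: ~13.9 million tests
print(implied_tests(738_000))      # 83037: ~77,500 tests
```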

####

Here’s the back-story in “policy-wonk” form.  I thought it was a pretty amazing job of AI-driven automatic internet research and writing.


1. Why 83037 was created

CPT 83037 was introduced in 2006 as a new code for HbA1c testing performed by a device cleared by FDA for home use (i.e., a kit-type methodology distinct from conventional lab analyzers). (CMS)

CMS treated 83037 as a separate HCPCS code under the glycated hemoglobin NCD and opened a national coverage analysis specifically to add it to the HbA1c NCD, a process on which the AMA explicitly commented.

So from the start, 83036 = standard lab HbA1c; 83037 = POC / kit-based method using an FDA-cleared “home-use” device, but done in the office.


2. CMS initial pricing: 83037 ends up higher than 83036

When 83037 appeared on the lab fee schedule, contemporaneous reporting from AAFP noted that:

  • Many carriers had been paying around $13.56 for HbA1c;

  • After CMS’ 2007 move, payment for 83037 increased to a national average of about $21.06, and AAFP had lobbied for this higher rate to support point-of-care testing in primary care. (AAFP)

So by 2007 you had:

  • 83036 – conventional lab HbA1c, lower CLFS rate (~$13.56).

  • 83037 – POC / home-device HbA1c, higher CLFS rate (~$21).

From a lab-industry perspective, that reversed the usual hierarchy (the kit method now priced above the analyzer method) and raised concerns that a relatively simple waived kit had been “over-rewarded” while high-volume reference labs were locked into lower payment.


3. The “Congressional fix”: Section 113 of the MMSCHIP Extension Act (P.L. 110-173)

Those concerns resulted in a statutory “correction.”

In late 2007, Congress enacted the Medicare, Medicaid, and SCHIP Extension Act of 2007 (MMSEA; P.L. 110-173). Section 113 specifically targets HbA1c test payment:

In the case of any diagnostic laboratory test for HbA1c that is labeled by FDA for home use and furnished on or after April 1, 2008, the payment rate for such test shall be the same as the payment rate for a glycated hemoglobin test identified by HCPCS 83036 (as of Oct 1, 2007). (EveryCRSReport)

CMS implemented this in a One-Time Notification (Transmittal R331OTN), which:

  • Reviews the history (gapfill in 2006; crosswalk to 82985 in 2007).

  • States that, effective for tests furnished on or after April 1, 2008, the price for 83037 (and 83037QW) must be equal to the CLFS rate for 83036. (CMS)

An AAPC / MLN “News Flash” for the clinical lab fee schedule mirrors this: for tests on or after April 1, 2008, payment for 83037/83037QW equals the 83036 rate. (AAPC)

A Congressional Research Service report on P.L. 110-173 summarizes the same change for policy staff: HCPCS 83037 (home-use HbA1c) is now paid at the rate established for 83036. (EveryCRSReport)

In other words: Congress explicitly eliminated any pricing premium for 83037 over 83036.


4. How the “slipped into SCHIP” story looks in the record

Contemporaneous physician-press coverage gives a nice window into the politics:

  • Jane Anderson, writing in Family Practice News / MDedge, reports that the reimbursement cut for fast in-office HbA1c tests (83037) was “mandated by a provision in the Medicare, Medicaid, and SCHIP Extension Act of 2007,” reversing CMS’ earlier 2006–2007 decision to pay more. (MDedge)

  • AAFP coding expert Cynthia Hughes is quoted as saying the provision was “slipped into SCHIP” and that it would take another act of Congress to undo it. (MDedge)

  • The same article states that the statutory language requires point-of-care HbA1c tests billed under 83037 to be paid at the same rate as HbA1c tests billed under 83036, i.e., at the lab-test rate. (MDedge)

AAFP’s line was that this would reduce payment for POC kits from ~$21 back down to ~$13.50, making in-office HbA1c less attractive for primary-care practices and potentially reducing patient access to same-visit counseling. (The Hospitalist Blog)

The documentation doesn’t spell out “reference lab lobbying” in so many words, but the economic logic is plain:

  • Point-of-care kit (83037) had been paying significantly more per test than analyzer-based HbA1c (83036). (AAFP)

  • Section 113 eliminates that differential and produces Medicare savings, as CRS notes in its budget table. (EveryCRSReport)

Industry accounts at the time generally describe this as larger stakeholders pushing to avoid a precedent where a waived kit method is permanently priced above the “gold standard” lab method.


5. Where that leaves coding today (for context)

Because of Section 113 and the CMS implementation:

  • 83036 remains the dominant HbA1c code for laboratory-based testing.

  • 83037 is still used for rapid, CLIA-waived point-of-care HbA1c in a physician’s office with an FDA-cleared device, but its fee schedule amount is locked to the 83036 rate under federal statute for any HbA1c test labeled for home use and furnished on or after April 1, 2008. (CMS)

That’s why, despite the coding distinction (lab vs POC), you do not see a durable national price premium for 83037 over 83036 on the CLFS.


6. Policy take-aways for experts

For a policy audience, this episode illustrates several recurring themes:

  • Coding vs. Payment vs. Methodology: Creating a separate CPT code (83037) for a device-specific or site-of-service-specific method can unintentionally create a payment wedge between similar clinical services (lab analyzer vs waived kit). CMS initially amplified this by crosswalking to a higher-priced code (82985). (CMS)

  • Stakeholder conflict and “precedent anxiety”: Physician groups (e.g., AAFP) lobbied for higher 83037 payment to support POC testing. Other stakeholders—most plausibly large reference labs and cost-conscious policymakers—were concerned about establishing a precedent where kit methods or POC settings draw systematically higher reimbursement than core lab tests. The legislative fix can be read as Congress siding with the “parity” position.

  • Use of narrow statutory riders: Section 113 is a classic micro-targeted provision—one paragraph in a large Medicare/SCHIP bill that permanently fixes the relationship between two lab codes. CRS explicitly flags this as a discrete “payment rate for certain diagnostic laboratory tests” provision affecting HCPCS 83036/83037. (EveryCRSReport)

  • Durability of statutory corrections: As Hughes noted, once you’ve hard-wired the relationship into statute (“same rate as 83036”), reversing it requires another act of Congress, not just a CMS rule change. (MDedge)

Brief Blog: Winter CPT Code Applications Out for Comment (November 15-30)

 AMA's next CPT meeting is in Palm Springs, February 5-7, 2026.

https://www.discoveriesinhealthpolicy.com/2025/11/ama-opens-registration-for-palm-spring.html

They've posted Category I CPT applications for comment, on a tight window, November 15-30.  You have to request a review packet, receive it, and then comment, so move quickly.

I count about 13 agenda items.

https://www.ama-assn.org/system/files/february-2026-lab-mopath-agenda.pdf


Resources: Value in Health Offers Special Issue on AI in Health Economics (November 2025)

 Value in Health, the official journal of ISPOR, the international pharmacoeconomics/HEOR organization, focuses on AI in HEOR for November 2025.

I clip the article list below; many are open access.

I also clip an AI summary of the TOC below.

###

AI Corner

###

The November AI HEOR Special Issue

This special issue of Value in Health is devoted to the accelerating role of generative and analytical artificial intelligence in health economics and outcomes research (HEOR). Two ISPOR Working Group Reports anchor the issue. The first proposes a taxonomy of generative AI and large language models (LLMs) for HEOR, aiming to standardize terminology and clarify use cases across evidence synthesis, modeling, and decision frameworks. The companion report introduces ELEVATE-GenAI, a structured reporting guideline and checklist to improve transparency, reproducibility, and methodological rigor when LLMs are used in HEOR studies.

A themed section of research articles and editorials then explores how AI is reshaping HEOR practice. An introductory editorial highlights the rapid evolution of AI methods and the corresponding need for standards, validation, and governance. Another article recognizes early-career researchers whose work advances AI-enabled HEOR.

Several empirical studies examine AI’s performance in systematic reviews, traditionally one of the most labor-intensive components of HEOR. Validating “Loon Lens 1.0,” investigators demonstrate 99% recall with confidence-guided human-in-the-loop checks, reducing manual review requirements to <6%. Other teams show that LLMs can accurately extract CEA data, select statistical models, and even execute components of network meta-analyses. The A4SLR framework offers a formalized, agentic AI-supported workflow for systematic literature reviews and HTA evidence synthesis, while a large systematic review finds that generative AI is useful for question formulation and data extraction but still unreliable for literature search, study selection, and bias assessment.

Beyond evidence synthesis, additional studies explore how AI may adapt Excel-based health economic models, generate technical reports, and create synthetic datasets to expand research accessibility and privacy protection. A final survey of public preferences in Australia finds that for AI-driven mobile health apps, accuracy remains the dominant factor, followed by how well clinicians and AI systems collaborate.

###

###

 
 
 
 
A Taxonomy of Generative Artificial Intelligence in Health Economics and Outcomes Research: An ISPOR Working Group Report
Rachael L. Fleurence, Xiaoyan Wang, Jiang Bian, Mitchell K. Higashi, Turgay Ayer, Hua Xu, Dalia Dawoud, and Jagpreet Chhatwal, on behalf of the ISPOR Working Group on Generative AI
Editor's Choice l Free
This report is about a taxonomy of generative artificial intelligence and large language models for health economics and outcomes research.

ELEVATE-GenAI: Reporting Guidelines for the Use of Large Language Models in HEOR: An ISPOR Working Group Report
Rachael L. Fleurence, Dalia Dawoud, Jiang Bian, Mitchell K. Higashi, Xiaoyan Wang, Hua Xu, Jagpreet Chhatwal, and Turgay Ayer, on behalf of the ISPOR Working Group on Generative AI
Editor's Choice l Free
The article introduces the ELEVATE-GenAI framework and checklist, providing structured guidance for reporting large-language-model-assisted research in health economics and outcomes research.
 
 
THEMED SECTION: ARTIFICIAL INTELLIGENCE IN HEALTH ECONOMICS AND OUTCOMES RESEARCH

 
 
Free
This editorial comments on the articles in this special section highlighting the rapid pace of development and the opportunities and challenges this presents for health economics and outcomes research.

Artificial Intelligence in Health Economics and Outcomes Research: Highlighting the Contributions of Early Career Researchers
Amy M. Miller and Emily Ortman
Free
The PhRMA Foundation awards 4 trainee authors with Challenge Awards for their outstanding articles on artificial intelligence in health economics and outcomes research.

Validating Loon Lens 1.0 for Autonomous Abstract Screening and Confidence-Guided Human-in-the-Loop Workflows in Systematic Reviews
Ghayath Janoudi, Mara Uzun, Tim Disher, Mia Jurdana, Ena Fuzul, Josip Ivkovic, and Brian Hutton
Validating an agentic artificial intelligence abstract screener across 8 reviews showed 99% recall, calibrated confidence scores, and a sub-6% human check, lifting precision to 90%.

Use of Large Language Models to Extract Cost-Effectiveness Analysis Data: A Case Study
Xujun Gu, Hanwen Zhang, Divya Patil, Zafar Zafari, Julia Slejko, and Eberechukwu Onukwugha
Open Access
Current extraction of cost-effectiveness analysis (CEA) data is time-consuming and error-prone. This study showed large language models match CEVR in extracting CEA data.

The “Artificial Intelligence Statistician”: Utilizing Generative Artificial Intelligence to Select an Appropriate Model and Execute Network Meta-Analyses
Tim Reason, Yunchou Wu, Cheryl Jones, Emma Benbow, Kasper Johannesen, and Bill Malcolm
Open Access
With the enhanced capabilities of large language models, their potential for selecting an appropriate statistical model and conducting a network meta-analysis was assessed.

A4SLR: An Agentic Artificial Intelligence-Assisted Systematic Literature Review Framework to Augment Evidence Synthesis for Health Economics and Outcomes Research and Health Technology Assessment
Kyeryoung Lee, Hunki Paek, Nneka Ofoegbu, Steven Rube, Mitchell K. Higashi, Dalia Dawoud, Hua Xu, Lizheng Shi, and Xiaoyan Wang
This study reports the development, implementation, and validation of A4SLR, an artificial intelligence-assisted framework for health economics and outcomes research and health technology assessments.

Role of Generative Artificial Intelligence in Assisting Systematic Review Process in Health Research: A Systematic Review
Muhammed Rashid, Cheng Su Yi, Thipsukhon Sathapanasiri, Sariya Udayachalerm, Kansak Boonpattharatthiti, Suppachai Insuk, Sajesh K. Veettil, Nai Ming Lai, Nathorn Chaiyakunapruk, and Teerapon Dhippayom, on behalf of the Generative AI for Navigating Systematic Reviews working group
Open Access
Generative artificial intelligence supports systematic reviews in question formulation and data extraction, but lacks reliability for literature search, study selection and risk of bias assessment.

Generative Artificial Intelligence to Automate the Adaptation of Excel Health Economic Models and Word Technical Reports
William Rawlinson, Siguroli Teitsson, Tim Reason, Bill Malcolm, Andy Gimblett, and Sven L. Klijn
Open Access
Large language models have the potential to perform routine adaptations of Excel-based health economic models and technical reports accurately and rapidly at a low cost.

Roles of Artificial Intelligence-Based Synthetic Data in Health Economics and Outcomes Research
Tim C. Lai and Surachat Ngorsuraches
Open Access
Artificial-intelligence/machine-learning-driven synthetic data have the potential to enhance data accessibility and facilitate more robust analyses in health economics and outcomes research.

Unravelling Public Preferences for the Use of Artificial Intelligence Mobile Health Applications in Australia
Vinh Vo, Maame E. Woode, Stacy M. Carter, Chris Degeling, and Gang Chen
Open Access
Artificial-intelligence-based mobile health apps for heart disease and depression reveal that artificial intelligence accuracy matters the most, followed by doctor and artificial intelligence interaction.

Update: CAP Accreditation Allows "Distributed Model" Testing - But AMA CPT PLA Will Not

In recent quarters, a number of PLA applications have been rejected either because they were "dry lab only" services (e.g. starting from a digital image of an H&E section) or because they were in a "distributed model" for example with sequencing in one location and bioinformatics in another.

CAP clearly allows certain kinds of distributed-model services.  I wrote a blog on this on November 13 - here.

That's CAP for CLIA.

However, AMA CPT has a PLA FAQ that specifically disallows this model as eligible for PLA coding.   I'm not sure when the text was last updated.   Find the FAQ text clipped below.


  • For requesting laboratories, all aspects of the analysis must be performed in a single (“sole-source,” including the same or different locations, owned and operated by the same entity) CLIA-certified or accredited clinical laboratory or licensed or marketed to multiple providing laboratories (e.g., cleared or approved by the Food and Drug Administration [FDA]).   

##
Here's some additional text from that FAQ you may want to be aware of.

  • Proprietary laboratory analyses (PLA) codes are alpha-numeric CPT® codes with a corresponding descriptor, for labs or manufacturers to more specifically identify their test. PLA codes can be provided either by a single (“sole-source” including the same or different locations, owned and operated by the same entity) laboratory or licensed or marketed to multiple providing laboratories (e.g., cleared or approved by the Food and Drug Administration [FDA]). Tests with PLA codes must be performed on human specimens, do not have a physician work component, and must be requested by the clinical laboratory or manufacturer that offers the test.

Summary

PLA Distributed Model
    Not allowed (quoted above).
Physician Work
    No physician work is allowed, including physician review and confirmation of data prior to a PhD's sign-out at the end.
Dry Lab
    I've seen the term "dry lab" used occasionally (e.g., digital-only analysis of an H&E image), but I haven't seen it excluded from PLA coding in writing.  However, I believe the PLA committee will, in practice, reject what it perceives as a "dry lab" application.  [Regardless of possible past acceptances in earlier years.]




Monday, November 17, 2025

Legislation: Where to Find It: Delay Lab Pricing 30 Days. Spend $50B for Rural Health Technology.

 Two pieces of recent legislation, and where to find them.

HR 5371 and Section 6209 PAMA

The bill that ended the Fall 2025 government shutdown was HR 5371.   Look to Section 6209 to find a 30-day delay in the next implementation of PAMA lab pricing cuts.  Find many Medicare payment rules by searching "1834" and find PAMA changes by searching "1834A."

https://www.congress.gov/bill/119th-congress/house-bill/5371/text


For comparison, 6206 tweaks ambulance payments and section 6208 extends telehealth flexibility, the first to January 30 and the second to January 31.

Coverage at Genomeweb.

##

RURAL HEALTH TECHNOLOGY HR 1

This was the "Big Beautiful Bill," and it includes a big beautiful $50 billion for rural health technologies.

https://www.congress.gov/bill/119th-congress/house-bill/1

https://www.congress.gov/bill/119th-congress/house-bill/1/text

Government Summary

  1. (Sec. 71401) This section provides $10 billion per fiscal year for FY2026-FY2030 for a program that supports the provision of health care in rural areas. 
  2. Under the program, states may apply for financial allotments to improve the access and quality of care of services in rural areas, such as through enhanced technology, strategic partnerships, and workforce training. States must submit detailed rural health transformation plans and certify that no funds will be used to finance the non-federal share of Medicaid or CHIP. The CMS must approve or deny applications by December 31, 2025; states that receive approval do not need to reapply each year. States are not required to contribute any matching funds with respect to program allotments.
  3. The CMS must award allotments so that 50% of funds are awarded equally among all approved states. The remaining 50% of funds must be awarded based on certain considerations, including the proportion of rural health facilities in the state compared to the number of such facilities nationwide.
  4. The section additionally provides $200 million for FY2025 for the CMS to implement the program.
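The allotment mechanics in the summary above can be sketched as follows. This is illustrative only: the state names and facility counts are hypothetical inputs, and the statute lists several weighting factors for the second half of the funds, simplified here to a single factor (each state's share of the nation's rural health facilities).

```python
# Sketch of the Section 71401 allotment split: half of each year's
# $10B is divided equally among approved states; the other half is
# distributed by weighted factors, approximated here by each state's
# share of the nation's rural health facilities.
ANNUAL_FUND = 10_000_000_000  # per fiscal year, FY2026-FY2030

def allotments(rural_facilities: dict[str, int]) -> dict[str, float]:
    n_states = len(rural_facilities)
    equal_share = (ANNUAL_FUND / 2) / n_states
    total_facilities = sum(rural_facilities.values())
    return {
        state: equal_share + (ANNUAL_FUND / 2) * (count / total_facilities)
        for state, count in rural_facilities.items()
    }

# Hypothetical three-state example: the full fund is always disbursed,
# and states with more rural facilities receive larger allotments.
example = {"StateA": 300, "StateB": 150, "StateC": 50}
result = allotments(example)
assert abs(sum(result.values()) - ANNUAL_FUND) < 1e-3
```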

##

AI Summary (Chat GPT 5)

##

Below is a structured ~500-word paragraph-style summary of Section 71401 (Rural Health Transformation Program), integrating the statutory foundations with the implications for med-tech and health-IT suppliers, and concluding with what CMS is likely doing operationally in the first six months and into CY2026.


Section 71401 of the 2025 budget reconciliation law establishes the Rural Health Transformation Program, a $50 billion, five-year mandatory spending initiative aimed at restructuring and modernizing rural health care delivery in all 50 states. 

Between FY2026 and FY2030, CMS will allocate $10 billion per year to states that submit approved “rural health transformation plans.” These plans must improve access to rural hospitals and providers; enhance health outcomes; integrate new and emerging technologies; promote regional partnerships and economies of scale; strengthen rural clinical workforce supply; and present credible strategies for long-term hospital solvency. 

States may use the funds for at least three of a defined set of “health-related activities,” including evidence-based chronic disease interventions, limited provider payments, consumer-facing technology solutions, the adoption of advanced technologies such as remote monitoring, robotics, AI, and other innovations; workforce recruitment; major information technology upgrades; cybersecurity improvements; and reconfiguration of service lines across the full continuum of rural care. 

Only a small fraction of each state's allotment (10 percent for administrative costs, 15 percent for provider subsidies) may go to overhead or direct financial support, ensuring that the majority of funds flow into structural modernization rather than temporary relief.

The implications for medical technology, diagnostics, and health IT suppliers are significant, even though the statute avoids naming specific product categories. The law repeatedly emphasizes “new and emerging technologies,” “technology-driven solutions,” “technology-enabled solutions,” “software and hardware,” “information technology advances,” and “advanced technologies, including remote monitoring, robotics, artificial intelligence.” 

This creates a statutory foundation for a wide range of equipment and digital platforms, from connected diagnostic devices and point-of-care instruments to telehealth infrastructure, imaging and pathology informatics, remote monitoring systems, cybersecurity architecture, and cloud-based workflow systems. 

For med-tech suppliers, the opportunity lies not in selling individual devices but in positioning integrated solutions that help states satisfy the transformation plan requirements: improving quality, stabilizing rural access, reorganizing service lines, and modernizing workforce-limited settings. For health-IT suppliers, the strongest hooks are in the explicit authorization for software, hardware, interoperability, and cybersecurity upgrades—essentially giving states legal authority and federal dollars to modernize outdated rural hospital infrastructure, strengthen data systems, and deploy digital tools that enable new care models.

The View from CMS

In the first six months after passage (July–December 2025), CMS is likely occupied with program design, application review, and operational scaffolding. Statutory deadlines required states to submit applications by the end of 2025, meaning CMS staff were working intensively to publish guidance, answer technical questions, process fifty state applications, and build the internal analytics necessary to score transformation plans. 

CMS also needed to design the allocation formula that splits funds evenly across states and distributes the remainder based on rural population, facility mix, and other factors. 

Internally, CMS likely staffed or expanded a cross-agency team, activated its $200 million implementation budget, coordinated with OGC for legal interpretation, and began constructing mandatory reporting templates, oversight frameworks, and technical-assistance resources for states.

Looking into CY2026, CMS will shift from startup activity to execution: approving applications, issuing allotments, conducting early monitoring, and guiding states as they begin procurement and program rollout. CY2026 will be the year when states translate their plans into concrete vendor partnerships for IT modernization, diagnostic upgrades, workforce support technologies, telehealth expansion, and advanced digital tools. 

CMS will simultaneously face a growing oversight burden—tracking expenditure, ensuring compliance with allowable uses, and responding to inevitable questions from Congress, GAO, and state stakeholders about whether the $50 billion is producing measurable improvements in rural access and stability.

Siemens' Remarkable Comments to CMS on Software Reimbursement - the New "Must-Read?"

 For several years, CMS has struggled with changing, vague, or counterproductive policies for how to pay for software-dominant services.   This year, the Senate introduced S.1399, which (if passed) would force CMS to develop coherent software reimbursement policies, at least in the OPPS setting.   Last July, CMS inserted a Request for Information into both the PFS and OPPS rules.  Numerous parties responded (you can find all rulemaking comments online), but CMS has so far responded only to the extent of telling stakeholders, "Thanks for getting back to us, we'll be reading that."

In this blog I'll focus on the responses from Siemens to both the PFS and OPPS rules.  To help you, I'll put a zip file in the cloud with four documents: the two RFIs and the two Siemens letters.

https://drive.google.com/file/d/1kKdDr14gczk-ATk8MrT3ovoL5WqirZUa/view?usp=sharing

I've read both letters but I can't write a better summary than Chat GPT 5 does, below.  Worth reading.

###

AI CORNER

##

Overview of Siemens Comments to RFIs on SaaS

 Siemens Healthineers’ comments to CMS on the July 2025 PFS and OPPS RFIs reflect a highly coordinated, sophisticated strategy to shape the reimbursement landscape for artificial intelligence, software-driven clinical analytics, and broader digital health technologies.

Across both rulemaking venues, Siemens proposes a conceptual shift away from CMS’s narrow framing of “Software-as-a-Service (SaaS)” toward a broader and more inclusive category they call Algorithm-Based Healthcare Services (ABHS). This new term deliberately captures not only cloud-based software tools but also AI and machine-learning applications embedded[1] in imaging equipment, delivered via web or workstation, or provided by third-party analytic vendors. By urging CMS to adopt ABHS as the central regulatory category, Siemens is effectively attempting to define the vocabulary—and therefore the policy architecture—through which Medicare will evaluate and pay for clinical AI. In regulatory affairs, naming the category often determines how the category will be regulated, and Siemens is moving early to ensure that the definitions mirror its own wide-ranging digital portfolio.

In both letters, Siemens argues that AI-driven analytic services must receive separate, explicit payment, rather than being folded into packaged payments for imaging or other underlying procedures. They underline that packaging AI into base procedure codes would immediately suppress adoption and undercut the value proposition of these technologies.[2]  To prevent this, Siemens requests that CMS codify formal regulatory text guaranteeing separate payment for ABHS—even providing draft language for 42 CFR 419.2 in the OPPS comment letter. This is a significant move: Siemens is no longer asking CMS to “consider” separate payment, but is proposing the actual legal language that would lock separate payment into federal regulation. These comments also push CMS to explicitly shield ABHS add-on codes from OPPS packaging rules—an attempt to close every possible loophole through which CMS might inadvertently or intentionally eliminate separate payment in future rulemaking.

Another major pillar of Siemens’ strategy is the request that CMS automatically place all new ABHS CPT codes into New Technology APCs for a minimum of five years, using manufacturer-supplied cost data rather than unreliable early Medicare claims. Siemens argues that early claims for new AI services are frequently distorted by incorrect revenue code assignments, slow hospital adoption, and lack of clear billing pathways. By providing a five-year protected runway—parallel to the lifespan of Category III codes—CMS would ensure stable and predictable reimbursement while allowing enough time for claims data to mature. This approach mirrors the policy logic behind transitional pass-through payments and NTAP in the inpatient setting. Siemens also requests that these ABHS codes be exempt from the Universal Low-Volume APC policy, which otherwise risks downward payment bias for emerging technologies.

Siemens devotes substantial attention to the unique cost structure of AI technologies, emphasizing that ABHS products involve ongoing subscription or licensing fees, cloud computing costs, cybersecurity infrastructure, integration with EHRs, and ongoing staff training. These are not capital costs, and they do not resemble the equipment depreciation and clinical labor inputs that CMS traditionally uses to calculate reimbursement. Consequently, Siemens argues that CMS must modernize its resource-based methodology or risk chronically undervaluing digital health technologies. They also highlight that AI often generates additional clinical outputs, which increase cognitive workload for physicians rather than replacing it. Clinicians must review expanded information, validate algorithmic findings, integrate AI-derived insights into care planning, document the use of AI, and communicate results to patients. Siemens leverages this point to argue for meaningful work RVUs under the PFS, counteracting any CMS assumption that AI reduces professional effort.

Interwoven throughout these comments is Siemens’ broader strategic posture as both a policy thought leader and a legislative partner. Siemens explicitly supports Senate Bill 1399, the Health Tech Investment Act, which directs CMS to create clear reimbursement pathways for AI. Even though the bill is unfunded, Siemens recognizes that it carries substantial agenda-setting force; it can push CMS to create the structural payment models Siemens wants. By aligning itself with S.1399 and proposing fully formed regulatory text, Siemens positions itself not merely as a stakeholder responding to CMS, but as an architect offering CMS a ready-made blueprint for national AI reimbursement. Siemens also calls for CMS to convene a national public forum on ABHS, which would cement Siemens’ role as a central voice in federal policymaking and create a venue where Siemens can influence the conceptual and technical underpinnings of AI reimbursement.

Ultimately, Siemens’ comments reveal a comprehensive, proactive attempt to define how Medicare will recognize, categorize, and reimburse AI-enabled clinical services. Their strategy seeks harmonization across PFS and OPPS, codification of separate payment, early placement into New Technology APCs, explicit valuation of physician work, and modernization of cost inputs. Through legislative alignment, regulatory drafting, and conceptual reframing of AI as a clinical analytic service rather than as a software add-on, Siemens attempts to shape a durable federal policy framework that supports innovation while aligning closely with the structure and economics of its own product portfolio.



[1] I believe SaaS implies “cloud” and skips “embedded.”

[2] For published thinking on AI reimbursement policy, see Warshawsky and others. https://www.discoveriesinhealthpolicy.com/2025/11/center-for-medicare-innovation.html

###

See my earlier blog about Artera and PathAI comments specific to digital pathology,

https://www.discoveriesinhealthpolicy.com/2025/10/pathai-proposes-new-coding-system-for.html

And general blog about the OPPS and PFS comment cycle,

https://www.discoveriesinhealthpolicy.com/2025/10/very-brief-blog-see-search-comments-on.html


Sunday, November 16, 2025

Prasad & Makary: Radical New Ideas for Drug Approvals; Do They Matter to Diagnostics??

On November 12, 2025, NEJM published an important new essay by Vinay Prasad MD and Martin Makary MD, respectively the head of FDA's biologics center (CBER) and the commissioner of the FDA.

The essay, titled, "FDA's New Plausible Mechanism Pathway," states that new therapies "challenge traditional models of drug and biologic development" at FDA, and require new thinking.  The article is getting a fair bit of attention.

Let's ask: are there ideas in the Prasad-Makary essay that apply to genomics and other diagnostic tests?  If so, what are the points of application, and where is there a mismatch?


https://www.nejm.org/doi/pdf/10.1056/NEJMsb2512695

See coverage at Fierce Biotech, Politico Pro, Biopharma Dive, and Stat+.

See also an excellent 7-page review at LinkedIn by AgencyIQ about Makary's vision for "continuous trials."  Here

Friday, November 14, 2025

ACLA Scores Short-Term Win in PAMA Pricing!

With the budget that reopened the government on November 13, 2025, ACLA scored a short-term win in delaying PAMA cuts.  I'll quote them below.

https://www.acla.com/acla-applauds-enactment-of-short-term-pama-relief-in-spending-package-and-urges-advancement-of-results-act/

Washington, D.C. – The American Clinical Laboratory Association (ACLA) applauds the enactment of a short-term delay to payment cuts to the Medicare Clinical Laboratory Fee Schedule (CLFS) and data reporting requirements under the 2014 Protecting Access to Medicare Act (PAMA), included in the federal government funding package. 

The provision delays these harmful policies through January 30, 2026. 

Without this latest extension, laboratories would have faced a fourth round of cuts—up to 15 percent reductions on roughly 800 tests—along with burdensome reporting mandates beginning in [January 1] 2026.

##

“Now, Congress has a vital opportunity to prioritize enacting meaningful, permanent reform through the Reforming and Enhancing Sustainable Updates to Laboratory Testing Services (RESULTS) Act (H.R. 5269 / S. 2761).” [the President of ACLA] added. “ACLA is committed to working with lawmakers and key health care committees to advance the RESULTS Act and protect patient access to critical laboratory testing services nationwide.”

The RESULTS Act would modernize the CLFS rate-setting process by leveraging comprehensive, representative commercial market data while significantly reducing administrative burdens on laboratories and the Centers for Medicare & Medicaid Services.

##

See an October 30, 2025, two-page letter to both House and Senate.

https://www.acla.com/wp-content/uploads/2025/10/2025-RESULTS-Act_Provider-Letter-10.30.25.pdf


Thursday, November 13, 2025

Trump Administration: Far Less Rulemaking 2025 vs 2024

Did the Trump Administration do far less rulemaking in their first year than Biden in year four?

Last year, the Physician Fee Schedule final rule appeared at 89 FR 97710.

This year, the same final rule appeared at only 90 FR 49266 - about half as far into the Federal Register's annual pagination.

##

There are some balancing factors.  The Biden rule appeared December 9, 2024; the Trump rule November 5, 2025.  The government was shut down during October.   Still, these don't account for the 50% difference in sheer regulatory volume between page 49266 and page 97710.
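As a rough sanity check (my own arithmetic, not anything from the rules themselves): Federal Register page numbers run cumulatively through the calendar year, so the starting page of each rule is a crude proxy for total regulatory volume to that date.

```python
# Crude proxy: Federal Register pagination is cumulative per calendar year,
# so a rule's starting page roughly tracks regulatory volume year-to-date.
biden_pfs_start = 97710   # 89 FR 97710, final PFS rule, December 9, 2024
trump_pfs_start = 49266   # 90 FR 49266, final PFS rule, November 5, 2025

ratio = trump_pfs_start / biden_pfs_start
print(f"The 2025 rule begins {ratio:.0%} as deep into that year's Register")
# Caveat: the 2024 rule appeared about five weeks later in its year, so the
# true year-over-year gap is somewhat smaller than this raw ratio suggests.
```

The caveat in the comment matters: a like-for-like comparison would need both rules' pages as of the same calendar date.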


For-Example AI Essay: CLIA Registration but not a License [REVISED]


Someone was talking to me about a lab having "CLIA Registration but not a License."

The question is a source of immediate confusion, because the CLIA agency does not seem to use the term "license"; rather, it issues three types of certificates (CoR, CoC, CoA, as described further below).  Meanwhile, I've seen AMA PLA instructions that do use the term "CLIA license."

Here is a key CLIA agency government PDF:

https://www.cms.gov/files/document/laboratory-quick-start-guide-cms-clia-certification-november-2024.pdf


Here's what I came up with after several emails with colleagues and after reading more CLIA documents like the link above.

It appears the CLIA agency avoids the term "license" - for example, the word LICENSE does not appear in this detailed four-page PDF:
https://www.cms.gov/files/document/laboratory-quick-start-guide-cms-clia-certification-november-2024.pdf

But if you look at AMA CPT PLA instructions, they do use the term "CLIA license."

What you do have is:

STAGE A
Applied online for a certificate.    At this short initial stage, you get a CLIA number as a tracking number, but it is not active.  This stage does not have a formal name.

STAGE B
CLIA issues a CERTIFICATE OF REGISTRATION, or CoR.
This is based on a paperwork review at the CLIA agency - for example, that your director and other senior staff have appropriate credentials.
You hope to get this CoR a month or two after the day you applied.   It is good for up to two years, AND IT DOES ALLOW complex testing, assuming you applied for complex testing and demonstrated on paper that you have the right people.

STAGE C1
You get an INSPECTION and are upgraded to a CERTIFICATE OF COMPLIANCE, or CoC (from CLIA),

OR

STAGE C2
You get an INSPECTION and are upgraded to a CERTIFICATE OF ACCREDITATION, or CoA (from CAP or the Joint Commission).

Relationship to PLA application at AMA CPT:

When readers see that AMA PLA application instructions require a "CLIA license," they may be uncertain whether this means any of the three certificates (CoR, Registration - CMS CLIA paperwork review passed, lab is active; CoC, Compliance - site visit by CLIA passed; CoA, Accreditation - site visit by CAP passed), or whether there is any chance PLA "means" only the two certificates issued post-inspection (CoC, CoA).

##

AI CORNER [REVISED]

##

 What “CLIA Registration” Really Means — Untangling a Common Point of Confusion

In CLIA discussions, few phrases create more misunderstanding than “registered but not licensed.” The confusion stems from the fact that “registration” has both an informal meaning in everyday conversation and a formal meaning within the CLIA regulatory framework. 

The formal term: Certificate of Registration (CoR)

Under CLIA, a Certificate of Registration (CoR) is not just a placeholder or a pre-approval. It is a valid CLIA certificate issued to new laboratories applying for either a Certificate of Compliance (CoC, from CLIA itself) or a Certificate of Accreditation (CoA, i.e., accreditation by an approved body such as CAP).

 The Certificate of Registration is issued when the application has passed a paperwork review, but not yet a physical inspection.

The CoR is temporary—valid for up to two years—but during this period it fully authorizes the laboratory to perform moderate- and high-complexity testing, as well as waived testing.  (See the CMS guide linked earlier in this post.)

The purpose of the CoR is to allow a new laboratory to begin operating while awaiting its first full inspection. After that inspection, the CoR is replaced by a CoC or CoA. Until that transition occurs, the CoR serves as the operative license, and no additional CLIA certification is required for the lab to run non-waived tests.

The informal usage: “We’re registered with CLIA”

Outside of regulatory documents, companies often use “registered with CLIA” loosely to mean that they have submitted an application, received a CLIA number, and appear in CMS’s database. 

This status—essentially a pending application that has been filed but not reviewed—does not authorize any moderate- or high-complexity patient testing. A CLIA number alone is not a certificate.

Thus, the informal phrase “registered but not licensed” probably refers to this pre-certificate stage, not to a laboratory holding an actual Certificate of Registration.

Why the distinction matters

The formal CoR and the informal “registration” reflect two entirely different regulatory statuses:

Status | Formal CLIA term? | Authorizes moderate/high-complexity testing?
Application submitted, CLIA number assigned | No | No
Certificate of Registration (CoR) issued | Yes | Yes
Certificate of Compliance (CoC) [CLIA] | Yes | Yes
Certificate of Accreditation (CoA) [CAP] | Yes | Yes

Understanding these distinctions prevents the common error of assuming that a “registered” laboratory cannot legally perform testing. In CLIA’s vocabulary, a Certificate of Registration is itself the legal authority to operate until inspection.
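The same distinctions can be sketched as a small lookup table in code. This is a sketch only; the status keys are my own informal shorthand, not CMS terminology.

```python
# Sketch of CLIA status vs. testing authority; keys are informal shorthand.
CLIA_STATUSES = {
    "application_pending":          {"formal_certificate": False, "authorizes_testing": False},
    "certificate_of_registration":  {"formal_certificate": True,  "authorizes_testing": True},
    "certificate_of_compliance":    {"formal_certificate": True,  "authorizes_testing": True},
    "certificate_of_accreditation": {"formal_certificate": True,  "authorizes_testing": True},
}

def may_run_nonwaived(status: str) -> bool:
    """May a lab in this status run moderate/high-complexity (non-waived) tests?"""
    return CLIA_STATUSES[status]["authorizes_testing"]

print(may_run_nonwaived("application_pending"))          # False
print(may_run_nonwaived("certificate_of_registration"))  # True
```

Note that only the first row differs: once any of the three certificates is in hand, the lab may test.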

The bottom line

A CLIA Certificate of Registration is a genuine, valid CLIA certificate that authorizes all levels of testing for up to two years while a new laboratory awaits inspection. The lab's paperwork has been reviewed, and passed, but the inspection is still in the future.  Confusion only arises when “registration” is used informally to describe the pre-certificate stage. Clarifying this difference ensures accurate communication, particularly in regulatory, operational, and consulting contexts where the precise meaning of CLIA terms has real consequences.


 

You said:
Could you give an example with ABC LAB? which is a new lab. Make it concise but use dates as examples to make it seem more concrete.
ChatGPT said:

Here is a concise, concrete example you can drop into your blog:


Example: ABC Lab and the Difference Between “Registration” and a Certificate of Registration

Imagine ABC Lab, a brand-new molecular diagnostics lab.

January 5, 2026

  • ABC Lab submits its CLIA application. CMS assigns a CLIA number the same week.
  • At this point, ABC Lab is informally “registered” in the database—but it cannot perform any moderate- or high-complexity patient testing.

February 1, 2026

  • CMS issues ABC Lab a Certificate of Registration (CoR).
  • This is a real CLIA certificate, valid for up to two years, and it immediately authorizes ABC Lab to begin performing moderate- and high-complexity testing while awaiting its initial inspection.

It means the application has passed an initial review - for example, that high-complexity staff have appropriate training per the January 5 application.

September 2026
The state agency performs ABC Lab’s first full CLIA inspection.

October 2026
After passing inspection, ABC Lab’s CoR is replaced with a Certificate of Compliance, which becomes the long-term certificate for the lab.

This example shows the key distinction:

  • A CLIA number alone (January) does not authorize testing.

  • A Certificate of Registration (February) does authorize testing until inspection.



You said:
what happens between January 5 and Feb 1? They just read the lab's name and address at CLIA?  
ChatGPT said:

Between January 5 (application submitted) and February 1 (Certificate of Registration issued), it looks like “nothing is happening”—but in reality CMS and the state CLIA agency are doing several non-trivial administrative checks behind the scenes. None of it is glamorous, but none of it is simply “reading the name and address,” either.

Here is what actually occurs during that gap.


What Happens Between Application and Issuance of the Certificate of Registration

1. CMS validates the application data (CLIA-116)

They review that ABC Lab has provided:

  • Correct lab type (e.g., independent, hospital-based, physician office, etc.)

  • Accurate address and contact information

  • Appropriate test menu (claimed complexity)

  • Proposed clinical specialties/subspecialties

  • Correct payment category (fees vary by complexity and test volume)

This includes checking for mismatches—e.g., a lab claiming high-complexity molecular testing but listing personnel who only qualify for moderate complexity.

2. Verification of the Laboratory Director’s credentials

This is the most time-consuming part.

CMS/state reviewers confirm that the proposed director:

  • Meets CLIA high-complexity director requirements (PhD + board certification, pathology MD, etc.)

  • Has appropriate documentation (licenses, ABMGG/ABMM/ABP certification, or equivalent)

  • Has no gaps, mismatches, or missing credentialing documents

If anything is incomplete, CMS requests additional materials—this alone can stretch the timeline.

3. Personnel review for key positions

For moderate- or high-complexity labs, CMS may also check:

  • Technical supervisor qualifications

  • Clinical consultant qualifications

  • General supervisor qualifications

For some states, this must be manually approved.

4. Fee processing and confirmation

CLIA will not issue a certificate until:

  • The invoice is generated

  • The lab pays the fee

  • The payment is confirmed by the accounting contractor

This can add 1–2 weeks depending on timing.

5. Enrollment in Proficiency Testing (PT)

CMS verifies that the lab has:

  • Selected an approved PT provider (e.g., CAP, API, AAB)

  • Enrolled in applicable PT programs for the tests they plan to run

A lab cannot claim certain specialties on its CoR without demonstrating PT enrollment.

6. Assignment to the state survey agency or accreditation body

CMS flags the new lab for:

  • State inspection (if pursuing a Certificate of Compliance), or

  • Accreditation review (if applying under CAP, COLA, etc.)

This step places the lab in the queue for its upcoming inspection, and the CoR cannot be issued until this routing is configured.
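Those six gating steps can be pictured as a simple checklist. The sketch below uses hypothetical field names (there is no real CMS schema like this); it just illustrates that the CoR issues only when every box is checked.

```python
from dataclasses import dataclass

# Hypothetical checklist mirroring steps 1-6 above; not a real CMS schema.
@dataclass
class CliaApplication:
    form_116_validated: bool = False        # step 1: application data
    director_credentials_ok: bool = False   # step 2: lab director
    key_personnel_ok: bool = False          # step 3: supervisors, consultants
    fee_confirmed: bool = False             # step 4: payment processed
    pt_enrollment_ok: bool = False          # step 5: proficiency testing
    survey_routing_set: bool = False        # step 6: inspection queue

def blocking_items(app: CliaApplication) -> list[str]:
    """Checks still blocking issuance of the Certificate of Registration."""
    return [name for name, done in vars(app).items() if not done]

app = CliaApplication(form_116_validated=True, fee_confirmed=True)
remaining = blocking_items(app)
print(remaining)  # four checks still open, so no CoR yet
```

In practice, of course, the steps run partly in parallel at CMS and the state agency; the point is simply that any one open item holds up the certificate.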


So is it trivial paperwork?

Not really.
It’s bureaucratic and unexciting, but it is real work that must be manually verified.

Most delays occur because of:

  • Missing director credential documents

  • Incomplete PT enrollment

  • Fee processing delays

  • Manual review by state agency staff

  • Seasonal backlogs (e.g., January is notoriously slow)

Once all these pieces are checked and validated, CMS issues the Certificate of Registration (CoR), which allows the lab to begin moderate- and high-complexity testing.