Saturday, November 29, 2025

Noted: Luis Cano's Essay on "Invisible War in Digital Pathology"

Luis Cano, MD, PhD, is a Paris-based expert on digital pathology.  Find his LinkedIn here.  His article feed here.  His Substack, "Beyond the Slide," here.

His new 5000-word mega-essay about digital pathology system dynamics:

https://beyondtheslide.substack.com/p/the-invisible-war-in-digital-pathology

By the way, the bibliography of his new article is quite impressive too.

Enjoy the original essay, but here's a 300-word AI summary:

Digital pathology, Cano argues, is less a neat “digital upgrade” and more an invisible war: a messy, protracted collision between technology that already works and systems that are nowhere near ready to absorb it. The microscope era was stable, tactile, and culturally entrenched; digital workflows turn glass into data, demanding scanners, clouds, networks, standards, and cybersecurity. 

COVID accelerated adoption by necessity, proving feasibility but also exposing fragile infrastructure and improvisational workflows, especially in low-resource settings where digital tools could help most but basic prerequisites—power, bandwidth, capital—are lacking. The result is a tug-of-war between an analog world that won’t die and a digital one not yet fully born.

The conflict is driven less by algorithms than by misaligned incentives and timelines. Startups sprint on 18-month venture clocks; hospitals move on 5–10-year cycles; pathologists, as final signatories, face an identity and liability crisis. 

Economics form a central minefield: capital costs are huge, ROI is mostly indirect efficiency, and reimbursement for digitization or AI is nascent and fragmented. Technologically, proprietary formats, weak interoperability, domain shift, and brittle real-world performance keep AI stuck in “demo mode.” Regulatory fog deepens the tension: the FDA’s PCCP and the EU AI Act try to tame evolving, data-hungry models with frameworks built for static devices, while liability for AI-driven errors remains unclear.

Cano’s core thesis is that progress hinges on alignment, not more clever models. He points to federated and swarm learning, shared infrastructures like BigPicture, and human-centered design as early blueprints for cooperation. The future pathologist becomes an integrator of multi-modal information, not a human scanner; AI should amplify rather than replace. The “war” ends, he suggests, when stakeholders stop behaving like rival armies and start acting as architects—building standards, economic models, regulations, and lab designs that are interoperable, explainable, and explicitly human-centered.


###

Update, 20260105.  See a LinkedIn article about FDA endorsement of a novel digital pathology approach -

https://www.linkedin.com/posts/michael-kalutkiewicz-66494458_congratulations-to-pathai-on-their-full-qualification-activity-7403901517650354177-o-Tw/


Not Bad for a Machine: ChatGPT Offers to Review CMS & AI for Me

For almost three years, I've had a $20-a-month subscription to ChatGPT.  Like other AIs, its home page offers me topics for the AI to discuss (as if I could think of none myself?)

This morning the home page of ChatGPT offered me:

  • What are the latest CMS policy updates on AI reimbursement and digital health? Any new regulations or pilot programs worth noting?

I thought, OK, it's a holiday, I'll bite.   

Here's what I got.  

NOTE: I would not necessarily make the same recommendations or emphasize the same things as ChatGPT.  What follows is offered only as an example of automated internet research and automated conclusions and summaries.

I asked it about turnkey vendors for remote monitoring; in that section, I've stripped the websites it found but left the AI text.

Remarkable Series of Articles on Digital Medicine from Flavio Angei (LinkedIn)

I've started noticing the rapid flow of articles highlighted every week by Flavio Angei on LinkedIn.

Find his home page here:

https://www.linkedin.com/in/flavio-angei-b5476841/


This should take you to his LinkedIn postings:

https://www.linkedin.com/in/flavio-angei-b5476841/recent-activity/all/

He highlights top papers in digital medicine from a wide range of journals.

  • Evolving health technologies: Aligning with and enhancing the NIH Care Excellence Standards Framework.
  • Success factors for scaling patient-facing digital health technologies: Leaders' insights
  • Navigating regulatory challenges across the life cycle of SaMD
  • LSE: Evaluation framework for health professionals' digital health and AI technologies.
  • Rethinking clinical trials for medical AI with dynamic deployments of adaptive systems.
  • AI policy in healthcare: A checklist-based methodology for structured implementation.
  • Artificial intelligence in key pricing, reimbursement, market access processes.  Faster better cheaper - Can you really pick two?
  • Systematic review of cost effectiveness and budget impact of AI in healthcare.
  • Commercialization of medical AI technologies: Challenges and Opportunities

Etc etc etc....


___

Goranitis et al. Weigh In on "Value & Valuation" of Genomics

  • First, updating some links on new reviews of MCED.  
  • Then, we look at a new paper by Goranitis, Buchanan, et al. in Nature Medicine on valuation of genomics in healthcare.

###

Reviews of MCED come out regularly in major journals; here is the latest one.  

See the latest in Annals of Internal Medicine, Kahwati et al. (November issue; ahead of print 9/16/25).  Sponsored by AHRQ, it comes with an op-ed by Weinberg.  See the 139-page May 2025 AHRQ report, by Kahwati, here.

I think most of us are used to seeing "MCED" - Multi-cancer early detection.  These articles are headlining with "MCDT" - Multi-cancer detection tests.

####

I'd put those articles in context of a major new paper by a health economics team on "determining the value of genomics in healthcare."  See the home page here:

https://healtheconomicsandgenomics.com/

And see Goranitis et al. in Nature Medicine, dateline November 27:

https://www.nature.com/articles/s41591-025-04061-3

Below, find ChatGPT 5 on Goranitis and on Buchanan & Goranitis.

Tuesday, November 25, 2025

CMS Issues "Request for Information" - Strategic Directions for Medicare Advantage

 CMS has issued its CY2027 proposed rule for Medicare Advantage.  It includes a "request for information" about future strategic directions for the program.

###

CMS issues four major Medicare rules each year.  In the spring, we have the Inpatient Rule, which finalizes in August, ahead of the October fiscal year.  In the summer, we have the Physician and Hospital Outpatient rules, which finalize around November 1, ahead of the new calendar year.

And around November, we get the Medicare Advantage proposals, which finalize in the spring, ahead of the next MA contract year.

Find the MA press release here:
https://www.cms.gov/newsroom/press-releases/cms-proposes-new-policies-strengthen-quality-access-competition-medicare-advantage-part-d

Find the fact sheet here:
https://www.cms.gov/newsroom/fact-sheets/contract-year-2027-medicare-advantage-part-d-proposed-rule

Find the actual proposed rule here (paginated publication on 11/28):
https://www.federalregister.gov/public-inspection/2025-21456/medicare-program-contract-year-2027-policy-and-technical-changes-to-the-medicare-advantage-program

###

The word "coverage" occurs 726 times, but I don't see the words LCD or NCD this year.  "Prior authorization" appears 17 times, "denial" 3 times, and "artificial intelligence" twice.
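Counts like these are easy to reproduce against the rule's plain text. A minimal sketch, assuming you've saved the text yourself (the sample string below is hypothetical, not from the rule):

```python
import re

def count_term(text: str, term: str) -> int:
    """Case-insensitive count of a literal word or phrase in a document."""
    return len(re.findall(re.escape(term), text, flags=re.IGNORECASE))

# Hypothetical sample; in practice, load the Federal Register rule's full text.
sample = "Coverage criteria... coverage policy... Prior Authorization request."
print(count_term(sample, "coverage"))             # 2
print(count_term(sample, "prior authorization"))  # 1
```

Note this counts literal substrings, so "coverages" would also match "coverage"; close enough for a back-of-envelope scan of a rule.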

The request for information on "Future Directions in Medicare Advantage" starts on inspection copy page 6-11.  

Comments are due by January 26, 2026.

Big News: FDA to Down-Classify Many Companion Diagnostics as Class II (510k)

Update - Friends of Cancer Research letter, January 2026 here.

_____

Last year, when FDA was sparring with stakeholders and courts over its LDT regulations, FDA promised to down-classify many types of diagnostics from Class III to Class II.  The agency went radio-silent from April to November, but now the regulation is in print.

This is big news because it changes the landscape of how hard it is to get an FDA label as a companion diagnostic.  It also means that new ranges of tests will qualify for Medicare benefits.  NCD 90.2, for NGS testing in cancer, automatically covers NGS tests that are "cleared or approved" as CDx.   And sole-source tests (run from one lab) are eligible for ADLT pricing rules if they are "cleared or approved."   Now the range of "cleared" tests will be larger.

See an early essay at LinkedIn by Karin Hughes, PhD, here.  A 5-page PDF from AgencyIQ here.  By Lawrence Worden here.  At Genomeweb here.  Kahles et al. review EU and US IVD regulations (prior to this FDA change) here.  Dark Daily here.

See the Federal Register regulation proposal here.  Comments are due in 60 days, by January 26, 2026.

The rule runs 14 pages and covers many considerations and details.  The regulatory category for the 510(k), aka Class II, CDx will be at 21 CFR 866.6075, as "Nucleic Acid-Based Test Systems for Use with a Corresponding Approved Oncology Therapeutic Product."  Within the 14-page publication, the actual regulation (which goes in 21 CFR) is about 680 words long (two full columns of the Federal Register).


###

AI CORNER

###

FDA Down-Classifies 
Key Oncology Companion Diagnostics:
A Policy-Level Summary

In a significant regulatory shift, FDA has proposed reclassifying a cluster of oncology companion diagnostics (CDx) and CDx-adjacent molecular tests from Class III (PMA) to Class II with special controls. The 14-page notice marks FDA’s first broad structural change to CDx oversight in more than a decade and reflects the agency’s conclusion that these technologies are now mature, well-characterized, and manageable within the 510(k) framework.

Rationale for Reclassification

FDA’s justification rests on two main pillars:

Bipartisan Policy Center and Medicare: They Can't Stop Writing

I ran across five recent documents from the Bipartisan Policy Center on Medicare reform.  


If you like this, try also the "Committee for a Responsible Federal Budget."
###

AI CORNER

###

ChatGPT 5: Here’s a structured take on the three groups you asked for.


1. The 54-page white paper (N0): Strengthening Primary Care: Medicare Physician Payment and Related Reforms

Big frame.
The report reframes Medicare Part B reform explicitly as primary-care policy and not just fee-schedule engineering. It opens with the now-familiar BPC storyline: chronic disease is driving unsustainable Medicare spending; the U.S. underinvests in primary care (5–8% of spend vs ~13% in peer countries); and Part B is both the problem child and the lever for change. The chart on page 5, showing Part B rising from ~$231B (2011) to a projected $1.17T (2034), is doing a lot of rhetorical work here: Part B is presented as the largest and fastest-growing piece of Medicare, and therefore the natural focus for sustainability policy.

Diagnosis: three intertwined failures.

  1. Misaligned incentives that keep clinicians in FFS and out of APMs.

    • MACRA’s APM bonus is phasing out; the remaining differential in conversion-factor updates is too small to matter.

    • APMs demand upfront investments and operational changes that are especially hard for small and rural practices, and hybrid payment (APCM codes, partial capitation) is underdeveloped as a bridge.

    • Integration of primary care with specialty and behavioral health is a key ambition but underpowered in existing models.

  2. Structural undervaluation of primary-care work in the MPFS and lack of data.

    • CMS is depicted as overly dependent on RUC survey data and specialty-dominated recommendations; empirical data on time, intensity, and resource use are too thin.

    • There is no consistent federal definition or tracking of “primary-care spend” across programs, making target-setting and accountability almost impossible.

  3. Crippling administrative complexity, especially in APMs.

    • Measure clutter, unaligned quality metrics, and non-interoperable EHR requirements are framed as primary reasons clinicians stay in or drift back to plain FFS.

Solutions: what’s materially new in BPC’s thinking.

The report’s recommendations are not just “more ACOs” but a package that tries to rebalance FFS, APM incentives, and primary-care infrastructure:

  • Rebuild the APM incentive structure, not just extend it.
    The report calls for extending and restructuring the Advanced APM bonus, with an explicit shift toward prospective, per-beneficiary, risk-adjusted payments rather than all-or-nothing thresholds based on total Part B revenue.

  • Create a formal HHS advisory body on MPFS valuation.
    This is one of the clearest “new” institutional proposals. BPC wants a FACA-governed advisory body inside CMS to complement (and de-bias) the RUC by:

    • relying more heavily on empirical data (claims, EHRs, time-motion studies),

    • prioritizing primary-care and care-coordination services for review, and

    • systematizing identification of misvalued services.

  • Track and eventually set targets for primary-care spending.
    HHS would define “primary-care spending,” report it across federal programs, and use that to inform policy targets—essentially building the same infrastructure states like Rhode Island built for commercial plans, but at the federal level.

  • Align quality measurement and reduce reporting load.
    The report explicitly backs convergence of measures across Medicare, Medicaid, and private payers, using Medicare’s agenda-setting role to force simplification rather than adding yet another measure set for each model.

  • Support hybrid and prospective primary-care payments.
    It highlights new APCM codes as a step toward hybrid models but warns that if valuation and beneficiary cost-sharing aren’t addressed, they will be marginal rather than transformative.

Net effect: the 54-pager is BPC’s “integrated theory” document. It ties Medicare sustainability, primary care, APM design, and MPFS reform into a single policy program, with primary care explicitly cast as the system’s leverage point rather than just another stakeholder.


2. N1, N2, N3 in sequence: the three-brief policy staircase

The three briefs function as a stepwise argument: N1 = problem + history; N2 = barriers; N3 = actionable recommendations. Read together, they show BPC tightening from broad concern about MACRA’s underperformance to a concrete legislative/regulatory agenda.

N1 – The Need for Medicare Part B Physician Payment Reform (Issue Brief #1)

This brief sets up the macro problem and the political economy:

  • MACRA hasn’t delivered on its promise.
    It walks through the SGR era, the annual “doc fixes,” and MACRA’s intent to move clinicians into APMs via the QPP (MIPS vs Advanced APMs). Then it shows that FFS remains dominant, and MACRA’s formula continues to generate unsustainable cuts that require yearly congressional patches.

  • Part B spending is the pressure point.
    N1 reprises the chart showing Part B at ~49% of Medicare benefit outlays in 2023 and growing at ~9% annually through 2034. It emphasizes that this growth is not clearly associated with measurable gains in quality or outcomes, and that beneficiaries are bearing higher premiums and deductibles.

  • Political alignment.
    Bi-partisan concern is flagged explicitly: both parties accept that the current “yearly patch” dynamic is untenable, and both profess support for increasing APM participation and primary-care strength.

Functionally, N1 is BPC’s case memo to Congress: it validates the sense of crisis, documents MACRA’s structural flaws, and primes the reader to accept that something bigger than annual patches is now required.

N2 – Key Barriers to Clinicians’ Participation in Promising APMs (Issue Brief #2)

N2 dives into why APMs aren’t scaling, organizing barriers into three clusters.

  1. Misaligned incentives and a flawed bonus design.

    • Expiring Advanced APM bonus; dwindling differential in conversion-factor updates.

    • All-or-nothing thresholds (≥35% of Medicare patients or ≥50% of Part B revenue through an APM) that can punish clinicians who are part-way through the transition.

    • Bonus amounts keyed to total Part B revenue rather than the population actually in the APM—advantaging large, high-volume systems and doing little to reward marginal high-value care.

  2. Fee-schedule misvaluation and its drag on APMs.
    The brief explains how APMs sit on an MPFS “chassis,” so undervaluation of primary-care and care-coordination codes flows directly into ACO and other model economics. It highlights the lack of robust empirical data and CMS’s reliance on RUC survey data as core technical obstacles.

  3. Fragmented APM landscape + MSSP structural issues + admin burden.

    • Multiple overlapping models with shifting rules.

    • MSSP benchmarks that “ratchet down” when ACOs succeed, discouraging continued participation.

    • Risk adjustment that under-captures the complexity and cost of high-need patients.

    • Quality reporting and EHR interoperability burdens estimated in the billions annually, which are particularly punitive for primary care and small/rural practices.

N2 is where BPC’s analytic voice comes through most strongly; it’s essentially a barrier taxonomy and sets up the logic for each of N3’s recommendations.

N3 – Recommendations to Modernize Medicare Part B Physician Payment and Related Reforms (Issue Brief #3)

N3 is the action menu that flows directly from N2’s barrier map.

Highlights (in roughly the order presented):

  • Rebuild the Advanced APM bonus.
    Extend the bonus (no lower than the 1.88% level) with no gap, then convert it to a flat, risk-adjusted per-beneficiary payment and drop the problematic patient/payment thresholds.

  • Create a CMS/HHS advisory body on MPFS valuation.
    Focus on empirical data, transparency, and systematic reviews of misvalued services; explicitly complementary to CMS’s proposed efficiency adjustments and expanded data use in the 2025 PFS rule.

  • Simplify and rationalize APMs, especially ACOs.

    • Elevate the most promising primary-care-focused models, particularly those using prospective payments.

    • Establish a permanent higher-risk MSSP track for “graduated” ACOs.

    • Fix benchmarking and risk adjustment (including using EHR data) to avoid penalizing successful ACOs and better capture high-need patients’ costs.

  • Attack administrative burden directly.

    • Implement and enforce a national data-exchange framework (e.g., CMS Interoperability Framework) for APM participants by a target date (e.g., 2028).

    • Align quality metrics across payers to move toward a unified reporting system.

  • Strengthen primary-care infrastructure.

    • Consolidate and streamline federal HIT and EHR support grants, with explicit attention to small and rural primary-care practices.

As a sequence, N1→N2→N3 shows BPC’s full theory of change: you fix MACRA’s incentive structure, clean up APM design, and simultaneously re-platform MPFS and primary-care infrastructure. The long N0 report is then a deeper dive on one pillar—primary care—inside this larger framework.


3. The two comment letters (PFS and OPPS): “live-fire exercises” of the framework

The PFS and OPPS comment letters are essentially where BPC takes the intellectual architecture above and tests it against real regulatory text. They show BPC applying the same themes—site neutrality, empirical valuation, primary-care support, digital health, and rural equity—to specific code proposals.

(a) OPPS/ASC rule comment (CMS-1834-P, Sept 15, 2025)

Core themes.

  • Site neutrality & volume control.
    BPC explicitly supports CMS’s use of OPPS volume-control authority to extend the 2019 “unnecessary volume” policy to drug administration in excepted off-campus PBDs, with a carve-out for rural sole community hospitals. They link this to their earlier recommendation for broader site-neutral payments for services safely furnished in multiple ambulatory settings, with savings partially reinvested in rural and safety-net hospitals.

  • Alignment with their 2023 Sustaining and Improving Medicare report.
    The letter repeats the argument that payment differentials between MPFS and OPPS drive consolidation and billing shifts that inflate total program outlays and beneficiary cost-sharing—essentially importing the “FFS chassis + misaligned incentives” critique into OPPS.

  • Rural Emergency Hospital quality measurement & SDOH.
    BPC backs CMS’s proposal to offer an eCQM access/timeliness measure as an alternative to the median ED arrival-to-departure measure, framing this as consistent with their earlier rural-health work. They also urge CMS to retain SDOH-1 and SDOH-2 measures, while acknowledging burden and urging ongoing stakeholder engagement—a nice example of their “pro-SDOH but administratively sober” posture.

  • Hospital price transparency.
    The letter supports CMS’s push for meaningful, accurate pricing data and explicitly invokes BPC’s prior 2020 transparency work on uniform data collection.

In short, the OPPS letter deploys the site-neutrality and rural-reinvestment planks of the broader agenda and ties OPPS policy back to Part B alignment and SDOH measurement.

(b) PFS rule comment (CMS-1832-P, Sept 12, 2025)

This letter is more tightly connected to the physician-payment and primary-care work, and you can almost read it as the “short regulatory version” of N0 + N1–3.

Key positions:

  • Practice expense RVUs and valuation methodology.
    BPC strongly supports CMS’s efforts to update PE methodology, use a wider range of empirical data, and better capture cost differences across settings. This is explicitly linked to their June 2025 recommendation for an HHS advisory body inside CMS to make the valuation process more empirical and transparent.

  • Telehealth services list.
    They endorse the proposal to permanently maintain all previously approved telehealth codes, including those added during the PHE, and to shift the review standard toward “can it be safely furnished by interactive telecommunications?” with strong program-integrity oversight. That aligns with their generic telehealth view: stable benefit design + clinical-appropriateness test + guardrails.

  • Direct supervision via two-way audio/video and “incident-to.”
    BPC takes a notably skeptical line on incident-to billing:

    • It argues incident-to makes quality and fraud oversight harder,

    • recommends prohibiting incident-to billing for any practitioner who can bill directly, and

    • at minimum urges targeted monitoring for telehealth incident-to claims, referencing pending Hill legislation on telehealth oversight.
      That is a relatively strong position and consistent with their emphasis on clear accountability and data specificity.

  • Efficiency adjustment and misvalued services.
    BPC supports CMS’s proposed efficiency adjustment to non time-based codes and again ties this to their call for an advisory body and more empirical studies for valuation. They explicitly encourage CMS to use automatic adjustments and systematic methods to identify and correct misvalued, high-growth codes.

  • Remote physiological and therapeutic monitoring (RPM/RTM).
    They back new codes that allow less rigid data-collection days and time blocks, arguing that clinically effective remote monitoring often requires only short bursts of data and largely asynchronous analysis. They then press for:

    • patient-selection and re-assessment requirements,

    • EHR integration and response-time expectations,

    • removal of geographic payment dampers for rural providers, and

    • richer claims data elements (device types, data type, ordering clinician) to support evidence generation and program integrity.

Across both letters, the through-line is clear: site neutrality + primary-care support + empirical valuation + digital innovation with guardrails. The comment letters show BPC in “reg-ops mode,” continuously reinforcing the intellectual scaffolding from N0 and the issue briefs, but translating it into very specific stances on CMS proposals.


Compare the big white paper 54pp with the triplet of smaller ones.

###

How the 54-page white paper differs from — and relates to — the N1/N2/N3 triplet

1. Scope and Purpose

  • 54-page report (N0):
    A comprehensive, integrated blueprint for modernizing Medicare physician payment through the lens of strengthening primary care. It addresses the entire ecosystem: MPFS valuation, primary-care underinvestment, APM structure, quality measurement, administrative burden, rural issues, and system-wide sustainability.

  • Triplet (N1/N2/N3):
    A modular, staged argument focused specifically on MACRA, APMs, and Part B reform mechanics.

    • N1: Why Medicare physician payment reform is needed.

    • N2: The specific barriers preventing APM adoption.

    • N3: Targeted, actionable recommendations to fix those barriers.

2. Intellectual Center of Gravity

  • 54-page report:
    Primary care is the central organizing principle. Everything else—APMs, valuation, interoperability, metrics—is framed as either supporting or inhibiting robust primary care.

  • Triplet:
    MACRA and the APM architecture are the center.  Primary care appears often, but as one part of the mechanics of Part B reform, not the conceptual anchor.

3. Policy Breadth

  • 54-page:
    Broad, system-level. Includes:

    • National primary-care spending targets

    • Federal advisory body on RVUs

    • Hybrid payments, APCM codes

    • APM redesign

    • Quality-measurement simplification

    • Interoperability and admin-burden reduction
      It functions as “the full blueprint.”

  • Triplet:
    Narrower and more tactical. Each paper takes one slice:

    • Problem definition

    • Barrier taxonomy

    • Recommended fixes

  • Together they form a policy staircase rather than a full system redesign.

4. Relationship Between the Two Sets

  • The 54-page report is the umbrella, giving BPC’s complete argument for primary-care-centric Medicare reform.

  • The three briefs are the technical spine of that argument: a concise MACRA/APM package that can be handed to Congress.

  • Much of the 54-page analysis absorbs and extends the triplet’s insights, embedding them in a broader narrative linking primary care, MPFS valuation, and long-term Medicare sustainability.

5. Use Case for Readers

  • 54-page: For senior policymakers who want the whole theory and the cross-cutting rationale.

  • Triplet: For Hill staff, budget analysts, and CMS policy teams who need quick, structured modules they can plug into legislation or rulemaking discussions.

####
Do they say much about "diagnostic tests" or the specialty of Pathology?
####

Short answer:
No — across all four documents (the long 54-page white paper and the N1/N2/N3 triplet), diagnostic testing and the specialty of pathology are almost entirely absent.
When they appear at all, it is brief, indirect, and framed only in relation to primary care, APMs, or valuation mechanics, not as a domain of its own.

Below is the concise landscape:


1. In the 54-page report (N0): almost no discussion of diagnostics or pathology

The long primary-care-oriented report focuses overwhelmingly on:

  • undervaluation of cognitive/primary-care services,

  • APM incentive structure,

  • quality measures,

  • interoperability,

  • site-of-service dynamics, and

  • rural primary care.

Diagnostic testing is nearly invisible.
It is mentioned only in passing when discussing:

  • practice expense data or equipment inputs in RVUs (as examples of CMS data sources),

  • the need for data-sharing between primary care and specialists,

  • general “tests” as part of patient management.

There is no direct commentary on:

  • the role of lab diagnostics in APMs,

  • reforming coding/payment of clinical laboratory services,

  • the specialty of pathology,

  • clinical lab economics,

  • laboratory contribution to primary care value.

Even when discussing undervaluation, the focus is on evaluation and management, not pathology or diagnostic services.


2. In N1/N2/N3 (the triplet): silence on diagnostics and pathology

The triplet makes no substantive reference to lab diagnostics, laboratory workflow, or the pathology profession.

Their analytic frame is:

  • MACRA history

  • the failure of APM uptake

  • the barriers to risk-bearing

  • the flaws of the MPFS physician-work valuation system

  • the need for empirical time/intensity data

  • quality-measure alignment

  • rural and small-practice participation

  • care-management and primary-care financing

When diagnostic testing is mentioned at all, it is only in generic phrases like:

  • “tests ordered by clinicians”

  • “services delivered under the fee schedule”

  • “misvalued services” (meaning E/M and procedural codes, not labs)

Pathologists and laboratorians never appear.
There is no discussion of:

  • pathology specialty payment,

  • diagnostic coding policies,

  • clinical-laboratory contribution to APM models,

  • pathology reporting burdens,

  • molecular diagnostics or precision medicine.


3. Comment letters (PFS & OPPS): still very little on diagnostics

Even here—where diagnostic policy might naturally arise—BPC does not engage with laboratory issues.

In the PFS comment letter:

  • They support CMS updating practice-expense RVUs, but only in the context of physician office practice costs.

  • They endorse telehealth expansions, supervision rules, RPM/RTM coding, and efficiency adjustments—none of which involve lab diagnostics.

  • No discussion of Part B laboratory services, molecular diagnostics, or CLFS.

In the OPPS/ASC letter:

  • They discuss drug administration, site neutrality, and price transparency—but not lab tests.

  • No mention of hospital outpatient lab payments, packaged lab services, or OPPS-specific molecular test rules.

In short: laboratory testing is entirely orthogonal to the policy agenda they are advancing.


Why this absence is notable

BPC’s entire project is built around:

  • sustaining Medicare,

  • strengthening primary care,

  • making APMs workable,

  • rationalizing MPFS valuation,

  • reducing administrative burden,

  • improving rural equity.

They frame the crisis and its solutions almost entirely through the lens of clinicians who deliver longitudinal, face-to-face care.
Pathology and laboratory medicine—which sit outside that direct patient-facing E/M world—are not part of the narrative.

In fact, BPC’s framework implicitly accepts the CMS view that:

health-system sustainability hinges on primary care and APM incentives,
not on the organization or valuation of diagnostic testing.


Bottom line for policy watchers

Across the entire set of documents:

  • Diagnostics and pathology = almost completely absent.

  • Primary care = the anchor point of the entire reform logic.

  • APM and MPFS valuation = the dominant technical concerns.

If you want, I can also give you:

  • (1) a quick analysis of why BPC leaves diagnostics out (structural, political, and narrative reasons),

  • (2) what this means for pathology’s role in upcoming congressional conversations,

  • (3) how diagnostic-policy issues could piggyback on BPC’s larger framework.

CMS Issues Final Lab Prices for CY2026

On November 25, 2025, CMS issued final lab test prices for new tests for CY2026.

###

These determinations, covering new-test prices for CY2026, regularly come out during Thanksgiving week.

Find the annual meeting page here; scroll down to the "Meeting Notice" section and then to "CLFS Test Code Payment Determinations for CY2026 Final."

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs/annual-public-meetings

Or this link should download the Zip file directly:

https://www.cms.gov/files/zip/cy-2026-final-payment-determinations.zip


What Happened

There were 93 agenda items, four of which were deleted from the process.  Of the 89 active items, 75 were finalized as proposed and 14 were changed: 11 from one crosswalk to another, 1 from crosswalk (CW) to gapfill (GF), and 2 from GF to CW.

Twenty codes go to gapfill; all but one were proposed as gapfill.  That one code switched from CW to GF (as just noted).

Of the 59 codes crosswalked, most were crosswalked one-to-one to a single code (x1).

Other crosswalks were fractional, multiple, or additive.  Only one was fractional: 0094U x 0.5 for 0583U.  I didn't see an upward fraction (like x1.25 or x1.5), which CMS almost never uses.  Three crosswalks were additive (e.g., to Code 1 + Code 2).  There were a couple of cases of crosswalk to a multiple (x2).

One code, 0523U, Pillar OncoReveal CDx, was crosswalked to code 0022U minus 81449; the "minus" function is extremely rare in crosswalks.  That's (23 genes, about $1,900) minus (5-50 genes, RNA only, about $600).
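The tallies and the unusual "minus" crosswalk can be sanity-checked with a little arithmetic. Here's a minimal sketch; the fee values are the approximate figures quoted in this post, not official CLFS rates.

```python
# Sanity-check the CY2026 tallies and crosswalk arithmetic described above.
# Fee values are the approximate figures quoted in this post, not official CLFS rates.

# Agenda tallies: 93 items, 4 deleted -> 89 active; 75 finalized as proposed + 14 changed.
assert 93 - 4 == 75 + 14

# Changed items: 11 CW->CW, 1 CW->GF, 2 GF->CW.
assert 11 + 1 + 2 == 14

def crosswalk_price(terms):
    """A crosswalk price is a weighted sum of existing code fees.
    x1 = single code; x0.5 = fractional; x2 = multiple;
    additive = two positive terms; 'minus' = a negative weight."""
    return sum(fee * weight for fee, weight in terms)

fee_0022U = 1900.0   # ~23-gene tumor panel, approximate
fee_81449 = 600.0    # 5-50 genes, RNA only, approximate

# 0523U = 0022U minus 81449, the rare 'minus' crosswalk.
price_0523U = crosswalk_price([(fee_0022U, 1.0), (fee_81449, -1.0)])
print(price_0523U)  # 1300.0, in the neighborhood of the ~$1,350 list price cited for the test
```

The same function covers the fractional case (e.g., a x0.5 weight) and the additive case (two positive terms).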

###

Comparing Results for Pillar 0523U and PGDX Elio 0562U



The Pillar test, 0523U, OncoReveal CDx, is an FFPE test covering 22 genes, with two FDA PMA claims (lung, colon).

The method includes SNVs and INDELs.

It runs as a Pillar kit on an Illumina (ILMN) platform.  The price is about $1,350.  Press release.

###

The PGDX ELIO test, ELIO PLASMA FOCUS, is a 510(k) product - specifically, a De Novo 510(k).  I believe it is the first De Novo plasma test.

It tests 33 genes, and its code 0562U claims only SNVs.

The company requested a crosswalk to a plasma code, 0388U (Inivata), which covers 37 genes for $3,500 across the whole range of SNV, INDEL, CNV, and structural variation.

CMS found that crosswalk untenable because PGDX ELIO plasma reports only SNVs; hence, it got $600 rather than $1,350 or $3,500.

Monday, November 24, 2025

Fixing the Shrinking RVU: Insights Into the Debates

The real-dollar value of the Medicare RVU, on which all physician payments (and many outpatient technical services) are based, has been shrinking for years.  A new 12-page report from the "Committee for a Responsible Federal Budget" collates much of the history and argumentation, even if you disagree with its conclusions.

###

The real-dollar value of the Medicare RVU has been shrinking for years - see a one-page update from AMA.


The advisory body MEDPAC will be discussing physician reimbursement adequacy at its December 4-5, 2025, meeting.

One recent major action by CMS was to reduce many technical valuations by a 2.5% "efficiency factor," effective January 2026, allowing funding to be redistributed toward primary care E&M claims.  See, e.g., a summary at Holland & Knight here.

###

An entity called the "Committee for a Responsible Federal Budget" has released a 12-page white paper in which it supports the "efficiency discount."  Find it here:

https://www.crfb.org/sites/default/files/media/documents/HSI%20PFS%20Final.pdf

While many readers will oppose that conclusion, I'm highlighting the 12-page white paper because it contains an extensively footnoted discussion of the history and present status of a range of issues - from the "efficiency" debate, to overall RVU deflation, to the way growing numbers of Nurse Practitioners and other professionals are billed (direct vs incident-to).



###
If you like this, try also the Bipartisan Policy Center.
  

###

The big beautiful bill (HR 1) of July 2025 will change whether and how certain health care graduate students can borrow education funds, from 2026 forward.  In a nutshell, a limited number of degrees will be considered "professional" (law, medicine, dentistry, master's in theology for ministers).  Others, like nurse practitioner degrees, will not be.  (Degrees like the MBA or MPH will be officially classed as "non-professional," with lower federal loan caps.)  From my general knowledge, we want to encourage the production of NPs and other kinds of primary care clinicians - hopefully this will continue to be revisited.


Sunday, November 23, 2025

Head of CMMI - New Rules for CMMI

CMMI is Medicare's Center for Innovation, created by the 2010 Affordable Care Act.  It has a mixed - probably weak - track record, given 15 years and billions of dollars of experience.

See a new article by Gita Deo, chief of staff of CMMI, and Abe Sutton, head of CMMI.  It's in Health Affairs open access:

https://www.healthaffairs.org/do/10.1377/hauthor20251114.865163/full/


To me, the most interesting thing about the short article was the tone.  It gets pretty close to, "Don't do X.  Only idiots do X."

##

CMMI's themes are to protect patients, protect providers, and protect taxpayers.  All models should include downside risk, because this is almost always a hallmark of previous successful models.  Downside risk means: don't just spread money on top of problems.

Also, be sure you have rock-solid outcome measures.  By law, CMMI projects, to succeed, must save costs and improve quality (or save costs at the same quality).  Meaning, you have to have a control group, whether randomized or constructed, against which outcomes can be quantified.

##

Gita Deo is chief of staff at CMMI; she previously earned a master's in public health and worked in a provider system and at McKinsey.  Sutton worked previously at McKinsey and in the Trump-01 White House, where he led some effective renal initiatives.  He used the interregnum (Biden) to pick up a law degree at Harvard.

##


Friday, November 21, 2025

LANCET Editor Tries to Put Medical AI Into Perspective; so AI Responds

Richard Horton has been editor of LANCET since 1995, when he was in his mid-thirties.  This fall, he writes a pair of op-eds on AI as, well, "a reservoir of illusions."  Is he really that negative?  Let's take a closer look.

  • Offline: A Reservoir of Illusions (Part 1)
    • Lancet editor Richard Horton on medical AI, with a focus on Marc Andreessen
    • Here.
  • Offline: A Reservoir of Illusions (Part 2)
    • Horton on medical AI, focus on Emily Bender's book, "The AI Con: How to Fight Big Tech..."
    • Here.
Before we look at Horton's articles, if you like this topic, see two articles in JAMA Internal Medicine this week.  We have Bressman et al., Software as a Medical Practitioner - Time to License Artificial Intelligence?  Here.  And also Steiner, Scientific Writing in the Age of Artificial Intelligence.  Here.

Steiner has a case study in which AI writes, or shortens, medical journal abstracts, and he's not convinced this meets his bar for quality or accuracy.  I'd just mention that AI can be trained to do this better (e.g., give it 500 well-written abstracts first, before asking it to write or edit), and that five human authors, writing or re-writing an abstract, would never come up with the same text, word choices, or edits.  Each human editor would pick or alter different things than his colleagues.

Working from Horton's Op Ed 2: Bender's book against AI (which I haven't read and know only from this Op Ed) argues that AI is just regurgitating text strings.  I see more to it than that.

For example, recently FDA held a full-day workshop on digital medicine and psychotherapy apps.  (My blog here.)  FDA provided a 27-page pre-read and a several-page rapid post-meeting summary.  FDA also provided a link to a 7-hour archived video.  I recorded the video, got an auto-transcription from Otter.ai, and fed the three documents (the pre-read, the short summary, and the full transcript of 160 pp, 62,000 words) to Chat GPT 5.  I asked, for example, what you would discover in the full transcript that you wouldn't glean from the pre-read or meeting summary - without having to watch the 7-hour meeting myself.  I thought the result was interesting; you can decide; blog link at top of paragraph.

###
Looking at Horton's articles, I asked Chat GPT 5 to review them for us.  The result, meaning the "extruded text" [Bender], follows.


###
AI CORNER
###

In Part 1 (“A reservoir of illusions, part 1”), Horton is mostly setting the stage. He contrasts Marc Andreessen’s techno-optimist manifesto with the current investment boom and breathless media narrative around AI. Andreessen is quoted as saying that AI will save lives, end environmental crisis, create material abundance, and that any deceleration of AI is tantamount to “a form of murder”. 

Horton treats this as Exhibit A of “cyber-futurism”: AI as universal problem solver, enemies named as the SDGs, precautionary principle, risk management, and social responsibility. He acknowledges genuine successes such as AlphaFold and the economic centrality of big tech, but the column’s punchline is that this optimism has become a “bubble”, and that critics who puncture it deserve much closer attention.

In Part 2 (“A reservoir of illusions, part 2”), Horton essentially hands the microphone to Emily Bender and Alex Hanna and endorses much of their thesis. He highlights their claim that “AI” is largely a marketing term: large models don’t think, don’t understand, aren’t sentient, can’t empathize, and can’t care. He repeats Bender/Hanna’s line that ChatGPT is a “souped-up autocomplete” or “text-extruding machine”, with no internal mechanism for source reliability, factual curation, or genuine evaluation. He stresses their view that such systems cannot replace human creativity or relationships, “add nothing to the human condition”, and are “all but irrelevant” to the Majority World, where political will and resources matter more than text synthesis.

Turning to health and science, Part 2 becomes more concrete and pointed. Horton concedes narrow, low-stakes utility—transcription, scheduling, simple triage—but frames the clinical and scientific hype as dangerous: automating diagnosis and care remains “unproven,” and using AI in peer review is a “serious abrogation of scholarly duty.” He foregrounds worries about AI-generated fraudulent manuscripts, collusion between editors and authors, and the risk that if editorial systems cannot cope, “the whole edifice of science” could be undermined. He then moves from Bender/Hanna’s critique to empirical examples: a radiology study where AI increased false positives without improving outcomes; experimental work showing AI systems more willing than humans to execute unethical instructions, lowering practical barriers to misconduct; and evidence that algorithms amplify age and gender bias, deepening inequities. He closes with the IBM line: a computer can’t be held accountable, so it mustn’t make management decisions—implicitly, nor life-and-death clinical ones.

Is Horton “too negative” or just prudently cautious?

For an expert audience in medical AI, I’d characterise his stance as deliberately counter-balancing the hype, but not methodically balanced.

Where he’s strongest / most defensible:

  • He is right that health-AI marketing routinely overruns its evidence base. 

  • Many products are launched on retrospective AUCs, reader-study surrogates, or workflow anecdotes rather than hard patient outcomes. Asking that AI be treated as a medical intervention—with explicit questions about information sources, reliability, effects on decisions, efficiency, quality, outcomes, and unintended harms—is exactly the posture most regulators and serious developers now espouse.

  • His concern about scholarly integrity is not exaggerated. 

  • Journal workflows, especially in high-volume/open-access environments, are already stressed; the combination of LLMs and paper mills is a genuine systemic risk. Highlighting fabricated AI-driven submissions and collusive editorial behaviour is a useful warning shot to the publishing community.

  • He is right to stress accountability and opacity: deploying opaque models in high-stakes decisions, without clear responsibility and post-deployment monitoring, is a governance failure even if the model’s ROC curve looks good. The IBM quote at the end is rhetorically old-fashioned but still a useful ethical anchor.

Where he overreaches or flattens the landscape:

  • By leaning heavily on Bender/Hanna’s line that these systems “add nothing to the human condition” and are “all but irrelevant” to the Majority World, he slides from valid scepticism into overstatement. There is a sizeable space between “AI will save the world” and “AI adds nothing”, and a lot of serious work in global health, imaging, and language accessibility sits in that middle space.

  • He tends to conflate LLM limitations with “AI” writ large. The critique of chatbots as text-extruders with no internal fact model is accurate for vanilla LLMs, but he doesn’t distinguish:

    • generative chat systems from

    • narrow supervised models (e.g., risk scores, triage models, segmentation networks) that are trained and validated on labelled clinical data, sometimes with outcome endpoints and prospective evaluation.

    • That conflation makes his critique feel, to experts, a bit like attacking a specific architecture while rhetorically indicting the entire field.

  • On the other side, he also uses extreme techno-optimism as “the AI side”—Andreessen’s claim that any slowing of AI is essentially murder, that AI will end environmental crisis and sustain 50 billion people. Those are real quotes, not invented straw men, but they are hardly representative of mainstream clinical AI research, where the discourse is about incremental gains, safety, and reimbursement. Framing the debate as Andreessen vs. Bender/Hanna obscures the much larger middle camp of “cautious, data-driven optimists.”

So I’d say: he isn’t fabricating straw men, but he is selectively choosing the loudest techno-optimist (Andreessen) and the loudest critics (Bender/Hanna) as his poles, and he lets their most sweeping lines stand without much nuance. That’s understandable for a short “Offline” column, but it means the pieces read more like rhetorical correction than a balanced policy analysis.

How should technical and policy readers read these pieces?

From a medical-AI/health-policy vantage point, Horton is useful as a mood indicator and a reminder of external perceptions, more than as a fine-grained map of the field.

  • For developers, these columns are a reminder that serious journals are deeply worried about fraud, bias, and hollow “AI-washed” claims. Any translational AI program that doesn’t bake in prospective evaluation, post-market monitoring, and clear lines of human accountability will simply reinforce the concerns Horton amplifies.

  • For policymakers, they crystallise a defensible regulatory attitude: treat AI as a medical intervention; require robust evidence tied to clinical endpoints; scrutinise claims about automation of judgment, empathy, or care; and do not let workflow efficiency gains substitute for patient-relevant benefit.

  • For both groups, the pieces probably under-represent the constructive middle ground—hybrid workflows, assistive decision support, carefully bounded automation, and AI used to widen access rather than replace clinicians—but that’s precisely the opening for your blog’s readers: to articulate what responsible, empirically grounded AI in medicine looks like, in contrast both to Andreessen-style manifestos and to “adds nothing to the human condition” nihilism.

In short: Horton is intentionally anti-hype, sometimes rhetorically sharp, occasionally over-generalizing, but not unserious. He’s not a technical critic of model architectures; he’s a cultural critic warning that we may be building an “edifice of illusions” on top of weak evidence and brittle institutions. For expert readers, the productive move is to accept the core cautions, note where he’s painting with too broad a brush, and then show—using real data and real deployments—how to do better than either pole of the Andreessen ↔ Bender/Hanna spectrum.  // Chat GPT 5


AMA to Hold All-Day Workshop on Digital Medicine Coding; Monday December 8, 2025

One of the major topics at AMA is how AMA CPT will handle coding and payment for software-dominant services.  The AMA held an hour-long call on this earlier in the year, and an hour-long session at the September AMA CPT meeting.

Update:  AMA will now hold an all-day workshop on Coding & Payment for Algorithmic Services on Monday, December 8, 2025 (9-3 central time).  On-site registration is sold out but video registration is open.

https://www.ama-assn.org/member-benefits/events/coding-payment-algorithmic-services


Tidbit: AMA holds a more modest one-hour seminar on existing AI codes in CPT on December 4.

Tidbit:  See an excellent essay on related topics by Luis Cano at "Beyond the Slide," Substack, November 28, "The invisible war in digital pathology."

###



AMA meetings tend to have multi-layered confidentiality restrictions, but I was able to write a blog on the topic on September 10, because I attended a public genomics conference in August where an AMA speaker discussed the AMA's options - a public talk with no associated confidentiality contracts.

https://www.discoveriesinhealthpolicy.com/2025/08/digital-pathology-ai-and-coding-why.html

https://www.discoveriesinhealthpolicy.com/2025/09/update-on-ama-digital-and-ai-coding.html

You can also find non-confidential information via the AMA webpages for "Digital Medicine Payment Advisory Group" DMPAG and "Digital Medicine Coding Committee" DMCC.

https://www.ama-assn.org/practice-management/digital-health/digital-medicine-payment-advisory-group

One of the key ideas is to create a new section of AMA CPT - not PLA codes, and not Category I or III codes - that would be a sort of registry of SaMD services (in pathology, radiology, cardiology, etc.).  A lot of AMA attention goes to the fact that there are new software services that include no physician work - historically a very rare kind of code.

However, in pathology and laboratory medicine, probably far more than 90% of the codes don't involve "physician work" (even "whole genome" or "comprehensive genomic procedures" on 400 tumor genes).  And CMS has put all the PLA codes that involve whole slide imaging and AI on the Clinical Lab Fee Schedule, even some with up-close CMS review as ADLTs.  So my perspective is that CMS can ALREADY handle the coding and regulation of pathology AI within existing frameworks.

##

For interesting thought capital on AI reimbursement and adoption from Parikh, Warshawsky, and others, blog here.

For my ideas on why existing coding and regulations work (or could work) for AI pathology, blog here.

Wednesday, November 19, 2025

New AMA CPT Text Defines Rules for PLA Code Changes (In a Vague Way)

In a previous blog, I noted that updated online PLA code instructions seem to disallow "distributed testing models" that may be approved by CAP inspections. 

https://www.discoveriesinhealthpolicy.com/2025/11/update-cap-accreditation-allows.html

Here's another PLA item of interest.  

For the CY2026 code book, which just arrived, AMA added some text about when a new PLA code is needed versus when a code with the same number can be revised editorially.

I'm not sure the new language makes the result crystal clear.  But here it is.

###

2026 code book, page 1016.  The repeated use of "may" and squishy terms like "material change" seems as vague as ever to me.  But here you go:

A new PLA code is required whenever the existing PLA has been updated in a manner that materially alters the original code descriptor, clinical indications, performance claims, or result reporting.  Examples of changes that MAY trigger the need for a new PLA code include:

1) Additional nucleic acid (DNA, RNA) and/or protein analysis added to or removed from the current PLA test.

2) The algorithm (if used) has been materially changed so that it has expanded beyond its original intent, and

3) The name of the PLA test has changed in association with changes in test performance or test characteristics.

The addition or modification of the therapeutic application of the test requires the submission of a code change, BUT it MAY not require a new code number.

Refer to the introduction of the PLA section of the CPT for other details regarding the code set.
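Read operationally, the guidance amounts to a fuzzy decision rule. Here's a minimal sketch of my reading of it; the category labels are my own, and the book's repeated "MAY" means this is a heuristic, not a deterministic test.

```python
# A rough decision sketch of the 2026 PLA guidance quoted above.
# Category labels are my own; "MAY trigger" means this is heuristic, not deterministic.

NEW_CODE_TRIGGERS = {
    "analytes_added_or_removed",        # (1) DNA/RNA/protein content changed
    "algorithm_expanded",               # (2) algorithm changed beyond original intent
    "renamed_with_performance_change",  # (3) name changed along with test characteristics
}

def may_need_new_pla_code(changes):
    """True if any change falls into a category that MAY require a new code number."""
    return bool(set(changes) & NEW_CODE_TRIGGERS)

# A new therapeutic application requires a code-change submission,
# but per the book text it MAY keep the same code number.
print(may_need_new_pla_code({"algorithm_expanded"}))        # True
print(may_need_new_pla_code({"therapeutic_app_modified"}))  # False
```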

###



Tuesday, November 18, 2025

Hemoglobin A1c testing, and the Truly Mysterious Case of Code 83037 and PAMA

If you go back to tables released when PAMA pricing was new in 2018, you'll find that the surveyed median price of 83036 - laboratory hemoglobin A1c - was $8.50 (the fee schedule was $13.32).  And the surveyed median price of 83037 - POCT hemoglobin A1c - was $22.50, almost three times as high.

Yet the price of either 83036 or 83037 is now $9.52.

What's up????

83036 83037 in PAMA 2018


###

Here's an AI-generated review of episodes and actions that I lived through years ago.

###

83037 was born as a higher-paid point-of-care A1c code in 2006–2007, but Section 113 of the 2007 Medicare/Medicaid/SCHIP Extension Act mandated that any FDA-home-use HbA1c test (83037) be paid at the same rate as standard HbA1c (83036), effective April 1, 2008. CMS implemented this via a one-time transmittal, and the parity rule remains in force.
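The parity mandate is simple enough to express directly. A minimal sketch, assuming a toy fee-schedule lookup; the dollar values are illustrative figures discussed in this post, not official rates.

```python
from datetime import date

def hba1c_payment(code, service_date, clfs):
    """Section 113 of P.L. 110-173: any FDA-home-use HbA1c test (83037,
    including 83037QW), furnished on or after April 1, 2008, is paid
    at the 83036 rate."""
    if code in ("83037", "83037QW") and service_date >= date(2008, 4, 1):
        return clfs["83036"]
    return clfs[code]

# Toy fee schedule; values are illustrative figures from this post.
clfs = {"83036": 9.52, "83037": 22.50}

print(hba1c_payment("83037", date(2008, 3, 31), clfs))  # 22.5  (pre-parity)
print(hba1c_payment("83037", date(2023, 6, 1), clfs))   # 9.52  (parity in force)
```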

#####

In 2023 CMS Part B data, 83036 accounted for $132M in payments, top labs being LabCorp and Quest - roughly 13 million tests.  For 83037, there were only $738K in payments (about 70,000 tests).
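As a back-of-envelope check, dividing payments by the ~$9.52 rate gives implied test volumes. This is a sketch only; actual counts differ because rates varied by year and locality.

```python
# Implied 2023 test volumes from Part B payments at the ~$9.52 CLFS rate.
# Back-of-envelope only: actual counts differ with year-by-year and local rates.
rate = 9.52
payments = {"83036": 132_000_000, "83037": 738_000}

implied_volume = {code: round(paid / rate) for code, paid in payments.items()}
print(implied_volume)  # 83036 -> roughly 13.9 million tests; 83037 -> roughly 78,000 at this flat rate
```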

####

Here’s the back-story in “policy-wonk” form.  I thought it was a pretty amazing job of AI-driven automatic internet research and writing.


1. Why 83037 was created

CPT 83037 was introduced in 2006 as a new code for HbA1c testing performed by a device cleared by FDA for home use (i.e., a kit-type methodology distinct from conventional lab analyzers). Centers for Medicare & Medicaid Services+1

CMS treated 83037 as a separate HCPCS code under the glycated hemoglobin NCD and opened a national coverage analysis specifically to add it to the HbA1c NCD; the AMA commented explicitly during that process.

So from the start, 83036 = standard lab HbA1c; 83037 = POC / kit-based method using an FDA-cleared “home-use” device, but done in the office.


2. CMS initial pricing: 83037 ends up higher than 83036

When 83037 appeared on the lab fee schedule, contemporaneous reporting from AAFP noted that:

  • Many carriers had been paying around $13.56 for HbA1c;

  • After CMS’ 2007 move, payment for 83037 increased to a national average of about $21.06, and AAFP had lobbied for this higher rate to support point-of-care testing in primary care. AAFP+1

So by 2007 you had:

  • 83036 – conventional lab HbA1c, lower CLFS rate (~$13.56).

  • 83037 – POC / home-device HbA1c, higher CLFS rate (~$21).

From a lab-industry perspective, that reversed the usual hierarchy (kit method > analyzer method) and raised concerns that a relatively simple waived kit had been “over-rewarded” while high-volume reference labs were locked into lower payment.


3. The “Congressional fix”: Section 113 of the MMSCHIP Extension Act (P.L. 110-173)

Those concerns resulted in a statutory “correction.”

In late 2007, Congress enacted the Medicare, Medicaid, and SCHIP Extension Act of 2007 (MMSE / MMSCHIP; P.L. 110-173). Section 113 specifically targets HbA1c test payment:

In the case of any diagnostic laboratory test for HbA1c that is labeled by FDA for home use and furnished on or after April 1, 2008, the payment rate for such test shall be the same as the payment rate for a glycated hemoglobin test identified by HCPCS 83036 (as of Oct 1, 2007). Every CRS Report+1

CMS implemented this in a One-Time Notification (Transmittal R331OTN), which:

  • Reviews the history (gapfill in 2006; crosswalk to 82985 in 2007).

  • States that, effective for tests furnished on or after April 1, 2008, the price for 83037 (and 83037QW) must be equal to the CLFS rate for 83036. Centers for Medicare & Medicaid Services+1

An AAPC / MLN “News Flash” for the clinical lab fee schedule mirrors this: for tests on or after April 1, 2008, payment for 83037/83037QW equals the 83036 rate. AAPC+1

A Congressional Research Service report on P.L. 110-173 summarizes the same change for policy staff: HCPCS 83037 (home-use HbA1c) is now paid at the rate established for 83036. Every CRS Report+1

In other words: Congress explicitly eliminated any pricing premium for 83037 over 83036.


4. How the “slipped into SCHIP” story looks in the record

Contemporaneous physician-press coverage gives a nice window into the politics:

  • Jane Anderson, writing in Family Practice News / MDedge, reports that the reimbursement cut for fast in-office HbA1c tests (83037) was “mandated by a provision in the Medicare, Medicaid, and SCHIP Extension Act of 2007,” reversing CMS’ earlier 2006–2007 decision to pay more. MDedge+1

  • AAFP coding expert Cynthia Hughes is quoted as saying the provision was “slipped into SCHIP” and that it would take another act of Congress to undo it. MDedge+1

  • The same article states that the statutory language requires point-of-care HbA1c tests billed under 83037 to be paid at the same rate as HbA1c tests billed under 83036, i.e., at the lab-test rate. MDedge+1

AAFP’s line was that this would reduce payment for POC kits from ~$21 back down to ~$13.50, making in-office HbA1c less attractive for primary-care practices and potentially reducing patient access to same-visit counseling. The Hospitalist Blog+1

The documentation doesn’t spell out “reference lab lobbying” in so many words, but the economic logic is plain:

  • Point-of-care kit (83037) had been paying significantly more per test than analyzer-based HbA1c (83036). AAFP+1

  • Section 113 eliminates that differential and produces Medicare savings, as CRS notes in its budget table. Every CRS Report

Industry accounts at the time generally describe this as larger stakeholders pushing to avoid a precedent where a waived kit method is permanently priced above the “gold standard” lab method.


5. Where that leaves coding today (for context)

Because of Section 113 and the CMS implementation:

  • 83036 remains the dominant HbA1c code for laboratory-based testing.

  • 83037 is still used for rapid, CLIA-waived point-of-care HbA1c in a physician’s office with an FDA-cleared device, but its fee schedule amount is locked to the 83036 rate under federal statute for any HbA1c test labeled for home use and furnished on or after April 1, 2008. Centers for Medicare & Medicaid Services+1

That’s why, despite the coding distinction (lab vs POC), you do not see a durable national price premium for 83037 over 83036 on the CLFS.


6. Policy take-aways for experts

For a policy audience, this episode illustrates several recurring themes:

  • Coding vs. Payment vs. Methodology: Creating a separate CPT code (83037) for a device-specific or site-of-service-specific method can unintentionally create a payment wedge between similar clinical services (lab analyzer vs waived kit). CMS initially amplified this by crosswalking to a higher-priced code (82985). Centers for Medicare & Medicaid Services+1

  • Stakeholder conflict and “precedent anxiety”: Physician groups (e.g., AAFP) lobbied for higher 83037 payment to support POC testing. Other stakeholders—most plausibly large reference labs and cost-conscious policymakers—were concerned about establishing a precedent where kit methods or POC settings draw systematically higher reimbursement than core lab tests. The legislative fix can be read as Congress siding with the “parity” position.

  • Use of narrow statutory riders: Section 113 is a classic micro-targeted provision—one paragraph in a large Medicare/SCHIP bill that permanently fixes the relationship between two lab codes. CRS explicitly flags this as a discrete “payment rate for certain diagnostic laboratory tests” provision affecting HCPCS 83036/83037. Every CRS Report+1

  • Durability of statutory corrections: As Hughes noted, once you’ve hard-wired the relationship into statute (“same rate as 83036”), reversing it requires another act of Congress, not just a CMS rule change. MDedge