Wednesday, December 17, 2025

A Little More About WISeR: Medicare's Rapidly-Arriving Pre-Authorization by AI

WISeR is the recently announced, rapidly arriving demonstration program for AI-based prior authorization in Medicare Part B.

None of the target services are labs, like genomics.  But that could happen, so this policy zone is worth watching.

See a current update, December 10, 2025, by Corner et al. at the Sheppard Mullin law firm.   Find it at National Law Review:

https://natlawreview.com/article/new-wiser-model-aims-leverage-ai-technology-reduce-costs-and-inefficiencies




Rakha et al. 2026: New International Consensus Criteria for HER2 Low & Ultralow

For several years, there's been steady news about using drugs or combinations in HER2-ultralow categories.

See a new consensus article by Rakha et al., "International Expert Consensus Recommendations for HER2 Reporting in Breast Cancer: Focus on HER2-LOW and ULTRALOW Categories."

It's open-access at Modern Pathology.

https://www.modernpathology.org/action/showPdf?pii=S0893-3952%2825%2900223-6

The provenance is:

On behalf of the UK National Coordinating Committee of Breast Pathology, the Association of Breast Pathology, the European Working Group for Breast Screening Pathology and the International Society of Breast Pathology.

The abstract states in part,

  • Updated clinical management guidelines now, therefore, incorporate the spectrum of HER2 results into treatment selection algorithms in the metastatic setting. 
  • To align histopathologic practice with these developments, the College of American Pathologists has issued a new biomarker-reporting template that recommends explicit distinction between IHC 0/absent membrane staining and IHC 0+/with membrane staining. 
    • Key concerns among pathologists include assay variability, scoring reproducibility, and quality assurance standards for accurately detecting such low levels of HER2 expression. 
  • This manuscript provides expert consensus, evidence-based practical recommendations for identifying and reporting tumors with HER2-low and HER2-ultralow expression. 
  • We emphasize standardized testing protocols, validated assays, robust internal and external controls, and focused training for pathologists. A universal structured pathology report is proposed to highlight the accurate distinction between IHC 0 (null), IHC 0+ (ultralow), and HER2-low expressions.

Tuesday, December 16, 2025

MolDx on the Podcast Circuit: Gabriel Bien-Willner @ DIASORIN

The DiaSorin podcast, "Beyond the Result," brings us an interview with Dr. Gabriel Bien-Willner. It's titled "The Payer Perspective: Driving Diagnostic Stewardship and Smarter Coverage Policies."

Find it at YouTube, here:

https://www.youtube.com/watch?v=KfXZAd9N5qc

ai pic

Here's an AI summary of the 40-minute interview.

###

MolDx’s Dr. Gabriel Bien-Willner on
Building Smarter Medicare Coverage for Molecular Diagnostics

In a recent episode of Beyond the Result, DiaSorin’s Luminex division podcast, host Angelo Rago interviewed Dr. Gabriel Bien-Willner, Chief Medical Officer of MolDx, the Medicare program that has become a central force in U.S. molecular diagnostics coverage. The conversation offered a rare inside look at how MolDx has reshaped reimbursement policy to better align innovation, evidence, and diagnostic stewardship.

Dr. Bien-Willner explained that MolDx was launched in 2012 by Palmetto GBA, a Medicare Administrative Contractor, to address a fundamental problem: traditional CPT coding could not adequately describe what modern molecular tests actually do. In response, MolDx introduced a test registry and Z-codes, requiring laboratories to precisely define each test—its analytes, methods, intended use, and clinical context—before Medicare reimbursement. This infrastructure allows payers to understand not just that a test was billed, but what was tested and why.

Over time, MolDx has expanded from human DNA and RNA testing to a broader range of molecular diagnostics and is now used across 28 states, with multiple Medicare contractors adopting its shared policies and tools. Bien-Willner emphasized that MolDx’s approach is not about limiting innovation, but about ensuring that new tests demonstrate they are “reasonable and necessary”—Medicare’s statutory standard—through credible evidence.

A major theme of the discussion was diagnostic stewardship: using the right test for the right patient at the right time. Bien-Willner described how MolDx policies have evolved away from rigid, test-specific coverage toward foundational, future-proof frameworks. These policies define evidence thresholds rather than naming specific products, allowing coverage to expand naturally as science matures. He cited next-generation sequencing in oncology and minimal residual disease testing as examples where MolDx policies have successfully accommodated rapid innovation without repeated rewrites.

Addressing concerns that reimbursement can stifle adoption, Bien-Willner argued the opposite: well-designed policy enables innovation by separating high-value diagnostics from waste and abuse. MolDx, he said, aims to act not as a blunt instrument, but as a scalpel, protecting the Medicare trust fund while ensuring that effective, evidence-based diagnostics can thrive.

For laboratories and industry, his advice was clear: engage payers as partners, understand their evidentiary standards, and recognize that sustainable innovation depends as much on policy design as on scientific breakthroughs.

SIDEBAR

5 Interesting, Slightly Surprising Takeaways

  1. MolDx exists partly because CPT is  [mathematically] “degenerate.” Bien-Willner’s blunt point: for molecular testing, a CPT code often can’t tell a payer what was actually tested—so the claim can’t be meaningfully adjudicated without extra structure.

  2. The “Z-code” is basically Medicare’s “tell me exactly what this test is” tag. It’s not just paperwork—it’s how MolDx ties coverage decisions to a specific lab’s specific LDT, rather than to a broad, ambiguous billing code.

  3. He reframed diagnostic stewardship as “separation of powers.” Providers treat; payers pay. In his analogy, the payer is a built-in “sanity check” like the friend who tells you the dealership’s $4,000 repair might be overkill.

  4. MolDx changed its philosophy on panels—more permissive, but also more logical. He described moving away from “if any analyte isn’t necessary, the whole panel fails” toward “a panel should include everything reasonably needed—avoid partial testing + stacking.”

  5. He’s trying to write coverage policy like software. His “foundational policy” approach uses conditional logic (“if X and Y, and one of A/B/C…”) to make policies future-proof, so new tests can qualify as evidence evolves without rewriting the whole LCD every year.
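Item 5 can be made concrete with a toy sketch. This is purely illustrative: the criteria names (`analytical_validity`, `prospective_trial`, and so on) are invented for this post, and actual MolDx foundational LCDs are prose policies, not executable code.

```python
# Hypothetical sketch of "foundational policy" conditional logic.
# Criteria names are invented; this does not reproduce any real MolDx LCD.
def covered(test: dict) -> bool:
    # Threshold conditions: both analytical and clinical validity required...
    has_validity = test["analytical_validity"] and test["clinical_validity"]
    # ...plus at least one qualifying evidence pathway ("one of A/B/C")
    qualifying_pathway = any([
        test.get("prospective_trial", False),
        test.get("real_world_evidence", False),
        test.get("equivalence_to_covered_test", False),
    ])
    return has_validity and qualifying_pathway

# A new test can qualify as its evidence matures, without rewriting the policy.
new_test = {"analytical_validity": True, "clinical_validity": True,
            "real_world_evidence": True}
print(covered(new_test))
```

The point of the structure is the one Bien-Willner makes: the policy names evidence thresholds and pathways, not products, so new entrants are evaluated against the same logic rather than triggering an LCD rewrite.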

MedPAC 2024: How Medicare Thinks About AI Reimbursement

We have more and more emphasis in genomics on AI and bioinformatics, so the way CMS thinks about software is relevant.

A current article by Scott Gottlieb about healthcare and AI contains a link back to a 2024 chapter by MedPAC on how CMS thinks about software and AI.

Find it here; it's an up-to-date 32-page PDF.

https://www.medpac.gov/wp-content/uploads/2024/06/Jun24_Ch4_MedPAC_Report_To_Congress_SEC.pdf



From ESMO: Requirements for AI-based Biomarkers in Oncology

At LinkedIn, Josie Hayes flagged an important new consensus article from the European Society for Medical Oncology. Her note is here.

Hayes writes:

What if you could screen 100,000 patients for a low prevalence biomarker with histology slides, then confirm only the positives with molecular tests?

That's the promise of Class B AI biomarkers—and ESMO's new framework just gave us the roadmap to get there.

Class B biomarkers use AI as an indirect measure of known biomarkers through alternative methods. 

Example: Analyzing H&E slides to predict MSI status using AI, then confirming positives with molecular testing.

Tempus is already doing this with their prostate MSI screen (p-MSI): AI on pathology images flags likely MSI-high cases before running expensive sequencing.

The economics are stunning: 

→ Screen broadly at low cost 

→ Confirm selectively with precision tests 

→ Deploy globally without breaking the budget
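
The economics Hayes describes can be sketched numerically. Every number below is a hypothetical placeholder (the prevalence, costs, and the AI screen's sensitivity/specificity are not taken from Hayes, Tempus, or ESMO); the point is only the shape of the two-stage arithmetic.

```python
# Illustrative two-stage (AI pre-screen, then molecular confirmation) economics.
# All inputs are hypothetical placeholders, not real-world figures.
def triage_cost(n, prev, sens, spec, ai_cost, mol_cost):
    """Total cost and true positives found under AI triage + confirmation."""
    positives = n * prev
    # AI flags true positives (sens) plus a fraction of negatives (1 - spec)
    flagged = positives * sens + (n - positives) * (1 - spec)
    found = positives * sens                 # confirmed by molecular testing
    total = n * ai_cost + flagged * mol_cost
    return total, found

n = 100_000
total, found = triage_cost(n, prev=0.01, sens=0.95, spec=0.90,
                           ai_cost=10, mol_cost=1000)
universal = n * 1000                         # molecular testing for everyone
print(f"Universal molecular testing: ${universal:,.0f}")
print(f"AI triage + confirmation:   ${total:,.0f}, finding {found:.0f} of 1,000 cases")
```

With these placeholder numbers, triage costs roughly $11.9M versus $100M for universal molecular testing, at the price of missing the ~5% of true cases the AI screen fails to flag. That trade-off is exactly why the ESMO framework frames Class B tools as enrichment, not replacement.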

She points us to a new article, Aldea et al., in Annals of Oncology:

https://www.annalsofoncology.org/article/S0923-7534(25)06267-2/fulltext

###

AI CORNER

###

ChatGPT 5 discusses Aldea et al. at length. Then ChatGPT compares this paper, a framework for AI biomarkers, with the gigaTIME paper in Cell this month (Valanarasu et al.), which reports actual data with a particular technology.

###

What the paper is and why it matters

Aldea et al. present EBAI (ESMO Basic Requirements for AI-based Biomarkers in Oncology), a consensus framework intended to close the widening gap between AI biomarker development and routine clinical adoption. The authors’ starting premise is pragmatic: regulatory authorization alone does not resolve the issues that actually determine whether an AI biomarker is used in practice. These include generalisability across sites, calibration stability, integration into clinical workflows, accountability when models err, and downstream implications for reimbursement and clinical trust.

The paper arrives at a moment when oncology faces a paradox. AI biomarkers are proliferating rapidly, often with striking performance metrics, yet clinical adoption remains cautious. Aldea et al. argue that this hesitation reflects not conservatism but category error: AI biomarkers are being treated as a single class when, in fact, they differ fundamentally in risk profile, evidentiary burden, and intended role in care. EBAI is meant to provide a shared vocabulary that aligns technical development with clinical expectations.

The framework was developed using a modified Delphi process involving 37 experts across oncology, pathology, radiology, biostatistics, ethics, regulation, and patient advocacy. Consensus was reached over four voting rounds, emphasizing the paper’s role as a field-level alignment document rather than a single-author position.


The central contribution: a use-based classification of AI biomarkers

The paper’s most important innovation is its three-class taxonomy (A, B, C), which explicitly links what an AI system does to the level and type of evidence required for its use. This classification reframes validation as context-dependent rather than universal.

Class A systems automate or standardize measurement of an already accepted biomarker using the same data modality that a human would interpret. These tools are conceptually closest to traditional pathology automation and therefore carry the lowest incremental risk. Because the output is directly auditable and maps onto existing clinical practice, the central validation question is concordance rather than discovery. Examples include automated PD-L1 or HER2 scoring on immunohistochemistry slides, tumor-infiltrating lymphocyte quantification, or residual tumor burden estimation following neoadjuvant therapy.

For Class A systems, Aldea et al. emphasize analytical validation and agreement with expert readers. Replacement of human scoring is considered reasonable when AI error rates fall within known inter-observer variability, rather than demanding unattainable perfection.

Class B systems represent a more disruptive—and economically powerful—category. These models predict a known biomarker using a different input modality, most commonly using H&E histology to infer molecular or transcriptomic features. Crucially, the intended use is usually pre-screening or triage, not full replacement. This is the category highlighted by Josie Hayes: AI can screen very large populations at low marginal cost, reserving expensive molecular testing for those most likely to benefit.

The paper draws a sharp conceptual line here. Using AI to enrich then confirm is treated as a fundamentally different—and lower-risk—proposition than replacing molecular testing outright. Validation expectations reflect this distinction. Analytical validation against a gold-standard reference test is mandatory, and high-quality real-world or retrospective trial data are acceptable. Many experts favor additional retrospective clinical validation, particularly if AI output could influence treatment decisions. Prospective “silent trials,” in which AI runs in the workflow without affecting care, are discussed as a trust-building step but are not universally required.

A key limitation is stated explicitly: when therapy selection depends on mutation subtype rather than gene-level status, current image-based predictors often lack sufficient granularity. In such cases, Class B systems should remain screening tools rather than aspirational replacements.

Class C systems are the most conceptually ambitious. These models derive novel biomarkers directly from clinical outcomes rather than predicting existing markers. The authors divide Class C into prognostic (C1) and predictive (C2) systems. Prognostic tools estimate outcomes such as recurrence or survival independent of treatment, while predictive tools aim to identify differential benefit from one therapy versus another.

For predictive Class C systems, the evidentiary bar is especially high. Demonstrating treatment interaction requires comparison across treatment arms or against an established predictive biomarker. The paper points to examples that have undergone randomized trial validation and have begun to enter clinical guidelines, underscoring that such adoption is possible—but demanding.


What ESMO says must be demonstrated

Across all classes, the framework converges on three essential requirements that cannot be waived. These are best understood not as technical formalities but as safeguards against misplaced confidence.

First, ground truth must be clearly defined. This includes how labels were generated, who performed them, whether readers were blinded, and how disagreements were adjudicated. Second, performance must be evaluated in a way that matches clinical intent, rather than relying on generic accuracy metrics. Third, generalisability must be demonstrated, with stability shown across institutions, scanners, laboratory protocols, and patient populations.

Beyond these core elements, the paper strongly encourages fairness auditing within validated populations and practical explainability checks. Importantly, explainability is framed not as philosophical transparency but as a diagnostic tool to detect shortcut learning or spurious correlations, using techniques such as occlusion testing or confounder stress-tests.


Moving beyond headline metrics

Aldea et al. are explicit in discouraging the field’s fixation on single summary statistics such as AUC. Instead, they advocate multi-dimensional performance reporting aligned to clinical use. This includes discrimination metrics, calibration assessment, incremental value over existing standards, and explicit evaluation of clinical utility.

Calibration receives particular emphasis. A well-calibrated model that clinicians can trust at specific decision thresholds is treated as more valuable than a marginally higher AUC with unstable probabilities. Decision curve analysis is highlighted as a practical way to connect model performance to real clinical trade-offs.
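For readers who haven't used it, decision curve analysis reduces to a simple formula: net benefit at a threshold probability p_t is TP/n minus FP/n weighted by the odds p_t/(1 − p_t). A minimal sketch with made-up toy data (not from Aldea et al.):

```python
# Decision-curve net benefit at a chosen threshold probability p_t.
# Toy labels and probabilities for illustration only.
def net_benefit(y_true, y_prob, p_t):
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= p_t and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= p_t and y == 0)
    return tp / n - (fp / n) * p_t / (1 - p_t)

y_true = [1, 1, 0, 0, 0, 1, 0, 0]
y_prob = [0.9, 0.6, 0.4, 0.2, 0.7, 0.8, 0.1, 0.3]
print(round(net_benefit(y_true, y_prob, p_t=0.5), 3))  # → 0.25
```

The weighting term is what ties the statistic to clinical trade-offs: a clinician willing to act at a low threshold tolerates false positives cheaply, while a high threshold penalizes them heavily, which is why calibration at the decision threshold matters more than a marginally higher AUC.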

The authors also stress the importance of a priori sample size justification and independent validation cohorts. Models should not be validated on data that overlap—directly or indirectly—with training sources, and performance claims should be scoped to the populations actually studied.


Generalisability as a prerequisite, not an aspiration

One of the paper’s strongest messages is that AI biomarkers should not be casually “ported” across cancer types, specimen preparations, scanners, or institutions. Each such shift represents a new operating environment that requires evidence. Generalisability is treated as a first-class requirement, not a post-marketing hope.


Replacement versus pre-screening: an explicit risk calculus

Throughout the paper, intended use remains the organizing principle. For pre-screening applications, the relevant benchmark is whether human plus AI outperforms human judgment alone. For replacement, error rates must match or exceed the reliability of the existing standard of care, including accepted variability in molecular testing.

Notably, even when Class B systems approach molecular-test-level performance, many experts still favor confirmatory testing in the near term. This reflects sociotechnical realities—trust, liability, and workflow—rather than purely statistical concerns.


After adoption: monitoring is expected, not optional

EBAI treats post-deployment monitoring as part of validation, not an afterthought. Continuous performance surveillance, drift detection, and bias monitoring are recommended, with predefined remediation pathways. When retrospective validation is insufficient, limited prospective evaluation within real workflows is encouraged.


Ethics, regulation, and human oversight

While not a regulatory manual, the framework aligns closely with emerging AI governance principles. It emphasizes clinician oversight for high-impact decisions, transparency to patients when AI materially influences care, and careful alignment between validated populations and real-world use. Fairness is framed pragmatically as fitness for the intended population rather than abstract demographic parity.


Strategic interpretation in the context of gigaTIME and Hayes’ commentary

If gigaTIME represents the technological leap of foundation models applied to H&E pathology, Aldea et al. provide the governance and deployment logic. The paper formalizes Class B AI biomarkers as a legitimate and powerful category, particularly for population-scale screening and trial enrichment, while warning against prematurely claiming replacement.

The unifying message is not that AI biomarkers work, but that they must be classified, validated, deployed, and monitored in ways that match their intended role. This framing is what allows impressive AI performance to translate into clinically adoptable and economically scalable biomarker infrastructure.

 ###

###

###

ESMO PAPER (Aldea) vs CELL PAPER (gigaTIME, Valanarasu)

You’re dealing with two fundamentally different genres of paper, serving complementary but non-overlapping purposes:


Aldea (ESMO): policy, framing, and clinical governance

The Aldea / ESMO EBAI paper is not trying to advance the science of AI models. Instead, it is doing something rarer and arguably harder: stabilizing the conceptual ground so that AI biomarkers can move from impressive demonstrations into clinical systems without constant category errors.

Its unit of analysis is intended use, not architecture or performance ceilings. The paper assumes that powerful models already exist (and will continue to improve) and asks:

  • What kind of AI biomarker is this, really?
  • What evidence is proportionate to the clinical risk it introduces?
  • When is screening acceptable, and when is replacement a bridge too far?
  • What does “validation” actually mean once you leave the lab?

In that sense, Aldea is closer to clinical doctrine, health policy, and systems engineering than to computer science. It is explicitly normative: it tells the field how to behave if it wants trust, adoption, and scale.


Valanarasu et al. (Cell): scientific discovery and technical proof

By contrast, Valanarasu et al. (gigaTIME) is a pure science research paper, published in Cell for exactly that reason. Its goal is to show that something previously thought infeasible is, in fact, possible.

Its core scientific claims are:

  • H&E morphology contains enough latent signal to reconstruct spatial proteomic patterns.
  • A multimodal, foundation-style model can learn a cross-modal translation from H&E to multiplex immunofluorescence.
  • Once that translation exists, you can generate virtual populations at unprecedented scale, enabling discoveries that were previously blocked by data scarcity.

The unit of analysis here is capability:

  • Can we do this at all?
  • Does it generalize?
  • What new biological insights fall out once we can?

This is not a clinical deployment paper, and it is not trying to be. There is no pretense that gigaTIME is a “biomarker” in the regulatory or ESMO sense. It is an enabling scientific instrument.


Why they are different — and why they belong together

Seen clearly, the papers are not in tension at all. They sit at different layers of the same stack:

  • gigaTIME (Cell) lives at the capability layer:
    What can foundation models extract from routine pathology that humans cannot?
  • Aldea / ESMO (Annals of Oncology) lives at the deployment layer:
    Once such capabilities exist, how do we classify, validate, and safely use them in medicine?

A useful way to say it bluntly:

gigaTIME expands the possibility space.
EBAI constrains the permission space.

That is exactly why your instinct to review them together — but not conflate them — is correct.


The key connective insight (without collapsing the categories)

What gigaTIME enables, Aldea helps discipline.

  • gigaTIME makes Class B–style economics (image-first, molecular-scale inference) plausible at population scale.
  • Aldea explains why those same tools should initially be framed as screening, enrichment, or discovery instruments, not instant replacements for molecular assays.
  • gigaTIME shows that H&E can be a gateway to multiplex biology.
  • Aldea explains when and how such gateways can be allowed to influence care.

Put differently:
Cell papers create new worlds; ESMO papers decide which doors you’re allowed to open, and under what supervision.

 


Pierre Arsene: LBX Stocks Raise $50B in Market Cap This Year

I haven't tried to replicate the math, but at LinkedIn, Pierre Arsene reports that liquid biopsy stocks generated $50B in market cap this year. He's including companies whose market is only partially in liquid biopsy, but still.

His article is here:

https://www.linkedin.com/posts/pierrearsene_would-you-believe-that-liquid-biopsy-stocks-activity-7406596554528247808-nLe7/

His market basket is given as:

  • Exact Sciences
  • Natera
  • Tempus AI
  • Guardant Health
  • Adaptive Biotechnologies
  • Grail
  • GeneDx
  • BillionToOne
  • Caris Life Sciences

"Daniel G" made an interesting remark, quoting in part:

  • Over the same period, many liquid biopsy and precision diagnostics companies have either disappeared from public markets, been acquired at distressed valuations, or quietly failed to translate scientific promise into sustainable, reimbursed clinical businesses. [aka survivor bias]

  • Capital is not rewarding liquid biopsy per se — it is rewarding reimbursement visibility, guideline inclusion, operating scale, and disciplined execution. The winners built infrastructure, not just technology.




Sunday, December 14, 2025

Unprecedented; SENATE Wants to Hear Your Coding - Innovation Problems

Update. Senator Cassidy has exchanged letters (and press releases) with the AMA on a number of coding topics (RUC, CPT membership, etc.); I used that in this blog. It turns out this newly opened portal is tuned to questions about AMA fees and code-use contracting. However, they also provide an email for the project, coding@help.senate.gov.


My blogs here on October 29 and again on December 2 flagged little-known U.S. Senate investigations of the U.S. healthcare coding system.

Now the Senate has opened a website for concerns about the coding system and innovation. I cut and paste from the Senate release below. The submission webpage gives a due date of January 16, 2026.

###

https://www.help.senate.gov/rep/newsroom/press/chair-cassidy-continues-ama-inquiry-seeks-stakeholder-feedback

12.09.2025

Chair Cassidy Continues AMA Inquiry, Seeks Stakeholder Feedback

WASHINGTON – U.S. Senator Bill Cassidy, M.D. (R-LA), chair of the Senate Health, Education, Labor, and Pensions (HELP) Committee, is seeking information from stakeholders regarding the American Medical Association’s (AMA) monopoly of Current Procedural Terminology (CPT®) codes and its impact on patients, providers, and health care costs. Cassidy is asking stakeholders with relevant experience and knowledge of CPT® coding contracts with the AMA to inform the Committee’s inquiry by responding to this questionnaire. [If any problem, start at the "Newsroom" link above.]



As chair of the HELP Committee, Cassidy is using all tools at his disposal to lower costs for American patients. Thus far, the AMA evaded questions and failed to cooperate with Cassidy’s inquiry. If the AMA does not respond in a fulsome and transparent manner by December 15, 2025, the Chairman is committed to finding answers by other means.

“The federal government mandated the use of CPT codes. This creates the potential for abuse in that if someone has to buy your product, you can charge them what you want,” said Dr. Cassidy. “There may be nothing wrong here, but we should get answers to make sure the CPT system is working for the American patient and for the American health care system.”

Saturday, December 13, 2025

Scott Gottlieb in JAMA: AI Can Solve Productivity Challenges

I enjoyed this article a lot, and it's very germane to current hot topics at CMS (and AMA). Here's an AI picture, an AI summary, and a link to Gottlieb's open-access article.

###

https://jamanetwork.com/journals/jama-health-forum/fullarticle/2842893

ai pic


Scott Gottlieb, JAMA Health Forum (2025): 

How AI Will Help Solve Medicine’s Productivity Challenges

His core argument:
Health care appears “unproductive” not because medicine hasn’t improved, but because productivity is measured incorrectly—by counts of visits and procedures, not by health outcomes. This creates a classic case of Baumol’s cost disease, where human-intensive professions (medicine, education, performing arts) improve quality but fail to show output-per-hour gains, suppressing compensation growth and driving costs upward.

What AI actually changes (beyond hype):
Gottlieb’s key claim is that AI must move from augmentation to substitution. Merely making physicians “smarter” does not solve the productivity problem. True productivity gains require AI systems that safely replace discrete physician tasks—diagnostic interpretation, triage, routine decision-making—so that one clinician can effectively care for more patients per unit time.

Why regulation is the bottleneck:
FDA’s framework assumes static devices, not learning systems. Gottlieb argues for a regulatory model that allows controlled algorithmic drift, with predefined performance thresholds and re-review triggers. Without this, developers avoid FDA entirely by labeling tools as “research use only,” sharply limiting real-world deployment.

Why reimbursement is the bigger bottleneck:
CMS does not pay software vendors directly and ties physician payment to labor inputs. Ironically, AI that reduces physician labor can lower RVUs, triggering cuts to physician income.  Physicians must buy productivity-enhancing AI out of their own reimbursement, often with no financial upside.

Policy solutions proposed:

  • Bundle AI into episode payments

  • Expand OPPS pass-through mechanisms

  • Create new benefit categories (e.g., via the Health Tech Investment Act of 2025)

The unifying goal is to align AI adoption with physician compensation growth, finally breaking the cost-disease trap.

Bottom line:
If AI is allowed to replace physician effort (not just decorate it), medicine could finally follow the same productivity curve as other tech-enabled industries—improving outcomes, stabilizing costs, and restoring physician earnings growth.

##

##

Hidden as an italicized hotlink rather than a full biblio citation, Gottlieb cites MedPAC 2024, an up-to-date 32-page review of CMS issues with software pricing.


What the Hell is MAHA ELEVATE?

You've tried to at least recognize what the new programs ACCESS and TEMPO mean (at CMS and FDA, respectively, but they interact...).

Here's yet another big CMS demonstration/innovation program, MAHA ELEVATE.

I'll give a summary of Trey Rawles' article below.   If you want to start with Trey's whole article, here it is.

https://www.onhealthcare.tech/p/cms-just-opened-a-100m-door-for-lifestyle

##

And here's the cut-to-the-chase AI summary:

Trey Rawles argues that CMS’s new MAHA ELEVATE Model represents a quiet but consequential shift in Medicare policy that most investors will overlook. While the program’s $100 million budget—spread across up to 30 three-year cooperative agreements—is small relative to Medicare’s scale, its significance lies in what CMS is choosing to fund. For the first time, the CMS Innovation Center is explicitly testing functional and lifestyle medicine interventions—nutrition, physical activity, stress, sleep, and social connection—that are not currently covered by Original Medicare.

Rawles emphasizes that MAHA ELEVATE is not a coverage decision but an evidence-generation exercise designed to determine whether whole-person lifestyle interventions can improve outcomes and reduce costs for Medicare beneficiaries with chronic disease. Every proposal must include nutrition or physical activity, and three awards are reserved for dementia-focused interventions, signaling CMS’s priorities. Awards average roughly $1.1M per year, with CMS actively involved in program design, data collection, and quality oversight.

The strategic value extends far beyond the grant dollars. Winning an award provides CMS validation, access to Original Medicare beneficiaries, and—most importantly—a potential pathway to future Medicare coverage if strong cost and quality evidence emerges. Rawles argues that companies with documented scientific evidence, real-world outcomes data, HIPAA-ready infrastructure, and integrated, multi-modal interventions are best positioned to win, favoring mature digital health and lifestyle medicine platforms over early-stage wellness apps.

For investors, MAHA ELEVATE de-risks evidence generation for a historically under-reimbursed category and signals a broader CMS pivot toward prevention and lifestyle medicine as complements to conventional care.

Here's Fierce Healthcare.
Here's a time-travel article, back to May 2025, about what HHS wanted to achieve in chronic care innovation.  

Will FDA and Other Digital Pathology Advances Outrun AMA CPT Rules in 2026?

Header:  AMA CPT may soon face a crossroads for modernizing coding of digital pathology.

##

Update:  The U.S. Senate wants to hear your concerns about the coding system and blocking innovation - here.

##

AMA CPT has had limited means of coping with digital pathology and AI, and in some ways has even regressed. There is a single Category I CPT code for making one immunostained slide and interpreting it with computer assistance (88361).

For several years, AMA also granted about 10 codes involving whole slide imaging in the PLA series, but labs that have tried to replicate this have told me that the rules have changed. If the biomarker requires immunohistochemistry, it may be judged to include "physician work" and therefore ineligible for PLA coding. (I worked with one lab whose service didn't involve any physician work, and it was rejected anyway for "requiring physician work.") But if the service is based on conventional stains, like H&E, it may be judged as not having a bona fide biomarker, and therefore ineligible for PLA. If some of the service is performed offsite, e.g., a cloud algorithm allowed by CAP distributed testing (here), or a separately prepared DICOM file, it may be judged as a multi-lab, not single-lab, service and therefore ineligible for PLA. If it involves "AI" (and NGS has already involved massive amounts of bioinformatics), it may get shunted into another coding system, a sort of registry for SaMD. It's like a game of Chutes and Ladders, where there are lots of chutes on every side and not a ladder in sight.

##

Meanwhile, outside of AMA CPT policy, the world is moving fast.   Here are just four examples:

  1. Rakaee's report of deep learning immunotherapy guidance from digital imaging of H&E, reported in JAMA Oncology.  Here.
  2. FDA authorizes a clinical trial biomarker (which may well become a companion diagnostic), used in NASH drug trials and based on software reading of trichrome and H&E sections.  See the FDA announcement here.  See the project's webpage for associated documents here.  There, see particularly the FDA integrated review (PDF, 52 pp).  See discussion at LinkedIn here.
  3. See similarly Valanarasu et al., in Cell, on GigaTIME, a sophisticated proteomic reproduction from AI-assisted readings of H&E slides.  Here.  See insights from two authors at LinkedIn, here and here.  Also Hayes.
  4. Meanwhile, and also based on FDA drug trials, see the novel "QCS" AI-assisted immunomarker reading, which can do things that humans simply can't do.  It's a collaborative project among Roche, AstraZeneca, and FDA; entry point here.
###
AI CORNER
###
ChatGPT 5.2 reads the essays and articles above, and summarizes.
###

Is scientific reality outrunning CPT structures?

Friday, December 12, 2025

CMS, FDA: Keeping Up with TEMPO and ACCESS

"Last week, CMS and the FDA announced coordinated efforts to widen access to technology-enabled care for Medicare beneficiaries with certain chronic conditions."  (Orrick)

###

CMS and FDA unveiled separate but coordinated efforts last week to catalyze advanced informatics applied to everyday health care.  The ACCESS program will support use of such services, like remote physiological monitoring, by paying for improvements in clinical outcomes like weight and blood pressure.

The TEMPO program will provide structured and monitored waivers to certain kinds of e-health technologies, in particular in the context of the Medicare ACCESS program.

You can google 50 news articles in a second, but here are three law firm reviews I found helpful.  Further below, I use AI to summarize a 70-minute panel, and to investigate how the CMS press release about the panel differed from the actual transcript.

JD SUPRA - MCDERMOTT

Jeffrey Davis reviews the two programs.

https://www.jdsupra.com/legalnews/access-tempo-the-2-door-entryway-to-1472276/

At ORRICK, another joint review of ACCESS and TEMPO, by Joseph, Ravitz, Sherer, and Johnson (the author list itself sounds like a law firm...).

https://www.orrick.com/en/Insights/2025/12/CMS-and-FDA-Announce-Initiatives-to-Expand-Access-to-Digital-Health-Technologies

At Hyman Phelps, see an article by Jennifer Newberger on ACCESS and TEMPO:

https://www.thefdalawblog.com/2025/12/you-better-move-fast-acess-to-tempo/

MedTechDive also reviews ACCESS and TEMPO together on December 10:

https://www.medtechdive.com/news/fda-digital-health-pilot-cms-tempo/807374/
_______________________

For an original CMS document, see the CMS newsroom, December 5, writing:

Readout: CMS Convenes Leaders Across Government, Clinician Societies, Digital Health Industry to Discuss Innovation Center ACCESS Model

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-leaders-across-government-clinician-societies-digital-health-industry-discuss

See a one-hour video of the panelists:

https://www.youtube.com/watch?v=vM1guSugkQU


###

Finally, see an entirely separate but also very innovative CMMI program, MAHA ELEVATE, discussed in detail by Trey Rawles here.

###

AI CORNER

###

For the next section, we let ChatGPT summarize the one-hour panel conference stored on YouTube, above.  Then we ask ChatGPT to compare the CMS press release about the panel with the actual content of the panel.

AI Part One:
ChatGPT summarizes the 1-hour panel

The 70-minute event was essentially a three-leg argument for a coordinated CMS+FDA “on-ramp” for digital, technology-enabled chronic care in Original Medicare:

  1. Infrastructure (Health Tech Ecosystem: interoperability, identity, provider directory)

  2. Payment (CMMI ACCESS: outcome-aligned payments; “pay for improvement/control, not app usage”)

  3. Regulatory sandbox (FDA TEMPO: time-bounded enforcement discretion + structured real-world data capture)

What made it interesting for experts is the way speakers repeatedly framed the problem as not “we lack tech,” but we lack (a) scalable payment rails and (b) regulatory clarity for tools that operate between traditional visit-based medicine and continuously measured home-life medicine.


1) Amy Gleason: the “Ecosystem” pitch is about trust infrastructure, not just APIs

Gleason’s segment is less about ACCESS than about CMS trying to finish the last 20% of interoperability that regulation hasn’t delivered: “it should work on paper” but doesn’t in real workflows. Her “sticky” focus was trust:

  • National Provider Directory as routing infrastructure (apps know where to query).

  • Modern digital identity (medicare.gov identity layer; “trust that the person asking is who they say they are”).

  • The rhetorical “47 patient portals” anecdote is doing policy work: it reframes interoperability as consumer experience failure, not vendor compliance.

The important subtext: CMS is positioning itself as an enabler of an app ecosystem (patient-chosen apps, QR/smart health cards, conversational AI layer) rather than a single CMS-owned solution. It’s “voluntary, fast, months not years,” with “pledges” and working groups—a deliberate contrast with the historically slow regulatory path.


2) Dr. Oz: ACCESS as a cost-curve play with a “make it a business” posture

Oz makes two notable policy claims:

  • The macro problem is health spending growing ~8–9% vs GDP, and he links that to information/transparency and patient empowerment (market-like behavior).

  • He explicitly says CMS wants innovators to make money building solutions—but only if outcomes move. This is unusually direct language in a CMS forum.

He then sketches ACCESS as a 10-year voluntary model providing an “on-ramp” for digital health in Original Medicare, explicitly tying it to rural policy and workforce scarcity. The subtext: ACCESS is being sold as Medicare modernization and as an industrial policy lever (“open for business”), while still needing OACT-friendly “savings” logic.


3) Jim O’Neill: HHS AI Strategy 1.0 + “regulatory clarity” + data visibility

O’Neill’s remarks are the clearest “whole-of-HHS” frame:

  • Internal AI adoption: custom LLM(s) deployed to 65,000 employees, explicitly framed as productivity and speed (fraud, claims adjudication, grant reviews, FDA review acceleration).

  • External vision: three pillars—reimbursement pathways, regulatory clarity, and R&D targeting.

His most pointed (and controversial) line for experts is the claim that a “key pillar” is visibility into de-identified patient data for payers/patients/providers to create a continuous improvement feedback loop. That’s a strong signal about real-world data as policy infrastructure, and it implicitly nudges debates about governance, consent, and secondary use.

He also explicitly name-checks TEMPO before FDA speaks, making clear it’s meant to be fast access + controlled learning.


4) ACCESS model mechanics (Jacob Shiff): outcome-aligned payments + referral rails + public outcomes

Shiff’s 10 minutes are the “operational kernel,” and there are a few design choices experts will notice:

  • Outcome-aligned payment: predictable payments with full earn-out when patients improve relative to baseline (BP, pain symptoms, etc.).

  • Tracks: grouped conditions for integrated care (signals intent to avoid “one app per diagnosis” fragmentation).

  • Care modality flexibility: virtual/asynchronous/in-person—explicit anti-micromanagement stance (“not micromanaging practice of medicine”).

  • Cost sharing: participants may collect or forego beneficiary cost-sharing (a lever to reduce friction; also a potential competitive variable).

  • Provider/supplier enrollment + HIPAA covered entity + clinical director: a “we’re not letting fly-by-night apps bill Medicare” posture.

  • Referral + co-management payment: referring clinicians can refer without 1:1 contracting; there’s a new co-management payment (no beneficiary cost sharing) for documented review/coordination. This is a practical attempt to avoid the classic failure mode: “specialty digital program runs parallel to primary care and nobody gets paid to integrate it.”

Big signal: CMS intends to maintain a directory of participants including risk-adjusted outcomes. That’s not just payment; it’s a market-making transparency tool (and it tees up methodological fights about risk adjustment, gaming, and measure selection).

He also claims demand: 250+ tech-enabled organizations submitted intent-to-apply within ~2–3 days. Whether that holds up is a later question, but it’s being used to argue “the ecosystem is ready.”


5) The society panel: it’s about workforce, data usability, and measurement sanity

This panel (AMA, ACC, APA) is less “rah rah digital” than “here’s where digital fails in clinic.” Three threads dominated:

A. Burnout is not only paperwork; it’s data overload + low-value measurement

  • AMA’s Whyte: clinicians don’t need “more data,” they need better data (continuous, personalized, outcome-linked). He also warns against being held accountable for outcomes clinicians can’t plausibly influence (food insecurity example).

B. Behavioral health: outcomes are “hard,” but we already have usable proxies

  • APA’s Trestman: acknowledges no biological equivalent of A1c yet, but argues that consistent use of PHQ-9 / GAD-7 can operationalize outcomes and—crucially—motivate patients when they see trend lines. He frames this as measurement-informed care that empowers patients, not just reporting burden.

C. Cardiology: the bottleneck isn’t biomarkers; it’s scaling programs into low-margin/rural systems

  • ACC’s Bhatt: home BP monitoring programs work, but don’t scale due to payment + implementation costs, especially in net-zero margin settings. She emphasizes “baseline” and personalization (your normal vs population normal), and repeatedly ties ACCESS to “creating a mechanism” to collect data and intervene where there’s currently zero access.

Notable “flash round” moment:

  • Whyte picks sleep as the under-measured outcome that matters and is increasingly measurable via wearables—but immediately ties it to the practical chain: fidelity → reimbursement → EHR integration → payer acceptance. That’s a neat encapsulation of the “payment/regulatory rails” theme.

  • Bhatt wants a metric for “patient feels like a partner in care” (not mere engagement). That’s conceptually important because ACCESS is outcome-aligned, but the panel is warning that relationships and agency are key mediators—hard to capture, easy to ignore, and easy to accidentally harm with poorly designed incentives.

  • Trestman says the key isn’t a new measure; it’s turning data into usable, integrated information at point of care.

The panel’s “success state” predictions converge: more care shifts to home/ambulatory, clinicians get time back, and patients feel “seen.”


6) TEMPO (FDA): the genuinely novel piece—enforcement discretion + real-world data under ACCESS

Traficant’s TEMPO description is the sharpest new policy mechanism in the event:

  • For certain digital health devices that would typically require FDA premarket authorization but don’t yet have it, TEMPO allows manufacturers to request FDA enforcement discretion when deployed within ACCESS, under a risk mitigation + monitoring + reporting plan negotiated with FDA.

  • ACCESS becomes a controlled deployment environment that generates real-world performance data that may support future FDA marketing authorization.

  • Requires enhanced consent: beneficiaries are told it’s an FDA pilot and that certain data will be shared with FDA (within privacy/security standards).

In plain English: TEMPO is a regulatory sandbox tightly coupled to a CMS payment model, with RWE as the bridge. For experts, the questions this tees up are immediate: scope (which devices), boundary conditions on “discretion,” what data is “ordinarily reported” vs incremental TEMPO data, and how consent + governance will be operationalized at scale.


The through-line: ACCESS is a payment model; TEMPO is an RWE-enabled regulatory ramp; the Ecosystem is the trust plumbing

The event’s key move is that CMS and FDA are trying to make digital chronic care in Original Medicare legible and financeable while keeping a safety narrative:

  • Outcome-aligned reimbursement to avoid “paying for churn.”

  • Transparency (risk-adjusted outcomes) to foster competition and referrals.

  • Regulatory discretion under monitored conditions to speed access without pretending safety review is optional.

If you want, I can also produce a second version as (a) a one-page “expert memo” with policy implications + likely implementation failure modes, or (b) a short set of “watch items” (e.g., measure selection/risk adjustment, consent + data governance, how co-management payments interact with existing RPM/CCM, and what kinds of devices TEMPO will actually include).

###

###

AI Part Two:
Compare the Actual Panel with the CMS Press Release.

What do you see in the whole 70-minute video that isn't summarized in the CMS press release about the video?

###

###

The CMS press release is a tight, curated “what happened + what it is” summary; the 70-minute panel is the operating logic, governance posture, and (importantly) the frictions that the press release deliberately smooths over.

What the press release does (and why it feels “about the same”)

It’s basically a single narrative arc:

That’s it: announce, legitimate, de-risk, invite. It’s not trying to teach anyone how this will behave in the wild.

What the panel adds that the press release largely omits

The panel isn’t just longer—it contains different types of information:

1) The “platform” backstory (Health Tech Ecosystem)
In the panel, ACCESS is positioned as the payment engine for a broader digital plumbing agenda (interoperability infrastructure, patient-facing ecosystem). The press release only gestures at “modernizing the digital health ecosystem.” (Centers for Medicare & Medicaid Services)

2) Implementation mechanics (how ACCESS actually functions)
The panel version (and the supporting CMS model materials) gets into what sophisticated readers care about:

The press release stays at the “model exists” layer. (Centers for Medicare & Medicaid Services)

3) The measurement politics (what counts as an “outcome,” and what clinicians will tolerate)
The society panel spends real time on: data quality vs data volume, clinician burnout from measurement, behavioral health measurement pragmatics (PHQ-9/GAD-7 style tools), and even “what outcome we wish we could measure” (e.g., sleep as a health signal; “patient-as-partner” sentiment). 

None of that appears in the press release because it’s not message discipline—it’s the actual tension that will shape uptake.

4) The regulatory posture is more candid (TEMPO as a sandbox)
The press release: “FDA discussed TEMPO…collecting, monitoring, reporting real-world performance data.” (Centers for Medicare & Medicaid Services)
The panel (and FDA’s own release / CMS FAQ): TEMPO is explicitly a risk-based enforcement discretion pathway for certain digital health devices that would typically require premarket authorization, paired with real-world data generation and enhanced consent in a controlled context. (U.S. Food and Drug Administration)

That’s a materially different “feel”: not just coordination, but a deliberately engineered regulatory/payment coupled experiment.

The practical punchline

If you're an engaged expert, the press release tells you what CMS wants the world to repeat.

The panel tells you how CMS/HHS/FDA think this will work, where the adoption bottlenecks are (measurement burden, data integration, rural capacity, workforce constraints), and what their regulatory/payment theory of change is—i.e., how ACCESS + TEMPO is supposed to create an evidence-generating flywheel rather than a one-off demo. (Centers for Medicare & Medicaid Services)


Republican Health leader Joe Grogan Interviews Deputy Secretary of HHS on AI

There's been a flurry of AI-related news this week, including in healthcare.  See a December 12, 2025, article at Fierce Healthcare where Republican policy expert Joe Grogan interviews HHS Deputy Secretary O'Neill on the administration's vision for AI in healthcare.

See the article here.

O'Neill is looking for lighter-touch AI regulation at HHS, arguing that FDA and CMS delays are distorting markets, not protecting patients. He highlights a real behavioral consequence: companies deliberately avoid FDA pathways, market as consumer tools, then backfill regulation later, an inefficiency he wants to eliminate via earlier regulatory clarity. HHS is operationalizing this stance through paired ACCESS (CMS reimbursement) and TEMPO (FDA regulatory deferral) models. Internally, HHS has already deployed multiple LLMs agency-wide and is using AI to detect weak signals across datasets. Notably, O'Neill rejects industry intermediaries (e.g., CHAI), inviting direct regulator-company engagement instead.

O'Neill served in the George W. Bush administration (2004-2008), then shifted to Silicon Valley, including a stint as CEO of the Thiel Foundation.

Grogan as Podcast Host:  DC EKG

Joe Grogan is a Republican health policy expert who worked in biopharma government affairs and in the "Trump I" White House.  More recently he's been a leader with Paragon Health Institute and Fire Arrow LLC.    For over a year, Grogan has hosted the podcast "DC EKG."

https://www.iheart.com/podcast/269-dc-ekg-107541606/


Here are two recent samples from DC EKG.

Fixing Obamacare Without Repeal: Tony LoSasso on Competition, Subsidies & Fiscal Reality

December 10, 2025 • 44 mins

In this episode of DC EKG with Joe Grogan: A Healthcare Policy Podcast, Joe sits down with health economist Tony LoSasso to dissect what serious, workable Obamacare reforms could look like without blowing up the Affordable Care Act entirely. They dig into the structure of healthcare subsidies, why current premium tax credits dull price sensitivity, and how that undermines insurance competition, drives up healthcare costs, and threatens the law's fiscal sustainability. Tony lays out a path to modernize the ACA with defined-contribution-style subsidies, patient-directed “health freedom” accounts, and targeted support for people with preexisting conditions through high-risk pools, rather than hiding transfer programs inside community-rated premiums. Along the way, they tackle essential health benefits, community rating, Medicare pricing, certificate-of-need laws, and growing hospital market concentration, and ask what a real bipartisan healthcare reform deal might look like in today’s political climate.

Inside the Business of American Healthcare with Wharton’s Dr. Lawton Burns

August 24, 2025 • 55 mins

Join host Joe Grogan for an exclusive masterclass with Dr. Lawton Burns, James Joo-Jin Kim Professor of Health Care Management at the Wharton School. A nationally recognized expert on the U.S. healthcare system, Dr. Burns unpacks the complex forces driving healthcare costs. With a background in sociology, anthropology, and decades of research, Dr. Burns reveals why so many healthcare reforms fail, what policymakers and business leaders get wrong, and how the U.S. healthcare ecosystem really works behind the headlines. If you’ve ever wondered why American healthcare is so expensive, and what can actually be done about it, this episode is a must-listen.



Sunday, December 7, 2025

Humor: Medicare Non-Coverage of Stethoscopes, Microscopes, and X-Rays

I was somewhat annoyed by a new LCD announcing broad Medicare non-coverage of AI in imaging. (Here.)

I asked ChatGPT to review Medicare coverage of stethoscopes, microscopes, and x-rays.

###





1. Non-Coverage Determination: 
The Stethoscope (circa 1859)

Coverage Indications, Limitations, and Medical Necessity

This Local Coverage Determination (LCD) establishes non-coverage for the “stethoscope,” a newly introduced acoustic apparatus purported to aid in the detection and localization of thoracic and cardiac phenomena. Although early reports suggest that the device enables auscultation at a distance, there remains insufficient evidence to conclude that the instrument improves clinical outcomes, enhances diagnostic accuracy, or alters patient management when compared to traditional direct examination methods, including palpation, percussion, and visual inspection.

Summary of Evidence

A limited number of peer-reviewed case series assert that the stethoscope allows practitioners to identify internal bodily sounds without pressing an ear directly to the thoracic wall. However, studies conducted to date are small, narrowly focused, and generally lack rigorous methodological controls. No randomized investigations have demonstrated superiority of the device over skilled bedside examination using established techniques. Furthermore, available reports derive mostly from highly specialized academic hospitals in Paris and London, limiting generalizability to the broader U.S. population, particularly in rural and frontier regions where medical infrastructure is limited, as is exposure to cheese and croissants.

In addition, significant concerns exist regarding operator dependency. Effective use of the stethoscope appears to require specialized training in acoustic interpretation, a skill not widely available among current practitioners. The absence of standardized protocols for device use raises risk of misinterpretation, particularly among those with minimal experience. Importantly, many physicians have practiced successfully for decades using traditional examination methods, raising substantial uncertainty as to whether the incremental diagnostic insight achieved through this new device will translate to meaningful clinical benefit.

Analysis and Determination

Given insufficient evidence of clinical utility, lack of standardized training, absence of consensus guidelines, and unclear generalizability across care settings, the stethoscope remains investigational. Accordingly, Medicare will not provide separate reimbursement for its acquisition or use. Practitioners may continue to rely on proven, long-standing physical examination techniques that have ensured safe and effective care.


2. Non-Coverage Determination: 
The Microscope (circa 1890)

Coverage Indications, Limitations, and Medical Necessity

This LCD establishes non-coverage for the use of “microscopy” in routine clinical practice. Although preliminary investigations have reported that microscopic examination of tissues and fluids may reveal minute structures not visible to the naked eye, broad enthusiasm for this technology remains premature in the absence of robust evidence demonstrating improved patient outcomes.

Summary of Evidence

Proponents assert that identifying microorganisms, cellular irregularities, or “germs” could facilitate early, targeted interventions. However, most such claims derive from academic investigators who are simultaneously developing both the theories and the instruments used to prove them. This creates substantial risk of confirmation bias and undermines confidence in the results.

Current literature is composed primarily of small, observational studies conducted by highly specialized researchers with advanced technical knowledge. No multisite investigations have demonstrated that microscopic identification of organisms leads to improved clinical decision-making, reduced morbidity, or enhanced survival relative to standard care pathways, which include physical examination, symptom monitoring, and empiric treatment. Importantly, empiric therapies remain widely accessible, inexpensive, and effective across a range of common conditions.

Moreover, microscopy requires a significant investment in specialized equipment and technical labor, including specimen preparation, device calibration, and interpretation by trained observers. No evidence establishes that such investments are sustainable or scalable in typical practice settings, particularly those serving Medicare beneficiaries in rural communities. Variability in operator skill, specimen quality, and device manufacturing raises additional concerns regarding reliability, reproducibility, and equity of access.

Analysis and Determination

Absent compelling data linking microscopic examination to meaningful patient benefit, and given substantial barriers to training, standardization, and implementation, Medicare does not consider the microscope “reasonable and necessary.” The technology remains experimental, and its value should be evaluated within research settings rather than routine clinical care. Clinicians should continue to rely on established methods that have served patients for generations.


3. Non-Coverage Determination: 
Medical X-Ray Imaging (late 1890s)

Coverage Indications, Limitations, and Medical Necessity

This LCD outlines non-coverage for the emerging practice of radiographic imaging, also known as “X-ray,” which claims to produce internal anatomical pictures via exposure to novel electromagnetic emissions. Although proponents have reported images suggestive of bones, organs, or foreign bodies, current evidence does not demonstrate that these images improve diagnosis, management, or outcomes when compared to existing clinical assessment standards.

Summary of Evidence

Early case reports describe individual anecdotes in which fractures or swallowed objects were visualized using X-ray exposure. However, these findings lack methodological rigor, are limited to extremely small sample sizes, and have not been replicated in representative populations. No randomized trials have established superiority over thorough physical examination, patient history, and surgical exploration when indicated.

Additionally, radiographic images vary substantially based on operator technique, equipment design, exposure intensity, and patient positioning. There are no accepted standards for image quality, calibration, or interpretation. Consequently, reproducibility and safety remain uncertain. Further research is needed to establish whether operators across diverse settings can reliably acquire and interpret radiographs with minimal risk to patients or themselves.

Notably, early reports reveal a subset of practitioners who cite potential adverse effects from repeated exposure to these emissions, including skin irritation, burns, and possible long-term consequences. Without comprehensive safety data, routine deployment among Medicare beneficiaries—who may require frequent imaging or have comorbid vulnerabilities—would represent an unacceptable risk.

Analysis and Determination

Given the absence of compelling evidence demonstrating improved outcomes, significant variability in device performance, lack of standardized protocols, uncertain safety profile, and unclear scalability, X-ray imaging is considered investigational. Medicare will not provide reimbursement for its use. Providers are encouraged to continue employing established diagnostic techniques, including physical examination and surgical exploration as clinically appropriate, which remain safe, effective, and well-validated.


 

CMS Expert Becomes Deputy Director of CBER at FDA

Many of us who've worked for some time in genomics will remember Katherine Szarama, PhD, who was one of the CMS coverage group staff assigned to the groundbreaking NCD guaranteeing coverage of FDA-approved NGS companion diagnostics (e.g., Foundation Medicine, circa 2017).

Szarama is now Deputy Center Director for CBER, the center for biologicals at FDA.   

Find her at Linked In:

https://www.linkedin.com/in/katherine-b-szarama-phd-627a172b/

Szarama worked at CMS from 2016 to 2019.  She left CMS to join the Emerson Collective, then Arnold Ventures (2022-2025).  She was with the Advanced Research Projects Agency for Health (ARPA-H) for most of 2025, from January until fall, before shifting over to the FDA.

See also a note at RAPS.org,

https://www.raps.org/news-and-articles/news-articles/2025/10/this-week-at-fda-more-layoffs-at-hhs-prasad-s-cbe

Noting,

According to MedPage Today, Vinay Prasad, the director of CBER, has started to shape his new  staff, as announced in an email to his colleagues. Katherine Szarama, PhD, has been appointed acting deputy director of CBER. She previously worked at the Advanced Research Projects Agency for Health and the Centers for Medicare and Medicaid Services before joining the FDA. Prasad noted that Szarama “will focus on policy and governmental relations while also participating more broadly within the center.”



MAC Issues LCD on Automated Detection of Brain MRIs: NO COVERAGE

The CGS MAC has issued a final LCD on "Automated Detection and Quantification of Brain MRI's," L40224.  The proposal appeared on September 25, with comment open until November 8.  The final appeared December 4, with a "notice period" until January 18.  The LCD was "MAC initiated."

I check new LCD postings every Thursday, and I don't recall seeing DL40224.  The non-coverage statement is simple:

  • This is a non-coverage policy for artificial intelligence assistive software tool for automated detection and quantification of the brain.

I would note that the author who thinks this way might also write "non-coverage...automated detection...whole slide imaging."

The LCD has 54 citations, some (by no means all) on ARIA, brain defects seen on amyloid therapies.

The MAC notes that "no comments were received."  (?!)

This is the concluding rationale for non coverage:

While investigations have been exploring the potential of automated quantification technology for evaluation of ARIA, MS, TBI, epilepsy, brain tumors and other neurological conditions, this has been challenged by lack of established standards for measurements and access to large datasets to train the devices. While expert radiologists read the images based on visual patterns these programs quantify the brain volumes. While this is promising there is a lack of standards to establish what the normal values for brain volumes should be and each program has proprietary data so it is not interchangeable. There is not sufficient diversity within the data sets used to train the models to ensure changes based on age, gender, or ethnicity are accounted for. This is especially pertinent in the Medicare population as there are changes to brain volume related to age and with lack of standardized data it is challenging at this time to ensure subtle changes represent pathology and not variations of normal. At this time there is not sufficient clinical utility or validity data and use of this technology is considered investigational and not covered. CGS will continue to monitor the progression of research for these devices.


Not sure what to make of this.  See a sidebar humor essay: Medicare non-coverage of the newfangled stethoscope, microscope, and x-ray.  Here.