I would argue that this recent article by Dogra et al., which looks at the poor fit between today's coding and reimbursement frameworks and imaging AI, could be useful for pathology AI as well.
Find it here:
- For a discussion of SaaS/AI through the lens of radiology: Dogra S et al. (2024). Reimbursement in the age of generalist radiology artificial intelligence. npj Digital Medicine. https://www.nature.com/articles/s41746-024-01352-w
- The authors write: “We argue that generalist radiology artificial intelligence (GRAI) challenges current healthcare reimbursement frameworks. Unlike narrow AI tools, GRAI’s multi-task capabilities render existing pathways inadequate.”
AI CORNER
This is direct AI output; do not quote without fact checking.
###
Here’s a graduate-level summary of Dogra, Silva & Rajpurkar (npj Digital Medicine, 2024), tuned for readers steeped in CMS policy, coding, and reimbursement—plus a few bridges to genomics/digital pathology.
Thesis & scope. The authors argue that generalist radiology AI (GRAI)—multi-task, adaptable, and increasingly multimodal—doesn’t fit today’s reimbursement scaffolding that was built for narrow, task-specific AI. They lay out how coding, valuation, and coverage would need to evolve and sketch several policy-ready pathways to get there.
System backdrop (why status quo strains). Under MPFS, payment flows via RBRVS (work, practice expense, malpractice), with relativity influenced by AMA RUC; hospitals are paid under IPPS/OPPS (MS-DRGs/APCs). CPT is the lingua franca: Cat I for established services; Cat III for emerging tech; and now CPT Appendix S classifies AI as assistive, augmentative, or autonomous. The sticking point for AI—especially generalist software—is practice expense: software licensing, maintenance, and cloud costs don’t map cleanly to the direct/indirect PE constructs, and historical PPIS inputs underweight these categories.
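As a reminder of the arithmetic the paper is pushing against (this is the standard MPFS formula, not something introduced in the article), payment for a code is the sum of its geographically adjusted RVU components times the conversion factor:
Payment = [(Work RVU × Work GPCI) + (PE RVU × PE GPCI) + (MP RVU × MP GPCI)] × Conversion Factor
The trouble for generalist AI, as the authors note, is that the work term can approach zero for autonomous use while the PE term has no clean slot for software, which is exactly where the relativity math strains.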
Where AI reimbursement stands today. A small set of SaMD examples illuminates the limits: 92229 (autonomous DR screening) and 75580 (FFR-CT) reached Category I, while additional AI codes remain Category III. On the inpatient side, NTAP has been inconsistently available (e.g., approval for Viz.ai LVO triage; denials where “substantial improvement” wasn’t met). CMS’s TCET pathway could accelerate coverage for Breakthrough-designated tech but is capacity-limited. These experiences show: (1) autonomous tools can have zero work RVU, complicating relativity; (2) software costs are hard to place within PE; and (3) evidence thresholds vary by setting and program.
Core problem GRAI poses. By design, GRAI can adapt tasks (triage, detection, segmentation, reporting), span modalities, and **shift along Appendix-S levels** (assistive → augmentative → autonomous) over time. That dynamism clashes with static CPT descriptors, fixed RVU components, and coverage frameworks that expect discrete, stable services. The authors stress that GRAI’s value is composite (task breadth, autonomy level, adaptability, multimodal fusion) and cannot be captured by a single “add-on” widget model.
Coding pathways the authors surface.
- Coordination-style codes (analogy: chronic care management): a single multi-task code that “bridges” activities (triage → detection → reporting) and can be appended across modalities.
- Modality series with gradations by task depth (e.g., detection-only vs. full report generation), preserving relativity within existing radiology code families.
- Modifier strategy (telemedicine analogy): start by modifying existing codes to denote GRAI involvement/level, easing scale-up and avoiding code proliferation.
- Adaptive descriptors aligned to FDA PCCPs: code language that anticipates pre-authorized algorithm updates, preventing payment from breaking when the model evolves within a PCCP.
Valuation options the authors contemplate.
- Composite RVU builds that incorporate: (a) radiologist work for oversight/interpretation; (b) task complexity that escalates with autonomy (e.g., full report generation); (c) an adaptability component for multi-modality integration; and (d) a clearer home for software costs in PE (sketched schematically just after this list).
- Movement toward outcomes- or access-linked payment where FFS granularization fails (e.g., when GRAI’s contributions cut across multiple discrete services). The paper frames this as complementary, not a wholesale replacement, recognizing CMS’s FFS core.
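One schematic way to read components (a) through (d) as a single valuation build (my shorthand, not the authors’ notation; the functional form is purely illustrative):
Composite GRAI value ≈ W_oversight + C(task depth, autonomy level) + A(multimodal integration) + PE_software
where W_oversight is physician work for supervision/interpretation, C escalates as the tool moves up the Appendix-S levels, A credits cross-modality fusion, and PE_software gives licensing, maintenance, and cloud costs an explicit practice-expense home.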
Coverage & diffusion (what will really drive uptake). Even with codes and RVUs, coverage determines adoption. The authors expect a premium on comparative effectiveness and RWD/RWE, including calls for more randomized trials for AI. Specialty-society guidance (e.g., ACR) will matter, as will harmonization of NCD/LCD expectations for safety, performance, and utilization management. Importantly, coverage criteria must learn to “live with change,” acknowledging PCCP-style evolution without re-litigating basic clinical utility after every minor update.
Practical implications for CMS policy wonks (and why genomics folks should care).
- Appendix S as a common spine. Whether radiology or genomics/digital pathology, mapping AI functions to assistive/augmentative/autonomous helps anchor code design and evidence tiers. For “dry lab” genomics algorithms (e.g., therapy-ranking engines), Appendix S provides a shared taxonomy to argue work vs. PE allocation and supervision expectations.
- SaaS friction = PFS pain point. The authors’ critique of PE treatment of software aligns with current CMS PFS SaaS RFI concerns: license/maintenance/cloud costs, thin claims data, and cross-setting consistency. Their proposed paths (crosswalking, composite valuation, adaptive coding) translate cleanly to genomic AI SaaS used across oncology pathways.
- Coding strategy you can reuse. For generalist digital pathology or molecular decision-support (e.g., multi-task WSI models; pharmacogenomic/therapeutic ranking engines), consider:
  - Coordinator-style codes that bundle multi-step AI assistance;
  - Series or modifiers indicating autonomy level;
  - Descriptors that reference PCCPs, so updates don’t orphan payment.
- Evidence playbook. Build a laddered evidence plan (technical performance → workflow impact → clinical outcomes → utilization/total cost) with RWE infrastructure from day one, anticipating LCD scrutiny and TCET-like pathways where applicable.
Bottom line. Dogra et al. make a clear case: generalist AI will force adaptive reimbursement design. For CMS and the AMA, that likely means new coding constructs, composite valuation that explicitly prices software & autonomy, and coverage rules that accept regulated evolution of models. For genomics, the blueprint is strikingly portable: many “generalist” decision engines face the same PE blind spots, coding rigidity, and evidence translation hurdles as GRAI—and can borrow these pathways almost verbatim.
##
Are there any quick coding fixes? Sidebar.