Header: The AMA barrels ahead with efforts to implement major new AI coding policies by 2027. But will the reforms bring unintended consequences for AI genomics and AI pathology?
###
History and "Appendix S"
AMA efforts to undertake major coding and categorical reforms for software-intensive services have been underway since 2022.
- See Frank et al., 2022, Developing current procedural terminology codes that describe the work performed by machines, NPJ Dig Med 5:177.
- See also a May 2022 webinar on the "AI Taxonomy," here.
- See a December 2023 code set explanation here.
- See the actual AMA AI Appendix S online here.
Appendix S culminates in a three-part "classification of AI medical services."
- Assistive: "work" performed by a machine for the physician; the machine detects information without analysis or conclusions.
- Augmentative: "work" performed by a machine to analyze and quantify data for clinically meaningful output. (Requires a physician report.)
- Autonomous: "work" to automatically interpret data and generate clinically meaningful conclusions without [before] human involvement. "Interrogating and analyzing data."
What is Appendix S...FOR?
We're told the taxonomy "provides guidance for classifying AI applications," but the rationale or purpose is left unstated. Guidance for what, when, and where??? AMA Appendix S doesn't say.
A very recent essay by Frank (November 2025) promises, "CPT Appendix S is the Missing Link Between AI Innovation and Reimbursement." There's more: it is "the strategic tool [to] significantly improve market access and success." The online essay is fairly long, but I still struggled to extract simple takeaways.
CMAA is Coming?
A special coding system called "Clinically Meaningful Algorithmic Analysis," or CMAA, may be coming.
The AMA has posted the term CMAA (e.g., October 25, 2025). AMA writes,
"CMAA codes seek to describe services in which algorithms process clinically relevant data, such as images or lab results, to produce medically actionable outputs even when no physician is directly involved at the point of service."
See more about CMAA codes in Love's essay on LinkedIn, here.
Work Underway to Revise Appendix S - But Why?
After a fly-in and hybrid workshop in Chicago on December 8, 2025, AMA has offered revisions (an "A" option and a "B" option) for review now and for voting at the February 5, 2026, AMA CPT meeting in Palm Springs.
But all the confusion and vagueness about what Appendix S is "for" makes it hard to judge wisely whether all these complicated insertions and deletions of single words and small phrases are good or bad. Good or bad, for WHAT?
It's even harder to figure out whether the multiple variant markups have any real-world implications without a gold standard of what this is for. Is every reader supposed to spend an hour parsing the nuances of that picky spiderweb of strikeouts and insertions?
The right way to do this would be a "meta" document, a cover piece, that states the goals; ideally it would list ten problems with the current system, ten things the Appendix S revisions do, and just how each of those ten changes favorably impacts the problem it's aligned with.
That would have let people comment intelligently.
The A-B Test
While the text of the Appendix S revisions is AMA-confidential, I have an AI essay that discusses the "A-B" Appendix S revisions without extensively quoting them, which I found very interesting. In fact, I could never have unpacked the nuances of the revisions without AI help.
Here.
Three Essays by ChatGPT on AI Coding Proposals
How do I conclude? What could be more on-point than offering three related essays, all by ChatGPT 5, helping to explain all this with logic, clarity, and historical examples as well as forward-looking projections?
I'll paste the first of the three AI essays below; see the full set of three essays in a 9-page online PDF.
Essay 1
AMA CPT Changes: Designed for Radiology, All Wrong for AI Pathology
Recent changes in AMA CPT policy risk unintentionally undermining U.S. leadership in medical artificial intelligence, particularly in digital pathology and genomics, two areas where the United States currently holds a global advantage. The issue is not opposition to AI itself, but a misalignment between CPT’s conceptual framework and how modern diagnostic AI actually works.
Much of CPT’s thinking about AI appears to be shaped by radiology use cases, where AI functions as an adjunct: highlighting suspicious areas, prioritizing worklists, or prompting a second look by a physician. In that context, CPT’s principles—resistance to autonomous AI billing, concern about double payment, and protection of physician RVUs—are internally coherent. However, those same principles do not translate to computational pathology or genomics, where AI is not advisory. In these domains, AI is the diagnostic act.
In genomics, billions of sequenced DNA fragments are algorithmically analyzed to identify mutations that directly determine therapy selection. There is (usually) no meaningful point at which a pathologist’s subjective judgment is inserted into the result. AMA would call that an autonomous result without physician work, and it's been like that for years. Digital pathology AI increasingly operates the same way: validated models extract prognostic and predictive signals from routine histopathology images that no human can replicate. Treating these outputs as some quirky bird that is “autonomous without physician work” badly misunderstands the whole paradigm of clinical lab tests.
With regard to digital pathology, CPT’s current structure has no natural home for such services. PLA codes require creation of new biomarkers; Category I codes prohibit proprietary diagnostics; MAAA explicitly excludes image-based algorithms (CPT mtg 10/2025); Category III codes are typically unpaid; and the proposed CMAA category risks becoming a registry rather than a reimbursement pathway. The result is that “upload image → download report” diagnostic models—arguably among the most scalable and cost-efficient forms of medical AI, exactly what the head of CMS wants—are structurally blocked from payment under PLA rules posted 12/31.
The commercial consequences are already visible. Rather than launching standalone AI diagnostics, companies are embedding AI invisibly inside existing reimbursed tests, not because the AI lacks value, but because independent reimbursement is unattainable. Smaller innovators without large corporate partners are likely to exit or never form.
At every step, these outcomes run counter to national policy goals. Our system now discourages information-efficient diagnostics, rewards molecular redundancy, and shifts AI innovation offshore or into non-clinical domains. The risk is not immediate harm to patients, but long-term erosion of U.S. leadership in healthcare AI at precisely the moment federal policy seeks to strengthen it.
###
Extra
The appearing and disappearing AMA definition of AI
In Appendix S, AMA states that it has no definition of artificial intelligence. Does that strike anyone else as batsh#t crazy, for an appendix that is literally titled, "Artificial Intelligence Taxonomy"??? That exists to present "a classification of AI services"???
Online, AMA has stated that augmented intelligence is often called artificial intelligence. OK, well, you didn't want to define artificial intelligence, but if A is often called B, then either A defines B or A and B are defined the same.
And in the CPT change application, section V.1 is "Identifying Software as AI." Is the service based on output from software "which has performed more than data processing (data processing includes helping to aggregate, organize, arrange, transmit, develop, or otherwise visually enhance the data)"? If so, the software is "identified as AI," which sounds close to a definition of AI.