Thursday, April 23, 2026

Chat GPT Assessing an Important Court Case for Labs on Medical Necessity

For years, labs have had variable success during audits (or during prior authorization), depending on how much original physician-side chart documentation must be procured by a lab.  

Notably, MolDx published an article stating that a physician's order (often with ICD-10 codes, etc.) is an integral and valid part of the medical record as a whole.  A59798.  My blog here.

A recent court case revisits this.  See the case online here, 36pp.   See expert discussion by Jampol et al at Epstein Becker Green here.

###

Here is an AI assessment of the documents.

###

AI CORNER

###

The First Circuit’s OMNI v. MD Spine decision is important for labs because it supports a practical rule: a laboratory may generally rely on a physician’s test order as evidence of medical necessity, absent red flags or lab misconduct. 

That matters in audits, where labs are often pressed to obtain underlying physician chart notes, not just orders. The case does not give blanket immunity, but it helps distinguish the lab’s billing responsibility from the physician’s underlying clinical judgment.

###

From the lab industry viewpoint, the First Circuit’s decision is important because it pushes back against an increasingly burdensome theory of audit and FCA exposure: that a laboratory may receive a facially valid physician order, run the ordered test, bill Medicare, and then later be told that the order itself was not enough—that the lab should also have procured and retained the physician’s underlying chart notes or other physician-owned documentation to prove medical necessity. In OMNI v. MD Spine, the court did not frame the issue as a record-request case in those exact words, but its reasoning strongly supports the more practical lab-side position: a lab generally may rely on the ordering physician’s requisition/order as evidence that the test was medically necessary, unless the lab had red flags or engaged in misconduct of its own.

The case as a whole arose from allegations that MD Labs billed Medicare for PCR UTI tests that OMNI said were more expensive and no better than ordinary urine culture, and therefore medically unnecessary. But the record developed an unusual twist: OMNI’s physician-owner had instructed staff to order PCR tests even when clinicians had asked for the older culture test, apparently to help build an FCA case. The First Circuit still focused on the key FCA question: did the lab knowingly submit false claims? It agreed with the district court that the answer was no. The court emphasized that scienter under the FCA turns on the lab’s subjective belief at the time of claim submission, not hindsight and not what someone later argues a reasonable person should have known.

For labs, the core holding is captured in one crucial sentence: a laboratory can generally rely on a doctor's order to show that the test is "reasonable and necessary." The court treated the physician order as a kind of safe harbor starting point on medical necessity. Once the lab shows it had the doctor's order, the burden shifts to the FCA claimant to rebut that reliance with evidence that the lab ignored warning signs or acted improperly. 

  • That is a major practical boundary line. 
  • It means the lab is not automatically transformed into the auditor of the physician’s full chart, differential diagnosis, and office note every time it receives a completed requisition.

That matters because the court repeatedly grounded its reasoning in the distinct role of the laboratory. The opinion says labs do not diagnose patients and do not decide what tests are medically necessary; they process tests ordered by licensed providers. The court openly worried that the opposite rule would force labs to second-guess physicians, delay testing while they re-audited orders, or even substitute cheaper services just to avoid FCA risk. From a lab operations standpoint, that is exactly the real-world issue: if every requisition is only provisional until the lab has chased down physician-owned progress notes, the lab becomes partly a records-collection company rather than a testing company.

On your specific point about medical record sufficiency, the opinion strongly implies that a properly completed test order/requisition is itself meaningful medical-record evidence, not an empty administrative shell. The court says the doctor’s determination of medical necessity is “inherent, if not explicitly stated, in the order for the test.” It also refers to the requisition form being “in-hand” and treats that order as the concrete basis for the lab’s reliance. In lab terms, that is very favorable language. It does not say the lab must routinely obtain the physician’s original chart note before the claim is defensible. Rather, it says the order itself ordinarily supports the lab’s reliance, absent red flags.

That does not mean the lab can be sloppy about documentation. The opinion and the EBG summary both preserve an independent lab duty to maintain proper records and compliance controls. The lab should retain the physician order, avoid changing the order without consent, ensure coding is appropriate, and document communications with providers. But that is a different proposition from saying the lab must always laboriously procure and warehouse the doctor’s underlying chart documentation as the only acceptable proof of necessity. The decision points toward a more balanced rule: retain the order, watch for red flags, escalate when something looks wrong.

The court was equally clear that the physician-order rule is not blanket immunity. A lab can still lose protection if it creates the problem itself. Examples given in the opinion and summary include misleading marketing, pressuring physicians, designing requisition forms that nudge or force medically unnecessary panels, adding services beyond what was ordered, or proceeding despite clear signs that the doctor did not understand what was being ordered. This is especially important for labs selling molecular panels or other high-complexity services. The protection comes from reliance on an independent physician’s order, not from hiding behind that order after the lab has manipulated the ordering process.

So the practical takeaway for audited labs is narrower and more useful than “orders are always enough.” It is this: a physician’s order is ordinarily sufficient to establish the lab’s good-faith basis to bill, but only so long as the lab has no meaningful reason to distrust it. The EBG summary lists examples of triggers for further inquiry: unusual volumes, abrupt ordering-pattern changes, inconsistent or contradictory requisitions, highly templated orders, or signs the provider misunderstood the test. Once those warning signs appear, passive reliance becomes less defensible, and the lab should seek clarification. That is where additional physician documentation may become important—not as a universal prerequisite, but as a response to red flags.

Viewed that way, the case is a welcome correction for the industry. Labs have long worried that Medicare contractors, UPICs, or FCA relators might collapse two different questions into one: first, whether the physician had adequate clinical justification to order the test; second, whether the lab knowingly submitted a false claim. The First Circuit keeps those concepts separate. A lab is not automatically liable just because someone later disputes the physician’s medical judgment. The lab’s own scienter still matters, and the presence of a physician order weighs heavily against scienter.

Put bluntly, from the lab side the case says: the lab is not the medical-necessity police. It may rely on the ordering physician’s order as a real and significant part of the medical record. As you note, MolDx has previously published guidance stating that a physician’s lab order can be viewed as an integral and satisfactory component of the medical record. I am not linking that here, per your instruction. The First Circuit’s reasoning is not identical to a MolDx article, but it is plainly congenial to that operational view of the world.

The Bottom Line

My bottom-line lab reading is this: OMNI v. MD Spine is a pro-lab decision on reliance, not a free pass on compliance. It supports the proposition that a complete physician order—with the ordered test and ordinary claim elements such as diagnosis coding where applicable—can ordinarily be enough for the lab to proceed and bill without first obtaining the physician’s entire underlying chart. But it also tells labs to keep their own house clean: no coercive forms, no stealth bundling, no altered orders, no careless coding, and no ignoring patterns that would make reliance unreasonable.

 ##

The Case as a Whole

As a whole, the case ended very well for the lab on the appealed medical-necessity issue. The First Circuit affirmed summary judgment for MD Labs, holding that OMNI had not produced enough evidence that the lab knowingly submitted false claims. In other words, even if one thought the PCR urine testing was expensive, aggressive, or debatably overused, the relator still failed on the key FCA element of scienter. The court said the lab could generally rely on the physician’s order and was not required to second-guess the doctor absent red flags or misconduct by the lab.

There was also an unusual factual twist that challenged OMNI: OMNI’s own physician-owner had instructed staff to order PCR tests even when clinicians had asked for BUC tests, apparently to help generate an FCA case. The court noted that nothing showed MD Labs knew about those internal tactics. So the lab was sitting there with physician requisitions in hand, while the relator later argued the tests should not have been ordered. That made the scienter theory especially weak.

At the district court level, MD Labs had sought summary judgment on several grounds, including that the tests were not false claims, that causation was broken by the physician’s own ordering conduct, and that the lab lacked scienter. The district court granted summary judgment based on the scienter ground alone, and the First Circuit affirmed on that basis. So the appellate court did not need to hold broadly that the PCR tests were always medically necessary, or that the testing model was ideal. It held instead that OMNI had not shown the lab knowingly submitted false claims. That is an important distinction, and probably the cleanest way to read the opinion.

So your instinct is right: the case does not read like a ringing judicial endorsement of lots of molecular urine testing as a general practice. It reads more like a decision that, whatever one thinks about that testing pattern, this relator did not prove FCA liability against this lab. The court was willing to say, in effect, "maybe the medicine is debatable, but that does not automatically put the lab in the wrong in an FCA case."

There was also a broader procedural point: MD Labs had already settled other issues with the government and OMNI earlier in the case, and OMNI retained the right to pursue this remaining medical-necessity/FCA theory. That remaining theory is the one that failed. So the “whole case” did not end with total vindication on every historical issue ever raised; rather, the surviving appealed issue ended with a defense win for the lab.

My one-sentence bottom line: the lab won the appealed case because the court found insufficient evidence that it knowingly billed Medicare falsely, not because the court needed to fully bless the underlying molecular UTI-testing business.

###

Most of the claims were circa 2017-2019, before the MolDx (and thus Noridian) crackdown on UTI molecular testing, which would have affected a Nevada-based lab billing Noridian.

###

MD Labs did not rely only on the "we had doctors' orders" argument. That was its main legal shield on scienter, but the record also shows it offered substantive support that PCR/UTI testing was not outside the bounds of reasonable practice.

Most notably, the First Circuit says MD Labs had consulted an infectious disease doctor about the PCR tests, and that doctor told the lab about the tests' importance in patient care. The court used that fact when rejecting OMNI's effort to portray the founders' internal emails as showing awareness of falsity. 

And on the panel/bundling issue, MD Labs went further still. The opinion says MD Labs offered an expert infectious disease physician report stating that the pathogens included in its panels "are reasonable" and "within the range endorsed" by a peer-reviewed article. 

The court then noted that the plaintiff OMNI did not respond with record evidence showing the panel makeup was unnecessarily broad. 

So the lab's position was really a two-track defense:

First, legally, it said: we received apparently valid physician requisitions and could generally rely on them. 

Second, factually, it said: and in any event, this was not some crazy fringe testing model; we had infectious-disease input and expert support that the pathogens/panels were within a reasonable clinical range.  

That said, the court's actual holding rested on scienter, not on a sweeping endorsement of molecular urine testing as best practice. The First Circuit specifically emphasized reliance on the physician orders and the lack of evidence that MD Labs should have doubted them. So the clinical "reasonableness" material helped the defense, but the decisive point was still that OMNI failed to prove the lab knowingly submitted false claims.  


G2 Intelligence Article: Appeals Court Supports Labs on Medical Necessity

 I don't have a subscription to G2 Intelligence [for labs] but I can share their "top of the article" information on a recent  federal court case.   It looks like it tracks back to a case covered in December at Epstein Becker Green by Jampol et al. here.

"A federal appeals court upheld the idea that clinical laboratories can rely on physician orders to justify medical necessity."

Sounds like big news for labs.

https://www.g2intelligence.com/recent-md-labs-ruling-has-implications-for-test-requisitions-and-commissions/

"A recent appeals court ruling is giving clinical labs more clarity on medical necessity, but it also raises important questions about how test requisitions are designed and how sales commissions are structured."


LinkedIn link.

See more, and a link to the original judgment, at Epstein Becker Green. The case is United States ex rel. OMNI Healthcare, Inc. v. MD Spine Solutions LLC, 36 pp., 12/1/2025.

Amazing: AI Can Clean Up the "APPENDIX S" AI Policy for AMA CPT Meeting

Several years ago, AMA CPT introduced "Appendix S," a policy statement about language and coding for software-intensive services.  From May 2025 to the upcoming May 2026 CPT meeting, AMA has released multiple very extensive revisions of "Appendix S," one of which will probably be voted on at the May 1 AMA CPT meeting in Boston.

  •  See a 2021 LinkedIn announcement here. See a 2022 publication about the work and its goals here.  See a 2025 online article by Dr. Frank here.

One recurring problem: AMA releases each new version of Appendix S as an extremely heavy markup of the prior draft. Frankly, I find it all but unreadable. It looks like an ordinary MS Word redline, but it isn’t. You cannot simply click “accept all changes” and produce a clean copy.

Over the last several months, I’ve given successive versions of Appendix S to ChatGPT for assessment and comparison, which has been very helpful. But today I tried something different. I gave it the heavily marked-up mid-April version of Appendix S and asked it to return a clean version. It did so, producing a new, downloadable Word .docx file.

I can’t share the document itself, because access is controlled by AMA CPT and subject to confidentiality restrictions. However, I can show you how ChatGPT interacted with me during the cleanup process, as it turned an almost unreadable markup into a usable clean copy.

##

Here's what the AMA review document looks like (ouch!):


Here's my dialog with Chat GPT:

If possible, produce a clean copy of this redlined document with changes accepted. Note it is not a normal Word document where changes can be quickly accepted within Word. Producing a clean copy of it will require interpreting the redline markup as it is and then producing a clean copy, if you can. let me know if dividing it in parts (e.g. 3 equal parts) would help.

##

I’m going to inspect the uploaded .docx directly and reconstruct a clean version from the visible redline text, then I’ll give you a revised file to download. If the markup turns out too tangled in a few spots, I’ll still produce the cleanest defensible version and flag any lines that may need a human check.

I’ve confirmed the file is readable as a DOCX, but the visible “redline” is largely baked into the body text rather than behaving like ordinary Track Changes. I’m now reconstructing the accepted version paragraph by paragraph and then I’ll render the result to make sure the layout stayed clean.

I found one useful wrinkle: the document includes an older comparison table that is entirely struck through, so it should disappear in the accepted copy. I’m removing that deleted table, tightening a few places where accepted/deleted fragments left awkward phrasing, and then I’ll QA the rendered pages.

[Thought for 5 minutes 47 seconds]

I produced a clean accepted copy from your uploaded redline document

[copy of upload link here]

Download the clean DOCX [this is also a cloud link]

A couple of notes:

  • I treated the visible markup as acceptance of the inserted text and removal of struck text.
  • I also removed the fully struck comparison table that appeared to be deleted content.
  • One table row still flows onto a final short page; the content is intact, just split across pages.

If you want, I can next do a second-pass version in 3 sections to polish any judgment-call wording line by line.
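The core logic the AI applied — keep inserted text, drop struck text, tidy the result — can be sketched in a few lines. This is a minimal stdlib illustration using a simplified textual markup convention I've invented for the example (`~~...~~` for deletions, `{+...+}` for insertions), not the actual baked-in DOCX formatting the AI had to parse:

```python
import re

def accept_changes(redline: str) -> str:
    """Produce a 'clean' copy of a textual redline:
    drop struck text (~~...~~), keep inserted text ({+...+})."""
    text = re.sub(r"~~.*?~~", "", redline, flags=re.DOTALL)        # remove deletions
    text = re.sub(r"\{\+(.*?)\+\}", r"\1", text, flags=re.DOTALL)  # accept insertions
    return re.sub(r"  +", " ", text).strip()                       # tidy leftover spacing

example = "Appendix S applies to ~~certain~~ {+qualifying+} software services."
print(accept_changes(example))
# → Appendix S applies to qualifying software services.
```

In the real task, the hard part is that the "redline" lives in character formatting (strikethrough runs) inside a DOCX rather than in visible markers, so the same accept/reject logic has to be applied run by run rather than with regex.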



CMS Announces "RAPID" Coverage for Breakthrough Devices - More to Follow

CMS Announces RAPID Pathway for Breakthrough Devices;  "Cuts Red Tape"

On April 23, CMS issued a press release announcing a new CMS-FDA collaboration for Breakthrough Devices, called the RAPID Pathway.   The pathway will RAPIDly see the release of a proposed NCD, for 30 days of comment, concurrent with FDA approval.   CMS will "pause" the TCET NCD-CED pathway ("Transitional Coverage for Emerging Technologies") to focus on RAPID.   

We'll know more about the RAPID program when a full description is published in the Federal Register in the coming days.

https://www.cms.gov/newsroom/press-releases/cms-fda-announce-rapid-coverage-pathway-accelerate-patient-access-life-changing-medical-devices

The Federal Register publication should appear here when it's ready:

https://www.federalregister.gov/agencies/centers-for-medicare-medicaid-services


A couple of days ago, Politico noted that RFK Jr. had mentioned pending pathways for FDA Breakthrough devices in one of his Hill testimony sessions.   

Wednesday, April 22, 2026

Connecting the Dots is Fun: From FOCR Webinar to WSJ FDA Article

 It’s always interesting when policy dots connect across events that at first seem unrelated.

A few weeks ago, Friends of Cancer Research (FOCR) held a webinar on its workstream addressing new horizons in external control arm (ECA) studies—that is, the broader problem of how regulators and clinicians should interpret evidence from single-arm trials. My April 13 blog provides a fuller report on that discussion.

Then came the April 21 Wall Street Journal editorial, framed as “Oncologists vs. FDA vs. RFK Jr.” The article argues that many oncologists were frustrated by FDA’s rejection of Replimune’s melanoma therapy RP1. But stripped of the politics and headlines, the dispute turns on a familiar methodological issue: FDA’s concern about the limits of a single-arm study and the difficulty of inferring treatment benefit from comparisons to external or historical controls.

That is essentially the same terrain explored in the FOCR webinar. The Wall Street Journal article, although filtered through a news and editorial lens, brings into practical focus many of the scientific and policy questions FOCR was discussing in a more technical setting. Read together, the two episodes underscore how central this issue has become. The key question is no longer whether external control arms will be used, but rather when they are persuasive enough, and under what conditions, to support major regulatory decisions.




Tuesday, April 21, 2026

United Health Group, $3 Billion Bet on AI (see Stat Plus)

On April 6, Casey Ross of STAT PLUS published a deep-dive article on UnitedHealth Group's massive investments in AI.

See the subscription article here:

https://www.statnews.com/2026/04/06/unitedhealth-group-massive-artificial-intelligence-push-patient-implications/



I don't want to infringe on his article, so here are just a couple highlights.

###

  1. This is not “AI in health care” in the abstract; it is AI moving into the claims payment stack.
    The key point is not that UnitedHealth has a chatbot. The key point is that AI is being inserted into claims adjudication, coding, fraud edits, prior authorization workflow, and coverage-facing operations. That places AI very close to the actual machinery that determines whether care is paid, delayed, downcoded, or denied.

  2. United is using AI not only as an insurer tool, but as a market-facing platform product.
    Through Optum, United is not merely optimizing internal workflow. It is also selling AI-enabled tools outward to providers and other payers. That matters because United is shaping the reimbursement environment both as a plan and as a vendor of operational infrastructure. In policy terms, that raises the stakes from a company story to a system architecture story.

  3. The article points to the emergence of an “algorithmic arms race” around payment integrity and coding.
    United is applying AI to billing codes, claims review, fraud detection, and prior auth. Patients and providers may hear the language of efficiency, but the reimbursement subtext is clear: AI can be used to accelerate edits, intensify scrutiny, standardize utilization controls, and tighten payment leakage. That may reduce administrative waste, but it also creates new ways to industrialize adverse payment outcomes at scale.

  4. The central policy problem is not whether AI is present, but whether it is auditable.
    United says physicians—not AI—make medical necessity decisions, and it describes an internal responsible-AI review board. But the article underscores the practical problem: patients and providers often cannot see what the algorithm did, what inputs it relied on, how much human review occurred, or whether AI shaped the decision path upstream. In reimbursement policy, that is the difference between ordinary utilization management and a potentially opaque new form of automated coverage control.

  5. CMS, state regulators, and courts will eventually have to decide whether AI-assisted payment decisions need a new disclosure and accountability framework.
    The article describes a world in which AI deployment is racing ahead while regulation remains patchy. That is unlikely to be stable. One can easily imagine future pressure for rules requiring disclosure of when AI was used, what role it played in denials or edits, what error rates were observed, what specialty oversight existed, and what appeal rights attach. For reimbursement policy, this could become as important as prior authorization reform, audit transparency, or program-integrity guardrails.

 

Roche/FMI to Acquire SAGA: See the Chat GPT Research Memo [AMAZING]

In April 2026, Roche announced that, via FMI, it would acquire SAGA Diagnostics, a Swedish MRD company that got MolDx coverage in 2025.

I put Chat GPT into its "Deep Research" mode and it spent about an hour checking hundreds of links and then writing a single, 13-page, 3400-word memo on SAGA, including a number of original figures.  This is what's called "agentic" research mode - Chat GPT developed the research plan, wrote the memo, selected footnotes, and planned and created new figures.  I'm presenting the memo "as is," without fact-checking it.   That is, this blog is about showing readers what Chat GPT can do in research mode, more than about Bruce's ability to fact check.  



####

Here's a Google Drive copy of the report:

https://docs.google.com/document/d/1o7FVNSgqpVUQZ2nen5TiHwvmwlptBJWO/edit?usp=sharing&ouid=110053226805181888143&rtpof=true&sd=true

I clip the entire report below; remember it's a 13 page document.

Note: Detailed tables are meant to be seen in the original Chat output OR as a Word export OR as a PDF export; they don't cut-paste well here in html.

If correct, the company raised circa SEK300M or $30M before acquisition.  The US test pays about $4400 (MolDx, including setup genome) and about $1200 per plasma test.  It's dPCR so it doesn't fall under the infamous CMS NGS NCD 90.2.

###

Monday, April 20, 2026

Humor: How Hospitals Handle CMS 14 Day Rule

The CMS 14-day rule for "date of service" has different versions for inpatients and outpatients, and for human molecular tests versus other molecular tests.   It's often a source of confusion (look for some publications on this, in the next year).

Here's a Chat GPT illustration of how hospital billing staff deal with the 14 Day Rule.



Sunday, April 19, 2026

The Future Decade in Sequencing: Learning from Past Decades?

I was given a questionnaire on the future of clinical sequencing in the next decade - whether FFPE, or MRD LBx, and so on.   Of course, any projections ten years ahead are tricky.   What might we have predicted in 2000, or 2010, and how would reality have looked by comparison?

Three clinical landmarks in genomics might be the introduction of the Oncotype Dx test around 2005, the widespread acceptance of BRCA sequencing (rather than hot spots) by circa 2010, and the clinical launch of Foundation Medicine around 2012. 

But it would be hard to imagine that first decade of rising demand without also considering the billions in fund-raising and investments (both R&D and manufacturing) by Illumina (and others).   There wasn't some secular trend to ten-times as much clinical sequencing without all those investments.

(Similarly, there wasn't a natural demand curve for robotic surgery, separate from Da Vinci's massive investments in R&D, quality, and sophistication.  Nor was there some natural demand curve for glass phones, separate from Apple's massive investments in innovation and manufacturing.)

Similarly, today, we will want to project the growth of MRD testing, etc., and second, come back and predict how that affects Natera's top line.  But it's hard to see that growth as a secular trend somehow independent of huge clinical investments by Natera, Guardant, Freenome, Caris, and many others.  (Add in Pac Bio, Thermo Fisher, Roche, 10X, Owkin, and others.)   If industry funding were cut back substantially, the pace of clinical science & growth would slow, and vice versa.

I asked Chat GPT what it makes of these ideas, results below.

###

  

ILMN Market Cap

From a 20,000-foot view, ILMN had a nearly continuous market ramp-up from its IPO in 2000, especially from 2005 to 2021, before falling in 2023 back to 2015 levels.

Thursday, April 16, 2026

Get Them Here: Detailed Notes on the April 16 CMS Webinar on PAMA Reporting

On April 16, 2026, CMS provided a one-hour webinar on PAMA lab price data reporting, which runs May through July 2026.

CMS PAMA Page

CMS stated it would soon post the video on its PAMA resources page.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-pama-reporting-resources

Find the CMS Video

Scroll down on this page to see April 16 video at CMS:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs/events

The PPT Deck of Video

https://www.cms.gov/files/document/april-16-clfs-data-collection-reporting-webinar.pdf

Back up Copy of Video

If of interest, I posted a copy of the video on YouTube:

https://youtu.be/k0zaMNwpFvs


Very Detailed Conference Notes

And, below, see a detailed AI notes summary of the webinar.

##

Some Thoughts on the Revisions to AMA Appendix S for Software-Intensive Services

 

For a year, AMA has circulated larger and larger revisions of Appendix S, its position statement on software-intensive services.   Here, I argue not for or against specific redlines (which may be copyrighted or confidential); rather, I discuss some general principles.

###

In 2023, the AMA added an “Appendix S” to the AMA CPT procedures and codes handbook.  Appendix S does not define what advanced software is, but when it occurs, Appendix S proposes that each instance of such software services can be classified as “Assistive, Augmentative, or Autonomous.”   In 2025 and now 2026, AMA has proposed multiple versions of a revised – and now very heavily revised – Appendix S.  

In this note, I’m not commenting on particular changes (which may be copyright or confidential) but on the overall themes and attendant problems.   The main references, then, are the current public Appendix S (fn 1).  I also recommend readers see a publication about Appendix S (fn 2) and a recent article by Dr. Richard Frank (fn 3).  AMA may later introduce a new coding category CMAA. (fn 4).

From my perspective as a physician-MBA working in strategy consulting, here are some of the important, but less-discussed challenges.

##

First, readers need a clean copy!

While the latest of many dense revisions belongs to AMA, I don’t think it violates confidentiality to say it is virtually unreadable: a dense mat of revisions, strikeouts, inserts, and more revisions atop one another.   At a minimum, AMA should also issue a CLEAN version of its most recent proposal at any given time.

 

Second... Start over??

The extent of revisions suggests that piling many layers of revisions on top of the original is no longer productive or optimal.  For example, the US Constitution is not a 99% redline of the Articles of Confederation; they wrote a new document.   It would likely be more fluent and lucid to simply write a new document and not have a hodgepodge of preserved word pairs or phrases under mountains of redlining.

Third... Living with A, A, and A.

We should address whether the continued use of an early idea, “assistive, augmentative, autonomous,” makes sense.  AMA attempts – and it’s nearly impossible – to address disjoint concepts at once, such as detection, parameter generation, interpretation, physician involvement, and machine-initiated actions.  

We may be better off calling things Software Services Type 1, Type 2, Type 3.   (This is exactly what Appendix S already does for the three divisions of Autonomous: numbered, not named, letting you shape the meaning exactly as you want.  Similarly, Category III codes are just called “that,” and then defined.)

Commandeering existing adjectives that don’t naturally fit together (but happen to have clever alliteration) may confuse more than help, by pulling in legacy meanings of the words chosen.

Fourth, Beta Testing as a Priority.

The project needs much more than a couple of examples out of the universe of software solutions.   Machines, and computer programs, need extensive “beta testing,” which in this case might be 20 or 30 examples fed into the most current revision.   Do 4 experts independently agree which of the 20 or 30 examples belong as subjects of Appendix S?  Likely not.  (AI is not defined, etc.)   How high is the independent reviewer agreement on how the 30 examples parse into 3 buckets (A1, A2, A3)?  Likely the kappa statistic for agreement would be distressingly low.  This kind of fire-testing or beta-testing points out the weak spots, to allow fixing.  But we have no data.  
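To make the agreement concern concrete, here is a minimal sketch of Cohen's kappa for two reviewers bucketing ten services into A1/A2/A3. The ratings are invented for illustration only; the point is that even substantial raw agreement (here 6 of 10) can yield an unimpressive chance-corrected kappa:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    cats = set(rater1) | set(rater2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in cats)     # agreement expected by chance
    return (p_o - p_e) / (1 - p_e)

# Hypothetical ratings: which bucket each of 10 example services falls into
r1 = ["A1", "A1", "A2", "A3", "A2", "A1", "A3", "A2", "A1", "A3"]
r2 = ["A1", "A2", "A2", "A3", "A1", "A1", "A3", "A3", "A1", "A2"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.39
```

A kappa of 0.39 is conventionally read as only "fair to moderate" agreement. An actual beta test of Appendix S would use real expert ratings, of course; this only shows what the measurement would look like.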

Fifth, a Decision Tree = Logic.

A logical, structured approach to using the concepts might be very helpful.  Take software X, and ask first: does it fit "augmentative"?   If yes, stop.   If not, then there are only two choices: is it assistive or autonomous?   Bringing algorithmic structure to the usage of the appendix would help.
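As a minimal sketch of what I mean by algorithmic structure, the two-question flow above could be written as a tiny function. The boolean questions here are placeholders of my own, not AMA's actual criteria.

```python
def classify_appendix_s(is_augmentative: bool, machine_initiates: bool) -> str:
    """Sketch of the proposed decision tree; the yes/no questions are
    hypothetical stand-ins, not AMA's official Appendix S definitions."""
    # Step 1: test the middle category first, as suggested above.
    if is_augmentative:
        return "augmentative"
    # Step 2: only two choices remain.
    return "autonomous" if machine_initiates else "assistive"
```

The point is not the code itself but the discipline: each question is asked once, in a fixed order, and every piece of software lands in exactly one bucket.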

Sixth, No Defining AI - Was That Really a Good Idea?

And this circles back to the beginning.  Appendix S purports to subtly categorize and subcategorize certain types of software-intensive services that qualify for this treatment.

We are asked to assume, and hope for, high precision in the results for some cloudy universe of software-intensive services like AI and machine learning, while Appendix S never defines what does or doesn't come into its domain in the first place.

Yet we expect Appendix S rules to work on all those inputs, precisely and consistently.   That may be crazy and lead to all kinds of downstream problems that we aren't yet foreseeing.

___

I believe that Appendix S was designed originally for software-only services.  If so, we need to codify that.  We should state clearly that physical services (like whole genome sequencing) that may use a lot of AI remain coded based on the physical service component (81425, genome) and are not pulled onto CMAA by the AI within.   For example, WGS may yield a variant of unknown significance, VUS (C>T), which is resolved as benign by AI proteomic software that validates there is no protein configuration change.  I would expect the service is still WGS with resolution of variants, billed as 81425.  I would be chilled if the WGS service got extracted to a payor-not-recognized code over in CMAA, because the service contains AI in part.  At some point, Appendix S should make this clear: the physical part of the code governs coding, and a folded-inside AI component does not drag everything onto the CMAA listing.

 

 ___

 

1.  https://www.ama-assn.org/system/files/cpt-appendix-s.pdf

2. Frank, 2022, PMID 36463327.  https://pmc.ncbi.nlm.nih.gov/articles/PMC9719561/ 

3. https://www.frankhealthcareadvisors.com/post/cpt-appendix-s-the-missing-link-between-ai-innovation-reimbursement-1 

4.  https://www.ama-assn.org/practice-management/cpt/cpt-codes-offer-language-report-ai-enabled-health-services   [CMAA]

Wednesday, April 15, 2026

CMS Inpatient Proposed Rule: Canning the Fast Track to NTAP (!)

In 2020, the first Trump administration proposed fast-track routes to inpatient NTAP (new technology add-on payment) and a fast track to coverage for breakthrough devices (under MCIT).   MCIT was replaced, under Biden, with TCET, an inferior plan.

Now, the Trump administration itself threatens to nix the fast track methods to NTAP.  See AI discussion below.

Find the CMS source material via here.  See an article by Manatt's Ross Margulies here.  Coverage at STAT here.



##

AI CORNER

###

The FY 2027 IPPS proposed rule contains a significant, and for many stakeholders unwelcome, reversal in Medicare innovation policy: CMS proposes to repeal the alternative pathway that has made it easier for certain FDA-designated technologies to qualify for inpatient new technology add-on payments (NTAP), and it proposes a parallel repeal for the outpatient device pass-through pathway as well. The current alternative pathway was created in the Trump administration through the FY 2020 and FY 2021 IPPS/LTCH PPS final rules for inpatient NTAP, and the CY 2020 OPPS/ASC final rule for outpatient device pass-through. Under that pathway, Breakthrough Devices, QIDP drugs, and later LPAD drugs are treated as not substantially similar to existing technology and do not have to separately prove the usual Medicare standard of substantial clinical improvement, although they still must meet the other applicable regulatory requirements. CMS now proposes that, beginning with FY 2028 NTAP applications, all applicants would have to meet the same ordinary standards, while technologies already approved, and technologies already under review for FY 2027, would be grandfathered and remain eligible under the existing pathway.

Tuesday, April 14, 2026

CMS Releases Inpatient Proposed Rule

 CMS has released the Inpatient Proposed Rule.   Find it here:

https://www.federalregister.gov/documents/2026/04/14/2026-07203/medicare-program-hospital-inpatient-prospective-payment-systems-for-acute-care-hospitals-ipps-and

It's in Fed Reg, April 14, 2026, 91:19312 (576pp).  16 mb.

Note: This downloads for me as a locked copy I can't annotate.  However, you can "extract all pages" to a new document and re-save it, and it will be OK for markup.

See fact sheet:

https://www.cms.gov/newsroom/fact-sheets/fy-2027-hospital-inpatient-prospective-payment-system-ipps-long-term-care-hospital-prospective

Stripping Breakthrough from NTAP

CMS proposed to make it harder for new antibiotics and for breakthrough devices to meet NTAP (new tech add on payment) criteria.  See discussion by Ross Margulies.  See my blog on the topic.

See press release on the joint program:

https://www.cms.gov/newsroom/press-releases/cms-improve-patient-care-experience-lower-costs-hip-knee-ankle-replacements

There's quite a bit of discussion of an expanded national joint bundled care program, and for those of us dismayed with the 14-day rule regarding genomics (such as CGP), note that the joint surgery demonstration would include lab tests for up to 90 days (as well as many other services up to 90 days).  See p. 19678-79.  However, it doesn't withhold line-item payment; it just holds hospitals responsible for high and low costs in their penalties and bonuses, much like in ACOs now.

###

AMA Releases the Biggest Batch of PLA Applicants Ever

Just two weeks ago, AMA released 29 new and finalized PLA codes, April 1 (here).

Here's the PLA home page:

https://www.ama-assn.org/practice-management/cpt/cpt-pla-codes

According to the calendar there, AMA releases draft codes April 14, wants feedback by April 21, and will finalize by April 21.  The codes are then voted on by the Editorial Panel regular spring meeting (April 30-May 1).

Here is the code list, which reiterates that comment is due by April 21:

https://www.ama-assn.org/system/files/may-2026-pla-public-agenda.pdf

I count over 100 new codes (circa 105).

Historically, PLA codes voted on by AMA CPT around May 1 are included in the summertime CMS crosswalk-gapfill meetings.

##
The next application date is June 9 for codes finalized October 1.

Monday, April 13, 2026

Friends of Cancer Research: 3-Hr Webinar on "External Control Arms"

 When you have a one-arm trial, you need a comparison group - the general problem presented by "External Control Arms."

Catch up with the newest thinking via a 3-hour webinar hosted by Friends of Cancer Research (streamed live on April 7, 2026).

Find it on YouTube here:

https://www.youtube.com/watch?v=fh7y3J2xyYY

Find the project home page at FoCR here:

https://friendsofcancerresearch.org/eca/

Below, Chat GPT summarizes the streaming auto-transcript.

###

On a personal note, I've seen several examples over the years where an entity used a one-arm real-world study and an "external control arm" of "administrative controls": patients in the same health plan with some similar diagnoses become "the control arm."   And there can be various kinds of risk-balancing, aka propensity score matching.   Often the test group did so much better that it was hard to attribute to the intervention and easier to attribute to a healthier population to start with.   (This rarely gets buy-in from the inventor of the intervention.)   Personally, I found this the best explanation for the 2016 CMMI diabetes prevention project at YMCAs (Alva et al.), which used a propensity- or diagnosis-matched Medicare population.   As I recall, even having 1 YMCA class was associated with big health gains in the following year.   To me, this violated common-sense dose-response expectations and left behind the suspicion that Medicare folk at YMCAs and tennis clubs might be healthier than the average beneficiary.   But Alva et al. would be an "external control arm," as FoCR will discuss.

###

AI CORNER

##

Here is a detailed report on the Friends of Cancer Research webinar, “Application of External Control Arms in Oncology Drug Development,” held April 7, 2026.

Summary

This April 7, 2026 Friends of Cancer Research webinar showed a field moving from enthusiasm about external control arms to a more disciplined focus on fit-for-purpose data, prespecified methods, documentation, and early FDA engagement. The meeting’s core message was that ECAs are not a shortcut around rigorous evidence standards, but a practical tool when randomized trials are infeasible, especially in rare cancers. Speakers highlighted encouraging pilot work across multiple data partners, while stressing persistent challenges in missing data, endpoint alignment, population comparability, and regulatory predictability. Overall, the webinar suggested real progress in oncology ECA methods, but not a settled playbook for routine regulatory use. 

###