Sunday, May 3, 2026

RFK Blog 02: Chat GPT Assesses the Seven RFK-Hill Transcripts (April 2026)

 

RFK Jr.’s Seven-Hearing Hill Marathon: April 16–22, 2026

Blog 01 - I asked Chat GPT to find all 7 hearings and identify transcripts. Blog 01 is all the source material for Blog 02.

Blog 02 - Here - I ask Chat GPT to write about the 7 documents, based on 200,000 words in 7 transcripts (see Blog 01, including ZIP file).

##
RFK Jr.’s Seven-Hearing Hill Marathon: April 16–22, 2026

Robert F. Kennedy Jr.’s late-April swing through Congress was not one hearing but a seven-stop stress test of his second-year leadership of HHS. The hearings ranged across House and Senate authorizers and appropriators: Ways & Means, House Appropriations, Energy & Commerce, Senate Appropriations, Senate Finance, and Senate HELP. The master file lists the seven events, their dates, committees, topics, and video links; despite the file title’s “April 14 to April 22” framing, the seven hearings documented in the source set begin on April 16 and run through April 22.

The result was a compressed portrait of Kennedy’s HHS: ambitious, disruptive, rhetorically potent, and politically combustible. His message was consistent. America is sick; the old health system is broken; HHS has been captured by profit-driven and status-quo institutions; chronic disease, nutrition, fraud, rural hospital decline, drug prices, and bureaucratic friction must be attacked at the root. He repeatedly framed the department’s agenda as a generational correction, including food policy, prior authorization, rural health, faster FDA approvals, ultra-rare disease programs, drug pricing, and fraud enforcement. In his own Ways & Means opening, he said the country was at a “generational turning point,” that children were the “sickest generation in modern history,” and that HHS was replacing policies that fueled chronic disease with policies that put Americans’ health first.

But the hearings also showed the limits of that message. Democrats repeatedly focused on vaccines, measles, CDC changes, NIH and CDC cuts, Medicaid reductions, contraception, grant terminations, and whether Kennedy’s public-health worldview had moved from skepticism into governance. Republicans, by contrast, often treated him as the rare cabinet officer who was willing to say the system was rotten and needed to be rebuilt. The result was a week in which Kennedy’s strengths and weaknesses were almost identical: he was most effective when talking broadly about incentives, chronic disease, rural care, and fraud; he was weakest when cornered on precise yes/no policy commitments, technical details, or scientific claims.



Chat GPT Writes Complex Memo on Policy History: Twenty Years of Date of Service Rules

 

Memorandum on the Medicare Laboratory Date of Service Rule

Executive summary

 

This memorandum was written by Chat GPT in its “deep research” mode, after I gave it brief remarks on some of the key dates to look for (1997, 2001, 2006).

I would add an explanation of how outpatient hospital patients have been handled:

Outpatient Hospital Specimens

1) Before 2006, tests on specimens from hospital outpatients, sent to outside labs, were generally billed by the outside lab to its own local MAC.

2) The 2006 rule abruptly changed this, requiring billing by the hospital only.  This raised the risk that the hospital's local MAC might not cover, or might inappropriately price, a test run by a distant lab when billed by the local hospital.

3) In 2013, CMS bundled most routine lab tests into hospital outpatient events (like surgery or an E.R. visit).  However, CMS stated that molecular pathology tests were payable, but only if billed by the local hospital to its local MAC, with the patient's biopsy date as the DOS.

4) In 2017, CMS stated that molecular pathology tests could be billed by whichever lab performed them, using the date of performance as the DOS.  Hospital labs would bill using their own date of test performance, and outside reference labs would bill their own MAC using their date of test performance.
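The four eras above can be condensed into a small lookup. This is my own illustrative sketch of the simplified timeline in points 1-4, not actual CMS claims logic; the function name and return fields are hypothetical, and real DOS policy (42 CFR 414.510) has many more branches.

```python
def outpatient_lab_billing(year: int, molecular: bool) -> dict:
    """Who bills, to which MAC, on what date of service, for a test on a
    hospital outpatient specimen -- per the simplified 4-era timeline above."""
    if year < 2006:
        # Pre-2006: the outside lab billed its own local MAC.
        return {"biller": "outside lab", "mac": "lab's MAC", "dos": "collection date"}
    if year < 2013:
        # 2006 rule: the hospital must bill, even for tests run by a distant lab.
        return {"biller": "hospital", "mac": "hospital's MAC", "dos": "collection date"}
    if year < 2017:
        # 2013: routine tests bundled into the outpatient event; molecular tests
        # payable only via the hospital, on the patient's biopsy DOS.
        if molecular:
            return {"biller": "hospital", "mac": "hospital's MAC", "dos": "biopsy date"}
        return {"biller": "hospital (bundled)", "mac": "hospital's MAC", "dos": "bundled"}
    # 2017: the performing lab bills its own MAC, using the date of performance.
    if molecular:
        return {"biller": "performing lab", "mac": "lab's MAC", "dos": "performance date"}
    return {"biller": "hospital (bundled)", "mac": "hospital's MAC", "dos": "bundled"}
```

For example, a 2015-era molecular test resolves to the hospital billing its own MAC on the biopsy date, while the same test in 2020 resolves to the performing lab billing its own MAC on the performance date.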

The rest of this memo is written by Chat GPT in “deep research” mode.

 

The operative congressional push for national laboratory uniformity, requested by industry, came from the 1997 Balanced Budget Act[1]. The required negotiated-rulemaking process began in 1998, produced a proposed rule in March 2000, and culminated in a November 23, 2001 final rule from CMS.  That 2001 rule, not the 2006 rule, is where CMS first selected specimen collection as the default national date of service for laboratory claims. CMS implemented the policy in 2002 program memoranda, clarified the archived-specimen concept in 2005, and codified the modern DOS framework in 42 CFR 414.510 in December 2006, effective January 1, 2007. Strictly speaking, the “14-day rule” is only one branch of the broader Date of Service policy, not the whole policy. [2]

The strongest modern reform case is not that CMS acted irrationally in 2006. It is that the rule was built for a late-1990s uniformity and anti-unbundling problem, while today’s oncology market relies far more heavily on post-discharge molecular decision support. In 2006 CMS focused on relatively narrow stored-specimen and chemo-sensitivity scenarios and repeatedly emphasized fear of unbundling hospital care. In 2017 CMS itself acknowledged that the existing DOS policy could delay outpatient molecular testing, then created a direct-billing outpatient exception for qualifying molecular pathology tests and Criterion A ADLTs, while expressly declining to extend the same logic to inpatient specimens because of DRG/IPPS ratesetting implications. That means any serious stakeholder effort to change inpatient policy must combine the clinical-access argument with a narrow test list, anti-gaming safeguards, and a credible answer on inpatient payment policy. [3]

RFK Blog 01: Chat GPT Saves Hours: Finding the 7 RFK Jr Hearings

Blog 01 - HERE - I asked Chat GPT to find all 7 hearings and identify transcripts.

Zip file of the 7 transcripts at Google Drive here.  For example, upload the transcripts into your AI and ask it questions (e.g., "Did the meetings discuss telehealth?").

Blog 02 - I ask Chat GPT to write about the 7 documents.
https://www.discoveriesinhealthpolicy.com/2026/05/rfk-blog-02-chat-gpt-assesses-seven.html


###

Over the past few weeks, the Secretary of Health, RFK Jr., appeared before seven different Hill committees, in sessions covered at variable levels of detail by the mainstream media and the health journalism media.

I put Chat GPT in "Deep Research" mode and asked it to find all seven hearings, list them by date, and wherever possible provide links to archived video of each whole hearing.  At many sources, like YouTube, those archived hearings will have auto-transcript text available too.

Here's the result.  We summarize the 7 hearings and locations first, then a deeper dive follows.

Friday, May 1, 2026

CMS Announces Summer CLFS Pricing Meetings + Special PAMA Pricing Meeting

CMS has posted dates for its regular summer meetings for new lab code pricing.  In addition, CMS has planned a September two-day meeting for pricing those codes with missing or anomalous results in the current PAMA pricing survey.


Public Comment June 10

CMS will hold its public comment meeting for pricing new lab tests for 2027 on June 10, 2026.  That's quicker than the normal late-June date.  Submit your comments by May 29.

Normally, the summer pricing cycle has about 100 new codes (mostly PLA codes).  This year, the number may be closer to 200 new codes, because of a bumper crop of PLA code applications being handled by AMA CPT right now (April-May 2026).

https://www.govinfo.gov/content/pkg/FR-2026-05-01/pdf/2026-08511.pdf

CMS Experts July 14-15

CMS convenes its expert panel to discuss each new code and recommend a price (or gapfill) for each.  You can watch by livestream.

https://www.govinfo.gov/content/pkg/FR-2026-05-01/pdf/2026-08512.pdf

No Meeting: CMS Releases Proposed Prices: 1H September

During 1H September, CMS will release its proposed prices, having taken account of the public meeting and the expert meeting.

Third Meeting: PAMA Pricing Problems, September 15-16

CMS will hold a meeting to discuss how it should price tests that were part of the PAMA data collection process (codes activated by second quarter 2025), but for which no data was submitted in the current May-June-July PAMA data survey.  This is September 15-16, with a registration window of August 3 to August 31.  CMS will not know which tests had no PAMA data until after the close of the PAMA submission process, May 1 - July 31. (Read more about the data collection process itself here.)

https://www.govinfo.gov/content/pkg/FR-2026-05-01/pdf/2026-08513.pdf




Wednesday, April 29, 2026

Can AI Understand Medicare Policy? Can It Explain the Three-Day Rule (for DRGs)?

Can Chat GPT explain gnarly, tortuous Medicare policies?  One example is the 3-day rule.  Services furnished to a patient by a hospital are bundled for 3 days backward from the admission date.  This includes time in the emergency room, but potentially quite a bit more.  It's a tricky policy, and I asked Chat GPT to research and explain it.



###

Medicare’s three-day rule sounds obscure, but it shapes hospital billing for labs, imaging, and some outpatient services before admission. This short essay traces its post-DRG history, the 2010 expansion to nondiagnostic services, Condition Code 51, ownership boundaries, and why legal structure—not clinical logic—often drives payment.

###

Medicare’s three-day payment window is one of those rules that sounds like a technical billing footnote but actually teaches a great deal about how Medicare thinks about hospitals. The basic idea is familiar: if a Medicare beneficiary receives outpatient diagnostic services at a hospital—say, a CT scan, laboratory tests, or imaging studies—and is then admitted to that same hospital within the payment window, those outpatient services are not separately paid but are swept into the inpatient DRG claim. Historically, this was not part of the original 1983 DRG/IPPS statute. The modern three-day rule traces mainly to OBRA 1990, which expanded an older same-day policy into a broader preadmission payment window. CMS describes the rule today as applying to services furnished by the admitting hospital, or by an entity wholly owned or wholly operated by that hospital, during the three calendar days before admission for IPPS hospitals, or one day for certain non-IPPS hospitals. (Centers for Medicare & Medicaid Services)

The first core rule is that preadmission diagnostic services are bundled very broadly. This includes clinical laboratory tests and imaging, and the policy does not turn on whether the diagnostic service later proves to be clinically related to the admission. If the patient has a CT scan Tuesday and is admitted Wednesday to the same hospital, the CT scan is effectively wrapped forward into the inpatient stay. The policy is therefore not just a clinical-relatedness doctrine; for diagnostics, it is a payment-boundary rule. It prevents hospitals from carving out preadmission diagnostic workups as separately payable outpatient services immediately before an inpatient DRG stay.

The less familiar twist came in 2010, when Congress addressed nondiagnostic outpatient services. The relevant law was the Preservation of Access to Care for Medicare Beneficiaries and Pension Relief Act of 2010, Pub. L. 111-192, section 102. CMS says this law did not change the diagnostic-service rule, but it did change the handling of admission-related nondiagnostic services. Before 2010, nondiagnostic services were treated as related largely through a diagnosis-code matching approach. After 2010, the rule became broader: outpatient nondiagnostic services furnished during the window are generally treated as related to the admission unless the hospital determines and attests that they are clinically unrelated. CMS later summarized the rule as requiring hospitals to include on the inpatient claim all outpatient diagnostic services and admission-related outpatient nondiagnostic services furnished within the window. (Centers for Medicare & Medicaid Services)

That 2010 change creates a practical distinction. For diagnostics, the rule is essentially automatic. For nondiagnostic services, it is a rebuttable presumption. Suppose a patient receives outpatient physical therapy on Monday for chronic osteoarthritis, then is in a car accident on Wednesday and is admitted to the trauma service. It would make no clinical sense to bundle the arthritis PT into the trauma admission. Under the post-2010 system, the hospital can treat those nondiagnostic services as unrelated, but it must be able to support that determination. Operationally, the hospital uses Condition Code 51, formally an attestation of unrelated outpatient nondiagnostic services. CMS guidance says unrelated nondiagnostic services are not subject to the window, and the hospital should retain documentation supporting the unrelated determination. (Centers for Medicare & Medicaid Services)

This is where the rule becomes operationally interesting. Condition Code 51 is not modifier 51, which is a CPT modifier for multiple procedures. It is a condition code on the institutional claim. In practice, coding or revenue-cycle staff make the determination, often under HIM or compliance protocols, and may escalate gray cases. The claim is then processed with the attestation; CMS does not generally adjudicate the clinical relationship in real time. The system is essentially pay now, audit later. The hospital’s separate outpatient claim survives unless later challenged by a MAC, RAC, or other reviewer. This means the 2010 rule is less like a purely algorithmic claims edit and more like hospital billing jurisprudence: a broad presumption, a self-attested exception, and post-payment audit risk.

Ownership is the other key boundary. The rule clearly applies when the preadmission services are furnished by the same hospital. It also applies when services are furnished by an entity that is wholly owned or wholly operated by the admitting hospital. That phrase matters. A hospital-owned outpatient department or wholly controlled entity can be swept into the rule. But loose affiliations, referral alliances, branding relationships, clinically integrated networks, or academic partnerships generally do not create the same result if they fall short of ownership or operation. Thus, a CT scan at Hospital A followed by admission to Hospital A is bundled; a CT scan at a legally separate Hospital B followed by admission to Hospital A may be separately paid, even if the two institutions cooperate closely. CMS and MAC guidance repeatedly frame the rule around the admitting hospital and wholly owned or wholly operated entities, not around general affiliation. (Centers for Medicare & Medicaid Services)
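The mechanics above (the calendar-day window, the diagnostic versus nondiagnostic split, the Condition Code 51 attestation, and the wholly-owned boundary) can be condensed into a toy predicate. This is a simplified sketch under the assumptions just described, not an actual claims edit; the function and parameter names are hypothetical.

```python
from datetime import date, timedelta

def in_payment_window(service_date: date, admit_date: date,
                      same_or_wholly_owned: bool,
                      diagnostic: bool,
                      attested_unrelated: bool = False,
                      window_days: int = 3) -> bool:
    """Rough test of whether a preadmission outpatient service gets bundled
    into the inpatient DRG claim under the three-day payment window."""
    # Ownership boundary: only the admitting hospital, or an entity wholly
    # owned or wholly operated by it, is swept into the window.
    if not same_or_wholly_owned:
        return False
    # Calendar-day window: the 3 days before admission for IPPS hospitals
    # (1 day for certain non-IPPS hospitals).
    if not (admit_date - timedelta(days=window_days) <= service_date <= admit_date):
        return False
    if diagnostic:
        return True   # diagnostics bundle automatically, related or not
    # Nondiagnostic: presumed related, unless the hospital attests otherwise
    # (Condition Code 51 on the institutional claim; pay now, audit later).
    return not attested_unrelated
```

So a CT scan Tuesday followed by admission Wednesday at the same hospital is bundled, while Monday's arthritis PT escapes the Wednesday trauma admission only via the unrelated-service attestation.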

The policy lesson is that Medicare’s three-day rule follows legal and billing structure, not always clinical reality. It was grafted onto the DRG system after DRGs had already been created, as an anti-unbundling rule. It expanded over time from a same-day concept, to a three-day diagnostic window, to a broader 2010 framework for nondiagnostic services. For laboratories and hospital strategy, the rule is a reminder that inpatient and outpatient payment boundaries are often artificial but financially decisive. A test performed just before admission may be clinically identical to one performed elsewhere, but its payment treatment can turn on timing, ownership, site of service, and claim coding. In that sense, the three-day rule is a small Medicare rule with a large teaching value: it shows how DRG payment, outpatient bundling, hospital ownership, revenue-cycle attestation, and audit risk all intersect in the ordinary life of a hospital bill.

Chat GPT Explains Its Understanding of Where AMA Appendix S (AI Services) Comes From

For a year, AMA has been extensively revising its Appendix S, which is guidance for categories and coding of software-intensive services like AI.

See the current Appendix S here.  See a peer-reviewed article on its goals here (PMC9719561).  See an online article by its co-author, Dr. Frank, November 2025, here.

AMA has multiple bodies - like the Digital Medicine Payment Advisory Group - which can be confusing to outsiders.  I asked Chat GPT 5 to explain how it all works.  So far as I can tell, its feedback is correct.  Let me know of any errors and I will update.

###

Here is Chat GPT's understanding:

The AMA’s digital medicine coding structure appears to have evolved in two stages. 

First Stage

First, in 2017, AMA formed the Digital Medicine Payment Advisory Group (DMPAG) after identifying, in 2016, a gap in expertise around digital medicine. DMPAG was designed as a broad advisory group to AMA, drawing on physicians, industry experts, consultants, regulatory expertise, and other qualified health care professionals. Its work has focused on barriers to digital medicine adoption and on proposed solutions involving coding, payment, coverage, clinical data, use cases, and AI payment frameworks. (American Medical Association)

Appendix S

Appendix S itself originated from this broader DMPAG ecosystem. AMA has stated that Richard Frank, MD, PhD, a member of the CPT Editorial Panel and former Siemens Healthineers chief medical officer, co-chaired the AI Working Group of the AMA-convened DMPAG, and that this working group “generated the content for Appendix S.” Appendix S is the CPT appendix that supplies the AI taxonomy—assistive, augmentative, autonomous—to describe the relationship between machine work and physician/QHP work in AI-enabled services. (American Medical Association)

Second Stage - DMCC

The second stage was the creation of the Digital Medicine Coding Committee (DMCC). AMA states that the CPT Editorial Panel created the DMCC in 2024. Its charge is not to own CPT policy independently, but to respond to requests from the CPT Editorial Panel and support the Panel on coding issues involving digital medicine and/or artificial intelligence by providing or obtaining subject-matter expertise. AMA is explicit that the CPT Editorial Panel retains sole authority to create, revise, and update CPT codes, descriptors, and applicable guidelines. (American Medical Association)

[DMCC may seat experts with expertise not found on the AMA Editorial Panel itself - BQ]

Thus, my understanding is that the DMCC comments on, advises on, and helps shape Appendix S revisions, but does not formally “own” Appendix S. The governing authority is the CPT Editorial Panel. The practical drafting process appears to be driven by AMA CPT staff and subject-matter leaders, especially Dr. Frank, with DMCC input, and then routed back through the CPT Editorial Panel process for review and approval.

Current relevant roles are as follows. Christopher Jagmin, MD, is the current chair of the CPT Editorial Panel; Barbara Levy, MD, is listed as vice chair. Richard Frank, MD, PhD, is currently listed as a member of the CPT Editorial Panel and is a leading architect of the Appendix S AI taxonomy. Mark Synovec, MD, is a pathologist, former chair of the CPT Editorial Panel, and currently serves as co-chair of the AMA Digital Medicine Coding Committee with Dr. Frank. (American Medical Association) (American Medical Association)

In practical terms, the quarterly Appendix S revisions appear to be a CPT Editorial Panel-controlled iterative process: draft revisions are circulated, the DMCC and stakeholders provide input, Dr. Frank/Synovec and AMA staff help refine the language, and the material goes back to the CPT Editorial Panel. The DMCC is therefore best understood as the expert coding advisory body, while the CPT Editorial Panel is the formal decision-making authority.

###

This may be a shareable link.

Sunday, April 26, 2026

Two Articles on the Medicare CRUSH Anti-fraud Effort

Much has been written about CRUSH in the past month; here are two more articles.

American Journal of Managed Care covers Medicare fraud hearings.  From Giuliana Grossi, April 24.

https://www.ajmc.com/view/-fraud-pays-congressional-hearing-exposes-deep-cracks-in-medicare-s-defenses

See also an essay at the consultancy ADVI, Stacey Gilbert, March 17.

https://advi.com/insight/crush-and-genetic-molecular-laboratories-a-potential-shift-in-medicare-oversight/

AI Summary - 

The two articles frame Medicare fraud as both an immediate enforcement crisis and a future policy-risk issue for molecular labs. The AJMC article summarizes an April 2026 House Ways and Means hearing where witnesses argued that Medicare’s “pay-and-chase” design enables fraudulent providers to enroll easily, bill rapidly, and evade front-end scrutiny. Hospice fraud was a major example, including sham sites and patients improperly enrolled in hospice, with real clinical harm. 

Witnesses urged stronger provider vetting, license verification, beneficiary claim notices, AI analytics, and better data-sharing.

The ADVI article focuses on CMS’s proposed CRUSH framework, emphasizing that it is not yet a rule or enforcement program, but an RFI exploring future oversight. It links CMS concern to rising 2024 Part B lab spending, especially genetic and molecular tests, and warns labs to prepare for scrutiny of utilization patterns, medical necessity, panels, reflex testing, and referral relationships.


Friday, April 24, 2026

Six Powerful Ways AI Helped Me Understand AMA CPT AI Policy (Appendix S)

 How AI Has Greatly Helped Me Understand AMA CPT “APPENDIX S” For AI

For over a year, AMA has been working on a heavy overhaul of its Appendix S, the coding and classification policy for software-intensive services like AI.  At many points, the journey has been a confusing one, trending toward better revisions even as the revisions have come to occupy most of the original document.

All year, and even right now, I’m getting huge benefits by using AI to understand what is happening.

1.  COPY COMPARE.   Chat GPT has done an outstanding job of comparing revision versions, and offering reasonable summaries of the results and perceived authors’ goals.

2.  CLEAN COPY.  The versions are so massively marked-up I find them almost unreadable.  Just by asking, Chat GPT was able to give me a clean copy with all the generations of changes accepted.  Far easier to read.

3.  LOGIC TREE.   At an April 16 workshop, AMA said it may try at some point to make a “logic tree” or “flow chart” for software code classification.  I asked Chat GPT to try to do so, today, and in a couple of minutes it gave me the results, including where a logic tree is possible and where it fails.  Knowing that could help edit the document.  Timeline: 5 minutes instead of several months.

4.  BETA TESTING.   Also on April 16, AMA said that later in the year, it may try to “fire test” or “beta test” whether the new Appendix S leads to consistent and clear classifications.  (But see point 3.)  Chat GPT volunteered to find 10 or more use cases (it found many on Category III rosters) and try to fit them into the Appendix S template.  Result: sometimes it worked, sometimes it failed, and it explained how and why.  Timeline: 5 minutes instead of several months.

5.  DEALING WITH CONCEPTUAL PROBLEMS.   The appendix doesn’t define AI or ML and it’s not always clear what services should be brought into and through Appendix S and which not.  Chat GPT offered possible remedies with model text. 

6.  REWRITING THE WHOLE DAMN THING.   I asked it to rewrite a new Appendix S, not by redlining the 2022/23 version, but starting from a white page.  It did so, and took various individual paragraphs through extra versions at my guidance.  Timeline: 5 minutes instead of 6 hours by hand.

In short, there were myriad ways that AI itself was super-helpful on the Appendix S journey.
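As an illustration of the "logic tree" idea in point 3, a minimal decision tree for the Appendix S taxonomy might look like the sketch below. The three yes/no questions are my own paraphrase of the assistive/augmentative/autonomous distinctions, not AMA's official criteria; the "unclassified" branch shows exactly where a pure logic tree fails and human editing is still needed.

```python
def classify_ai_service(machine_detects_data: bool,
                        machine_analyzes_or_quantifies: bool,
                        machine_concludes_without_physician: bool) -> str:
    """Toy logic tree, walked from most to least machine independence.
    The questions are a paraphrase of the Appendix S taxonomy, not AMA text."""
    if machine_concludes_without_physician:
        return "autonomous"    # machine interprets and draws conclusions itself
    if machine_analyzes_or_quantifies:
        return "augmentative"  # machine analyzes/quantifies; clinician interprets
    if machine_detects_data:
        return "assistive"     # machine detects or flags data; clinician does the work
    return "unclassified"      # falls outside the tree; needs a human judgment call
```

The interesting output for an editor is the last branch: any real use case that lands in "unclassified" marks a gap in the taxonomy's definitions.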



##

Appendix S as-is

https://www.ama-assn.org/practice-management/cpt/cpt-appendix-s-ai-taxonomy-medical-services-procedures

2025 Revisions article by AMA panelist/author

https://www.frankhealthcareadvisors.com/post/cpt-appendix-s-the-missing-link-between-ai-innovation-reimbursement-1

App S anew.

Thursday, April 23, 2026

Chat GPT Assessing an Important Court Case for Labs on Medical Necessity

For years, labs have had variable success during audits (or during prior authorization), depending on how much original physician-side chart documentation must be procured by a lab.  

Notably, MolDx published an article stating that a physician's order (often with ICD-10 codes, etc.) is an integral and valid part of the medical record as a whole.  A59798.  My blog here.

A recent court case revisits this.  See the case online here, 36pp.   See expert discussion by Jampol et al at Epstein Becker Green here.

###

Here is an AI assessment of the documents.

###

AI CORNER

###

The First Circuit’s OMNI v. MD Spine decision is important for labs because it supports a practical rule: a laboratory may generally rely on a physician’s test order as evidence of medical necessity, absent red flags or lab misconduct. 

That matters in audits, where labs are often pressed to obtain underlying physician chart notes, not just orders. The case does not give blanket immunity, but it helps distinguish the lab’s billing responsibility from the physician’s underlying clinical judgment.

###

Amazing: AI Can Clean Up the "APPENDIX S" AI Policy for AMA CPT Meeting

Several years ago, AMA CPT introduced "Appendix S," a policy statement about language and coding for software-intensive services.  From May 2025 to the upcoming May 2026 CPT meeting, AMA has released multiple very extensive revisions of "Appendix S," one of which will probably be voted on at the May 1 Boston AMA CPT meeting.

  •  See a 2021 LinkedIn announcement here.  See a 2022 publication about the work and its goals here.  See a 2025 online article by Dr. Frank here.

One recurring problem: AMA releases each new version of Appendix S as an extremely heavy markup of the prior draft. Frankly, I find it all but unreadable. It looks like an ordinary MS Word redline, but it isn’t. You cannot simply click “accept all changes” and produce a clean copy.

Over the last several months, I’ve given successive versions of Appendix S to ChatGPT for assessment and comparison, which has been very helpful. But today I tried something different. I gave it the heavily marked-up mid-April version of Appendix S and asked it to return a clean version. It did so, producing a new, downloadable Word .docx file.

I can’t share the document itself, because access is controlled by AMA CPT and subject to confidentiality restrictions. However, I can show you how ChatGPT interacted with me during the cleanup process, as it turned an almost unreadable markup into a usable clean copy.

##

Here's what the AMA review document looks like (ouch!):


Here's my dialog with Chat GPT:

If possible, produce a clean copy of this redlined document with changes accepted. Note it is not a normal Word document where changes can be quickly accepted within Word. Producing a clean copy of it will require interpreting the redline markup as it is and then producing a clean copy, if you can. let me know if dividing it in parts (e.g. 3 equal parts) would help.

##

I’m going to inspect the uploaded .docx directly and reconstruct a clean version from the visible redline text, then I’ll give you a revised file to download. If the markup turns out too tangled in a few spots, I’ll still produce the cleanest defensible version and flag any lines that may need a human check.

I’ve confirmed the file is readable as a DOCX, but the visible “redline” is largely baked into the body text rather than behaving like ordinary Track Changes. I’m now reconstructing the accepted version paragraph by paragraph and then I’ll render the result to make sure the layout stayed clean.

I found one useful wrinkle: the document includes an older comparison table that is entirely struck through, so it should disappear in the accepted copy. I’m removing that deleted table, tightening a few places where accepted/deleted fragments left awkward phrasing, and then I’ll QA the rendered pages.

[Thought for 5 minutes 47 seconds]

I produced a clean accepted copy from your uploaded redline document

[copy of upload link here]

Download the clean DOCX [this is also a cloud link]

A couple of notes:

  • I treated the visible markup as acceptance of the inserted text and removal of struck text.
  • I also removed the fully struck comparison table that appeared to be deleted content.
  • One table row still flows onto a final short page; the content is intact, just split across pages.

If you want, I can next do a second-pass version in 3 sections to polish any judgment-call wording line by line.
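For readers curious what such a cleanup looks like mechanically, here is a minimal stdlib sketch. It assumes the "baked-in" redline marks deletions as run-level strikethrough inside the docx's word/document.xml (the real AMA markup is messier and required judgment calls); the function is a hypothetical illustration, not what ChatGPT actually ran.

```python
import xml.etree.ElementTree as ET

# WordprocessingML namespace used throughout a docx's document.xml.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def accept_redline(document_xml: str) -> str:
    """Return cleaned WordprocessingML: runs formatted with strikethrough
    are treated as deleted content and removed; everything else is kept."""
    root = ET.fromstring(document_xml)
    for para in root.iter(f"{W}p"):
        for run in list(para.findall(f"{W}r")):
            props = run.find(f"{W}rPr")
            if props is not None and props.find(f"{W}strike") is not None:
                para.remove(run)  # struck-through run = deleted text
    return ET.tostring(root, encoding="unicode")
```

In practice you would pair this with the zipfile module to extract word/document.xml from the .docx container, clean it, and write it back; and a second pass would still be needed for inserted-text formatting and for wholly struck tables like the one described above.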



CMS Announces "RAPID" Coverage for Breakthrough Devices - More to Follow

CMS Announces RAPID Pathway for Breakthrough Devices;  "Cuts Red Tape"

On April 23, CMS issued a press release announcing a new CMS-FDA collaboration for Breakthrough Devices, called the RAPID Pathway.  The pathway will RAPIDly see the release of a proposed NCD, for 30 days of comment, concurrent with FDA approval.  CMS will "pause" the TCET NCD-CED pathway ("Transitional Coverage for Emerging Technologies") to focus on RAPID.

We'll know more about the RAPID program when a full description is published in the Federal Register in the coming days.

https://www.cms.gov/newsroom/press-releases/cms-fda-announce-rapid-coverage-pathway-accelerate-patient-access-life-changing-medical-devices

The Federal Register publication should appear here when it's ready:

https://www.federalregister.gov/agencies/centers-for-medicare-medicaid-services


A couple of days ago, Politico noted that RFK Jr. had mentioned pending pathways for FDA BT devices in one of his Hill testimony sessions.

Julie Ramage on RAPID, here.  Foley Hoag here.   STAT here.  Fierce Healthcare here.

Wednesday, April 22, 2026

Connecting the Dots is Fun: From FOCR Webinar to WSJ FDA Article

 It’s always interesting when policy dots connect across events that at first seem unrelated.

A few weeks ago, Friends of Cancer Research (FOCR) held a webinar on its workstream addressing new horizons in external control arm (ECA) studies—that is, the broader problem of how regulators and clinicians should interpret evidence from single-arm trials. My April 13 blog provides a fuller report on that discussion.

Then came the April 21 Wall Street Journal editorial, framed as “Oncologists vs. FDA vs. RFK Jr.” The article argues that many oncologists were frustrated by FDA’s rejection of Replimune’s melanoma therapy RP1. But stripped of the politics and headlines, the dispute turns on a familiar methodological issue: FDA’s concern about the limits of a single-arm study and the difficulty of inferring treatment benefit from comparisons to external or historical controls.

That is essentially the same terrain explored in the FOCR webinar. The Wall Street Journal article, although filtered through a news and editorial lens, brings into practical focus many of the scientific and policy questions FOCR was discussing in a more technical setting. Read together, the two episodes underscore how central this issue has become. The key question is no longer whether external control arms will be used, but rather when they are persuasive enough, and under what conditions, to support major regulatory decisions.




Tuesday, April 21, 2026

UnitedHealth Group's $3 Billion Bet on AI (see STAT Plus)

On April 6, Casey Ross of STAT PLUS published a deep-dive article on UnitedHealth Group's massive investments in AI.

See the subscription article here:

https://www.statnews.com/2026/04/06/unitedhealth-group-massive-artificial-intelligence-push-patient-implications/



I don't want to infringe on his article, so here are just a couple of highlights.

###

  1. This is not “AI in health care” in the abstract; it is AI moving into the claims payment stack.
    The key point is not that UnitedHealth has a chatbot. The key point is that AI is being inserted into claims adjudication, coding, fraud edits, prior authorization workflow, and coverage-facing operations. That places AI very close to the actual machinery that determines whether care is paid, delayed, downcoded, or denied.

  2. United is using AI not only as an insurer tool, but as a market-facing platform product.
    Through Optum, United is not merely optimizing internal workflow. It is also selling AI-enabled tools outward to providers and other payers. That matters because United is shaping the reimbursement environment both as a plan and as a vendor of operational infrastructure. In policy terms, that raises the stakes from a company story to a system architecture story.

  3. The article points to the emergence of an “algorithmic arms race” around payment integrity and coding.
    United is applying AI to billing codes, claims review, fraud detection, and prior auth. Patients and providers may hear the language of efficiency, but the reimbursement subtext is clear: AI can be used to accelerate edits, intensify scrutiny, standardize utilization controls, and tighten payment leakage. That may reduce administrative waste, but it also creates new ways to industrialize adverse payment outcomes at scale.

  4. The central policy problem is not whether AI is present, but whether it is auditable.
    United says physicians—not AI—make medical necessity decisions, and it describes an internal responsible-AI review board. But the article underscores the practical problem: patients and providers often cannot see what the algorithm did, what inputs it relied on, how much human review occurred, or whether AI shaped the decision path upstream. In reimbursement policy, that is the difference between ordinary utilization management and a potentially opaque new form of automated coverage control.

  5. CMS, state regulators, and courts will eventually have to decide whether AI-assisted payment decisions need a new disclosure and accountability framework.
    The article describes a world in which AI deployment is racing ahead while regulation remains patchy. That is unlikely to be stable. One can easily imagine future pressure for rules requiring disclosure of when AI was used, what role it played in denials or edits, what error rates were observed, what specialty oversight existed, and what appeal rights attach. For reimbursement policy, this could become as important as prior authorization reform, audit transparency, or program-integrity guardrails.

 

Roche/FMI to Acquire SAGA: See the Chat GPT Research Memo [AMAZING]

In April 2026, Roche announced that, via its Foundation Medicine (FMI) unit, it would acquire SAGA Diagnostics, a Swedish MRD company that secured MolDx coverage in 2025.

I put Chat GPT into its "Deep Research" mode; it spent about an hour checking hundreds of links and then wrote a single 13-page, 3,400-word memo on SAGA, including a number of original figures. This is what's called "agentic" research mode: Chat GPT developed the research plan, wrote the memo, selected the footnotes, and planned and created new figures. I'm presenting the memo "as is," without fact-checking it. That is, this blog is more about showing readers what Chat GPT can do in research mode than about Bruce's ability to fact-check.



####

Here's a Google Drive copy of the report:

https://docs.google.com/document/d/1o7FVNSgqpVUQZ2nen5TiHwvmwlptBJWO/edit?usp=sharing&ouid=110053226805181888143&rtpof=true&sd=true

I clip the entire report below; remember, it's a 13-page document.

Note: Detailed tables are meant to be seen in the original Chat output OR as a Word export OR as a PDF export; they don't cut-paste well here in html.

If correct, the company raised circa SEK 300M (about $30M) before the acquisition. The US test pays about $4,400 (under MolDx, including the setup genome) and about $1,200 per plasma test. It's dPCR, so it doesn't fall under the infamous CMS NGS NCD 90.2.

###

Monday, April 20, 2026

Humor: How Hospitals Handle CMS 14 Day Rule

The CMS 14-day rule for "date of service" has different versions for inpatients and outpatients, and for human molecular tests versus other molecular tests. It's often a source of confusion (look for publications on this in the next year).

Here's a Chat GPT illustration of how hospital billing staff deal with the 14 Day Rule.