Wednesday, April 22, 2026

Connecting the Dots is Fun: From FOCR Webinar to WSJ FDA Article

 It’s always interesting when policy dots connect across events that at first seem unrelated.

A few weeks ago, Friends of Cancer Research (FOCR) held a webinar on its workstream addressing new horizons in external control arm (ECA) studies—that is, the broader problem of how regulators and clinicians should interpret evidence from single-arm trials. My April 13 blog provides a fuller report on that discussion.

Then came the April 21 Wall Street Journal editorial, framed as “Oncologists vs. FDA vs. RFK Jr.” The article argues that many oncologists were frustrated by FDA’s rejection of Replimune’s melanoma therapy RP1. But stripped of the politics and headlines, the dispute turns on a familiar methodological issue: FDA’s concern about the limits of a single-arm study and the difficulty of inferring treatment benefit from comparisons to external or historical controls.

That is essentially the same terrain explored in the FOCR webinar. The Wall Street Journal article, although filtered through a news and editorial lens, brings into practical focus many of the scientific and policy questions FOCR was discussing in a more technical setting. Read together, the two episodes underscore how central this issue has become. The key question is no longer whether external control arms will be used, but rather when they are persuasive enough, and under what conditions, to support major regulatory decisions.




Tuesday, April 21, 2026

UnitedHealth Group's $3 Billion Bet on AI (see STAT Plus)

On April 6, Casey Ross of STAT Plus published a deep-dive article on UnitedHealth Group's massive investments in AI.

See the subscription article here:

https://www.statnews.com/2026/04/06/unitedhealth-group-massive-artificial-intelligence-push-patient-implications/



I don't want to infringe on his article, so here are just a couple highlights.

###

  1. This is not “AI in health care” in the abstract; it is AI moving into the claims payment stack.
    The key point is not that UnitedHealth has a chatbot. The key point is that AI is being inserted into claims adjudication, coding, fraud edits, prior authorization workflow, and coverage-facing operations. That places AI very close to the actual machinery that determines whether care is paid, delayed, downcoded, or denied.

  2. United is using AI not only as an insurer tool, but as a market-facing platform product.
    Through Optum, United is not merely optimizing internal workflow. It is also selling AI-enabled tools outward to providers and other payers. That matters because United is shaping the reimbursement environment both as a plan and as a vendor of operational infrastructure. In policy terms, that raises the stakes from a company story to a system architecture story.

  3. The article points to the emergence of an “algorithmic arms race” around payment integrity and coding.
    United is applying AI to billing codes, claims review, fraud detection, and prior auth. Patients and providers may hear the language of efficiency, but the reimbursement subtext is clear: AI can be used to accelerate edits, intensify scrutiny, standardize utilization controls, and tighten payment leakage. That may reduce administrative waste, but it also creates new ways to industrialize adverse payment outcomes at scale.

  4. The central policy problem is not whether AI is present, but whether it is auditable.
    United says physicians—not AI—make medical necessity decisions, and it describes an internal responsible-AI review board. But the article underscores the practical problem: patients and providers often cannot see what the algorithm did, what inputs it relied on, how much human review occurred, or whether AI shaped the decision path upstream. In reimbursement policy, that is the difference between ordinary utilization management and a potentially opaque new form of automated coverage control.

  5. CMS, state regulators, and courts will eventually have to decide whether AI-assisted payment decisions need a new disclosure and accountability framework.
    The article describes a world in which AI deployment is racing ahead while regulation remains patchy. That is unlikely to be stable. One can easily imagine future pressure for rules requiring disclosure of when AI was used, what role it played in denials or edits, what error rates were observed, what specialty oversight existed, and what appeal rights attach. For reimbursement policy, this could become as important as prior authorization reform, audit transparency, or program-integrity guardrails.

 

Roche/FMI to Acquire SAGA: See the Chat GPT Research Memo [AMAZING]

In April 2026, Roche announced that, via FMI, it would acquire SAGA Diagnostics, a Swedish MRD company that got MolDx coverage in 2025.

I put Chat GPT into its "Deep Research" mode, and it spent about an hour checking hundreds of links before writing a single 13-page, 3,400-word memo on SAGA, including a number of original figures.  This is what's called "agentic" research mode: Chat GPT developed the research plan, wrote the memo, selected footnotes, and planned and created new figures.  I'm presenting the memo "as is," without fact-checking it.  That is, this blog is about showing readers what Chat GPT can do in research mode, more than about Bruce's ability to fact-check.



####

Here's a Google Drive copy of the report:

https://docs.google.com/document/d/1o7FVNSgqpVUQZ2nen5TiHwvmwlptBJWO/edit?usp=sharing&ouid=110053226805181888143&rtpof=true&sd=true

I clip the entire report below; remember, it's a 13-page document.

Note: Detailed tables are meant to be seen in the original Chat output OR as a Word export OR as a PDF export; they don't cut-paste well here in html.

If correct, the company raised circa SEK 300M, or $30M, before acquisition.  The US test pays about $4,400 (MolDx, including the setup genome) and about $1,200 per plasma test.  It's dPCR, so it doesn't fall under the infamous CMS NGS NCD 90.2.

###

Monday, April 20, 2026

Humor: How Hospitals Handle CMS 14 Day Rule

The CMS 14-day rule for "date of service" has different versions for inpatients and outpatients, and for human molecular tests versus other molecular tests.   It's often a source of confusion (look for some publications on this, in the next year).

Here's a Chat GPT illustration of how hospital billing staff deal with the 14 Day Rule.



Sunday, April 19, 2026

The Future Decade in Sequencing: Learning from Past Decades?

I was given a questionnaire on the future of clinical sequencing in the next decade - FFPE, MRD LBx, and so on.   Of course, any projections ten years ahead are tricky.   What might we have predicted in 2000, or 2010, and how would reality have looked by comparison?

Three clinical landmarks in genomics might be the introduction of the Oncotype DX test around 2005, the widespread acceptance of full BRCA sequencing (rather than hot spots) by circa 2010, and the clinical launch of Foundation Medicine around 2012.

But it would be hard to imagine that first decade of rising demand without also considering the billions in fundraising and investment (both R&D and manufacturing) by Illumina and others.   There wasn't some secular trend toward ten times as much clinical sequencing without all those investments.

(Similarly, there wasn't a natural demand curve for robotic surgery separate from Da Vinci's massive investments in R&D, quality, and sophistication.  Nor was there some natural demand curve for glass phones separate from Apple's massive investments in innovation and manufacturing.)

Similarly, today, we will want to project the growth of MRD testing, and then, second, predict how that growth affects Natera's top line.  But it's hard to see that growth as a secular trend somehow independent of huge clinical investments by Natera, Guardant, Freenome, Caris, and many others.  (Add in PacBio, Thermo Fisher, Roche, 10x, Owkin, and others.)   If industry funding were cut back substantially, the pace of clinical science and growth would slow, and vice versa.

I asked Chat GPT what it makes of these ideas, results below.

###

  

ILMN Market Cap

From a 20,000-foot view, ILMN had a nearly continuous market-cap ramp from its IPO in 2000, especially from 2005 to 2021, before falling in 2023 back to 2015 levels.

Thursday, April 16, 2026

Get Them Here: Detailed Notes on the April 16 CMS Webinar on PAMA Reporting

On April 16, 2026, CMS provided a one-hour webinar on PAMA lab price data reporting, which runs May, June, and July 2026.

CMS PAMA Page

CMS stated it would soon post the video on its PAMA resources page.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-pama-reporting-resources

Find the CMS Video

Scroll down on this page to see April 16 video at CMS:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs/events

The PPT Deck of Video

https://www.cms.gov/files/document/april-16-clfs-data-collection-reporting-webinar.pdf

Back up Copy of Video

If of interest, I posted a copy of the video on YouTube:

https://youtu.be/k0zaMNwpFvs


Very Detailed Conference Notes

And, below, see a detailed AI notes summary of the webinar.

##

Some Thoughts on the Revisions to AMA Appendix S for Software-Intensive Services

 

Header - For a year, AMA has circulated larger and larger revisions of Appendix S, its position on software-intensive services.    Here, I argue not for or against specific redlines (which may be copyrighted or confidential); rather, I discuss some general principles.

###

In 2023, the AMA added an "Appendix S" to the AMA CPT procedures and codes handbook.  Appendix S does not define what advanced software is, but when it occurs, Appendix S proposes that each instance of such software services can be classified as "Assistive, Augmentative, or Autonomous."   In 2025 and now 2026, AMA has proposed multiple versions of a revised – and now very heavily revised – Appendix S.

In this note, I'm not commenting on particular changes (which may be copyrighted or confidential) but on the overall themes and attendant problems.   The main reference, then, is the current public Appendix S (fn 1).  I also recommend readers see a publication about Appendix S (fn 2) and a recent article by Dr. Richard Frank (fn 3).  AMA may later introduce a new coding category, CMAA (fn 4).

From my perspective as a physician-MBA working in strategy consulting, here are some of the important, but less-discussed challenges.

##

First, readers need a clean copy!

While the latest of many dense revisions belongs to AMA, I don't think it violates confidentiality to say it is virtually unreadable: a dense mat of revisions, strikeouts, inserts, and more revisions atop one another.   At a minimum, AMA should also issue a CLEAN version of its most recent proposal at any given time.

 

Second... Start over??

The extent of revisions suggests that piling many layers of edits on top of the original is no longer productive or optimal.  For example, the US Constitution is not a 99% redline of the Articles of Confederation; they wrote a new document.   It would likely be more fluent and lucid to simply write a new document rather than preserve a hodgepodge of word pairs and phrases under mountains of redlining.

Third... Living with A, A, and A.

We should address whether the continued use of an early idea, "assistive, augmentative, autonomous," makes sense.  AMA attempts – and it's nearly impossible – to address disjunct concepts at once, such as detection, parameter generation, interpretation, physician involvement, and machine-initiated actions.

We may be better off calling things Software Services Type 1, Type 2, Type 3.   (This is exactly what Appendix S already does for the three divisions of Autonomous: numbered, not named, letting you shape the meaning exactly as you want.  Or consider Category III codes, which are simply called that, and then defined.)

Commandeering existing adjectives that don’t naturally fit together (but happen to have clever alliteration) may confuse more than help, by pulling in legacy meanings of the words chosen.

Fourth, Beta Testing as a Priority.

The project needs much more than a couple of examples out of the universe of software solutions.   Machines, and computer programs, need extensive "beta testing," which in this case might mean 20 or 30 examples fed into the most current revision.   Would four experts independently agree on which of the 20 or 30 examples even belong as subjects of Appendix S?  Likely not.  (AI is not defined, etc.)   How high is the independent-reviewer agreement on how the 30 examples parse into the three buckets (A1, A2, A3)?  Likely the kappa statistic for agreement would be distressingly low.  This kind of fire-testing or beta-testing points out the weak spots and allows fixing.  But we have no data.
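To make the kappa worry concrete, here is a minimal sketch of Cohen's kappa for two reviewers. The ratings are invented for illustration: two hypothetical reviewers sort ten hypothetical services into the A1/A2/A3 buckets; nothing here reflects actual Appendix S data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings: two reviewers sort 10 example services into A1/A2/A3
r1 = ["A1", "A1", "A2", "A3", "A2", "A1", "A3", "A2", "A1", "A3"]
r2 = ["A1", "A2", "A2", "A3", "A1", "A1", "A3", "A3", "A1", "A2"]
print(round(cohens_kappa(r1, r2), 3))
```

In this toy example, the reviewers agree on 6 of 10 items, but kappa comes out well under 0.5 once chance agreement is subtracted, which is exactly the "distressingly low" scenario a real beta test would expose.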

Fifth, a Decision Tree = Logic.

A logical, structured approach to using the concepts might be very helpful.  Take software X and ask first: does it fit "augmentative"?  If yes, stop.   If not, then there are only two choices: is it assistive or autonomous?   Bringing algorithmic structure to the usage of the appendix might be helpful.
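The decision-tree idea can be sketched in a few lines. The two yes/no questions below are placeholders of my own, not AMA criteria; the point is the triage structure, not the definitions.

```python
def triage(is_augmentative: bool, machine_initiates_action: bool) -> str:
    """Placeholder triage mirroring the decision-tree idea.
    The yes/no criteria are illustrative, not AMA definitions."""
    if is_augmentative:           # Question 1: augmentative? If yes, stop.
        return "augmentative"
    if machine_initiates_action:  # Question 2: only two choices remain.
        return "autonomous"
    return "assistive"

print(triage(is_augmentative=False, machine_initiates_action=True))
```

Even this trivial structure forces each classification to follow from explicit, ordered questions, rather than from a reviewer's overall impression of three overlapping adjectives.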

Sixth, No Defining AI - Was That Really a Good Idea?

And this circles back to the beginning.  Appendix S purports to subtly categorize and subcategorize certain types of software-intensive services that qualify for this treatment.

We would assume and hope for high precision in the results, for some cloudy universe of software-intensive services like AI and machine learning, while Appendix S never defines what does or doesn't come into its domain in the first place.

Yet we expect Appendix S rules to work on all those inputs, precisely and consistently.   That may be unrealistic and may lead to all kinds of downstream problems that we aren't yet facing.

___

I believe that Appendix S was designed originally for software-only services.  If so, we need to codify that.  We should state clearly that physical services (like whole genome sequencing) that may use a lot of AI remain coded based on the physical service component (81425, genome) and are not pulled onto CMAA by the AI within.  For example, WGS may yield a variant of unknown significance, or VUS (C>T), which is resolved as benign by AI proteomic software that validates there is no protein configuration change.  I would expect the service is still WGS with resolution of variants, billed as 81425.  I would be chilled if the WGS service got extracted to a payer-not-recognized code over in CMAA because the service contains AI in part.  At some point Appendix S should make this clear: the physical part of the code governs coding, and a folded-inside AI component does not drag everything onto the CMAA listing.

 

 ___

 

1.  https://www.ama-assn.org/system/files/cpt-appendix-s.pdf

2. Frank, 2022, PMID 36463327.  https://pmc.ncbi.nlm.nih.gov/articles/PMC9719561/ 

3. https://www.frankhealthcareadvisors.com/post/cpt-appendix-s-the-missing-link-between-ai-innovation-reimbursement-1 

4.  https://www.ama-assn.org/practice-management/cpt/cpt-codes-offer-language-report-ai-enabled-health-services   [CMAA]

Wednesday, April 15, 2026

CMS Inpatient Proposed Rule: Canning the Fast Track to NTAP (!)

In 2020, the first Trump administration proposed fast-track routes to inpatient NTAP (new technology add-on payment) and a fast track to coverage for breakthrough devices (under MCIT).   MCIT was replaced, under Biden, with TCET, an inferior plan.

Now, the Trump administration itself threatens to nix the fast track methods to NTAP.  See AI discussion below.

Find the CMS source material here.  See an article by Manatt's Ross Margulies here.  Coverage at STAT here.



##

AI CORNER

###

The FY 2027 IPPS proposed rule contains a significant, and for many stakeholders unwelcome, reversal in Medicare innovation policy: CMS proposes to repeal the alternative pathway that has made it easier for certain FDA-designated technologies to qualify for inpatient new technology add-on payments (NTAP), and it proposes a parallel repeal for the outpatient device pass-through pathway as well. The current alternative pathway was created in the Trump administration through the FY 2020 and FY 2021 IPPS/LTCH PPS final rules for inpatient NTAP, and the CY 2020 OPPS/ASC final rule for outpatient device pass-through. Under that pathway, Breakthrough Devices, QIDP drugs, and later LPAD drugs are treated as not substantially similar to existing technology and do not have to separately prove the usual Medicare standard of substantial clinical improvement, although they still must meet the other applicable regulatory requirements. CMS now proposes that, beginning with FY 2028 NTAP applications, all applicants would have to meet the same ordinary standards, while technologies already approved, and technologies already under review for FY 2027, would be grandfathered and remain eligible under the existing pathway.

Tuesday, April 14, 2026

CMS Releases Inpatient Proposed Rule

 CMS has released the Inpatient Proposed Rule.   Find it here:

https://www.federalregister.gov/documents/2026/04/14/2026-07203/medicare-program-hospital-inpatient-prospective-payment-systems-for-acute-care-hospitals-ipps-and

It's in Fed Reg, April 14, 2026, 91:19312 (576pp).  16 mb.

Note: This downloads for me as a locked copy I can't annotate.  However, you can "extract all pages" to a new document and re-save, and it will be OK for markup.

See fact sheet:

https://www.cms.gov/newsroom/fact-sheets/fy-2027-hospital-inpatient-prospective-payment-system-ipps-long-term-care-hospital-prospective

Stripping Breakthrough from NTAP

CMS proposed to make it harder for new antibiotics and for breakthrough devices to meet NTAP (new tech add on payment) criteria.  See discussion by Ross Margulies.  See my blog on the topic.

See press release on the joint program:

https://www.cms.gov/newsroom/press-releases/cms-improve-patient-care-experience-lower-costs-hip-knee-ankle-replacements

There's quite a bit of discussion of an expanded national joint bundled-care program, and for those of us dismayed with the 14-day rule regarding genomics (such as CGP), note that the joint surgery demonstration would include lab tests for up to 90 days (as well as many other services up to 90 days).  See pp. 19678-79.  However, it doesn't withhold line-item payment; it just holds hospitals responsible for high and low costs via penalties and bonuses, much as in ACOs now.

###

AMA Releases the Biggest Batch of PLA Applications Ever

Just two weeks ago, on April 1, AMA released 29 new and finalized PLA codes (here).

Here's the PLA home page:

https://www.ama-assn.org/practice-management/cpt/cpt-pla-codes

According to the calendar there, AMA releases draft codes April 14, wants feedback by April 21, and will finalize by April 21.  The codes are then voted on at the Editorial Panel's regular spring meeting (April 30-May 1).

Here is the code list, which reiterates that comment is due by April 21:

https://www.ama-assn.org/system/files/may-2026-pla-public-agenda.pdf

I count over 100 new codes (circa 105).

Historically, PLA codes voted by AMA CPT around May 1 are included in the summertime CMS crosswalk-gapfill meetings.

##
The next application date is June 9 for codes finalized October 1.

Monday, April 13, 2026

Friends of Cancer Research: 3-Hr Webinar on "External Control Arms"

 When you have a one-arm trial, you need a comparison group - the general problem presented by "External Control Arms."

Catch up with the newest thinking via a 3-hour webinar hosted by Friends of Cancer Research (streamed live on April 7, 2026).

Find it on YouTube here:

https://www.youtube.com/watch?v=fh7y3J2xyYY

Find the project home page at FoCR here:

https://friendsofcancerresearch.org/eca/

Below, Chat GPT summarizes the streaming auto-transcript.

###

On a personal note, I've seen several examples over the years where an entity used a one-arm real-world study and an "external control arm" of "administrative controls": patients in the same health plan with some similar diagnoses become "the control arm."   And there can be various kinds of risk balancing, aka propensity score matching.   Often the test group did so much better that it was hard to attribute to the intervention and easier to attribute to a healthier population to start with.   (This rarely gets buy-in from the inventor of the intervention.)   Personally, I found this the best explanation for the 2016 CMMI diabetes prevention project at YMCAs (Alva et al.), which used a propensity- or diagnosis-matched Medicare population.   As I recall, even having one YMCA class was associated with big health gains in the following year.   To me, this violated common-sense dose-response expectations and left behind the suspicion that Medicare folk at YMCAs and tennis clubs might be healthier than the average beneficiary.   But Alva et al. would be an "external control arm," as FoCR will discuss.
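For readers curious about the mechanics behind "propensity score matching," here is a stripped-down sketch of greedy one-to-one nearest-neighbor matching on a precomputed propensity score. The scores are made-up numbers; a real analysis would first fit each patient's score from covariates (age, diagnoses, utilization) and would also check balance after matching.

```python
def nearest_neighbor_match(treated_scores, control_scores):
    """Greedy 1:1 match: each treated unit gets the closest unused control.
    Returns a list of (treated_index, control_index) pairs."""
    available = dict(enumerate(control_scores))  # unused controls
    pairs = []
    for i, t in enumerate(treated_scores):
        # Pick the unused control whose score is closest to this treated unit
        j = min(available, key=lambda k: abs(available[k] - t))
        pairs.append((i, j))
        del available[j]
    return pairs

# Hypothetical propensity scores (probability of joining the program)
treated = [0.62, 0.35, 0.80]
control = [0.30, 0.55, 0.78, 0.90, 0.40]
print(nearest_neighbor_match(treated, control))
```

The catch the blog describes lives upstream of this step: if the score omits the trait that actually drives outcomes (say, being the kind of person who shows up at a YMCA), the matched controls look comparable on paper while the treated group remains healthier in fact.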

###

AI CORNER

##

Here is a detailed report on the Friends of Cancer Research webinar, “Application of External Control Arms in Oncology Drug Development,” held April 7, 2026.

Summary

This April 7, 2026 Friends of Cancer Research webinar showed a field moving from enthusiasm about external control arms to a more disciplined focus on fit-for-purpose data, prespecified methods, documentation, and early FDA engagement. The meeting’s core message was that ECAs are not a shortcut around rigorous evidence standards, but a practical tool when randomized trials are infeasible, especially in rare cancers. Speakers highlighted encouraging pilot work across multiple data partners, while stressing persistent challenges in missing data, endpoint alignment, population comparability, and regulatory predictability. Overall, the webinar suggested real progress in oncology ECA methods, but not a settled playbook for routine regulatory use. 

###

Sunday, April 12, 2026

House Hearing on Medicare Fraud: Comparison to CMS CRUSH Proposal

CMS released its CRUSH anti-fraud proposals around February 26.   At the beginning of the month, February 3, the House Energy & Commerce Committee held a 3 hour hearing on healthcare fraud.  Find it here:

I asked Chat GPT to compare the contents and emphases of the February hearing with the late-February policy proposal.
###
AI CORNER
CHAT GPT
###

The February 3 House hearing treated Medicare and Medicaid fraud as a broad systemic problem, emphasizing beneficiary harm, weak front-end screening, and the limits of "pay-and-chase" enforcement. It clearly discussed DME fraud and also mentioned genetic-testing fraud, but did not appear to mention MolDx.

By contrast, CMS’s late-February CRUSH initiative was more operational and specific, explicitly targeting DME and laboratory/genetic fraud and directly asking whether MolDx should play a broader national anti-fraud role.

College of American Pathologists, CRUSH & Genomics Fraud; Doesn't Like National MolDx

Since comments closed on March 31, I've highlighted several groups that commented, such as ACLA and the BCBS Association.  Both supported "nationalization" of MolDx, a topic on which HHS was seeking input.   (ACLA had some guardrails, though).

College of American Pathologists (CAP) posted its comments, 7 pages long.  A substantial proportion of the comment recounts problems that CAP sees with MolDx, including the whole Z-code system.

Find the CAP comments here:

https://documents.cap.org/documents/FINAL-CRUSH-RFI-Comment-v.5.pdf

##

The national Medicaid medical directors' organization, NAMD, submitted lengthy comments on CRUSH, with no mention of genetic test fraud, suggesting they don't view it as a problem and have little difficulty avoiding or preventing it.  (Here.)  This makes Medicare Part B's billion-dollar fraud problems in Texas and Florida even more embarrassing.

Thursday, April 9, 2026

Tamara Syrek Jensen - The Medtech Strategist Interview

Medtech Strategist has released an interesting podcast interview.   Learn about Tamara Syrek Jensen's career at CMS, much of it in the Coverage Group, and the insights she brings to clients today as a Principal of Rubrum Advising.

(Note: she seems to say, a couple of times, "CMS is where I am," but it's clear she was out of government at this interview and working with Lee Fleisher at Rubrum Consulting.)

Find the podcast here:

https://www.medtechstrategist.com/podcast-content

Look for Podcast #34.  She's interviewed by Stephen Levin, editor-in-chief.

Or find their YouTube audio channel here:

https://www.youtube.com/watch?v=dNK8ZhJZEYQ



Chat GPT listened in and reports for us:

In a candid Market Pathways interview, former CMS coverage chief Tamara Syrek Jensen argues that reimbursement bottlenecks are structural, not merely bureaucratic—spanning evidence, coding, payment, Medicare Advantage, and payer coordination—while urging earlier, honest, disciplined engagement among industry, CMS, and FDA.

##

Tamara Syrek Jensen on CMS After 25 Years:
A Reimbursement Insider Explains What Industry Still Gets Wrong

Market Pathways’ interview with Tamara Syrek Jensen is valuable not because it offers a grand reimbursement fix, but because it strips away a few durable myths. Jensen, who spent roughly 25 years at CMS and the last decade leading the Coverage and Analysis Group, speaks with unusual directness about the agency’s constraints, the industry’s misconceptions, and the widening gap between regulatory success and payment success. The conversation, drawn from the San Diego Innovation Summit, is framed by Stephen Levin as a rare chance to hear a former CMS official speak frankly about Medicare reimbursement, parallel review, and the practical difficulties companies face in working with CMS.

The central point is one sophisticated readers already suspect but often understate: reimbursement is not “FDA, but slower.” Jensen emphasizes that payment is an ecosystem problem, not a single-agency problem. Coverage, coding, and payment remain the classic “three-legged stool,” but in her telling that stool has effectively become four-legged because Medicare Advantage now sits on top of the traditional fee-for-service architecture. Add private payers, the AMA, specialty societies, and MAC behavior, and the contrast with FDA becomes stark: regulation can be grueling, but it is still largely one-agency navigation; reimbursement is a distributed negotiation across multiple institutions, standards, and incentives.

That matters because Jensen rejects the cartoon version of CMS as simply the lagging “problem child” after FDA approval. She says CMS had been trying, including behind the scenes, to explore whether certain NCDs could be made much shorter and faster, closer to the timing of FDA action. But even a rapid coverage decision is insufficient if coding is absent or payment is effectively zero. Her point is not defensive so much as architectural: reimbursement failures are often compounded failures. A positive regulatory event does not automatically propagate into a usable reimbursement pathway, and companies that model it that way are modeling the wrong system.

For companies, Jensen’s most practical message may be cultural rather than procedural. She openly acknowledges that bias against industry has existed inside government, just as industry carries its own bias against CMS. But she also makes a sharper distinction: distrust is manageable, dishonesty is corrosive. Her formulation is memorable in its simplicity—“Just be honest”—and it comes with an implicit warning. When manufacturers tell CMS one story and FDA another, they are not just creating confusion; they are degrading the possibility of creative problem-solving. By contrast, she describes a more recent “paradigm shift” in which companies are more willing to admit evidentiary imperfection while still arguing that a technology benefits patients. That, she suggests, is the kind of conversation from which CED and TCET-type solutions can actually emerge.

Her comments on transparency are equally pointed. Jensen does not deny that CMS has historically been experienced as opaque. She more or less concedes the criticism, while arguing that openness—within legal limits—was necessary precisely because silence breeds mythology. If companies cannot get clear signals from the agency, they fill the void with stories about hostility, indifference, or hidden rules. Her preferred answer was repeated conversation, not because repeated meetings are efficient, but because they are trust-building. That is a notable stance from a former Coverage and Analysis Group leader: not that CMS could or should say yes more often, but that it needed to talk more clearly, earlier, and more often about why the answer might be yes, no, or not yet.

On substance, Jensen is especially strong on the question of what CMS is actually evaluating. She pushes back on the industry complaint that “reasonable and necessary” is too undefined to be operational. Her response is that the record is hardly empty: hundreds of NCDs already reveal the endpoints and evidentiary instincts CMS uses, especially as they relate to the Medicare population and its comorbidity burden. More interestingly, she warns that formalizing the standard too tightly could backfire. A rigid statutory or regulatory definition might deliver more certainty in theory while freezing out future technologies in practice. For an audience steeped in coverage policy, that is one of the interview’s more consequential arguments: ambiguity is frustrating, but some ambiguity may be the price of adaptability.

Her skepticism toward reimbursement legislation follows the same logic. Jensen does not dismiss legislative reform reflexively; she worries about implementability. She cites prior statutory efforts that failed because Congress wrote requirements CMS could not realistically operationalize. In the current debate over accelerated coverage concepts, her concern is that proposals can place the entire burden on government—pay first, sort out the evidence later—without symmetrical obligations on manufacturers to produce meaningful Medicare-relevant endpoints within a defined period. For Jensen, temporary coverage without credible downstream accountability is not a bridge but a drift state. That argument will resonate with readers who have watched enthusiasm for transitional coverage repeatedly collide with evidence generation problems in the real world.

The same realism shapes her criticism of parallel review. She does not dispute the idea; she disputes the physics. Early FDA-CMS collaboration is good, she says, but true parallelism is extraordinarily difficult when CDRH operates at a vastly different scale and under rigid review timelines while CMS coverage staff are far smaller in number and are trying to assess Medicare-specific value, not just safety and effectiveness. In one of the interview’s starkest operational details, she notes that the Coverage and Analysis Group had about 30 people, with only around 10 writing NCDs. For experts accustomed to discussing “alignment” at the policy level, this is the grounding reminder: some reimbursement problems are not conceptual failures but capacity mismatches.

What gives the episode extra relevance is timing. Syrek Jensen now speaks from outside government, having recently joined Rubrum Consulting, led by former CMS chief medical officer Lee Fleisher, and that shift gives her remarks both freedom and consequence. She is no longer explaining CMS from behind the seal; she is translating it from just beyond the door. The result is not a manifesto and not a grievance session. It is something rarer: a high-level reimbursement practitioner explaining that the system’s biggest problems are real, that many are structural, and that progress will depend less on rhetorical demands for “faster CMS” than on earlier evidence planning, more honest cross-agency engagement, and a better grasp of what Medicare is actually being asked to buy.

##

Her Background

She describes her path as “organic” rather than planned. In her telling, she was a fun-loving undergraduate who later “grew up” and got serious. Her first job after college was on Capitol Hill, where she worked in policy; her boss sat on Ways and Means, which exposed her early to the legislative side of CMS and Medicare. From there, she decided to pursue law, but did it the hard way: she worked at CMS while attending law school at night.

Inside CMS, she says she then rose through a series of roles rather than following a master plan. She started as an analyst on conditions of participation, moved into a special assistant role for the Chief Medical Officer, gained a broad view of the agency, and eventually landed in the Coverage and Analysis Group, where she later became its leader. She also emphasizes the importance of informal mentors who guided her along the way.

The Most Surprising Three Remarks

1. Her direct statement that parallel review essentially “doesn’t work” in practice.

That is striking because parallel review is often discussed as a high-level policy solution, but she reduced it to operational reality: FDA’s device center has thousands of staff, while the Coverage and Analysis Group had about 30 people, with only about 10 writing NCDs. She said the idea is good, but the timing and staffing mismatch make true parallel review extremely hard to execute.

2. Her unusually candid admission that anti-industry bias at CMS was real — and that she herself probably had some of it.
Former officials almost never say that so plainly. She added that a major source of mistrust was when manufacturers told CMS one thing and FDA another, and she framed honesty as the key condition for productive engagement. That was a notably direct acknowledgment from a former top CMS coverage official.

3. Her remark that the “four years” in the accelerated-coverage style legislation was basically “a random number we made up.”
That is a remarkable thing to say out loud. More broadly, she argued that if government is required to cover a technology for four years, industry must carry a real burden to generate evidence, with consequences if it fails. That cuts against any simplistic “coverage first, evidence later” narrative.

Honorable mention: her blunt line that “the claim system is a yes system.” That is a very revealing description of why CMS is drawn to prepayment integrity tools like Wiser.

CMS Webinar on PAMA Lab Data Submissions (April 16)

Here's news from CMS about PAMA private payor rate reporting, due at CMS in the May-June-July 2026 window.
The live webinar was initially sold out, but CMS later opened a larger channel; hopefully an archived video will be posted as well.
###

CMS press release:


Clinical Diagnostic Laboratories: Get Ready to Report Starting May 1

Are you an independent laboratory, physician office laboratory, or hospital outreach laboratory that meets the definition of an applicable laboratory under the Clinical Laboratory Fee Schedule (CLFS)? If so, you must report data from May 1 – July 31, 2026, based on an updated data collection period of January 1 – June 30, 2025, including:

  • Applicable HCPCS codes
  • Associated private payor rates 
  • Volume data
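These three data elements are what drive rate-setting: under PAMA, CMS sets each CLFS rate at the volume-weighted median of the private payor rates labs report for a given HCPCS code. As a rough illustration only (this is not the official CMS template or methodology document, and the column layout and tie-handling here are simplified assumptions), the calculation looks like this:

```python
# Illustrative sketch, not the official CMS methodology: PAMA reporting pairs
# each applicable HCPCS code with private payor rates and associated test
# volumes, and CMS sets the CLFS rate at the volume-weighted median of the
# reported rates. This simplified version takes the first rate at which
# cumulative volume reaches half the total (real tie cases are handled
# more precisely in the regulation).

def weighted_median(rate_volume_pairs):
    """Volume-weighted median of (rate, volume) pairs for one HCPCS code."""
    pairs = sorted(rate_volume_pairs)          # sort by private payor rate
    total = sum(v for _, v in pairs)           # total reported test volume
    cumulative = 0
    for rate, volume in pairs:
        cumulative += volume
        if cumulative >= total / 2:            # median crossing point
            return rate

# Hypothetical reported data for one code: (private payor rate, volume)
reports = [(10.00, 500), (12.50, 300), (9.00, 1200)]
print(weighted_median(reports))                # the high-volume $9.00 rate wins
```

The point of the weighted median is that a lab reporting a high rate on low volume moves the final CLFS rate far less than a high-volume lab, which is why the volume field matters as much as the rate field in the submission.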

How do I report?

  1. Review CLFS Data Collection System resources:
     • View the applicable HCPCS codes (ZIP)
     • Use the Data Reporting Template (ZIP)

More Information:

 Clinical Lab Fee Schedule Data Collection Webinar – April 16

Thursday, April 16 from 3–4 pm ET

[Sold out as of April 9]

Register for the webinar.

Clinical laboratory representatives: You may be required to submit data to CMS in the Fee-for-Service Data Collection System Clinical Lab Fee Schedule (CLFS) Module starting May 1. During this webinar, we’ll:

  • Provide an overview of this data collection initiative
  • Highlight how clinical laboratories can determine whether they’re applicable labs
  • Discuss preparation activities and resources
  • Demonstrate the CLFS Module, including user roles and how to register

We encourage you to submit your questions in advance to CLFS_Inquiries@cms.hhs.gov with “CLFS Webinar” in the subject line.

More Information:

  • Visit the CLFS webpage for official guidance on reporting data
  • Read the FAQs