Monday, March 23, 2026

AI Experiment: How Alex Dickinson Describes the Caris MCED ACHIEVE Report

In March, Caris released top-line results of its ACHIEVE study, testing its MCED test in real cases.  Press release here.  Active LinkedIn author Alex Dickinson wrote a set of 5 articles about the results.  One, two, three, four, five.

Out of curiosity, I asked what Chat GPT could make of the six documents.



AI CORNER

###

Overview

Caris reports striking interim performance for its Detect MCED assay using deep whole-genome sequencing, with unexpectedly strong early-stage sensitivity in common cancers. However, enriched cohorts, limited follow-up, and incomplete blinded validation constrain interpretation. Dickinson’s analyses highlight a differentiated WGS multi-signal strategy with potential advantages over methylation-first approaches.


Consolidated Article (Caris + Dickinson)

Focusing first on the press release, the key point is that Caris reported an interim analysis, not a completed prospective screening validation. The Achieve 1 dataset includes 2,122 subjects (1,505 undiagnosed; 617 cancers), but the undiagnosed group is enriched, not general-population screening. 

Only 22.5% had ~1-year follow-up, with ~7% later diagnosed with cancer—again indicating high-risk enrichment. About 865 samples remain in blinded validation, so current results are signal-generating, not definitive.

The reported performance is notable. Stage-specific sensitivity was 56.8% (I), 70.1% (II), 77.1% (III), 99.1% (IV), with 61.3% for stage I–II. Early-stage sensitivity in key cancers included 53% breast, 78.9% prostate, 86.7% lung, and 62.2% colorectal. Specificity was 99.1% in a small asymptomatic subset (n=121) and 95.3% in the broader undiagnosed cohort. These are the central empirical results.

An Expert Discusses The Data

Dickinson’s posts provide useful context. He frames Caris as entering MCED from a position of scale and infrastructure—large tumor databases, clinical profiling, and sequencing capacity—suggesting Detect is an extension of an existing oncology data platform rather than a stand-alone assay.

Scientifically, Dickinson highlights the assay design: ~250x whole-genome sequencing of plasma with paired buffy coat sequencing to remove CHIP, extracting mutational, fragmentomic, and nucleosome-positioning signals for ML classification. This multi-signal WGS framework plausibly explains the observed sensitivity pattern.

His most provocative point concerns tumor-type performance, especially breast and prostate. He argues methylation-first MCED approaches may underperform in these high-incidence cancers, making Caris’s relatively strong early-stage sensitivity clinically meaningful. This reframes MCED evaluation toward incidence-weighted performance, not aggregate sensitivity alone.

Key Limitations

However, key limitations remain. The 99.1% specificity estimate rests on only 121 asymptomatic subjects. The broader 95.3% specificity is more relevant operationally but less reassuring for screening. The cohort is enriched, follow-up incomplete, and blinded validation pending—raising the risk of optimistic interim estimates.

Caris also claims superiority over methylation approaches, but provides no head-to-head comparison, making this an interpretive rather than evidentiary claim. Cross-study comparisons, including Dickinson’s, remain hypothesis-generating.

Strategically, Dickinson emphasizes Caris’s ability to pursue a sequencing-intensive design due to its scale. He notes current costs may be high (e.g., overlapping 2x150 reads), but could fall with alternative platforms. This underscores that economic feasibility will be as important as analytical performance.

Finally, Caris signals future expansion to whole transcriptome integration, reinforcing its view of MCED as a multimodal inference problem, though at the cost of added complexity.

Bottom line: Detect represents a biologically coherent and potentially important MCED approach with strong interim early-stage signals. But given cohort design, small screening subsets, and pending validation, the appropriate stance is serious interest with disciplined caution, not definitive conclusion.



CMS Reports on First Rural Health Summit under Tech Transformation

One of the lesser-known outcomes of last summer's budget bill was $50B for rural health transformation (RHT) focused on technology.  Last fall CMS established the Office of RHT and in December $50B of awards were announced.  Home page here.

On March 18, CMS convened leaders from all 50 states to discuss.  Find the online report here:

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-first-rural-health-transformation-summit-advance-state-led-innovation



AI CORNER

Here's a 100-word summary.

CMS convened its first Rural Health Transformation Summit on March 18, 2026, bringing leaders from all 50 states together to advance a $50 billion initiative created under recent federal legislation. The program aims to strengthen rural health systems through 

  • expanded access to care, 
  • workforce development, 
  • technology modernization, and 
  • innovative payment models. 

Discussions focused on five priorities: 

  • prevention and chronic disease, 
  • sustainable access, 
  • workforce pipelines, 
  • value-based care, and 
  • health IT. 

States shared early strategies such as telehealth, mobile units, and regional partnerships. CMS emphasized aligning efforts with Medicare and Medicaid financing and embedding long-term workforce and infrastructure solutions to ensure durable improvements in rural health outcomes.

Saturday, March 21, 2026

Journal Club: Value of WGS in Real-World Cancers (Van Putten, Nat Med)

What's the value of going upscale to whole genome sequencing (WGS) in solid cancers?  Van Putten et al. assemble data from their experience with 888 solid cancers.  The work is from Hartwig Medical Foundation / Netherlands Cancer Institute.

Find the paper here and a LinkedIn essay here by Joseph Steward.  And here by Alex Dickinson.


Most samples in this study were frozen tissue (89% success rate), but they remark that when archived samples were used, they had the same success rate (90%).

Chat GPT Discusses the Paper:

Friday, March 20, 2026

Can AI Re-Think Health Policy? Example Using WSJ Policy Essay (& MolDx)

Can AI read an article and project its possible applications into a different field?  That's today's question. 

Starting point: WSJ runs an essay by Harvard economics professor and Manhattan Institute authority Roland Fryer.  Fryer here, essay here.   


While his article was on "regulating AI," it clearly had ramifications or applications in other policy domains.  I asked Chat GPT 5 to read the essay and discuss its projection onto healthcare policy such as CMS.   I deliberately left my main initial request vague.   

At bottom, I ask it some Q&A, including how this applies to MolDx.

Here comes the initial response to my request, "apply Fryer's thinking to healthcare policy."

Thursday, March 19, 2026

NCCN Recommends NGS in All Stages of Pancreatic Cancer: Direct Conflict with Outdated Medicare NCD

Tuesday, I was in a webinar where stakeholders were discussing the badly-outdated Medicare NCD for NGS testing in cancer.  Thursday of the same week, more proof of the problem hit my inbox.

See the March 18, 2026, release of new pancreatic cancer management guidelines from the National Comprehensive Cancer Network (NCCN).   Here.   See coverage in Genomeweb, here.



Tuesday, March 17, 2026

Waiv (Former Owkin Dx) Brings In $33M For AI Precision Diagnostics

It's not every day you hear of a new $33M going into spatial diagnostics - but it happened this week.

WAIV Diagnostics, Paris (formerly Owkin Dx), is now a spinout from Owkin, pulling in its own investors and funding.


https://wearewaiv.com/news/waiv-secures-33-million-to-scale-ai-precision-testing

See also a post from Meriem Sefta, WAIV CEO.

https://www.linkedin.com/posts/meriemsefta_waiv-waiv-secures-33-million-to-scale-share-7437851429651935232-WHmk/

"We exist to catalyze precision medicine with clinical-grade, AI-powered tests designed to detect biomarkers, predict outcomes, and better understand treatment response in oncology."

###

It's a fit with the Bloodpac webinar and white paper, "it's not just DNA mutations anymore" - here.

https://www.discoveriesinhealthpolicy.com/2026/03/bloodpac-releases-webinar-and-white.html


Illumina Publishes "Diagnostics Year in Review" CY2025 (48pp)

Once again Illumina has published its Diagnostics Year in Review.   See CY2025 in a bird's-eye view, edited by Mara Aspinall, 48 pp.

https://www.illuminaventures.com/wp-content/uploads/2026/02/Diagnostics-Year-in-Review-2025-Version-for-Distribution.pdf


##
AI CORNER
##

TL;DR: Should you read this 48-page “Diagnostics Year in Review”?

Yes—if you care about where genomics, AI diagnostics, and reimbursement policy are actually heading, not just headlines. This is one of the clearest “industry synthesis” decks of the year, and it quietly encodes several policy-relevant signals that CMS/FDA watchers will recognize immediately.

The report argues that 2025 marks a shift from post-COVID recovery to a true “diagnostics renaissance.” That’s not hype—it’s grounded in three converging forces:

  • Regulatory relief (LDT rule vacated → innovation unlocked)

  • Platform shift (tests → data/AI-driven clinical decision engines)

  • Capital re-consolidation (massive M&A + “Terrific Ten” winners emerging)


BLOODPAC Releases Webinar and White Paper: New Frontiers in Therapy Selection / Beyond DNA Mutations

BLOODPAC offered a multi-speaker seminar last fall on the topic, New Frontiers in Therapy Selection: Beyond DNA Mutations.  Find the online resources now.



Here's the home page: link.

Find the two-hour YouTube webinar here: https://www.youtube.com/watch?v=yuYdhbdVcpU

As you scroll the home page, you'll also reach the 37 page white paper.

###

Saturday, March 14, 2026

Chris Klomp Now Near Top of HHS; See His One-Hour Recent Interview

Wall Street Journal, Politico, Washington Post have all been covering the shake-up in senior management at HHS - here.    Chris Klomp rises to #2 at HHS.  Here's a profile of Klomp from several news articles.

Which gives extra importance to a one-hour interview that Paragon Institute posted just a few weeks ago.  

  • Find the text here
  • the YouTube archive here.  
  • He's interviewed by Brian Blase, President of Paragon Institute, and policymaker Demetrios Kouzoukas.

Here's an AI article based on the interview transcript. [Chat GPT 5.4]

###

Chris Klomp’s Policy Playbook: 

Markets, Incentives, and the Power to Convene at CMS

In a wide-ranging Paragon interview, new HHS deputy Chris Klomp outlines a Medicare strategy built on incentives, market signals, and stakeholder convening rather than regulation, offering insight into emerging federal health policy direction. (January 27, 2026).



The Strange Place of FIT Testing Between FDA Label and Medicare Screening

Header:  CMS is reviewing its coverage standards for CRC screening biomarkers - stool, blood, etc.   But CMS explicitly declines to fold FIT testing into the new system.   (FIT testing will remain untouched, as-is.)

There's quite a story there - FIT testing is FDA-regulated much differently than Cologuard, Shield, etc.  Here's an essay planned by me, but written by Chat GPT 4 in a few seconds.

###

FIT Testing: Why It Has a Special, Hands-Off Status in the Screening NCD

Chat GPT 5.4

CMS’s current proposal on non-invasive colorectal cancer biomarker tests is notable not only for what it addresses, but also for what it leaves untouched. CMS is proposing new evidence standards for emerging biomarker tests, but it is not reopening its longstanding coverage of FIT and guaiac FOBT under the colorectal cancer screening benefit. That omission is interesting—at least to CMS policy nerds—because FIT looks simple from a distance but becomes surprisingly complicated if one tries to revisit the benefit rigorously. CMS may have decided it is better not to stir that particular hornet’s nest. [1][2]

The key issue is regulatory. FIT is not regulated like Cologuard. 

WSJ: White House Shakes Up HHS Management

Per the WSJ of March 13 and 14, the White House has multiple pathways for "shaking up HHS" at the top management level.

https://www.wsj.com/politics/policy/white-house-pushes-shake-up-at-hhs-ahead-of-midterms-6ad882a5      and also  https://www.wsj.com/politics/policy/trump-rfk-jr-hhs-midterm-elections-cef51179

See also WaPo and Politico.

Chris Klomp, November




  • White House installs Chris Klomp as HHS No. 2 under Kennedy.  (Jim O'Neill displaced; General Counsel Mike Stuart is out.)

  • See my detailed article on a recent one-hour interview with Klomp - here.

  • Klomp to oversee operations, messaging, and management coordination.

  • Three senior counselors added across CMS and FDA leadership.

  • Shake-up aims to speed execution of “Make America Healthy Again.”

  • Leadership changes follow operational problems, including reversed grant cancellations.

  • Administration seeks disciplined messaging ahead of healthcare-focused midterm elections.

Friday, March 13, 2026

Mapping the Colorectal Cancer Screening Proposal: Why Use an Efficiency Frontier

CMS has a current NCD for biomarker CRC screening, using 74% sensitivity and 90% specificity as a benchmark.  This means you pick up about 3/4 of cancers (relative to colonoscopy) and you send about 10 patients per 100 to a colonoscopy triggered by a false positive.

Here I expand on a prior blog and show the two new CMS options graphically.

We can show the statistical space on a probability chart.  The vertical axis is specificity (and also shows "FP per 100").   The horizontal axis is the complement of sensitivity (1 minus sensitivity); it also shows "cancers missed per 100."   The IDEAL PLACE to be is the far upper left corner.


Since the required conditions are expressed as ≥, they look like an (x, y) point but actually define a rectangular solution space.  Any given clinical trial will represent a point with a cloud for its SD (such as specificity 90% ± 2, sensitivity 85% ± 3).
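For concreteness, here is a minimal Python sketch of the chart's coordinates and the rectangular ≥/≥ membership test, using the current NCD benchmark (74% sensitivity, 90% specificity).  The function names are my own illustration, not anything from CMS.

```python
def chart_coords(sens, spec):
    """Map (sensitivity, specificity) onto the chart's secondary labels:
    x = cancers missed per 100 (1 - sensitivity),
    y = false positives per 100 (1 - specificity).
    The ideal spot is zero on both counts."""
    return 100 * (1 - sens), 100 * (1 - spec)

def meets_current_ncd(sens, spec, min_sens=0.74, min_spec=0.90):
    """The two >= conditions define a rectangular solution space,
    not a single point: any test at or above both thresholds passes."""
    return sens >= min_sens and spec >= min_spec
```

For example, a test sitting exactly at the benchmark (74% sensitivity, 90% specificity) misses about 26 cancers per 100 and sends about 10 patients per 100 to a false-positive colonoscopy; anything up and to the left of that corner also passes.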

AI History You Can Use: MACs, BCBS Plans, Corporate Structures in Review

There are many complex relationships among Medicare contractors, Blues plans, holding entities, novelty names (Elevance), and more.   Chat GPT works hard to sort it all out.

I've read it all and it's directionally correct and consistent with what I know - but I don't guarantee every word is correct.  It's an example of what AI research and AI writing can create, as of 3/2026.

###

The Blue System, Medicare Contractors, and the Curious Case of MolDX

At first glance, entities such as Novitas, First Coast Service Options (FCSO), Palmetto GBA, CGS, and MolDX can look like a tangle of shells, aliases, and contractual masks. In reality, the structure is more intelligible than it first appears, though still sufficiently layered to invite confusion. The key is to distinguish among three different kinds of relationships: first, the relationship between the Blue Cross Blue Shield Association (BCBSA) and local or regional Blue plans; second, the relationship between those Blue plans and their government-services subsidiaries or affiliates; and third, the difference between a corporate entity and a programmatic framework such as MolDX. Once those distinctions are kept in view, the web of Novitas, FCSO, Palmetto, CGS, WPS, Noridian, Florida Blue, South Carolina Blues, Anthem, and Elevance becomes much easier to parse.[1][2] (Blue Cross Blue Shield Association)

Thursday, March 12, 2026

AI History You Can Use: Relive Amazing 2007-2010 Debates about FDA, LDT, CDX

In the past couple years, we've lived through FDA regulation of LDTs, court cases, and expanding capabilities of genomic diagnostics, many of them LDTs. 

However, it's worthwhile to recall the period 2007-2010, when a PGx test to predict rituximab responders led to pushback from Genentech against LDTs, a Citizen's Petition to FDA, and a National Academies review in 2010. The PGx FCGR rituximab test largely sank out of view by then, and later meta-analyses were negative. Here is a retelling of the whole story from Chat GPT 5.4.

The article below is written entirely by Chat GPT and as a side bar I provide a link to the whole Chat GPT dialog in its original form of prompts and answers:  Here.

It would have taken me hours to research and write this essay by hand.  With AI, it took a half hour from my first vague prompt about a half-remembered something.

###

 


Tuesday, March 10, 2026

CMS Posts New Idea for Colorectal Screening Biomarkers - But Should Use an Equivalence Frontier

Update - I walk through the  old and new SENS-SPEC spaces, graphically, here

###

On March 10, 2026, CMS released a new proposal for covering non-invasive CRC screening tests.   Currently, CMS uses a threshold of 74% sensitivity (picking up about 3/4 of all colon cancers relative to colonoscopy) and 90% specificity (sending about 1 in 10 patients to colonoscopy due to a false-positive (FP) biomarker).  CMS uses one NCD for stool DNA-FIT testing (Cologuard) and another NCD for blood-based CRC screening.

In its opening of the NCD revision six months ago, CMS proposed to change the title to "Non-Invasive Biomarker Tests," suggesting they could merge coverage of blood-based and stool-based tests.  That is what they are in fact attempting to do.  CMS expects to issue its final version June 8, 2026.

However, it looks like CMS is making a cognitive error.  Although they seem to understand there is a continuous tradeoff between SENS and SPEC (just by sliding the cut point up and down), they propose to allow only two particular "bins" for coverage - option one, SENS 90 / SPEC 87, or option two, SENS 79 / SPEC 90.

(Pick up 90% of cancers while sending 13 patients to a false-positive colonoscopy; or pick up 79% of the cancers while sending only 10 patients to an FP colonoscopy.)

Clearly, you should be able to pick up 89% of cancers while sending 11 patients to colonoscopy.  But that would fail.  You'd fail the 90% sensitivity rule of option 1, and you'd fail the 10-FP rule of option 2.

The problem is, companies can get preliminary data, set predetermined cutpoints to meet one bin or the other, and then "miss" both bins while actually having a more accurate test than the NCD requires.  That is, the test performs well against a continuous quality frontier.

This is not hard to express algebraically, and CMS could use a simple formula by which anyone could tell in 30 seconds if a test meets the true (frontier-based) performance or not.   This is also much less wasteful, since you don't have to discard super-costly trials that miss a "bin" while exceeding the implied accuracy frontier.
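To make the algebra concrete, here is a hedged Python sketch of what such a 30-second check could look like.  I assume a straight-line frontier interpolated between CMS's two proposed bins (SENS 90 / SPEC 87 and SENS 79 / SPEC 90); the linear interpolation is my simplification for illustration, not anything CMS has proposed.

```python
def meets_frontier(sens, spec,
                   bin1=(0.90, 0.87),   # CMS option 1: SENS 90 / SPEC 87
                   bin2=(0.79, 0.90)):  # CMS option 2: SENS 79 / SPEC 90
    """Return True if (sens, spec) lies on or above a straight-line
    frontier drawn through the two CMS bins.  The linear frontier is
    an illustrative assumption, not CMS policy."""
    (s1, c1), (s2, c2) = bin1, bin2
    if not (min(s1, s2) <= sens <= max(s1, s2)):
        # Outside the interpolated range, fall back to the plain bin rules.
        return (sens >= s1 and spec >= c1) or (sens >= s2 and spec >= c2)
    slope = (c1 - c2) / (s1 - s2)              # about -0.27 spec per unit sens
    required_spec = c1 + (sens - s1) * slope   # frontier height at this sens
    return spec >= required_spec
```

Under this sketch, the worked example above - 89% sensitivity with 11 false positives per 100 (89% specificity) - fails both literal bins but clears the interpolated frontier, which is exactly the kind of test the bin approach would wrongly discard.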

I'll let Chat GPT explain it.