Monday, March 30, 2026

Horizons in Diagnostics Value: Case Study: Rethinking Value for Infection Diagnostics

Here's a paper that is worth discussion, and potentially applicable to many areas of diagnostics, not just infection.

In a 2025 paper in Open Forum Infectious Diseases, Claeys, Prinzi, and Timbrook create some excellent thought capital.  Here.



It's also a great example of a good abstract - I can't do better than quoting it.

  • Evaluating the clinical impact of in vitro diagnostic tests (IVDs) for infectious diseases is complex given their effectiveness depends on context, implementation, and provider behavior. 
  • Traditional methodologies for therapy interventions do not adequately capture this complexity, necessitating novel analytical approaches and study designs. 
  • This review highlights methodological considerations for improving evidence generation for infectious diseases IVDs. 
    • Design and analysis challenges leading to bias and related solutions are reviewed such as the target trial framework. 
    • Moreover, novel frameworks such as Benefit–Risk Evaluation of Diagnostics: A Framework, Desirability of Outcome Ranking Management of Antimicrobial Therapy, and Desirability of Outcome Ranking and study designs such as hybrid effectiveness–implementation designs are discussed which allow for holistic ways to assess real-world outcomes.
  •  By evaluating IVDs with practical, real-world evidence, tests can better inform clinical decision making, policy, and ultimately patient outcomes.

###

I made an amateur attempt at this type of thinking in 2025.  Over the years I'd heard on and off about "Shannon Information Theory," and how it is fundamental to modern communications.  I asked Chat GPT to think hard about Shannon Information Theory, and then to explore how it might provide new insights into the value and valuation of Point of Care Tests.   I thought the results were, at least, thought-provoking, and an interesting use of AI to explore the horizon of ideas.

Here's some of what I wrote last fall:

Within one blog post, I asked Chat GPT to debug a Windows problem - it did.  And to think about Shannon-Meets-Dx - it did.  See the blog here and focus on the second half:

https://www.discoveriesinhealthpolicy.com/2025/12/two-fascinating-ai-tasks-windows-repair.html

More detail about the Shannon Project:

https://bqwebpage.blogspot.com/2025/11/ai-poct-and-shannon-info-theory-in.html

###

I asked Chat GPT to compare the two projects (Claeys and Shannon).

###

Both Claeys et al. and the November 2025 Shannon essay try to rethink diagnostics from first principles. 

Claeys argues that diagnostics should not be evaluated like drugs, because their value depends on implementation, stewardship, workflow, and provider behavior, not accuracy alone. 

The Shannon essay makes a parallel conceptual move, arguing that POCT changes the information architecture of care by reducing delay, noise, memory loss, and failed follow-up.

 Together, the two pieces are synergistic: Claeys offers the modern methods for proving diagnostic value in real-world settings, while Shannon explains more deeply why rapid, well-embedded diagnostics can create more usable clinical value.

###

Claeys et al. move the diagnostics-value discussion beyond accuracy. Their central argument is that infectious disease diagnostics should not be judged the way drugs are judged. A drug acts directly; a diagnostic acts indirectly, through clinician interpretation, implementation, stewardship, workflow, and local practice patterns. That means a test with excellent analytical performance may still show weak or inconsistent clinical impact if the surrounding care system is poorly designed. In their framing, the real object of study is not just the assay, but the assay embedded in a care pathway.

That is highly relevant to readers focused on value. Claeys et al. are effectively saying that value is produced by a chain: test result, interpretation, treatment change, timing, downstream outcomes, and local implementation. They explicitly argue that accuracy alone is not enough, and that reimbursement, guideline adoption, and market access require evidence about patient outcomes and real-world use. They also emphasize diagnostic stewardship and implementation science as integral, not decorative, parts of the evidence package.

Methodologically, the paper is sophisticated and unusually practical. It urges baseline local data before launching outcomes studies, because a test cannot show much benefit if the clinical opportunity for improvement is already small. It recommends explicit PICOTS framing, avoiding subjective adjudicated primary outcomes when reliability is poor, and using causal tools such as DAGs rather than loose, stepwise model-building. It also stresses the target trial framework for observational studies, in part to reduce familiar biases like immortal time bias and conditioning on future events. That is a very modern message: diagnostics studies should stop being casual before-after exercises and start behaving like careful causal inference.

  • PICOTS: Population, Intervention, Comparison, Outcome, Timing, and Setting.  DAG: Directed Acyclic Graph, a causal diagram with boxes for variables and arrows showing which things may cause which other things (see the sketch below).
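For readers who have never drawn one, here is a minimal sketch of a diagnostics DAG in Python (networkx). The variable names are my own illustration, not taken from Claeys et al.

```python
import networkx as nx

# A toy causal diagram (DAG) for a diagnostic-test outcomes study.
# Variable names are my own illustration, not from Claeys et al.
dag = nx.DiGraph()
dag.add_edges_from([
    ("RapidTestUsed",   "TimeToResult"),
    ("TimeToResult",    "TherapyChange"),
    ("TherapyChange",   "PatientOutcome"),
    ("Severity",        "RapidTestUsed"),   # confounder: sicker patients get tested
    ("Severity",        "PatientOutcome"),
    ("SiteStewardship", "TherapyChange"),   # implementation shapes action on results
    ("SiteStewardship", "PatientOutcome"),
])
assert nx.is_directed_acyclic_graph(dag)    # "acyclic": no feedback loops

# Writing down each variable's direct causes makes the adjustment set
# explicit, instead of loose stepwise model-building.
for node in dag.nodes:
    parents = list(dag.predecessors(node))
    if parents:
        print(f"{node} <- {', '.join(parents)}")
```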

Claeys also makes a subtle but important point about heterogeneity. A diagnostic RCT does not settle the matter once and for all, because the effect of the test varies by center, prescribing culture, epidemiology, business-hours coverage, stewardship maturity, and user trust. Their discussion of the ADEQUATE trial is revealing: overall benefit may appear modest, yet center-level effects can range from strong benefit to no benefit to paradoxical worsening. For diagnostics, that is not a nuisance variable. It is part of the biology of value creation.

The paper’s alternative frameworks are especially important for value-oriented readers. Claeys et al. discuss BED-FRAME, DOOR-MAT, and DOOR because conventional endpoints often miss what diagnostics actually do. A panel may have similar positive percent agreement to a comparator but produce materially different antimicrobial decisions; DOOR-MAT is meant to capture that downstream therapeutic desirability. DOOR then broadens to patient-level ranked outcomes. In other words, the field is trying to measure not just whether the test is “right,” but whether it drives better management in context.
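For readers new to DOOR, the core statistic is simple: the probability that a randomly chosen patient managed with the new strategy has a more desirable ranked outcome than a randomly chosen comparator patient, with ties counted as half. Here is a minimal sketch of that arithmetic, using invented counts (not from any study):

```python
# Minimal sketch of a DOOR comparison (counts are invented, not from
# any study).  Outcome ranks: 1 = most desirable ... 4 = least (death).
test_arm    = {1: 60, 2: 25, 3: 10, 4: 5}    # patients per rank, new strategy
control_arm = {1: 45, 2: 30, 3: 15, 4: 10}   # patients per rank, comparator

wins = ties = 0
for rank_t, n_t in test_arm.items():
    for rank_c, n_c in control_arm.items():
        if rank_t < rank_c:        # lower rank = more desirable outcome
            wins += n_t * n_c
        elif rank_t == rank_c:
            ties += n_t * n_c

pairs = sum(test_arm.values()) * sum(control_arm.values())
door_prob = (wins + 0.5 * ties) / pairs
print(f"DOOR probability: {door_prob:.3f}")   # > 0.5 favors the new strategy
```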

Your November 2025 Shannon essay attacks the same problem from a different angle. It argues that POCT changes the information architecture of care. The classic central-lab pathway is described as delayed, noisy, and erasure-prone: the clinician’s memory of the original encounter degrades, the patient may no longer be reachable in a high-bandwidth way, and much of the potential value leaks out between result release and successful action. 

POCT, by contrast, turns testing into a real-time, feedback-enabled dialogue in which the result can immediately reshape questioning, examination, explanation, and next-step action.
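The erasure framing can be made concrete with a toy calculation. Below is a minimal sketch, with invented erasure rates, treating "result released but never acted on" as an erasure channel; it is my illustration of the essay's logic, not a calculation from the essay itself.

```python
# Toy Shannon-style calculation (my illustration, not from the essay):
# treat the path from result release to clinical action as a binary
# erasure channel, where an "erasure" means the result never drives
# an action (patient unreachable, clinical context lost, no follow-up).
def erasure_capacity(p_erase: float) -> float:
    """Capacity of a binary erasure channel, in bits per result."""
    return 1.0 - p_erase

# Hypothetical erasure rates, for illustration only:
central_lab = erasure_capacity(0.30)   # 30% of results never acted on
poct        = erasure_capacity(0.05)   # result discussed in the same visit

print(f"usable bits per result, central lab: {central_lab:.2f}")
print(f"usable bits per result, POCT:        {poct:.2f}")
# Same assay, same accuracy -- but the POCT channel delivers more
# usable information into action, which is the essay's core claim.
```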

This is where the two publications are genuinely synergistic. Claeys gives the methodological and evidentiary scaffolding; Shannon gives the deeper theory of why those methods matter. Claeys says clinical impact depends on context, implementation, and provider behavior. Shannon explains that this is because the diagnostic is part of a communication-and-control system, not a stand-alone object. The test is valuable insofar as it increases usable information at the right moment, reduces transmission loss, and changes decisions before biological thresholds are crossed. Your essay therefore supplies a conceptual physics for the empirical observations that Claeys catalogs.

One powerful overlap is the idea of stewardship as channel management. Claeys emphasizes diagnostic stewardship and antimicrobial stewardship because a result only matters if used by the right clinician, in the right patient, at the right point in the pathway. Shannon reframes this elegantly: stewardship is the design of an improved, lower-noise, lower-erasure channel from assay output to clinical action. That is a more fundamental statement than “stewardship improves adoption.” It says stewardship is part of the information yield of the test itself.

A second overlap is sequentiality. Claeys criticizes simplistic diagnostic studies and points toward designs that respect timing, care pathways, and real-world decisions. Your Shannon paper says POCT converts diagnosis from a one-pass process into an adaptive experiment, where one result prompts new questions, focused examination, or second-line testing. That suggests a next-generation value framework: diagnostics should sometimes be valued not only by the information content of the first result, but by how well the result orchestrates the next decision node. That is very close to Claeys’ broader concern with pathways and downstream management, but Shannon sharpens it by showing why same-result/same-accuracy can still mean different total information harvested from the encounter.
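A toy Bayes calculation shows what "adaptive experiment" means in information terms. The sketch below uses invented pre-test probabilities and test characteristics; the point is only that a same-visit second test harvests additional bits from the same encounter.

```python
import math

# Toy model of "adaptive" same-visit testing (my illustration; the
# probabilities and test characteristics below are invented).
def entropy(p: float) -> float:
    """Residual diagnostic uncertainty, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def bayes_update(prior: float, sens: float, spec: float) -> float:
    """Posterior P(disease) after a positive result."""
    num = sens * prior
    return num / (num + (1 - spec) * (1 - prior))

prior = 0.30                                   # invented pre-test probability
p1 = bayes_update(prior, sens=0.90, spec=0.90)
print(f"after test 1: P = {p1:.2f}, uncertainty {entropy(p1):.2f} bits")

# A same-visit POCT result lets the clinician immediately run a focused
# second-line test; a delayed result often ends the encounter instead.
p2 = bayes_update(p1, sens=0.85, spec=0.95)
print(f"after test 2: P = {p2:.2f}, uncertainty {entropy(p2):.2f} bits")
```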

A third overlap is timing relative to disease kinetics. Claeys repeatedly treats timing in the care pathway as central. Your paper goes further and says the same bit of information has different control value depending on whether it arrives before or after an irreversible biological threshold. In infectious disease, that is especially potent: hours matter in sepsis, escalation matters in resistant pathogens, and early de-escalation matters for stewardship harms. Claeys provides the outcomes-research toolkit; Shannon explains why a same-day result can have disproportionate value rather than merely incremental convenience.

A fourth overlap is system entropy. Claeys hints that local care systems, mixed-methods factors, and implementation conditions shape outcomes. Your Shannon essay makes the systems claim explicit: delayed clarification increases downstream uncertainty, congestion, misrouting, and rework, while same-visit clarification reduces the entropy of care pathways. 

For hospital leaders, this is gold. It links the familiar operational language of length of stay, isolation decisions, unnecessary admissions, and routing errors to a deeper theory of information flow. That could make a diagnostics value story more legible to administrators and payers, not just ID specialists.

So the combined message is stronger than either paper alone. Claeys et al. say: study diagnostics as complex interventions whose impact depends on implementation, stewardship, causal design, and downstream management. Your Shannon piece says: yes, because diagnostics create value by changing the fidelity, timing, bandwidth, and feedback properties of the clinical information channel. Put together, they imply that the “value of diagnostics” is not merely sensitivity plus specificity plus turnaround time. It is the amount of clinically usable information successfully transmitted into action, at the right moment, within a real care network.

If I were turning this into a DIHP-style synthesis, the headline would be something like this: Claeys provides the methods for proving that diagnostics matter in the real world; Shannon provides the theory for understanding why rapid, well-embedded diagnostics can extract more value from the same raw assay performance. The synergy is that one paper tells us how to measure value beyond accuracy, and the other tells us what value actually is.

###

Both Claeys and Shannon are examples of, as Apple said, "Think Different."

###

Exactly.

Both pieces are trying to escape the default mental model. The default model says a diagnostic is basically a little machine for generating an answer, and then we judge it mainly by accuracy, maybe with some secondary nod to speed. Both of these works push back on that.

Claeys et al. think different methodologically. They are saying: stop treating diagnostics like miniature drugs. A diagnostic does not “work” in isolation. It works only through behavior, workflow, timing, stewardship, and implementation. So the study design has to think at the level of the care system, not just the test cartridge.

Your Shannon piece thinks different conceptually. It says: stop treating a diagnostic as a detached fact-generator. Think of it as part of an information channel inside a live clinical encounter. The issue is not just whether the result is correct, but whether the result arrives with high enough fidelity, low enough noise, and early enough control value to actually change what happens.

So yes—same spirit. Claeys breaks out of the therapeutic-trial mindset. Shannon breaks out of the lab-benchmark mindset. One reimagines how we prove value; the other reimagines what value is.

That is why they feel so compatible. They are each saying, in different dialects, that diagnostics are being undervalued because we are asking the wrong first question.

A nice one-line formulation might be:

  • Claeys says diagnostics should be studied as complex interventions. 
  • Shannon says diagnostics should be understood as complex information events.

That is very much a “Think Different” move.



Saturday, March 28, 2026

Chris Klomp, Health Policy Expert for CMS and HHS - Some Notes on His Experience

Over the last few weeks, shake-ups at HHS have brought Chris Klomp to the #2 position next to Secretary Kennedy.  See news reports here; see an annotated one-hour interview with Klomp here.

I asked Chat GPT to discuss his educational background and professional experience through the lens of his current top-level health policy roles.

Endpoints discusses Klomp on AI, Klomp on biotech/China, Klomp on TrumpRx.

Friday, March 27, 2026

Korie et al. 2026: What Drives Next Gen Sequencing Denials at Yale Pathology?

Header:  A Yale pathology study presented at USCAP 2026 shows that NGS reimbursement denials are less about overuse and more about administrative failure—especially ICD-10 miscoding. Only 20% of cases were denied (275/1,392), and most denials occurred despite guideline-concordant testing. The authors conclude the fix is operational, not clinical.




Reimbursement Denials for NGS:
A Systems Problem, Not a Clinical One

[By Chat GPT 5.4]

At the March 2026 USCAP meeting, Korie et al. (Yale Pathology) presented a timely analysis of reimbursement denials for next-generation sequencing (NGS) in solid tumors:

Link (abstract PDF):
https://www.laboratoryinvestigation.org/action/showPdf?pii=S0023-6837%2825%2901936-1

The study evaluated 1,392 NGS tests performed in 2022–2023 at a large academic center. Of these, 275 cases (20%) were denied—a meaningful but not overwhelming fraction. That denominator matters: the system is not broadly failing, but the failures are highly patterned and correctable.

Register for AMA Meeting on Coding & AI: "Appendix S Revisions" - April 16

 AMA has big, big plans for changing how it handles AI services (potentially affecting digital pathology and genomics) in terms of policy and coding, possibly even with whole new classes of codes.  

These come under the headline of "Revising Appendix S," which has been a topic for several AMA CPT meetings in a row.   

You can register with AMA to view and comment on Appendix S plans, under the heading "Tab 67" of the next AMA CPT meeting.  Instructions here.  

New News: April 16:

AMA has just announced a special public meeting on Thursday, April 16, from 4:30-6:00 pm Central Time (5:30-7:00 ET, 2:30-4:00 PT).

Here's the AMA text and links.   Further below, I give you a very short AI summary of Appendix S.

See an essay from AMA policy participant Richard Frank MD - here

Thursday, March 26, 2026

AI, Advanced Software, and AMA CPT Policy: Deadline March 31: Appendix S for Upcoming CPT Meeting

 For several quarters, AMA CPT has been debating major amendments to the AMA CPT "Appendix S," which may have enormous implications for how AI- or software-dominant healthcare services are reimbursed.

At the upcoming AMA CPT meeting in Chicago [virtual registration still available], a new round of revisions to Appendix S will be debated.  You can sign up now to read the current revisions and make public comment.  Debate was vigorous at the AMA CPT last September and this past February.  

Revisions to Appendix S may be followed by the creation of a new coding section called "CMAA," Clinically Meaningful Algorithmic Analyses.   

The deadline to comment is Tuesday, March 31.

My main concern is that they'll bring in policies adapted for radiology, cardiology, and so on, and these may be a poor fit for genomics, a field that makes universal, heavy use of extremely sophisticated software (including AI) and that already does not require "physician work" as its main input.

Here's how to comment:

First, go to the online PDF agenda for the April CPT meeting:

https://www.ama-assn.org/system/files/cpt-panel-may-2026-agenda.pdf

Click on the boldface link for INTERESTED PARTY COMMENT.  This should take you to the AMA website here; if the link below doesn't work, use the link in the PDF.

https://cptsmartapp.ama-assn.org/ipdashboard

You may need to register with AMA (by email) to access AMA functions like this comment dashboard.

When you get to the AMA CPT Smart App, be sure to click the tab near the top for the "INTERESTED PARTY" view. Scroll down to the bottom.


Note that for Tab 67, Appendix S, the "Ballot" option (far-right column) is where you find the actual markup version of the new Appendix S.

Use the progress button near the bottom to scroll ahead to Tab 67 (Appendix S).


So you've opened the Interested Party portal and advanced to Tab 67 (Appendix S).   There are four columns:

  • IP Interested Party Access (to CPT application and supporting documents like publications)
  • IP Comment (you get a fixed form on which to write your comments)
  • View Comments
  • BALLOT (for Appendix S, this is the one you need: the actual 4-page appendix)
Appendix S is damn hard to read - it's nearly entirely struck-out and inserted text from beginning to end.   But it's important.

See snapshot of the heavy edits throughout:





Comment on CRUSH, CMS Policy, Genomic Testing: Due Monday, March 30

On February 27, 2026, CMS announced a vigorous plan for anti-waste, anti-fraud measures in Medicare, with strong highlighting given to two areas: (1) durable medical equipment (DME), and (2) genomic testing.   The initiative, abbreviated CRUSH ("crush fraud"), is open for public comment until Monday, March 30, 2026.

See my blog and links here:

https://www.discoveriesinhealthpolicy.com/2026/02/cms-issues-rfi-on-fraud-highlighting.html

Genomics fraud includes highly improper billing of hundreds of millions of dollars for genetic testing that is impossible or pointless in a Medicare population.   (These occurrences were vastly dominated by the states of Florida and Texas, where Medicare payment controls were amazingly weak for years.) An example of a $52M genetic test scheme is here.

There have been 183 comments to date, but comments often pour in on the final day.   See the policy discussion here and find a "submit a public comment" checkbox.  Anti-fraud options discussed include nationalizing MolDx.

https://www.federalregister.gov/documents/2026/02/27/2026-03968/request-for-information-rfi-related-to-comprehensive-regulations-to-uncover-suspicious-healthcare



Tuesday, March 24, 2026

Nerd Note: 2017 PAMA Raw Data File is Still Posted

Header:  CMS still stores publicly available cloud data on lab test pricing surveyed in 2017 and representing CY2016.

###

Congress and CMS are re-activating the PAMA reporting process.  Reporting laboratories (those with >$12,500 in Medicare payments in 1H2025) will report data on all claims paid by commercial payors in 1H2025; reporting takes place in May-June-July 2026.  See websites and announcements at CMS.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting

The prior survey covered 1H2016 data, reported and posted in 2017.  This set a new fee schedule for 2018 forward.

See the 2016/2017 Cloud Data 

At the time, CMS published a gigantic cloud database of reported prices.  I thought that was no longer available, but it seems it is.   The data can be pretty interesting.

On this page:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs

see "CLFS Applicable Raw Data File from 2017 Reporting."


That sends you here:

https://data.cms.gov/provider-characteristics/hospitals-and-other-facilities/medicare-clinical-laboratory-fee-schedule-private-payer-rates-and-volumes


For example, if you search 81211 (a popular BRCA code in 2016), you get 374 rows of data.   

I wasn't sure how that squares with a contemporaneous 2017 data file I have for PAMA 81211, which has 2,364 rows of pricing data.  From 2,364 rows (my file) to 374 (online file today), about 85% of the rows are missing.  This is because the current cloud data leaves out all price reporting with fewer than 10 units per line, while my old file includes many lines with only 1 or 2 payments at that price.  The overall shape of the data would be the same, just scaled down.
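If you want to check the suppression effect yourself, here's a minimal sketch in Python (pandas). The file name and column names (price, volume) are hypothetical; the real CMS file's headers may differ.

```python
import pandas as pd

# Sketch of the suppression effect described above.  File name and
# column names (price, volume) are hypothetical placeholders.
full = pd.read_csv("pama_2017_raw_81211.csv")   # archived file: 2,364 rows

# The current cloud file suppresses any price line reported <10 times.
public_view = full[full["volume"] >= 10]

print(len(full), "rows in the archived file")
print(len(public_view), "rows surviving the <10-unit suppression rule")

# The shape of the price distribution is similar either way (note:
# these quartiles are per price line, not volume-weighted).
for name, df in [("full", full), ("suppressed", public_view)]:
    q = df["price"].quantile([0.25, 0.5, 0.75])
    print(name, q.to_dict())
```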

For example, the 2016 data I have (with 2,364 rows) for BRCA 81211 shows a large price peak around $2,900, which I assume reflects Myriad payments (this was not long after the BRCA Supreme Court case); relatively few commercial payments at the CMS rate back then (around $2,200); and another peak of payments around $1,800, which was 85% of the CLFS at the time.   So I inferred (this is just armchair guessing) that Myriad was cruising along with numerous legacy contracts in the $3,000 range for BRCA 81211, while new entrants were entering the newly opened BRCA market, often at 85% of the CLFS.   



It was also notable that the thin tail of cheapest payments went below $100, and the thin top end had the rare payment over $6,000.

81455

AMA CPT created codes 81445 (5-50 tumor genes) and 81455 (51+ tumor genes) at the same time.  81445 was gapfill-priced to about $600, but 81455 was not priced by MACs.   However, for 2016 PAMA claims reported in 2017, the (current) database shows 53 claims for 81455.  (Recall it excludes any line with 9 or fewer units.)

For 81455, there were 14 claims at $9,900, 11 claims at $1,600, 18 claims at $3,579, and 10 claims at $4,500.   From this, you'd expect the median was $3,579, but it was actually $2,916 when all claims (including those prices used 9 times or fewer) were counted.   We learned from BRCA claims for 81211 that the exclusion of price levels with 9 or fewer paid claims omitted over 80% of the total data.  
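The median arithmetic is easy to reproduce from the four surviving price lines; a minimal sketch:

```python
# Reproducing the 81455 median from the four surviving price lines:
# (price, number of claims at that price), per the current cloud file.
lines = [(9900, 14), (1600, 11), (3579, 18), (4500, 10)]

claims = sorted(price for price, n in lines for _ in range(n))
print(len(claims))               # 53 claims
print(claims[len(claims) // 2])  # middle (27th) value -> 3579

# The actual median, $2,916, was lower because it also counted the many
# price lines suppressed from the cloud file (those with 9 or fewer
# claims) -- as with 81211, the suppressed lines were most of the data.
```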

Monday, March 23, 2026

AI Experiment: How Alex Dickinson Describes the CARIS MCED ACHIEVE Report

In March, Caris released top-line results of its ACHIEVE study, testing its MCED test in real cases.  Press release here.  Active Linked In author Alex Dickinson wrote a set of 5 articles about the results.  One, two, three, four, five.

Out of curiosity, I asked what Chat GPT could make of the six documents.



AI CORNER

###

Overview

Caris reports striking interim performance for its Detect MCED assay using deep whole-genome sequencing, with unexpectedly strong early-stage sensitivity in common cancers. However, enriched cohorts, limited follow-up, and incomplete blinded validation constrain interpretation. Dickinson’s analyses highlight a differentiated WGS multi-signal strategy with potential advantages over methylation-first approaches.


Consolidated Article (Caris + Dickinson)

Focusing first on the press release, the key point is that Caris reported an interim analysis, not a completed prospective screening validation. The Achieve 1 dataset includes 2,122 subjects (1,505 undiagnosed; 617 cancers), but the undiagnosed group is enriched, not general-population screening. 

Only 22.5% had ~1-year follow-up, with ~7% later diagnosed with cancer—again indicating high-risk enrichment. About 865 samples remain in blinded validation, so current results are signal-generating, not definitive.

The reported performance is notable. Stage-specific sensitivity was 56.8% (I), 70.1% (II), 77.1% (III), 99.1% (IV), with 61.3% for stage I–II. Early-stage sensitivity in key cancers included 53% breast, 78.9% prostate, 86.7% lung, and 62.2% colorectal. Specificity was 99.1% in a small asymptomatic subset (n=121) and 95.3% in the broader undiagnosed cohort. These are the central empirical results.

An Expert Discusses The Data

Dickinson’s posts provide useful context. He frames Caris as entering MCED from a position of scale and infrastructure—large tumor databases, clinical profiling, and sequencing capacity—suggesting Detect is an extension of an existing oncology data platform rather than a stand-alone assay.

Scientifically, Dickinson highlights the assay design: ~250x whole-genome sequencing of plasma with paired buffy coat sequencing to remove CHIP, extracting mutational, fragmentomic, and nucleosome-positioning signals for ML classification. This multi-signal WGS framework plausibly explains the observed sensitivity pattern.

His most provocative point concerns tumor-type performance, especially breast and prostate. He argues methylation-first MCED approaches may underperform in these high-incidence cancers, making Caris’s relatively strong early-stage sensitivity clinically meaningful. This reframes MCED evaluation toward incidence-weighted performance, not aggregate sensitivity alone.

Key Limitations

However, key limitations remain. The 99.1% specificity estimate rests on only 121 asymptomatic subjects. The broader 95.3% specificity is more relevant operationally but less reassuring for screening. The cohort is enriched, follow-up incomplete, and blinded validation pending—raising the risk of optimistic interim estimates.
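To see why the gap between 99.1% and 95.3% specificity matters so much for screening, here's a back-of-envelope PPV calculation at an assumed 1% prevalence (the prevalence, and the use of stage I-II sensitivity, are my illustrative assumptions, not Caris figures):

```python
# Back-of-envelope PPV at an assumed 1% screening prevalence.
# The prevalence (and the use of stage I-II sensitivity) are
# illustrative assumptions, not Caris figures.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens = 0.613  # reported stage I-II sensitivity
for spec in (0.991, 0.953):
    print(f"specificity {spec:.1%} -> PPV {ppv(sens, spec, 0.01):.1%}")

# At 99.1% specificity, roughly 4 in 10 positives are true cancers;
# at 95.3%, nearly 9 in 10 positives are false alarms.  That is why
# the small (n=121) asymptomatic subset matters so much.
```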

Caris also claims superiority over methylation approaches, but provides no head-to-head comparison, making this an interpretive rather than evidentiary claim. Cross-study comparisons, including Dickinson’s, remain hypothesis-generating.

Strategically, Dickinson emphasizes Caris’s ability to pursue a sequencing-intensive design due to its scale. He notes current costs may be high (e.g., overlapping 2x150 reads), but could fall with alternative platforms. This underscores that economic feasibility will be as important as analytical performance.

Finally, Caris signals future expansion to whole transcriptome integration, reinforcing its view of MCED as a multimodal inference problem, though at the cost of added complexity.

Bottom line: Detect represents a biologically coherent and potentially important MCED approach with strong interim early-stage signals. But given cohort design, small screening subsets, and pending validation, the appropriate stance is serious interest with disciplined caution, not definitive conclusion.



CMS Reports on First Rural Health Summit under Tech Transformation

One of the lesser-known outcomes of last summer's budget bill was $50B for rural health transformation (RHT) focused on technology.  Last fall CMS established the Office of RHT and in December $50B of awards were announced.  Home page here.

On March 18, CMS convened leaders from all 50 states to discuss.  Find the online report here:

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-first-rural-health-transformation-summit-advance-state-led-innovation



AI CORNER

Here's a 100-word summary.

CMS convened its first Rural Health Transformation Summit on March 18, 2026, bringing leaders from all 50 states together to advance a $50 billion initiative created under recent federal legislation. The program aims to strengthen rural health systems through 

  • expanded access to care, 
  • workforce development, 
  • technology modernization, and 
  • innovative payment models. 

Discussions focused on five priorities: 

  • prevention and chronic disease, 
  • sustainable access, 
  • workforce pipelines, 
  • value-based care, and 
  • health IT. 
States shared early strategies such as telehealth, mobile units, and regional partnerships. CMS emphasized aligning efforts with Medicare and Medicaid financing and embedding long-term workforce and infrastructure solutions to ensure durable improvements in rural health outcomes.

Saturday, March 21, 2026

Journal Club: Value of WGS in Real-World Cancers (Van Putten, Nat Med)

What's the value of going upscale to whole genome sequencing (WGS) in solid cancers?  Van Putten et al. assemble data from their experience with 888 solid cancers.  The work is from the Hartwig Medical Foundation / Netherlands Cancer Institute.

Find the paper here and a Linked In essay here by Joseph Steward.  And here by Alex Dickinson.  Dr. Cuppen here, scientific director of Hartwig Foundation.


Most samples in this study were frozen tissue (89% success rate), but the authors remark that when archived samples were used, the success rate was essentially the same (90%).

Chat GPT Discusses the Paper:

Friday, March 20, 2026

Can AI Re-Think Health Policy? Example Using WSJ Policy Essay (& MolDx)

Can AI read an article and project its possible applications into a different field?  That's today's question. 

Starting point: WSJ runs an essay by Harvard economics professor and Manhattan Institute authority Roland Fryer.  Fryer here, essay here.   


While his article was on "regulating AI," it clearly had ramifications or applications in other policy domains.  I asked Chat GPT 5 to read the essay and discuss its projection onto healthcare policy such as CMS.   I deliberately left my main initial request vague.   

At bottom, I ask it some Q&A, including how this applies to MolDx.

Here comes the initial response to my request, "apply Fryer's thinking to healthcare policy."

Thursday, March 19, 2026

NCCN Recommends NGS in All Stages of Pancreatic Cancer: Direct Conflict with Outdated Medicare NCD

Tuesday, I was in a webinar where stakeholders were discussing the badly-outdated Medicare NCD for NGS testing in cancer.  Thursday of the same week, more proof of the problem hit my inbox.

See the March 18, 2026, release of new pancreatic cancer management guidelines from the National Comprehensive Cancer Network (NCCN).   Here.   See coverage in Genomeweb, here.



Tuesday, March 17, 2026

Waiv (Former Owkin Dx) Brings In $33M For AI Precision Diagnostics

It's not every day you hear of a new $33M going into spatial diagnostics - but it happened this week.

WAIV Diagnostics, Paris (formerly Owkin Dx), is now a spinout from Owkin, pulling in its own investors and funding. 


https://wearewaiv.com/news/waiv-secures-33-million-to-scale-ai-precision-testing

See also a post from Meriem Sefta, WAIV CEO.

https://www.linkedin.com/posts/meriemsefta_waiv-waiv-secures-33-million-to-scale-share-7437851429651935232-WHmk/

"We exist to catalyze precision medicine with clinical-grade, AI-powered tests designed to detect biomarkers, predict outcomes, and better understand treatment response in oncology."

###

It's a fit with the Bloodpac webinar and white paper, "it's not just DNA mutations anymore" - here.

https://www.discoveriesinhealthpolicy.com/2026/03/bloodpac-releases-webinar-and-white.html


Illumina Publishes "Diagnostics Year in Review" CY2025 (48pp)

Once again Illumina has published its Diagnostics Year in Review.   See CY2025 in a bird's-eye view, edited by Mara Aspinall, 48pp.

https://www.illuminaventures.com/wp-content/uploads/2026/02/Diagnostics-Year-in-Review-2025-Version-for-Distribution.pdf


##
AI CORNER
##

TL;DR: Should you read this 48-page “Diagnostics Year in Review”?

Yes—if you care about where genomics, AI diagnostics, and reimbursement policy are actually heading, not just headlines. This is one of the clearest “industry synthesis” decks of the year, and it quietly encodes several policy-relevant signals that CMS/FDA watchers will recognize immediately.

The report argues that 2025 marks a shift from post-COVID recovery to a true “diagnostics renaissance.” That’s not hype—it’s grounded in three converging forces:

  • Regulatory relief (LDT rule vacated → innovation unlocked)

  • Platform shift (tests → data/AI-driven clinical decision engines)

  • Capital re-consolidation (massive M&A + “Terrific Ten” winners emerging)


BLOODPAC Releases Webinar and White Paper: New Frontiers in Therapy Selection / Beyond DNA Mutations

BLOODPAC offered a multi-speaker seminar last fall on the topic, New Frontiers in Therapy Selection: Beyond DNA Mutations.  Find the online resources now.



Here's the home page: link.

Find the two-hour YouTube webinar here: https://www.youtube.com/watch?v=yuYdhbdVcpU

As you scroll the home page, you'll also reach the 37-page white paper.

###