Friday, March 13, 2026

Mapping the Colorectal Cancer Screening Proposal: Why Use an Efficiency Frontier

CMS has a current NCD for biomarker CRC screening, using 74% sensitivity and 90% specificity as a benchmark.  This means you pick up about 3/4 of cancers (relative to colonoscopy) and you send about 10 patients per 100 to a false-positive colonoscopy.

Here I expand on a prior blog and show the two new CMS options graphically.

We can show the statistical space on a probability chart.  The vertical axis is specificity (and also shows "FP per 100").  The horizontal axis is 1 minus sensitivity (the miss rate); it also shows "cancers missed per 100."  The IDEAL PLACE to be is the far upper left corner.


Since the required conditions are expressed as ≥, they look like an x,y point but actually define a rectangular solution space.  Any given clinical trial will land as a point with a confidence cloud for its SD (such as 90% spec ± 2, 85% sens ± 3).
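As a toy illustration (my own sketch, not CMS language), the ≥ conditions and the trial's confidence cloud can be written in a few lines of Python. The 74/90 numbers are the current NCD benchmark; the trial figures and function names are invented.

```python
# Toy sketch of the current NCD's rectangular solution space (my illustration).
# Conditions expressed as ">=" define a rectangle; a trial lands as a point
# estimate with a confidence cloud around it. All trial numbers are invented.

SENS_MIN, SPEC_MIN = 0.74, 0.90   # current NCD benchmark

def in_solution_space(sens, spec):
    """Point estimate falls at or above both thresholds."""
    return sens >= SENS_MIN and spec >= SPEC_MIN

def cloud_clears(sens, sens_sd, spec, spec_sd, z=1.0):
    """Stricter reading: the whole +/- z*SD cloud clears both thresholds."""
    return (sens - z * sens_sd) >= SENS_MIN and (spec - z * spec_sd) >= SPEC_MIN

print(in_solution_space(0.76, 0.92))         # True: point is inside the rectangle
print(cloud_clears(0.76, 0.03, 0.92, 0.02))  # False: sens cloud dips below 0.74
```

Whether a reviewer judges the point estimate or the lower confidence bound is exactly the ambiguity the rectangle picture makes visible.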

AI History You Can Use: MACs, BCBS Plans, Corporate Structures in Review

There are many complex relationships among Medicare contractors, Blues plans, holding entities, novelty names (Elevance), and more.   Chat GPT works hard to sort it all out.

I've read it all and it's directionally correct and consistent with what I know - but I don't guarantee every word.  It's an example of what AI research and AI writing can create, as of 3/2026.

###

The Blue System, Medicare Contractors, and the Curious Case of MolDX

At first glance, entities such as Novitas, First Coast Service Options (FCSO), Palmetto GBA, CGS, and MolDX can look like a tangle of shells, aliases, and contractual masks. In reality, the structure is more intelligible than it first appears, though still sufficiently layered to invite confusion. The key is to distinguish among three different kinds of relationships: first, the relationship between the Blue Cross Blue Shield Association (BCBSA) and local or regional Blue plans; second, the relationship between those Blue plans and their government-services subsidiaries or affiliates; and third, the difference between a corporate entity and a programmatic framework such as MolDX. Once those distinctions are kept in view, the web of Novitas, FCSO, Palmetto, CGS, WPS, Noridian, Florida Blue, South Carolina Blues, Anthem, and Elevance becomes much easier to parse.[1][2] (Blue Cross Blue Shield Association)

Thursday, March 12, 2026

AI History You Can Use: Relive Amazing 2007-2010 Debates about FDA, LDT, CDX

In the past couple years, we've lived through FDA regulation of LDTs, court cases, and expanding capabilities of genomic diagnostics, many of them LDTs. 

However, it's worthwhile to recall the period 2007-2010, when a PGx test to predict rituximab responders led to pushback from Genentech against LDTs, a Citizen's Petition to FDA, and a National Academies review in 2010. The PGx FCGR rituximab test largely sank out of view by then, and later meta-analyses were negative. Here is a retelling of the whole story from Chat GPT 5.4.

The article below is written entirely by Chat GPT, and as a sidebar I provide a link to the whole Chat GPT dialog in its original form of prompts and answers:  Here.

It would have taken me hours to research and write this essay by hand.  With AI, it took a half hour from my first vague prompt about a half-remembered something.

###

 


Tuesday, March 10, 2026

CMS Posts New Idea for Colorectal Screening Biomarkers - But Should Use an Equivalence Frontier

Update - I walk through the  old and new SENS-SPEC spaces, graphically, here

###

On March 10, 2026, CMS released a new proposal for covering non-invasive CRC screening tests.  Currently, CMS uses a threshold of sensitivity 74% (picking up about 3/4 of all colon cancers relative to colonoscopy) and a specificity of 90% (sending about 1 in 10 patients to colonoscopy due to a false-positive biomarker).  CMS uses one NCD for DNA-FIT testing (Cologuard) and another NCD for blood-based CRC screening.

In its proposal six months ago, CMS proposed to change the title to "Non-Invasive Biomarker Tests," suggesting they could merge coverage of blood-based and stool-based tests.  That is what they are in fact attempting to do.

However, it looks like CMS is making a cognitive error.  Although they seem to understand there is a continuous tradeoff between SENS and SPEC (just by sliding the cut point up and down), they propose to allow only two particular "bins" for coverage: option one, SENS 90 with SPEC 87; or option two, SENS 79 with SPEC 90.

(Pick up 90% of cancers, while sending 13 patients to a false-positive FP colonoscopy; or pick up 79% of the cancers, while sending only 10 patients to a FP colonoscopy.)

Clearly, you should be able to pick up 89% of cancers while sending only 11 patients to colonoscopy.  But that would fail.  You'd fail the 90% rule of option 1, and you'd fail the 10-FP rule of option 2.

The problem is, companies can get preliminary data, set predetermined cutpoints to meet one or the other bin, and then "miss" both bins, while actually having a more accurate test than the NCD requires.  That is, the test performs well against a continuous quality frontier.

This is not hard to express algebraically, and CMS could use a simple formula by which anyone could tell in 30 seconds if a test meets the true (frontier-based) performance or not.   This is also much less wasteful, since you don't have to discard super-costly trials that miss a "bin" while exceeding the implied accuracy frontier.
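Here is one way the algebra could look - a sketch of the blog's suggestion, not CMS language. The straight line through the two proposed bins, (SENS 90, SPEC 87) and (SENS 79, SPEC 90), becomes the pass/fail boundary; the function names and the linear form are my illustration.

```python
# Hypothetical "equivalence frontier" check (my sketch of the blog's idea).
# The two proposed CMS bins are treated as points on a straight-line tradeoff:
#   Option 1: sensitivity 0.90, specificity 0.87
#   Option 2: sensitivity 0.79, specificity 0.90

def required_specificity(sens, p1=(0.90, 0.87), p2=(0.79, 0.90)):
    """Linear interpolation of the specificity required at a given sensitivity."""
    (s1, sp1), (s2, sp2) = p1, p2
    slope = (sp1 - sp2) / (s1 - s2)   # ~ -0.27: spec given up per point of sens gained
    return sp2 + slope * (sens - s2)

def meets_frontier(sens, spec):
    return spec >= required_specificity(sens)

# The 89/89 test from the example above (11 FP colonoscopies per 100)
# fails BOTH bins, but sits above the straight line between them.
print(meets_frontier(0.89, 0.89))   # True
print(meets_frontier(0.89, 0.86))   # False
```

Anyone could run a test's SENS/SPEC pair through a one-line formula like this and know in 30 seconds whether it clears the implied frontier.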

I'll let Chat GPT explain it.

The CRUSH Initiative and Medicare's Bone-Headed Stupid Payments for 81408 and Other Insane Codes

In June 2023, OIG published that Medicare's highest paid genomic test code, 81408, was likely unbelievable and fraudulent from day 1.  Here.  Practically a billion dollars had gone out from 2018 to 2022, when Medicare payments for 81408 were stopped.  The code was never billed in the NGS MAC and MolDx regions, and nearly all payments were in Texas (Novitas MAC) and Florida (FCSO MAC).

Payments look like this:


If you know that 81408 is medically unbelievable in a Medicare population, let's add that these labs billed and were paid 81408 in units of 2 per patient, 81407 in units of 1 per patient, and 81406 in units of 2 per patient.   So patients actually had not 1, but 5 or more unbelievable codes ON EACH CLAIM.

I first referred to 81408 as the "fraudomatic code" in the fall of 2020.  Here.  Over the next 5 years, I published about a dozen follow-up blogs.

Four more insights into the MAC insanity here:

NEW INSANITY #1

Some labs in Florida had huge payments under 81408 in 2022.  When that gusher of money stopped in 2023 (bar chart above), the SAME LABS just switched to other costly, unbelievable codes like 81419 (epilepsy gene panel).   Here.  OMG.

 NEW INSANITY #2

More on the codes that were switched to.

Despite seeing the massive risks of uncontrolled, costly genetic codes in Texas and Florida by 2022, and publishing on this in 2023, the same insane explosive growth continued in Texas and Florida in 2024, on the codes 81419 (epilepsy $2449), 81440 (mitochondrial $3324), and 81443 (Ashkenazi Panel $2449).


Whereas the natural (original) spending on these codes in Medicare should be, and is, close to zero, the 2024 spending was $161M.

Puzzle - which is worse, DME fraud or Genetics fraud?  I would argue DME fraud bills for implausible volumes of services, while this genetics fraud bills for impossible types of services, which should be easier to detect.  

NEW INSANITY #3

The older rate for 87798, other pathogen, was $35, and spending was already significant in 2019, at $100M.  But nothing can explain the skyrocketing payments, adding $200M from 2022 to 2024.  The extra and sudden $200M was similar to the booming rates for inexplicable codes 81408, 81419, 81440, 81443.


Billing by LabCorp and Quest was about nil.  But in Texas and Florida...watch out.   (MolDx largely cut off 87798 payments by around 2022.)

NEW INSANITY #4

This whole time, the program integrity people at CMS left the "Medically unlikely edit" at N=2 for code 81408.   If it had been reset to "1", which would have taken five minutes, in 2019 or 2020, CMS would have saved $400M.   

Even today, March 2026, the medically reviewed and passable units per claim on 81408 is ... TWO.  Here.  This is a supervised edit.  Someone had to look at this and decide the allowable edits were TWO.  

And even after it was a top fraud investigation - surely, by 2022, based on the 2023 OIG publication - nobody at OIG, or a MAC, or a UPIC, or the big CMS program integrity group, could be bothered to reset the MUE units to 0 or 1, saving hundreds of millions of dollars.  

In March 2026, it's still...medically allowable as TWO units.


So when I'm interviewed about CMS fraud, like the new CRUSH initiative, I say you don't need a supercomputer - the idiocy is a mile deep.

###

Related:

Out of curiosity, I asked Chat GPT to write an essay about potential adverse events from CRUSH in the 'legit' lab industry.



Monday, March 9, 2026

TriCon: Cutting Edge Conference in SF, May 4-5, 2026

The conference TRICON is in its 33rd year, and will be held in San Francisco May 4-5, 2026.  The conference has three main tracks: "Diagnostics Innovation," "Artificial Intelligence," and "Precision Medicine."

(And it comes right on the heels of Dark Report Pathology War College in New Orleans, April 27-29, and AMA CPT in Chicago, April 30-May 1.)

Find the conference website here:

https://www.triconference.com/

I gave the agenda(s) to Chat GPT and asked for a write-up.



####

AI CORNER

####

Summary:
The 2026 TRI-CON Precision Medicine conference highlights the rapid convergence of AI, multi-omic diagnostics, and digital pathology. Across three coordinated tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the meeting reflects a field moving toward AI-enabled interpretation of complex biological data and decentralized deployment of advanced molecular testing. For molecular pathologists and precision medicine specialists, the program signals a transition from isolated diagnostic tests to integrated computational systems guiding clinical decision-making.

------------------

The 33rd Annual TRI-CON Precision Medicine conference, returning to San Francisco in May 2026, brings together leaders in biotechnology, diagnostics, and computational medicine to explore how emerging technologies are reshaping healthcare. Organized around three overlapping tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the program illustrates how the next generation of diagnostics will increasingly depend on the integration of genomics, pathology, imaging, and clinical data within AI-driven analytical frameworks.

A dominant theme across the conference is the emergence of AI-driven multimodal biomarkers. Sessions in the Artificial Intelligence track explore how machine learning models can combine histopathology images, genomic sequencing data, radiology signals, and real-world clinical outcomes to improve biomarker discovery and therapeutic targeting. Digital pathology and computational pathology play a central role in this transformation, with speakers describing how foundation models trained on histology data may enable new biomarker strategies and accelerate clinical trial design. Several presentations also emphasize “agentic AI” systems, in which multiple AI models coordinate across datasets and clinical guidelines to support oncologists and multidisciplinary teams in treatment selection and clinical trial enrollment.

The Diagnostics Innovation track highlights another major shift: the migration of testing from centralized laboratories toward point-of-care and at-home diagnostic environments. New molecular technologies—including CRISPR-based detection platforms and portable multiplex testing systems—are being developed to bring complex molecular assays closer to the patient. However, presenters note that reimbursement policy, site-of-service restrictions, and regulatory frameworks remain major barriers to broader adoption of decentralized molecular diagnostics.

Meanwhile, the Precision Medicine program focuses heavily on liquid biopsy technologies, particularly minimal residual disease (MRD) testing and multi-cancer early detection. These sessions emphasize the growing importance of multi-omic signals—DNA mutations, methylation patterns, RNA expression, and protein markers—interpreted through AI-enabled analytics to detect cancer earlier and monitor disease progression more precisely. While oncology remains the dominant application, the conference also explores precision approaches in metabolic disease, neurology, and population health.

Taken together, the TRI-CON agenda suggests that diagnostics are evolving toward AI-mediated, multimodal clinical intelligence systems, combining laboratory science, computational modeling, and decentralized testing to support more personalized and proactive healthcare.

State Biomarker Bills: Germline Cancer Risk Edition; Plus USPSTF News

The breast cancer advocacy foundation, Susan G Komen for the Cure, is actively promoting the need for copay-free germline breast cancer risk testing, at the state insurance level.

See a headline article at Precision Medicine Online (subscription) by Alison Kanski.


https://www.precisionmedicineonline.com/precision-oncology/komen-push-no-cost-genetic-testing-inherited-cancer-risk-taking-hold-state

  • Two bills became law in Arkansas and Tennessee.
  • The newest three proposed bills have been introduced in 2026 by legislator advocates in Florida, Mississippi, and Tennessee.  Proposed bills are also in play in Georgia, Illinois, and Minnesota.
National

National legislation, the Reducing Hereditary Cancer Act (RHCA), HR 4752, was introduced last summer by Rep. Wasserman-Schultz (FL).

USPSTF?

USPSTF guidelines lag, referring only to BRCA1-2.  The current version is from 2019 (here), while an update has supposedly been underway since 2024 (here).  However, its research plan only promised to re-evaluate BRCA 1&2 per se.

The March meeting of USPSTF was just canceled, meaning USPSTF has not met for a long time (here).   MedpageToday recently published that HHS "may eliminate the USPSTF" here.

USPSTF guidelines are binding on commercial health insurance, per the ACA.   State laws generally only impact fully-insured plans and not large employer self insured ("ASO") plans.




State Biomarker Bills

For the latest on state-level biomarker coverage bills in general, see fightcancer.org (ACS-CAN) here.  A state biomarker law recently advanced to the governor's desk in Mississippi, here.

Friday, March 6, 2026

AMA Lists Full Agenda for April CPT Meeting: Appendix S Again

Web page for April meeting here. (Also registration, dates.) Agenda here.

Pathology comments are due March 12!  Other comments are due March 31.

###

It looks like there are about 60 total topics. Code 31627 for bronchoscopy is proposed to gain an add-on code for robotic assistance (Tab 10).  There is also a Category I add-on code to report standardized analysis of individual abnormalities in PET-CT (Tab 14).  Tab 48 is "AI malignancy risk prediction."

###

Tab 67 is the perennial set of proposals to update Appendix S on software-dominant services and AI.

Revise Appendix S by refining the definitions of assistive, augmentative and autonomous; establish the type of clinical evidence needed to classify an AI medical service as assistive, augmentative or autonomous; clarify what constitutes clinically meaningful output.

As of March 9, you could download the code change application about Appendix S, but not yet the actual changes TO Appendix S.

I walked through each new lab code when lab codes were posted a week back - here

Friends of Cancer Research: AI-Enabled Tumor Assessment, Accuracy-Precision

With a Linked In posting and a new 2026 white paper, Friends of Cancer Research (FoCR) addresses how we should evaluate - and then use - AI-enabled tumor assessment tools.  By which they mean, radiology tools (like RECIST).   

Find the posting here, with an embedded summary deck (6p) as well as a link out to the full white paper (17pp).

https://www.linkedin.com/posts/ai-enabled-approaches-to-tumor-assessment-ugcPost-7433240076404305920-4u1i/

Sidebar:
Adjacent, Linked In gave me an article on "pathomics" (quantitative pathology) by Swarnagouri Naganathanhalli at Johns Hopkins. Here.

Sidebar:
See also:  My blog on AMA new codes, AMA approves Cat III code for AI-assisted PET tumor sizing and longitudinal mapping.   Here.

###
AI CORNER
###

AI Tumor Measurement in Cancer Trials: What This New White Paper Actually Says

Overview. A new 17-page white paper from Friends of Cancer Research (2026) explores how artificial intelligence might change the way tumors are measured in oncology clinical trials. The short version is that the paper focuses almost entirely on AI analysis of radiology images—CT and MRI scans—not histology or digital pathology. The goal is to explore whether AI analysis of scans could eventually supplement or replace the longstanding RECIST system used in drug trials to measure tumor response. The document does not propose immediate regulatory change or introduce a specific algorithm. Instead, it lays out a roadmap for how the oncology ecosystem might validate and adopt AI imaging tools as clinical-trial endpoints. For most readers, it is best understood as a conceptual framework paper rather than a technical breakthrough.

Scope of the paper. One important clarification is what the paper does not cover. The authors limit their discussion to radiologic tumor assessment. AI tools discussed in the document analyze imaging data from CT or MRI scans, detecting tumors, segmenting them, tracking them over time, and quantifying tumor burden. The paper does not address AI applied to pathology slides or digital histology, which is a separate and rapidly growing field. The focus on radiology is deliberate because the current standard system used in oncology trials—RECIST—is itself based on radiology measurements.

The RECIST system. RECIST, or Response Evaluation Criteria in Solid Tumors, has been the dominant framework for evaluating tumor response in clinical trials for more than twenty-five years. Under RECIST, a small number of representative tumors are selected as “target lesions,” and radiologists measure the diameter of those tumors on imaging scans over time. Tumors are then classified as shrinking, stable, or progressing. The approach was originally designed for simplicity and reproducibility across clinical trials, but it has obvious limitations. Only a handful of lesions are measured, the measurements are one-dimensional rather than volumetric, and the process relies on human interpretation, which introduces variability. Perhaps most importantly, RECIST endpoints such as objective response rate and progression-free survival often correlate only imperfectly with overall survival, the ultimate clinical outcome regulators care about.

Why change RECIST. The white paper argues that advances in imaging and computing now make it possible to move beyond these constraints. Artificial intelligence can analyze scans in ways that were simply impractical when RECIST was developed in the 1990s. One relatively conservative application would be AI-assisted RECIST, in which algorithms help radiologists identify tumors and measure them more consistently. In this scenario, AI does not replace human interpretation but instead automates repetitive tasks and reduces measurement variability.

Volumetric measurement. More ambitious approaches involve abandoning one-dimensional tumor measurements entirely. AI systems can measure three-dimensional tumor volumes, allowing the entire tumor burden across the body to be quantified rather than focusing on a few selected lesions. In principle, this could provide a more accurate picture of disease progression and treatment response. Because tumor volume changes exponentially rather than linearly, volumetric measurements may detect treatment effects earlier than RECIST measurements based on diameter.
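A quick worked example (my arithmetic, not the white paper's): because volume scales as the cube of diameter, modest diameter changes translate into large volume changes, which is why volumetric tracking can pick up a treatment effect sooner than a one-dimensional RECIST measurement.

```python
# Illustrative arithmetic: volume grows with the cube of diameter.

def volume_ratio(d_new, d_old):
    """Ratio of tumor volumes implied by a change in diameter (sphere model)."""
    return (d_new / d_old) ** 3

# A ~26% increase in diameter roughly DOUBLES tumor volume...
print(round(volume_ratio(1.26, 1.00), 2))   # 2.0

# ...while RECIST's 20% diameter threshold for "progression"
# already corresponds to a ~73% increase in volume.
print(round(volume_ratio(1.20, 1.00), 2))   # 1.73
```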

Radiomics. Another area discussed in the paper is radiomics, a technique in which AI extracts large numbers of quantitative features from medical images. These features can capture patterns such as tumor texture, vascular architecture, internal heterogeneity, and relationships with surrounding tissues. Researchers believe these imaging signatures may reflect underlying tumor biology, potentially revealing early signals of treatment response that simple size measurements cannot detect.

Growth kinetics. The paper also highlights the potential of modeling tumor growth kinetics. Instead of treating each scan as a static snapshot, AI models can analyze how tumors grow or shrink over time, estimating growth rates and response trajectories. This type of analysis could provide a more dynamic understanding of tumor behavior and help distinguish meaningful treatment effects from normal variability in tumor measurements.
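As a minimal sketch of what "growth kinetics" means in practice (my illustration, not the paper's method), one can fit an exponential growth rate to two volumetric measurements and derive an implied doubling time:

```python
# Minimal growth-kinetics sketch (illustrative only): fit an exponential
# model V(t) = V0 * exp(g * t) to two scans, then get the doubling time.
import math

def growth_rate(v0, v1, days):
    """Exponential growth rate per day between two volume measurements."""
    return math.log(v1 / v0) / days

def doubling_time(g):
    """Days for volume to double at exponential rate g."""
    return math.log(2) / g

# Invented example: 10 mL grows to 12 mL over 8 weeks.
g = growth_rate(10.0, 12.0, 56)
print(round(doubling_time(g)))   # 213 (days)
```

A model like this treats serial scans as a trajectory rather than static snapshots, which is the paper's point about distinguishing real treatment effects from measurement noise.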

Implications for drug development. The motivation behind these ideas is the possibility of improving clinical trial design. Early-phase cancer trials often involve relatively small patient populations, yet RECIST measurements are relatively crude indicators of response. Because RECIST endpoints correlate imperfectly with long-term survival, important decisions about drug development are sometimes made using weak signals. The authors suggest that AI-derived imaging measurements could potentially provide earlier and more biologically meaningful indicators of treatment benefit, enabling faster go-or-no-go decisions and more efficient clinical trials.

Regulatory pathway. A substantial portion of the paper focuses on the regulatory pathway required for such tools to become accepted endpoints in drug development. The authors outline a process similar to other biomarker qualification efforts. First, a specific context of use must be defined—for example, a particular cancer type or treatment setting where improved imaging endpoints could make a difference. Next comes analytical validation, demonstrating that the AI tool measures tumors consistently and reproducibly. This must be followed by clinical validation, showing that the measurement correlates with meaningful patient outcomes. Ultimately, large meta-analyses across multiple clinical trials would be needed to demonstrate that the new measurements predict survival or other key outcomes better than existing endpoints.

Historical precedents. The authors point to historical examples of biomarker qualification to illustrate the process. In breast cancer, pathologic complete response eventually became accepted as an early endpoint in certain neoadjuvant trials. In multiple myeloma, minimal residual disease measurements followed a similar trajectory. In both cases, years of collaborative research, standardization, and pooled analyses were required before regulators accepted these markers as credible indicators of treatment benefit. The paper suggests that AI imaging biomarkers may need a similar development pathway.

A regulatory nuance. An interesting nuance appears in the discussion of how these tools might initially be used. When AI measurements are used solely to analyze clinical trial endpoints—rather than to guide treatment decisions for individual patients—they may not require traditional FDA device clearance. In other words, AI tools could potentially be deployed in clinical trials before they are widely used in routine clinical care.

Tone of the paper. Despite the excitement surrounding artificial intelligence, the overall tone of the white paper is cautious. It does not claim that AI tumor measurement is ready to replace RECIST today. Instead, the document emphasizes the need for standardization, shared datasets, cross-tool validation, and collaboration among sponsors, regulators, imaging experts, and technology developers. Much of the paper is devoted to outlining methodological questions that must be resolved before AI-based imaging endpoints could become part of regulatory decision-making.

Bottom line. For readers deciding whether to tackle the entire document, the essential message can be summarized simply. First, the paper is about AI analysis of radiology images, not AI analysis of histology slides or digital pathology. Second, the current RECIST system for measuring tumor response is widely recognized as limited and somewhat outdated. Third, AI-based imaging analysis offers the possibility of richer and earlier indicators of treatment response, but substantial validation work will be required before regulators accept these measurements as formal endpoints in clinical trials.

Bigger picture. The broader significance of the paper is that it signals growing alignment across the oncology research ecosystem. Pharmaceutical companies, imaging specialists, technology firms, and regulators increasingly recognize that the traditional way of measuring tumors may not be adequate for the era of precision oncology. AI-enabled imaging tools could eventually modernize clinical trial endpoints, potentially making cancer drug development faster, more informative, and more efficient. For now, however, the field is still at the stage of defining standards and building the evidence needed to support that transition.

Thursday, March 5, 2026

Life Science AI Reaches Half of Life Science Investment (SVB)

Over at Linked In, David Crean provides highlights of the January 2026 Silicon Valley Bank life science investment report.  He links to the 29-page SVB report: of $46B in healthcare investment in 2025, $22B, or 46%, was healthcare AI.  Diagnostics/tools, biopharma, and devices were down or flat.

  • Find Crean here.
  • Find his Substack blog here.
  • Find the current article here.
  • Find the SVB report there or here.

Here are some quotes:

Strip out AI-related deals, and the rest of the market is in sharp contraction:

  • Biopharma: $21.6B (-19% YoY from $26.6B)
  • Healthtech: $13.8B (+5% YoY from $13.2B, AI-driven)
  • Dx/Tools: $4.3B (-35% YoY from $6.6B)
  • Device: $6.9B (flat YoY from $6.8B)

AI: When One Theme Consumes Half a Market

Healthcare AI is projected to reach $22B which equates to 46% of total healthcare VC. 

##

See his article, VC is Broken.


##
AI Corner
##
Chat GPT 5.3 summarizes.

The Silicon Valley Bank Healthcare Investments and Exits H1 2026 report describes a venture ecosystem that has split into two distinct worlds: a booming AI-driven segment and a broader healthcare investment market that remains constrained by tighter capital and fewer exits. 

Venture fundraising for healthcare has fallen sharply from its pandemic-era peak. Healthcare-focused venture funds raised only about $6.9B in 2025, compared with $41B in 2021, reflecting limited partner caution as IPO and acquisition activity slowed and distributions back to investors declined.

Artificial intelligence now dominates healthcare investment. AI-related companies account for roughly 46% of healthcare venture funding, driven largely by extremely large financing rounds. In particular, deals exceeding $300 million—often tied to generative AI models, drug discovery platforms, or large healthcare data systems—have grown rapidly and now represent a major share of capital deployed. These unusually large rounds have reshaped funding dynamics and concentrated investment among a small number of companies.

Outside AI, the sector increasingly reflects a “haves versus have-nots” environment. Early-stage funding, particularly Series A rounds, remains accessible for strong teams with compelling ideas.

However, later-stage rounds such as Series B and C have become significantly harder to secure unless companies demonstrate clear clinical validation, revenue traction, or other strong fundamentals. Investors are writing fewer checks and concentrating capital in companies with defensible technologies and clear paths to value creation.

Sector trends vary. Biopharma remains the largest destination for investment, although overall funding has declined and investors are favoring validated biological targets and proven drug modalities. Healthtech investment is heavily driven by AI, especially tools that improve provider operations or provide clinical knowledge support. Diagnostics and tools companies face the most pressure due to reimbursement and commercialization challenges. Medical devices have remained comparatively stable, with activity centered on surgical robotics, brain–computer interfaces, and AI-enhanced imaging technologies.

Another emerging theme is longevity and healthspan, spanning geroscience research, consumer health optimization products, and technologies aimed at preserving functional ability in aging populations. Interest from major pharmaceutical companies—particularly around metabolic drugs such as GLP-1 therapies—suggests that this area may become a larger strategic focus.

Overall, healthcare venture investing appears to be resetting after the exuberance of the early 2020s. AI continues to attract large amounts of capital, but across the rest of the sector investors are demanding stronger fundamentals, clearer commercialization strategies, and credible exit pathways before committing funding.

Diagnostics?

The report portrays diagnostics and research tools (Dx/Tools) as one of the most pressured segments of healthcare venture investment right now. In contrast to AI-heavy healthtech or well-validated biopharma assets, diagnostics sits at the intersection of science risk, reimbursement risk, and commercialization complexity, which makes investors cautious. As a result, the sector experienced a noticeable decline in both deal volume and capital invested during 2025.

Investment contraction.
Total Dx/Tools venture investment fell substantially in 2025 to roughly $4.3B across about 388 deals, down from about $6.6B in 2024. The decline reflects a broad reset in healthcare venture markets: investors are writing fewer checks, raising fewer funds themselves, and concentrating capital in fewer companies with stronger fundamentals. Diagnostics is particularly exposed to this shift because many companies historically raised early rounds based on proof-of-concept science rather than near-term revenue.

Reimbursement and commercialization pressures.
A recurring theme in the report is that diagnostics companies face persistent uncertainty around revenue models and reimbursement pathways. Unlike therapeutics, where regulatory approval can create large value inflection points, diagnostics often depend on payer coverage decisions, clinical adoption, and integration into healthcare workflows. These factors make investors wary of long commercialization timelines and unpredictable pricing dynamics.

Shift toward platforms and data moats.
Despite the overall downturn, the deals that did occur tended to focus on companies with:

  • AI-driven biology platforms that accelerate drug discovery or biomarker identification

  • “Picks-and-shovels” infrastructure for research, such as spatial profiling or computational biology tools

  • Large proprietary datasets that create defensible competitive advantages

Examples highlighted include companies building foundational biological models, spatial biology platforms, or computational design tools that could shorten preclinical and clinical timelines.

Diagnostics evolving toward proactive care.
The report also notes a conceptual shift in diagnostics investment: some companies are positioning diagnostics not just as tests ordered during illness, but as tools for continuous monitoring, patient-initiated screening, and preventive medicine. Investors see potential in diagnostics integrated into consumer health, decentralized care, and early disease detection.

Bottom line.
Diagnostics is currently in a capital-constrained but strategically important phase. The easy funding for proof-of-concept diagnostic ideas has largely disappeared. Investors are instead focusing on companies that combine strong science with clear commercialization paths—especially those with scalable data platforms, AI integration, or the potential to reshape how diagnostics are used in proactive healthcare.


###
Sidebar - A national diagnostics conference in DC in August 2026 now includes a whole track on digital pathology and AI.


Tuesday, March 3, 2026

CAP Tells CMS: Z Codes Are Highly Disruptive

Header:  In written comments on annual Medicare Advantage rules (CMS-4212-P), the College of American Pathologists refers to Z-codes as "highly disruptive."

___

In comments on the Medicare Advantage proposals (CMS-4212-P), CAP discusses a number of problems, including network adequacy, prior authorization, and "unorthodox" coding requirements such as MolDx Z-codes.

Find the CAP comment here:

https://documents.cap.org/documents/CAP-MA-RFI-Comment-FINAL.pdf

Coding and Reimbursement 

Non-standard coding and/or reimbursement practices have serious adverse consequences for pathologists and laboratories trying to comply with conflicting requirements. Guidelines that deviate from or distort standard billing practices not only limit the ability of laboratories to provide care for patients, but also create fraud concerns, issues with state health plan contracts, and – of particular concern to many patients – potential denials from secondary insurance coverage. 

Such nonstandard billing requirements risk interfering with the ability of patients to receive timely and appropriate services, and adversely affect patients, providers, and the entire health care system. For example, Medicare Advantage plans operated by UnitedHealthcare, Optum Care, and Humana have implemented the requirement that molecular pathology claims contain DEX Z-codes, which are proprietary alpha-numeric codes obtained from the Palmetto DEX Registry. 

The CAP considers this requirement to be highly disruptive, administratively burdensome, and cost prohibitive for pathologists and laboratories, and ultimately believes it will impede patient access to medically necessary testing. 

The CPT code set is universally used by the medical community and transparently developed with broad stakeholder input, including the CMS and other payers who are represented on the CPT Editorial Panel. 

CPT codes are also recognized by the US Department of Health and Human Services (HHS) as a HIPAA-compliant Level I HCPCS code set. 

By contrast, Z-codes do not fit these criteria or undergo this level of input and scrutiny. We also strongly advise adhering to the use of CPT for reporting of molecular pathology and genomic procedures, as this approach does not impose additional requirements or reporting complexity for processing claims for medically necessary services, and maintains alignment with the reporting requirements established by other private payers.

###
AI CORNER
###
I asked ChatGPT to take the role of MolDx and write a response to Dr. Oz.
###
* * The Fictional AI Response: * *

To: Dr. Mehmet Oz, Administrator
Centers for Medicare & Medicaid Services (CMS)

Re: Coding and Reimbursement Policy for Molecular Diagnostics

Dear Dr. Oz,

We appreciate the thoughtful comments submitted by the College of American Pathologists (CAP) regarding molecular diagnostic coding and reimbursement. CMS shares CAP’s commitment to patient access, transparency, and consistency in billing practices. However, several clarifications regarding the MolDx program and the use of DEX Z-codes may be helpful.

First, CPT codes alone are not sufficient to uniquely identify many molecular and genomic tests currently billed to Medicare. Numerous laboratories may bill the same CPT code while performing tests that differ substantially in analytes, methodology, analytical validity, and clinical evidence. This heterogeneity creates challenges for accurate coverage determination, payment integrity, and program oversight. The MolDx registry and associated DEX Z-codes provide a mechanism to uniquely identify each specific test, enabling CMS contractors to link claims to validated test dossiers and coverage policies.

Second, the Z-code requirement does not replace CPT coding and does not alter the HIPAA-recognized coding structure. Laboratories continue to bill using standard CPT codes; the Z-code simply serves as a supplemental identifier that allows Medicare Administrative Contractors to determine precisely which test is being performed. This additional level of transparency supports consistent coverage determinations, reduces inappropriate billing, and allows Medicare to distinguish between tests with differing levels of clinical evidence.

Third, MolDx experience over more than a decade suggests that test-level identification improves program integrity and beneficiary protection. Molecular diagnostics have been an area of documented fraud and billing irregularities in recent years, including instances where identical CPT codes were used for tests with dramatically different clinical validity or for services not performed. The registry approach enables CMS to ensure that covered tests meet appropriate evidentiary standards and that payment is directed toward validated services.

We recognize that the introduction of any additional administrative process can create operational challenges for laboratories. For this reason, MolDx has continued to streamline the DEX registry process, expand guidance for laboratories, and coordinate with other payers to promote consistent implementation. Our objective is not to create proprietary barriers, but rather to ensure that Medicare claims for complex molecular diagnostics can be accurately identified, evaluated, and reimbursed.

In summary, CPT codes remain the foundational billing framework for laboratory services. However, for modern genomic and molecular tests, CPT alone does not always provide sufficient granularity to support coverage policy, payment accuracy, and program integrity. The MolDx Z-code registry addresses this gap while preserving the existing CPT billing structure.

We welcome continued dialogue with CAP and other stakeholders to further refine processes that maintain patient access while ensuring accurate identification and evaluation of molecular diagnostic tests.

Respectfully,

MolDx Program Leadership  [WRITTEN BY AI, VIA BRUCE QUINN]
Palmetto GBA / Medicare Administrative Contractors
On behalf of the CMS MolDx Program

##
For a third round, see an essay, about both essays above, by Lewis Black here.

Friday, February 27, 2026

CMS Issues RFI on "Fraud" - Highlighting Genomic Test Fraud; MolDx Controls Featured

From 2018 to the present, there have been billions of dollars of genetic test fraud in Medicare Part B, the vast majority occurring at unscrupulous labs in Texas and Florida.   The MACs there allow pop-up labs to bill $10M, $20M, or more in the highest-cost genetic test codes, with no controls before payment.   In one documented case, the Florida MAC paid out huge amounts for obviously-impossible tests to a lab in one year, then stopped payment on those codes.   The brilliant masterminds at the lab then billed tens of millions of dollars, the next year, on nearby genetic codes, unfairly evading the hard-working fraud experts at their MAC.  

This doesn't happen at the MolDx MACs or at the NGS MAC.

CMS announces plans for major new fraud efforts and seeks public comment on them.  Genetic test fraud is highlighted.  Comments are due by March 30.


  • Find the home page for the Federal Register publication here.   Includes html text of the whole publication.
    • See Section D. Reducing Medicare Fraud Related to Laboratory Tests Including Genetic Tests and Molecular Diagnostic Tests.
    • 91 Fed Reg 9803, 6pp, 2/27/2026, here.  2026 CMS 0227 91FR9803 Fraud RFI
  • See the Crushing Fraud home page at CMS here.
  • In case I've been too subtle in this and prior blogs for several years, this genetic test fraud has never required geniuses and AI to catch.  It could be detected by a ten-year-old with Excel.
    • "Mommy, why did this lab bill Medicare $25M last month for genes like Tay-Sachs and cystic fibrosis in 90 year old nursing home patients?"
    • Using public CMS data, I've written about this in great detail since 2020, six years ago - here.
  • See a broad press release from CMS here.
  • See an article at Fierce Healthcare here.  Genomeweb highlights MolDx news here.
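The claim above, that this fraud pattern is detectable with trivial tools, can be illustrated with a short sketch. Every lab name, code, and dollar figure below is invented; the logic just sums payments per lab on a watchlist of high-cost genetic test codes and flags anything over a crude monthly plausibility cap, the kind of screen that public CMS utilization data would support.

```python
import csv
import io
from collections import defaultdict

# Hypothetical monthly Medicare Part B payment records (lab, CPT code, dollars paid).
# A real screen would use public CMS utilization files; every row here is invented.
SAMPLE = """lab,code,paid
PopUpLab-TX,81408,25000000
PopUpLab-TX,81407,18000000
CommunityHosp-OH,81408,40000
CommunityHosp-OH,87635,120000
"""

HIGH_COST_GENETIC_CODES = {"81407", "81408"}  # illustrative top-tier MoPath codes
MONTHLY_CAP = 1_000_000  # crude plausibility screen: flag > $1M/month on these codes

def flag_labs(csv_text, codes, cap):
    """Sum payments per lab on the watched codes; return labs exceeding the cap."""
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row["code"] in codes:
            totals[row["lab"]] += float(row["paid"])
    return {lab: amt for lab, amt in totals.items() if amt > cap}

flags = flag_labs(SAMPLE, HIGH_COST_GENETIC_CODES, MONTHLY_CAP)
print(flags)  # only the implausible biller is flagged
```

The point mirrors the blog's: a group-and-threshold pass over public payment data surfaces the outliers, and the sophistication required is closer to a spreadsheet filter than to AI.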
MolDx Featured

Here's the MolDx section:

In 2011, the Molecular Diagnostic Services Program (MolDX Program), administered by Palmetto GBA on behalf of CMS, was established to determine and streamline coverage and reimbursement for molecular diagnostic tests on behalf of the Medicare program. It aims to provide uniform policies across multiple Medicare Administrative Contractors (MACs) and currently spans 28 states.

CMS is interested in understanding if a requirement to register in the MolDX program, that other payors may have, has had an impact on the risk of fraud, waste and abuse related to laboratory testing. CMS also wants to understand the rationale as to why other payors and MA organizations require laboratories to be registered in MolDX even when they are not located in a MolDX state.
  • What prompted other payors and MA organizations to require registration in the MolDX program?
  • What safeguards or challenges has the MolDX program provided to your organization regarding laboratory testing?

###
Chat GPT summarizes the publication.
###

CMS “CRUSH Fraud” RFI — What It Signals for Labs and Genetic Testing

On February 27, 2026, CMS released its “CRUSH Fraud” Request for Information, signaling a major escalation in program integrity efforts focused in part on laboratory testing, including genetic and molecular diagnostic tests. CMS notes that Medicare Part B spending on lab tests reached $8.4 billion in 2024 and that genetic tests, while a small percentage of total test volume, accounted for a disproportionately large share of spending. 

The agency explicitly references prior DOJ and OIG enforcement actions involving genetic testing fraud and highlights the work of its Fraud Defense Operations Center (FDOC), which accelerated payment suspensions in 2025, including actions involving suspect laboratories.

The document is not a pricing proposal and does not address coverage standards. Instead, it centers on expanding CMS’s program integrity authorities and operational tools.


Strengthened Fraud Detection and Payment Suspension

CMS asks whether it should modify enrollment, audit, medical review, and payment suspension authorities to act more quickly against suspected fraud. The RFI signals interest in stronger use of data analytics, automated claims edits, and faster suspension mechanisms where there are credible fraud indicators.

CMS is also exploring whether shorter claim filing deadlines (for example, 90–180 days instead of one year) might be appropriate for high-risk services. Although no services are specified, laboratory testing is clearly identified as a concern area.

The tone suggests a shift toward earlier intervention and more aggressive prepayment safeguards rather than post-payment “pay-and-chase.”


Possible Expansion of MolDX-Related Requirements

CMS specifically asks whether requiring laboratories to register in the MolDX program has had an impact on fraud, and why Medicare Advantage plans or other payers sometimes require MolDX registration even when laboratories are not located in MolDX states.

This line of questioning suggests CMS is evaluating whether MolDX registration and related safeguards function as effective program integrity tools. 

The RFI does not directly propose nationalization of MolDX, but it clearly signals interest in understanding whether such registration requirements mitigate fraud risk.


Enhanced Ownership and Identity Proofing

The RFI raises concerns about opaque ownership structures and foreign-linked fraud schemes. CMS asks whether it should require U.S. citizenship or permanent residency for individuals with ownership stakes of 5 percent or more, expand fingerprinting and background checks, or enhance identity verification processes more broadly.

For laboratories, this signals possible tightening of enrollment scrutiny and ownership transparency requirements, particularly in high-risk categories.


Medicare Advantage and Part D Oversight

CMS notes that it currently lacks explicit regulatory authority to direct Medicare Advantage (MA) and Part D sponsors to suspend payments under circumstances similar to Traditional Medicare. The agency asks whether such authority should be established.

If pursued, this could align MA and Part D fraud oversight more closely with Traditional Medicare and reduce opportunities for providers revoked in fee-for-service Medicare to continue billing in MA environments.


Artificial Intelligence and Advanced Analytics

CMS is seeking input on how artificial intelligence could improve claims review, coding oversight, and fraud detection. The emphasis is on improving efficiency and accuracy in identifying overpayments and suspicious billing patterns.

For laboratories, this signals continued investment in data-driven surveillance tools.


What This RFI Is Really About

The CRUSH RFI is fundamentally about expanding CMS’s fraud prevention and enforcement authority, not about revising coverage policy or recalibrating laboratory pricing. Laboratory testing — particularly genetic and molecular diagnostics — is explicitly identified as an area of concern because of its high spending concentration and documented enforcement history.

The overarching direction is toward:

  • Faster fraud detection

  • Expanded enrollment and ownership scrutiny

  • Stronger prepayment controls

  • Greater alignment of MA oversight with Traditional Medicare

  • Increased use of analytics and AI

For legitimate laboratories, the central theme is operational compliance and transparency. The document signals that CMS intends to strengthen its fraud control infrastructure, and laboratory stakeholders should anticipate a more intensive program integrity environment in the coming rulemaking cycle.


The FDA's Recent NEJM Papers on Evidence: Applying Them to Evidence for Diagnostics

In the last couple of months, Prasad and Makary at FDA have published two major position papers on FDA and evidence standards for drug trials.

Meanwhile, there's a recent NEJM-Evidence paper by Stewart et al. on evidence levels for imaging [diagnostics].

I asked Chat GPT to discuss any insights that may result from comparing the three papers.

### AI Corner ### Chat GPT 5.2

Overview

Two recent NEJM papers by Prasad and Makary signal a philosophical shift at FDA: away from rigid “two-trial” frequentist dogma toward a more integrated, Bayesian-style evaluation of total evidence—mechanism, effect size, coherence, and postmarket data. 

Their 2026 paper makes one strong pivotal trial the new default; their 2025 “plausible mechanism pathway” allows approval of highly targeted therapies when biology and clinical response align, even without traditional RCTs. 

In contrast, Stewart et al. in NEJM Evidence 2024 caution that diagnostics—especially imaging—have historically been adopted on plausibility and accuracy without proof of improved patient outcomes, leading to overuse and harm.

Comparing the three highlights a central policy tension: how to avoid procedural rigidity that stifles innovation while preventing premature adoption based on weak surrogate logic. 

The discussion also explores how payors view “evidence plus judgment,” and what this means for genomic risk tests where RCTs may be unethical. Readers gain insight into evolving FDA philosophy and its implications for diagnostics, reimbursement, and evidence standards.

Thursday, February 26, 2026

Register for April CPT in Boston; Lab Codes Posted (April 26)

AMA CPT has opened the registration page for the next Editorial Panel meeting, April 29-May 2, 2026, in Boston.  That's at the Westin Copley Place.

Find it here:

https://www.ama-assn.org/membership/events/cpt-editorial-panel-meeting

##

The full public agenda for all codes will be released on March 6 (link above).  Because they travel through several serial subcommittees, Lab Codes are released for comment on April 26 (link above).

####

LAB CODES here:

https://www.ama-assn.org/system/files/may-2026-lab-mopath-agenda.pdf

You can request code packets to comment on starting NOW, and must submit comments back by March 12.   Click on "submitting a request" in the agenda PDF prologue.

  • NOTICE – INTERESTED PARTY PROCESS Upon review of this agenda, if the reviewer believes that they will need to provide comment on an issue, they must seek Interested Party status by submitting a request for a copy of the application and associated materials. Requests are to be submitted through the Smart App’s CPT Interested Party dashboard. Those interested in review of the agenda materials are required to complete this form in full.

There are a LOT of Category I & Category III codes, far more than typical.  This is the last cycle of new codes that will still enter the summer 2026 pricing process.

  • Cat 1, "POLE" gene
  • MAAA, multiple sclerosis
  • MAAA, CNS methylation
  • MAAA, revise text 81518 re breast cancer
  • MAAA, revise text 81542, prostate, remove "microarray"
  • Cat 1, testing for GFAP (glial protein)
  • Cat 1, pTau217/Abeta1-42 ratio
  • Cat 1, candida fungi PCR
  • Cat 1, 87800, 87801, add "single result" (pathogen)
  • Cat 1, rubeola by PCR
Digital pathology codes:
  • Cat 1, PAP test, Digital algorithmic classification
  • Cat 1, revise 88363 select specimen for [bladder prognostic testing] [sic]
  • Cat III, histomorphology augmentative software [AI], pancreatic cancer
  • Cat III, ex-vivo profiling of fresh tissue
  • Cat III, AI image analysis of breast cancer HER2-HR+
  • Cat III, AI image analysis prostate cancer
Re 88363 revision, this is a code for review of sections and blocks, for sendout for molecular studies (e.g. DNA from FFPE). It's not specific to bladder cancer now and I suspect this is just one application.

Re the "fresh tissue," see this from Chat GPT:
    This might refer to technology such as ex vivo confocal microscopy (EVCM) or closely related platforms (e.g., reflectance confocal microscopy of freshly excised tissue). These systems image fresh, unfixed tissue without freezing, sectioning, or routine H&E processing. In dermatology and surgical oncology, they are being positioned—carefully—as a potential alternative or adjunct to frozen section histology, especially in the context of Mohs surgery. 
    A  freshly excised tissue specimen is placed on a microscope stage, sometimes stained with a rapid fluorescent dye (e.g., acridine orange), and imaged optically using confocal principles. The system generates high-resolution images that approximate histologic sections—often digitally rendered to resemble H&E.


####

You can register for in-person or Zoom attendance.  In-person usually sells out (450 seats for this upcoming session).  Even if it doesn't sell out sooner, AMA lists April 13 as the deadline for in-person registration. You have to set up an email logon at AMA itself (free) to get into the reservation system for CPT meetings.





Wednesday, February 25, 2026

On a Lighter Note: JAMA Interviews Chatbot on History of Chatbots

JAMA consolidates AI-related articles across its journal family - here.  There's also a twice-a-month podcast, open access on podcast managers (Apple podcasts here).

For January 29, 2026, the topic is:   Chatting With a Chatbot: The History of the First Clinical Chatbots, Straight From an LLM.

https://edhub.ama-assn.org/jn-learning/audio-player/19034472

The 12-minute interview with ChatGPT 4o tracks AI in medicine back to the 1960s.  There's a transcript.

Fun fact: When I was in med school in the 1980s (Stanford), I took a one-hour-a-week elective on AI in medicine taught by Ted Shortliffe.  

Here's recent JAMA on over- and under-regulation of AI, link.  And, "The death of the consult note," link.  JAMA AI is edited by MGH psychiatrist Roy Perlis, heard on the interview above.

###

Chat GPT 5.2 summarizes the transcript.

###

JAMA+ AI Conversations: Chatting With a Chatbot — The Nearly Forgotten Origins of Clinical Chatbots

In a recent 12-minute episode of JAMA+ AI Conversations, Roy Perlis interviews not a historian, but a large language model, to revisit the early—and often oversimplified—history of clinical chatbots. The takeaway is that today’s debates about AI in mental health are not new. They trace directly back to the 1960s, when two pioneers, working in parallel, reached sharply different conclusions about whether computers should ever function as therapists.

ELIZA: The Famous Beginning

Most histories begin with ELIZA, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum. ELIZA used simple pattern matching to simulate a Rogerian psychotherapist. It reflected users’ statements back as questions, creating the illusion of empathy. Despite its technical simplicity, users often experienced it as surprisingly human.

But the podcast argues that the story shouldn’t stop there.

Kenneth Colby: The Overlooked Pioneer

Before building his more famous chatbot, psychiatrist and computer scientist Kenneth Colby had already published a 1966 paper titled A Computer Method of Psychotherapy. In it, he described a scripted therapy program designed to guide structured therapeutic dialogue. Crucially, Colby envisioned these systems running on time-sharing computers—meaning a single machine could support multiple simultaneous “therapy-like” interactions. Even in 1966, he was thinking about scalability.

Colby did not argue that computers should replace therapists. Rather, he framed them as tools—adjunctive supports that might extend access or handle structured components of care.

PARRY: Modeling Psychopathology

In the early 1970s, Colby developed PARRY, designed to simulate a person with paranoid schizophrenia. Unlike ELIZA’s neutral therapeutic stance, PARRY incorporated rule-based representations of suspicion, persecutory beliefs, and guarded conversational patterns.

In informal Turing-style tests, psychiatrists were asked to distinguish PARRY from real patients with paranoid schizophrenia. Some struggled to tell the difference. For the era, this was remarkable.

In one of AI history’s more whimsical moments, ELIZA and PARRY were made to “talk” to each other. The result was absurd: ELIZA calmly reflecting statements, PARRY responding with paranoid suspicion. It was humorous—but also revealing. Even early chatbots could generate the illusion of personality through rules and framing alone.

The Philosophical Divide

The real tension lies not in the code, but in the ethics.

Weizenbaum later became sharply critical of applying computers to psychotherapy. In his book Computer Power and Human Reason, he warned against delegating deeply human emotional work to machines. He worried about misplaced trust, emotional attachment, and moral outsourcing.

Colby, by contrast, saw potential—if used thoughtfully and as augmentation rather than replacement.

Today’s debates about AI therapy apps, large language models in mental health, and scaling behavioral support echo this exact divide. The 2020s are replaying a 1960s argument—with far more powerful tools.

Lessons for Today’s Builders

The podcast closes with advice to psychiatry residents and computer science students who want to build the next generation of health chatbots:

  • Understand the ethical stakes. Weizenbaum’s cautions about over-reliance and loss of human connection remain relevant.

  • Think about augmentation, not replacement. Colby’s framing still provides a workable model.

  • Design for scale responsibly. The original promise—time-sharing therapy at scale—has now become global LLM deployment.

  • Preserve transparency. Early systems were rule-based and understandable; today’s black-box models demand even greater ethical rigor.

Bottom Line

Clinical chatbots did not begin with generative AI. They began with psychiatrists and computer scientists asking whether machines could structure, simulate, or extend therapeutic interaction. The core questions—empathy, trust, replacement vs. augmentation, scale vs. humanity—were already on the table in 1966.

What has changed is not the argument.

It is the power of the technology.