Monday, March 9, 2026

TriCon: Cutting Edge Conference in SF, May 4-5, 2026

The conference TRICON is in its 33rd year and will be held in San Francisco May 4-5, 2026.  The conference has three main tracks: "Diagnostics Innovation," "Artificial Intelligence," and "Precision Medicine."

(And it comes right on the heels of Dark Report Pathology War College in New Orleans, April 27-29, and AMA CPT in Chicago, April 30-May 1.)

Find the conference website here:

https://www.triconference.com/

I gave the agenda(s) to ChatGPT and asked for a write-up.



####

AI CORNER

####

Summary:
The 2026 TRI-CON Precision Medicine conference highlights the rapid convergence of AI, multi-omic diagnostics, and digital pathology. Across three coordinated tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the meeting reflects a field moving toward AI-enabled interpretation of complex biological data and decentralized deployment of advanced molecular testing. For molecular pathologists and precision medicine specialists, the program signals a transition from isolated diagnostic tests to integrated computational systems guiding clinical decision-making.

------------------

The 33rd Annual TRI-CON Precision Medicine conference, returning to San Francisco in May 2026, brings together leaders in biotechnology, diagnostics, and computational medicine to explore how emerging technologies are reshaping healthcare. Organized around three overlapping tracks—Artificial Intelligence, Diagnostics Innovation, and Precision Medicine—the program illustrates how the next generation of diagnostics will increasingly depend on the integration of genomics, pathology, imaging, and clinical data within AI-driven analytical frameworks.

A dominant theme across the conference is the emergence of AI-driven multimodal biomarkers. Sessions in the Artificial Intelligence track explore how machine learning models can combine histopathology images, genomic sequencing data, radiology signals, and real-world clinical outcomes to improve biomarker discovery and therapeutic targeting. Digital pathology and computational pathology play a central role in this transformation, with speakers describing how foundation models trained on histology data may enable new biomarker strategies and accelerate clinical trial design. Several presentations also emphasize “agentic AI” systems, in which multiple AI models coordinate across datasets and clinical guidelines to support oncologists and multidisciplinary teams in treatment selection and clinical trial enrollment.

The Diagnostics Innovation track highlights another major shift: the migration of testing from centralized laboratories toward point-of-care and at-home diagnostic environments. New molecular technologies—including CRISPR-based detection platforms and portable multiplex testing systems—are being developed to bring complex molecular assays closer to the patient. However, presenters note that reimbursement policy, site-of-service restrictions, and regulatory frameworks remain major barriers to broader adoption of decentralized molecular diagnostics.

Meanwhile, the Precision Medicine program focuses heavily on liquid biopsy technologies, particularly minimal residual disease (MRD) testing and multi-cancer early detection. These sessions emphasize the growing importance of multi-omic signals—DNA mutations, methylation patterns, RNA expression, and protein markers—interpreted through AI-enabled analytics to detect cancer earlier and monitor disease progression more precisely. While oncology remains the dominant application, the conference also explores precision approaches in metabolic disease, neurology, and population health.

Taken together, the TRI-CON agenda suggests that diagnostics are evolving toward AI-mediated, multimodal clinical intelligence systems, combining laboratory science, computational modeling, and decentralized testing to support more personalized and proactive healthcare.

State Biomarker Bills: Germline Cancer Risk Edition; Plus USPSTF News

The breast cancer advocacy foundation, Susan G. Komen for the Cure, is actively promoting the need for copay-free germline breast cancer risk testing at the state insurance level.

See a headline article at Precision Medicine Online (subscription) by Alison Kanski.


https://www.precisionmedicineonline.com/precision-oncology/komen-push-no-cost-genetic-testing-inherited-cancer-risk-taking-hold-state

  • Two bills became law in Arkansas and Tennessee.
  • Three new bills have been introduced in 2026 by legislator advocates in Florida, Mississippi, and Tennessee.  Proposed bills are also in play in Georgia, Illinois, and Minnesota.

National

National legislation, the Reducing Hereditary Cancer Act (RHCA), HR 4752, was introduced last summer by Rep. Wasserman Schultz (FL).

USPSTF?

USPSTF guidelines lag, referring only to BRCA1/2.  The current version is from 2019 (here), while an update has supposedly been underway since 2024 (here).  However, its research plan only promised to re-evaluate BRCA1/2 per se.

The March meeting of USPSTF was just canceled, meaning the task force has not met for a long time (here).  MedPage Today recently reported that HHS "may eliminate the USPSTF" here.

USPSTF "A" and "B" recommendations are binding on commercial health insurance, per the ACA.  State laws generally impact only fully-insured plans, not large-employer self-insured ("ASO") plans.




State Biomarker Bills

For the latest on state-level biomarker coverage bills in general, see fightcancer.org (ACS-CAN) here.  A state biomarker law recently advanced to the governor's desk in Mississippi, here.

Friday, March 6, 2026

AMA Lists Full Agenda for April CPT Meeting: Appendix S Again

Web page for April meeting here. (Also registration, dates.) Agenda here.

Pathology comments are due March 12!  Other comments are due March 31.

###

It looks like there are about 60 total topics. An add-on code to bronchoscopy code 31627 is proposed for robot-assisted procedures (Tab 10).  There is also a Category I add-on code to report standardized analysis of individual abnormalities in PET-CT (Tab 14).  Tab 48 is "AI malignancy risk prediction."

###

Tab 67 is a perennial proposal to update Appendix S on software-dominant services and AI.

Revise Appendix S by refining the definitions of assistive, augmentative and autonomous; establish the type of clinical evidence needed to classify an AI medical service as assistive, augmentative or autonomous; clarify what constitutes clinically meaningful output.

As of March 9, you could download the code change application about Appendix S, but not yet the actual changes to Appendix S itself.

I walked through each new lab code when the lab codes were posted a week ago - here.

Friends of Cancer Research; AI-Enabled Tumor Assessment, Accuracy-Precision

With a LinkedIn posting and a new 2026 white paper, Friends of Cancer Research (FoCR) addresses how we should evaluate - and then use - AI-enabled tumor assessment tools.  By which they mean radiology tools (like RECIST).

Find the posting here, with an embedded summary deck (6 pp.) as well as a link out to the full white paper (17 pp.).

https://www.linkedin.com/posts/ai-enabled-approaches-to-tumor-assessment-ugcPost-7433240076404305920-4u1i/

Sidebar:
Adjacent: LinkedIn served me an article on "pathomics" (quantitative pathology) by Swarnagouri Naganathanhalli at Johns Hopkins. Here.

Sidebar:
See also: my blog on the new AMA codes, in which AMA approves a Cat III code for AI-assisted PET tumor sizing and longitudinal mapping.  Here.

###
AI CORNER
###

AI Tumor Measurement in Cancer Trials: What This New White Paper Actually Says

Overview. A new 17-page white paper from Friends of Cancer Research (2026) explores how artificial intelligence might change the way tumors are measured in oncology clinical trials. The short version is that the paper focuses almost entirely on AI analysis of radiology images—CT and MRI scans—not histology or digital pathology. The goal is to explore whether AI analysis of scans could eventually supplement or replace the longstanding RECIST system used in drug trials to measure tumor response. The document does not propose immediate regulatory change or introduce a specific algorithm. Instead, it lays out a roadmap for how the oncology ecosystem might validate and adopt AI imaging tools as clinical-trial endpoints. For most readers, it is best understood as a conceptual framework paper rather than a technical breakthrough.

Scope of the paper. One important clarification is what the paper does not cover. The authors limit their discussion to radiologic tumor assessment. AI tools discussed in the document analyze imaging data from CT or MRI scans, detecting tumors, segmenting them, tracking them over time, and quantifying tumor burden. The paper does not address AI applied to pathology slides or digital histology, which is a separate and rapidly growing field. The focus on radiology is deliberate because the current standard system used in oncology trials—RECIST—is itself based on radiology measurements.

The RECIST system. RECIST, or Response Evaluation Criteria in Solid Tumors, has been the dominant framework for evaluating tumor response in clinical trials for more than twenty-five years. Under RECIST, a small number of representative tumors are selected as “target lesions,” and radiologists measure the diameter of those tumors on imaging scans over time. Tumors are then classified as shrinking, stable, or progressing. The approach was originally designed for simplicity and reproducibility across clinical trials, but it has obvious limitations. Only a handful of lesions are measured, the measurements are one-dimensional rather than volumetric, and the process relies on human interpretation, which introduces variability. Perhaps most importantly, RECIST endpoints such as objective response rate and progression-free survival often correlate only imperfectly with overall survival, the ultimate clinical outcome regulators care about.
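The arithmetic behind RECIST is simple enough to sketch in a few lines. Below is a minimal, deliberately simplified classifier using the standard RECIST 1.1 thresholds (≥30% shrinkage from baseline for partial response; ≥20% and ≥5 mm growth from the nadir for progression); it covers target-lesion sums only and ignores non-target-lesion and new-lesion rules:

```python
def recist_response(baseline_mm, nadir_mm, current_mm):
    # Inputs are sums of longest diameters (mm) of the target lesions at
    # baseline, at the smallest sum seen so far (nadir), and at the current
    # scan. Simplified sketch: ignores non-target lesions and new lesions.
    if current_mm == 0:
        return "CR"  # complete response: all target lesions gone
    growth = current_mm - nadir_mm
    if growth >= 0.20 * nadir_mm and growth >= 5:
        return "PD"  # progressive disease: >=20% and >=5 mm over the nadir
    if (baseline_mm - current_mm) >= 0.30 * baseline_mm:
        return "PR"  # partial response: >=30% shrinkage from baseline
    return "SD"      # stable disease
```

For example, a baseline sum of 100 mm falling to 65 mm classifies as PR, while a nadir of 60 mm growing to 75 mm classifies as PD.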

Why change RECIST. The white paper argues that advances in imaging and computing now make it possible to move beyond these constraints. Artificial intelligence can analyze scans in ways that were simply impractical when RECIST was developed in the 1990s. One relatively conservative application would be AI-assisted RECIST, in which algorithms help radiologists identify tumors and measure them more consistently. In this scenario, AI does not replace human interpretation but instead automates repetitive tasks and reduces measurement variability.

Volumetric measurement. More ambitious approaches involve abandoning one-dimensional tumor measurements entirely. AI systems can measure three-dimensional tumor volumes, allowing the entire tumor burden across the body to be quantified rather than focusing on a few selected lesions. In principle, this could provide a more accurate picture of disease progression and treatment response. Because tumor volume changes exponentially rather than linearly, volumetric measurements may detect treatment effects earlier than RECIST measurements based on diameter.
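The volumetric point can be made with one line of geometry: for a roughly spherical lesion, volume scales with the cube of diameter, so a diameter change too small to register under RECIST already implies a large volume change.

```python
import math

def sphere_volume(d_mm):
    # Volume of a sphere of diameter d: (pi / 6) * d^3
    return math.pi / 6 * d_mm ** 3

# A 10% diameter increase -- well below RECIST's 20% progression threshold --
# already corresponds to a ~33% increase in volume (1.1**3 = 1.331).
ratio = sphere_volume(11.0) / sphere_volume(10.0)
```

This cubic amplification is why volumetric tracking may surface treatment effects, or early progression, sooner than diameter-based measurement.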

Radiomics. Another area discussed in the paper is radiomics, a technique in which AI extracts large numbers of quantitative features from medical images. These features can capture patterns such as tumor texture, vascular architecture, internal heterogeneity, and relationships with surrounding tissues. Researchers believe these imaging signatures may reflect underlying tumor biology, potentially revealing early signals of treatment response that simple size measurements cannot detect.
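As a toy illustration (synthetic data, not any published radiomics pipeline), the simplest "first-order" radiomic features are just summary statistics of the voxel intensities inside a tumor region of interest:

```python
import math
import random

def first_order_features(intensities, bins=16):
    # Toy "first-order" radiomic features: summary statistics of voxel
    # intensities inside a tumor region of interest. Illustrative only.
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((v - mean) ** 2 for v in intensities) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in intensities) / n / (std ** 3 + 1e-12)
    # Histogram (Shannon) entropy: a crude proxy for intensity heterogeneity.
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for v in intensities:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    probs = [c / n for c in counts if c]
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}

random.seed(0)
roi = [random.gauss(100, 15) for _ in range(1024)]  # synthetic "tumor" voxels
feats = first_order_features(roi)
```

Real radiomics pipelines add hundreds of texture, shape, and wavelet features on top of these, but the idea is the same: turn an image region into a quantitative signature.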

Growth kinetics. The paper also highlights the potential of modeling tumor growth kinetics. Instead of treating each scan as a static snapshot, AI models can analyze how tumors grow or shrink over time, estimating growth rates and response trajectories. This type of analysis could provide a more dynamic understanding of tumor behavior and help distinguish meaningful treatment effects from normal variability in tumor measurements.
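The growth-kinetics idea reduces to fitting an exponential model V(t) = V0·e^(g·t) to serial volume measurements. A minimal sketch (ordinary least squares on log-volume; illustrative only, not the white paper's method):

```python
import math

def growth_rate(times_days, volumes):
    # Fit V(t) = V0 * exp(g * t) by least squares on log-volume; returns g
    # per day. Positive g = growth; negative g = shrinkage under treatment.
    n = len(times_days)
    logs = [math.log(v) for v in volumes]
    t_bar = sum(times_days) / n
    y_bar = sum(logs) / n
    num = sum((t - t_bar) * (y - y_bar) for t, y in zip(times_days, logs))
    den = sum((t - t_bar) ** 2 for t in times_days)
    return num / den

# A tumor doubling in 60 days has g = ln(2)/60, about 0.0116 per day.
g = growth_rate([0, 30, 60], [1000.0, 1414.2136, 2000.0])
```

Comparing fitted growth rates on- versus off-treatment is one way such models could separate real drug effect from scan-to-scan measurement noise.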

Implications for drug development. The motivation behind these ideas is the possibility of improving clinical trial design. Early-phase cancer trials often involve relatively small patient populations, yet RECIST measurements are relatively crude indicators of response. Because RECIST endpoints correlate imperfectly with long-term survival, important decisions about drug development are sometimes made using weak signals. The authors suggest that AI-derived imaging measurements could potentially provide earlier and more biologically meaningful indicators of treatment benefit, enabling faster go-or-no-go decisions and more efficient clinical trials.

Regulatory pathway. A substantial portion of the paper focuses on the regulatory pathway required for such tools to become accepted endpoints in drug development. The authors outline a process similar to other biomarker qualification efforts. First, a specific context of use must be defined—for example, a particular cancer type or treatment setting where improved imaging endpoints could make a difference. Next comes analytical validation, demonstrating that the AI tool measures tumors consistently and reproducibly. This must be followed by clinical validation, showing that the measurement correlates with meaningful patient outcomes. Ultimately, large meta-analyses across multiple clinical trials would be needed to demonstrate that the new measurements predict survival or other key outcomes better than existing endpoints.

Historical precedents. The authors point to historical examples of biomarker qualification to illustrate the process. In breast cancer, pathologic complete response eventually became accepted as an early endpoint in certain neoadjuvant trials. In multiple myeloma, minimal residual disease measurements followed a similar trajectory. In both cases, years of collaborative research, standardization, and pooled analyses were required before regulators accepted these markers as credible indicators of treatment benefit. The paper suggests that AI imaging biomarkers may need a similar development pathway.

A regulatory nuance. An interesting nuance appears in the discussion of how these tools might initially be used. When AI measurements are used solely to analyze clinical trial endpoints—rather than to guide treatment decisions for individual patients—they may not require traditional FDA device clearance. In other words, AI tools could potentially be deployed in clinical trials before they are widely used in routine clinical care.

Tone of the paper. Despite the excitement surrounding artificial intelligence, the overall tone of the white paper is cautious. It does not claim that AI tumor measurement is ready to replace RECIST today. Instead, the document emphasizes the need for standardization, shared datasets, cross-tool validation, and collaboration among sponsors, regulators, imaging experts, and technology developers. Much of the paper is devoted to outlining methodological questions that must be resolved before AI-based imaging endpoints could become part of regulatory decision-making.

Bottom line. For readers deciding whether to tackle the entire document, the essential message can be summarized simply. First, the paper is about AI analysis of radiology images, not AI analysis of histology slides or digital pathology. Second, the current RECIST system for measuring tumor response is widely recognized as limited and somewhat outdated. Third, AI-based imaging analysis offers the possibility of richer and earlier indicators of treatment response, but substantial validation work will be required before regulators accept these measurements as formal endpoints in clinical trials.

Bigger picture. The broader significance of the paper is that it signals growing alignment across the oncology research ecosystem. Pharmaceutical companies, imaging specialists, technology firms, and regulators increasingly recognize that the traditional way of measuring tumors may not be adequate for the era of precision oncology. AI-enabled imaging tools could eventually modernize clinical trial endpoints, potentially making cancer drug development faster, more informative, and more efficient. For now, however, the field is still at the stage of defining standards and building the evidence needed to support that transition.

Thursday, March 5, 2026

Life Science AI Reaches Half of Life Science Investment (SVB)

Over at LinkedIn, David Crean provides highlights of the January 2026 Silicon Valley Bank life science investment report.  He links to the 29-page SVB report: of $46B in healthcare investment in 2025, $22B, or 46%, was healthcare AI.   Diagnostics/tools and biopharma were down; devices were flat.

  • Find Crean here.
  • Find his Substack blog here.
  • Find the current article here.
  • Find the SVB report there or here.

Here are some quotes:

Strip out AI-related deals, and the rest of the market is in sharp contraction:

  • Biopharma: $21.6B (-19% YoY from $26.6B)
  • Healthtech: $13.8B (+5% YoY from $13.2B, AI-driven)
  • Dx/Tools: $4.3B (-35% YoY from $6.6B)
  • Device: $6.9B (flat YoY from $6.8B)
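As a quick sanity check, the year-over-year percentages quoted above follow directly from the dollar figures:

```python
def yoy_pct(current_b, prior_b):
    # Year-over-year change, rounded to the nearest whole percent.
    return round(100 * (current_b - prior_b) / prior_b)

# Figures ($B) as quoted from the SVB summary above:
biopharma = yoy_pct(21.6, 26.6)   # -19
healthtech = yoy_pct(13.8, 13.2)  # +5
dx_tools = yoy_pct(4.3, 6.6)      # -35
device = yoy_pct(6.9, 6.8)        # +1, i.e., roughly flat
```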

AI: When One Theme Consumes Half a Market

Healthcare AI is projected to reach $22B, which equates to 46% of total healthcare VC.

##

See his article, VC is Broken.


##
AI Corner
##
ChatGPT 5.3 summarizes.

The Silicon Valley Bank Healthcare Investments and Exits H1 2026 report describes a venture ecosystem that has split into two distinct worlds: a booming AI-driven segment and a broader healthcare investment market that remains constrained by tighter capital and fewer exits. 

Venture fundraising for healthcare has fallen sharply from its pandemic-era peak. Healthcare-focused venture funds raised only about $6.9B in 2025, compared with $41B in 2021, reflecting limited partner caution as IPO and acquisition activity slowed and distributions back to investors declined.

Artificial intelligence now dominates healthcare investment. AI-related companies account for roughly 46% of healthcare venture funding, driven largely by extremely large financing rounds. In particular, deals exceeding $300 million—often tied to generative AI models, drug discovery platforms, or large healthcare data systems—have grown rapidly and now represent a major share of capital deployed. These unusually large rounds have reshaped funding dynamics and concentrated investment among a small number of companies.

Outside AI, the sector increasingly reflects a “haves versus have-nots” environment. Early-stage funding, particularly Series A rounds, remains accessible for strong teams with compelling ideas.

However, later-stage rounds such as Series B and C have become significantly harder to secure unless companies demonstrate clear clinical validation, revenue traction, or other strong fundamentals. Investors are writing fewer checks and concentrating capital in companies with defensible technologies and clear paths to value creation.

Sector trends vary. Biopharma remains the largest destination for investment, although overall funding has declined and investors are favoring validated biological targets and proven drug modalities. Healthtech investment is heavily driven by AI, especially tools that improve provider operations or provide clinical knowledge support. Diagnostics and tools companies face the most pressure due to reimbursement and commercialization challenges. Medical devices have remained comparatively stable, with activity centered on surgical robotics, brain–computer interfaces, and AI-enhanced imaging technologies.

Another emerging theme is longevity and healthspan, spanning geroscience research, consumer health optimization products, and technologies aimed at preserving functional ability in aging populations. Interest from major pharmaceutical companies—particularly around metabolic drugs such as GLP-1 therapies—suggests that this area may become a larger strategic focus.

Overall, healthcare venture investing appears to be resetting after the exuberance of the early 2020s. AI continues to attract large amounts of capital, but across the rest of the sector investors are demanding stronger fundamentals, clearer commercialization strategies, and credible exit pathways before committing funding.

Diagnostics?

The report portrays diagnostics and research tools (Dx/Tools) as one of the most pressured segments of healthcare venture investment right now. In contrast to AI-heavy healthtech or well-validated biopharma assets, diagnostics sits at the intersection of science risk, reimbursement risk, and commercialization complexity, which makes investors cautious. As a result, the sector experienced a noticeable decline in both deal volume and capital invested during 2025.

Investment contraction.
Total Dx/Tools venture investment fell substantially in 2025 to roughly $4.3B across about 388 deals, down from about $6.6B in 2024. The decline reflects a broad reset in healthcare venture markets: investors are writing fewer checks, raising fewer funds themselves, and concentrating capital in fewer companies with stronger fundamentals. Diagnostics is particularly exposed to this shift because many companies historically raised early rounds based on proof-of-concept science rather than near-term revenue.

Reimbursement and commercialization pressures.
A recurring theme in the report is that diagnostics companies face persistent uncertainty around revenue models and reimbursement pathways. Unlike therapeutics, where regulatory approval can create large value inflection points, diagnostics often depend on payer coverage decisions, clinical adoption, and integration into healthcare workflows. These factors make investors wary of long commercialization timelines and unpredictable pricing dynamics.

Shift toward platforms and data moats.
Despite the overall downturn, the deals that did occur tended to focus on companies with:

  • AI-driven biology platforms that accelerate drug discovery or biomarker identification

  • “Picks-and-shovels” infrastructure for research, such as spatial profiling or computational biology tools

  • Large proprietary datasets that create defensible competitive advantages

Examples highlighted include companies building foundational biological models, spatial biology platforms, or computational design tools that could shorten preclinical and clinical timelines.

Diagnostics evolving toward proactive care.
The report also notes a conceptual shift in diagnostics investment: some companies are positioning diagnostics not just as tests ordered during illness, but as tools for continuous monitoring, patient-initiated screening, and preventive medicine. Investors see potential in diagnostics integrated into consumer health, decentralized care, and early disease detection.

Bottom line.
Diagnostics is currently in a capital-constrained but strategically important phase. The easy funding for proof-of-concept diagnostic ideas has largely disappeared. Investors are instead focusing on companies that combine strong science with clear commercialization paths—especially those with scalable data platforms, AI integration, or the potential to reshape how diagnostics are used in proactive healthcare.


###
Sidebar - A national diagnostics conference in DC in August 2026 now includes a whole track on digital pathology and AI.


Tuesday, March 3, 2026

CAP Tells CMS: Z Codes Are Highly Disruptive

Header:  In written comments on annual Medicare Advantage rules (CMS-4212-P), the College of American Pathologists refers to Z-codes as "highly disruptive."

___

In comments to Medicare Advantage proposals (CMS-4212-P), CAP discusses a number of problems, including network adequacy, prior authorization, and "unorthodox" coding requirements such as MolDx Z codes.

Find the CAP comment here:

https://documents.cap.org/documents/CAP-MA-RFI-Comment-FINAL.pdf

Coding and Reimbursement 

Non-standard coding and/or reimbursement practices have serious adverse consequences for pathologists and laboratories trying to comply with conflicting requirements. Guidelines that deviate from or distort standard billing practices not only limit the ability of laboratories to provide care for patients, but also create fraud concerns, issues with state health plan contracts, and – of particular concern to many patients – potential denials from secondary insurance coverage. 

Such nonstandard billing requirements risk interfering with the ability of patients to receive timely and appropriate services, and adversely affect patients, providers, and the entire health care system. For example, Medicare Advantage plans operated by UnitedHealthcare, Optum Care, and Humana have implemented the requirement that molecular pathology claims contain DEX Z-codes, which are proprietary alpha-numeric codes obtained from the Palmetto DEX Registry. 

The CAP considers this requirement to be highly disruptive, administratively burdensome, and cost prohibitive for pathologists and laboratories, and ultimately believes it will impede patient access to medically necessary testing. 

The CPT code set is universally used by the medical community and transparently developed with broad stakeholder input, including the CMS and other payers who are represented on the CPT Editorial Panel. 

CPT codes are also recognized by the US Department of Health and Human Services (HHS) as a HIPAA-compliant Level I HCPCS code set. 

By contrast, Z-codes do not fit these criteria or undergo this level of input and scrutiny. We also strongly advise adhering to the use of CPT for reporting of molecular pathology and genomic procedures, as this approach does not impose additional requirements or reporting complexity for processing claims for medically necessary services, and maintains alignment with the reporting requirements established by other private payers.

###
AI CORNER
###
I asked ChatGPT to take the role of MolDx and write a response to Dr. Oz.
###
** The Fictional AI Response **

To: Dr. Mehmet Oz, Administrator
Centers for Medicare & Medicaid Services (CMS)

Re: Coding and Reimbursement Policy for Molecular Diagnostics

Dear Dr. Oz,

We appreciate the thoughtful comments submitted by the College of American Pathologists (CAP) regarding molecular diagnostic coding and reimbursement. CMS shares CAP’s commitment to patient access, transparency, and consistency in billing practices. However, several clarifications regarding the MolDx program and the use of DEX Z-codes may be helpful.

First, CPT codes alone are not sufficient to uniquely identify many molecular and genomic tests currently billed to Medicare. Numerous laboratories may bill the same CPT code while performing tests that differ substantially in analytes, methodology, analytical validity, and clinical evidence. This heterogeneity creates challenges for accurate coverage determination, payment integrity, and program oversight. The MolDx registry and associated DEX Z-codes provide a mechanism to uniquely identify each specific test, enabling CMS contractors to link claims to validated test dossiers and coverage policies.

Second, the Z-code requirement does not replace CPT coding and does not alter the HIPAA-recognized coding structure. Laboratories continue to bill using standard CPT codes; the Z-code simply serves as a supplemental identifier that allows Medicare Administrative Contractors to determine precisely which test is being performed. This additional level of transparency supports consistent coverage determinations, reduces inappropriate billing, and allows Medicare to distinguish between tests with differing levels of clinical evidence.
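The claim-level mechanics at issue can be pictured as a lookup keyed on CPT code plus a test-specific identifier. A sketch follows; the Z-code values and test names are invented for illustration, not real DEX registry entries (81479 is the real "unlisted molecular pathology procedure" CPT code, which many distinct assays bill):

```python
# Hypothetical illustration: "ZX001"/"ZX002" and the test names are made up.
registry = {
    ("81479", "ZX001"): "Lab A 50-gene solid tumor panel",
    ("81479", "ZX002"): "Lab B methylation-based MRD assay",
}

def identify_test(cpt_code, z_code=None):
    # Without a supplemental Z-code, an unlisted CPT code is ambiguous across
    # many assays; with it, the claim maps to exactly one registered test.
    if z_code is None:
        return [name for (cpt, _), name in registry.items() if cpt == cpt_code]
    return registry.get((cpt_code, z_code))
```

This is the granularity argument in miniature: the CPT code alone returns a list of candidate tests, while CPT plus Z-code resolves to a single registered assay.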

Third, MolDx experience over more than a decade suggests that test-level identification improves program integrity and beneficiary protection. Molecular diagnostics have been an area of documented fraud and billing irregularities in recent years, including instances where identical CPT codes were used for tests with dramatically different clinical validity or for services not performed. The registry approach enables CMS to ensure that covered tests meet appropriate evidentiary standards and that payment is directed toward validated services.

We recognize that the introduction of any additional administrative process can create operational challenges for laboratories. For this reason, MolDx has continued to streamline the DEX registry process, expand guidance for laboratories, and coordinate with other payers to promote consistent implementation. Our objective is not to create proprietary barriers, but rather to ensure that Medicare claims for complex molecular diagnostics can be accurately identified, evaluated, and reimbursed.

In summary, CPT codes remain the foundational billing framework for laboratory services. However, for modern genomic and molecular tests, CPT alone does not always provide sufficient granularity to support coverage policy, payment accuracy, and program integrity. The MolDx Z-code registry addresses this gap while preserving the existing CPT billing structure.

We welcome continued dialogue with CAP and other stakeholders to further refine processes that maintain patient access while ensuring accurate identification and evaluation of molecular diagnostic tests.

Respectfully,

MolDx Program Leadership  [WRITTEN BY AI, VIA BRUCE QUINN]
Palmetto GBA / Medicare Administrative Contractors
On behalf of the CMS MolDx Program

##
For a third round, see an essay, about both essays above, by Lewis Black here.

Friday, February 27, 2026

CMS Issues RFI on "Fraud" - Highlighting Genomic Test Fraud; MolDx Controls Featured

From 2018 to the present, there have been billions of dollars of genetic test fraud in Medicare Part B, the vast majority occurring at unscrupulous labs in Texas and Florida.   The MACs there allow pop-up labs to bill $10M, $20M, or more in the highest-cost genetic test codes, with no controls before payment.   In one documented case, the Florida MAC paid out huge amounts for obviously-impossible tests to a lab in one year, then stopped payment on those codes.   The brilliant masterminds at the lab then billed tens of millions of dollars, the next year, on nearby genetic codes, unfairly evading the hard-working fraud experts at their MAC.

This doesn't happen at the MolDx MACs or the NGS MAC.

CMS announces plans for major new fraud efforts and seeks public comment on them. Genetic test fraud is highlighted.  Comments are due March 30.


  • Find the home page for the Federal Register publication here.   Includes html text of the whole publication.
    • See Section D. Reducing Medicare Fraud Related to Laboratory Tests Including Genetic Tests and Molecular Diagnostic Tests.
    • 91 Fed Reg 9803, 6pp, 2/27/2026, here.  2026 CMS 0227 91FR9803 Fraud RFI
  • See the Crushing Fraud home page at CMS here.
  • In case I've been too subtle in this and prior blogs for several years, this genetic test fraud hasn't ever required geniuses and AI to catch.  It could be detected by a ten-year-old with Excel.
    • "Mommy, why did this lab bill Medicare $25M last month for genes like Tay-Sachs and cystic fibrosis in 90 year old nursing home patients?"
    • Using public CMS data, I've written about this in great detail since 2020, six years ago - here.
  • See a broad press release from CMS here.
  • See an article at Fierce Healthcare here.  Genomeweb highlights MolDx news here.
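The "ten-year-old with Excel" claim above is nearly literal: the screen amounts to a one-line outlier filter. A hedged sketch (lab names and dollar figures invented; 81408 is the real Tier 2 "Level 9" molecular pathology CPT code for rare single-gene tests):

```python
# Sketch of the trivial screen described above; data are invented.
monthly_billing = {
    # (lab, CPT code): dollars billed in one month
    ("Lab X, FL", "81408"): 25_000_000,
    ("Lab Y, TX", "81408"): 180_000,
    ("Lab Z, OH", "81408"): 4_000,
}

THRESHOLD = 1_000_000  # flag any lab billing >$1M on one code in one month

flags = [(lab, cpt, amt) for (lab, cpt), amt in monthly_billing.items()
         if amt > THRESHOLD]
```

Run against public CMS utilization data, a filter like this surfaces the $25M-a-month outlier immediately; no machine learning required.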
MolDx Featured

Here's the MolDx section:

In 2011, the Molecular Diagnostic Services Program (MolDX Program), administered by Palmetto GBA on behalf of CMS, was established to determine and streamline coverage and reimbursement for molecular diagnostic tests on behalf of the Medicare program. It aims to provide uniform policies across multiple Medicare Administrative Contractors (MACs) and currently spans 28 states.

CMS is interested in understanding if a requirement to register in the MolDX program, that other payors may have, has had an impact on the risk of fraud, waste and abuse related to laboratory testing. CMS also wants to understand the rationale as to why other payors and MA organizations require laboratories to be registered in MolDX even when they are not located in a MolDX state.
  • What prompted other payors and MA organizations to require registration in the MolDX program?
  • What safeguards or challenges has the MolDX program provided to your organization regarding laboratory testing?

###
Chat GPT summarizes the publication.
###

CMS “CRUSH Fraud” RFI —
What It Signals for Labs and Genetic Testing

On February 20, 2026, CMS released its “CRUSH Fraud” Request for Information, signaling a major escalation in program integrity efforts focused in part on laboratory testing, including genetic and molecular diagnostic tests. CMS notes that Medicare Part B spending on lab tests reached $8.4 billion in 2024 and that genetic tests, while a small percentage of total test volume, accounted for a disproportionately large share of spending. 

The agency explicitly references prior DOJ and OIG enforcement actions involving genetic testing fraud and highlights the work of its Fraud Defense Operations Center (FDOC), which accelerated payment suspensions in 2025, including actions involving suspect laboratories.

The document is not a pricing proposal and does not address coverage standards. Instead, it centers on expanding CMS’s program integrity authorities and operational tools.


Strengthened Fraud Detection and Payment Suspension

CMS asks whether it should modify enrollment, audit, medical review, and payment suspension authorities to act more quickly against suspected fraud. The RFI signals interest in stronger use of data analytics, automated claims edits, and faster suspension mechanisms where there are credible fraud indicators.

CMS is also exploring whether shorter claim filing deadlines (for example, 90–180 days instead of one year) might be appropriate for high-risk services. Although no services are specified, laboratory testing is clearly identified as a concern area.

The tone suggests a shift toward earlier intervention and more aggressive prepayment safeguards rather than post-payment “pay-and-chase.”


Possible Expansion of MolDX-Related Requirements

CMS specifically asks whether requiring laboratories to register in the MolDX program has had an impact on fraud, and why Medicare Advantage plans or other payers sometimes require MolDX registration even when laboratories are not located in MolDX states.

This line of questioning suggests CMS is evaluating whether MolDX registration and related safeguards function as effective program integrity tools. 

The RFI does not directly propose nationalization of MolDX, but it clearly signals interest in understanding whether such registration requirements mitigate fraud risk.


Enhanced Ownership and Identity Proofing

The RFI raises concerns about opaque ownership structures and foreign-linked fraud schemes. CMS asks whether it should require U.S. citizenship or permanent residency for individuals with ownership stakes of 5 percent or more, expand fingerprinting and background checks, or enhance identity verification processes more broadly.

For laboratories, this signals possible tightening of enrollment scrutiny and ownership transparency requirements, particularly in high-risk categories.


Medicare Advantage and Part D Oversight

CMS notes that it currently lacks explicit regulatory authority to direct Medicare Advantage (MA) and Part D sponsors to suspend payments under circumstances similar to Traditional Medicare. The agency asks whether such authority should be established.

If pursued, this could align MA and Part D fraud oversight more closely with Traditional Medicare and reduce opportunities for providers revoked in fee-for-service Medicare to continue billing in MA environments.


Artificial Intelligence and Advanced Analytics

CMS is seeking input on how artificial intelligence could improve claims review, coding oversight, and fraud detection. The emphasis is on improving efficiency and accuracy in identifying overpayments and suspicious billing patterns.

For laboratories, this signals continued investment in data-driven surveillance tools.


What This RFI Is Really About

The CRUSH RFI is fundamentally about expanding CMS’s fraud prevention and enforcement authority, not about revising coverage policy or recalibrating laboratory pricing. Laboratory testing — particularly genetic and molecular diagnostics — is explicitly identified as an area of concern because of its high spending concentration and documented enforcement history.

The overarching direction is toward:

  • Faster fraud detection

  • Expanded enrollment and ownership scrutiny

  • Stronger prepayment controls

  • Greater alignment of MA oversight with Traditional Medicare

  • Increased use of analytics and AI

For legitimate laboratories, the central theme is operational compliance and transparency. The document signals that CMS intends to strengthen its fraud control infrastructure, and laboratory stakeholders should anticipate a more intensive program integrity environment in the coming rulemaking cycle.


The FDA's Recent NEJM Papers on Evidence: Applying Them to Evidence for Diagnostics

In the last couple months, Prasad and Makary at FDA have published two major position papers on FDA and evidence for drug trials.

Meanwhile, there's a recent NEJM-Evidence paper by Stewart et al. on evidence levels for imaging [diagnostics].

I ask Chat GPT to discuss any insights that may result from comparing the three papers.

### AI Corner ### Chat GPT 5.2

Overview

Two recent NEJM papers by Prasad and Makary signal a philosophical shift at FDA: away from rigid “two-trial” frequentist dogma toward a more integrated, Bayesian-style evaluation of total evidence—mechanism, effect size, coherence, and postmarket data. 

Their 2026 paper makes one strong pivotal trial the new default; their 2025 “plausible mechanism pathway” allows approval of highly targeted therapies when biology and clinical response align, even without traditional RCTs. 

In contrast, Stewart et al. in NEJM Evidence 2024 caution that diagnostics—especially imaging—have historically been adopted on plausibility and accuracy without proof of improved patient outcomes, leading to overuse and harm.

Comparing the three highlights a central policy tension: how to avoid procedural rigidity that stifles innovation while preventing premature adoption based on weak surrogate logic. 

The discussion also explores how payors view “evidence plus judgment,” and what this means for genomic risk tests where RCTs may be unethical. Readers gain insight into evolving FDA philosophy and its implications for diagnostics, reimbursement, and evidence standards.

Thursday, February 26, 2026

Register for April CPT in Boston; Lab Codes Posted (April 26)

AMA CPT has opened the registration page for the next Editorial Panel meeting, April 29-May 2, 2026, in Boston.  That's the Westin, Copley Place.

Find it here:

https://www.ama-assn.org/membership/events/cpt-editorial-panel-meeting

##

The full public agenda for all codes will be released on March 6 (link above).  Because they travel through several serial subcommittees, Lab Codes are released for comment on April 26, link above.

####

LAB CODES here:

https://www.ama-assn.org/system/files/may-2026-lab-mopath-agenda.pdf

You can request code packets to comment on starting NOW, and must submit comments back by March 12.   Click on "submitting a request" in the agenda PDF prologue.

  • NOTICE –INTERESTED PARTY PROCESS Upon review of this agenda, if the reviewer believes that they will need to provide comment on an issue, they must seek Interested Party status by submitting a request for a copy of the application and associated materials. Requests are to be submitted through the Smart App’s CPT Interested Party dashboard. Those interested in review of the agenda materials are required to complete this form in full.

There are a LOT of Category I & Category III codes, far more than typical.  This is the last cycle of new codes that will still enter the summer 2026 pricing process.

  • Cat 1, "POLE" gene
  • MAAA, multiple sclerosis
  • MAAA, CNS methylation
  • MAAA, revise text 81518 re breast cancer
  • MAAA, revise text 81542, prostate, remove "microarray"
  • Cat 1, testing for GFAP (glial protein)
  • Cat 1, pTau217/Abeta1-42 ratio
  • Cat 1, candida fungi PCR
  • Cat 1, 87800, 87801, add "single result" (pathogen)
  • Cat 1, rubeola by PCR
Digital pathology codes:
  • Cat 1, PAP test, Digital algorithmic classification
  • Cat 1, revise 88363 select specimen for [bladder prognostic testing] [sic]
  • Cat III, histomorphology augmentative software [AI], pancreatic cancer
  • Cat III, ex-vivo profiling of fresh tissue
  • Cat III, AI image analysis of breast cancer HER2-HR+
  • Cat III, AI image analysis prostate cancer
Re 88363 revision, this is a code for review of sections and blocks, for sendout for molecular studies (e.g. DNA from FFPE). It's not specific to bladder cancer now and I suspect this is just one application.

Re the "fresh tissue," see this from Chat GPT:
    This might refer to technology such as ex vivo confocal microscopy (EVCM) or closely related platforms (e.g., reflectance confocal microscopy of freshly excised tissue). These systems image fresh, unfixed tissue without freezing, sectioning, or routine H&E processing. In dermatology and surgical oncology, they are being positioned—carefully—as a potential alternative or adjunct to frozen section histology, especially in the context of Mohs surgery. 
    A  freshly excised tissue specimen is placed on a microscope stage, sometimes stained with a rapid fluorescent dye (e.g., acridine orange), and imaged optically using confocal principles. The system generates high-resolution images that approximate histologic sections—often digitally rendered to resemble H&E.


####

You can register for in-person or Zoom attendance.  In-person usually sells out (450 seats for this upcoming session).  Even if it doesn't sell out sooner, April 13 is listed as the deadline for in-person registration.  You have to set up an email logon at AMA itself (free) to get into the reservation system for CPT meetings.





Wednesday, February 25, 2026

On a Lighter Note: JAMA Interviews Chatbot on History of Chatbots

JAMA consolidates AI-related articles across its journal family - here.  There's also a twice-a-month podcast, open access on podcast managers (Apple podcasts here).

For January 29, 2026, the topic is:   Chatting With a Chatbot: The History of the First Clinical Chatbots, Straight From an LLM.

https://edhub.ama-assn.org/jn-learning/audio-player/19034472

The 12-minute interview with ChatGPT 4o traces AI in medicine back to the 1960s.  There's a transcript.

Fun fact: When I was in med school in the 1980s (Stanford), I took a one-hour-a-week elective on AI in medicine taught by Ted Shortliffe.  

Here's recent JAMA on over- and under-regulation of AI, link.  And, "The death of the consult note," link.  JAMA AI is edited by MGH psychiatrist Roy Perlis, heard on the interview above.

###

Chat GPT 5.2 summarizes the transcript.

###

JAMA+ AI Conversations: Chatting With a Chatbot — The Nearly Forgotten Origins of Clinical Chatbots

In a recent 12-minute episode of JAMA+ AI Conversations, Roy Perlis interviews not a historian, but a large language model, to revisit the early—and often oversimplified—history of clinical chatbots. The takeaway is that today’s debates about AI in mental health are not new. They trace directly back to the 1960s, when two pioneers, working in parallel, reached sharply different conclusions about whether computers should ever function as therapists.

ELIZA: The Famous Beginning

Most histories begin with ELIZA, created in the mid-1960s by MIT computer scientist Joseph Weizenbaum. ELIZA used simple pattern matching to simulate a Rogerian psychotherapist. It reflected users’ statements back as questions, creating the illusion of empathy. Despite its technical simplicity, users often experienced it as surprisingly human.

But the podcast argues that the story shouldn’t stop there.

Kenneth Colby: The Overlooked Pioneer

Before building his more famous chatbot, psychiatrist and computer scientist Kenneth Colby had already published a 1966 paper titled A Computer Method of Psychotherapy. In it, he described a scripted therapy program designed to guide structured therapeutic dialogue. Crucially, Colby envisioned these systems running on time-sharing computers—meaning a single machine could support multiple simultaneous “therapy-like” interactions. Even in 1966, he was thinking about scalability.

Colby did not argue that computers should replace therapists. Rather, he framed them as tools—adjunctive supports that might extend access or handle structured components of care.

PARRY: Modeling Psychopathology

In the early 1970s, Colby developed PARRY, designed to simulate a person with paranoid schizophrenia. Unlike ELIZA’s neutral therapeutic stance, PARRY incorporated rule-based representations of suspicion, persecutory beliefs, and guarded conversational patterns.

In informal Turing-style tests, psychiatrists were asked to distinguish PARRY from real patients with paranoid schizophrenia. Some struggled to tell the difference. For the era, this was remarkable.

In one of AI history’s more whimsical moments, ELIZA and PARRY were made to “talk” to each other. The result was absurd: ELIZA calmly reflecting statements, PARRY responding with paranoid suspicion. It was humorous—but also revealing. Even early chatbots could generate the illusion of personality through rules and framing alone.

The Philosophical Divide

The real tension lies not in the code, but in the ethics.

Weizenbaum later became sharply critical of applying computers to psychotherapy. In his book Computer Power and Human Reason, he warned against delegating deeply human emotional work to machines. He worried about misplaced trust, emotional attachment, and moral outsourcing.

Colby, by contrast, saw potential—if used thoughtfully and as augmentation rather than replacement.

Today’s debates about AI therapy apps, large language models in mental health, and scaling behavioral support echo this exact divide. The 2020s are replaying a 1960s argument—with far more powerful tools.

Lessons for Today’s Builders

The podcast closes with advice to psychiatry residents and computer science students who want to build the next generation of health chatbots:

  • Understand the ethical stakes. Weizenbaum’s cautions about over-reliance and loss of human connection remain relevant.

  • Think about augmentation, not replacement. Colby’s framing still provides a workable model.

  • Design for scale responsibly. The original promise—time-sharing therapy at scale—has now become global LLM deployment.

  • Preserve transparency. Early systems were rule-based and understandable; today’s black-box models demand even greater ethical rigor.

Bottom Line

Clinical chatbots did not begin with generative AI. They began with psychiatrists and computer scientists asking whether machines could structure, simulate, or extend therapeutic interaction. The core questions—empathy, trust, replacement vs. augmentation, scale vs. humanity—were already on the table in 1966.

What has changed is not the argument.

It is the power of the technology.

Trump State of Union, Healthcare Edition: Drug Prices High, Price Transparency Good, Insurers Bad

We gave Chat GPT 5.2 a link to the full transcript of the State of the Union (SotU) speech.

###

Top-Line Summary

The 2026 State of the Union devoted meaningful time to healthcare, but almost entirely through the lens of prescription drug pricing and insurance affordability rhetoric, not medical technology or innovation policy. The President highlighted “most-favored nation” drug pricing reforms and criticized insurer profits, while pledging to protect Medicare and Medicaid. However, there was no discussion of diagnostics, digital pathology, FDA modernization, CMS payment reform, laboratory reimbursement, precision medicine, or AI in clinical care. Artificial intelligence appeared only in the context of energy infrastructure and youth education, not healthcare delivery. For medtech and genomics stakeholders, the speech offered little directional signal on regulatory reform or reimbursement modernization. In short, it was a pricing-focused address, not an innovation-focused one.

###

Evolving Importance of State Biomarker Laws: Guidance from AZ, GA, LA, OK

One of the major developments in genomics policy over the past five years has been the introduction of state mandates for biomarker test coverage, now active in about 20 states.  Typically, these bills peg private insurance coverage to Medicare coverage and guideline endorsement.   (Private insurance coverage is state-regulated and generally excludes large-employer self-insured plans, aka ERISA plans.)

See background at American Cancer Society here.  (Their map below).  See a 2024 article in JAMA by Lin, Coffman, Phillips here.  See a viewpoint from Avalon here.   See a recent press release re: Mississippi from ACS here.

[Map: ACS CAN]


Several of the circa 20 states have issued some guidance to insurers about the laws:

AZ: Regulatory Bulletin 2026-01_Coverage for Biomarker Testing.pdf 

GA: https://oci.georgia.gov/press-releases/bulletins 

LA: bul2025-05-final.pdf

OK: BULLETIN NO. 2025-06 - Oklahoma Insurance Department


###

AI CORNER
###

I asked Chat GPT to review the four guidance documents.


CGP: An Evolving Definition

 I've been at several meetings recently where the topic turned to "defining CGP."   Is it 50 genes? 500 genes?  Sure, it's indels and fusions, but does it require germline comparison?  If it's LBx CGP, how important is WBC analysis for clonal hematopoiesis?  Ought we include a transcriptome?  If so, how much do we do with it?  (See, "The actionable transcriptome" here.)

The February issue of Precision Medicine Online addresses the topic directly.  Browse the February 2026 issue here.

##

AI CORNER

##

“Comprehensive Developments: Comprehensive genomic profiling plays an increasingly central role in personalized cancer treatment,” by Chris Anderson. Precision Medicine Online, February 2026.

In this February 2026 feature, Anderson argues that comprehensive genomic profiling (CGP) has evolved from broad gene panels into a foundational clinical strategy for biology-driven oncology, integrating large DNA/RNA hybrid panels, composite biomarkers such as TMB and HRD, and increasingly tumor-informed MRD applications. 

Drawing on data from Providence Health’s early, pathologist-directed 523-gene testing program, the article presents evidence that ordering CGP at diagnosis substantially increases actionable findings (67% vs. 33% with smaller panels), shifts patients toward targeted and immunotherapies, and may improve survival, while also highlighting persistent barriers in reimbursement and community adoption. For genomics stakeholders, the message is clear: CGP is less about a specific gene count and more about delivering comprehensive molecular context early enough to change therapeutic trajectories.

### DEEP DIVE

What exactly is “comprehensive genomic profiling” (CGP), and how far does it extend? From organ-based oncology to biology-driven care.


From Single Markers to Biology-Driven Oncology

Anderson opens by describing CGP as a natural progression from early single-gene tests to today’s multi-hundred gene hybrid panels. As next-generation sequencing (NGS) became cheaper and more powerful, and as actionable drivers expanded, panels grew accordingly. Modern CGP panels typically include:

  • DNA sequencing across hundreds of genes

  • RNA sequencing for fusion and splice detection

  • Complex biomarkers such as microsatellite instability (MSI) and tumor mutational burden (TMB)

  • Composite signatures like homologous recombination deficiency (HRD)

Rick Baehner (Exact Sciences) emphasizes that CGP is foundational to the shift from organ-based treatment decisions to biology-based treatment decisions. Common driver mutations—TP53, EGFR, KRAS, PIK3CA, APC—span tumor types. The logic of CGP is to match molecular profile to therapy rather than anatomical origin.

The article underscores a key principle: precision therapeutics are only as effective as precision diagnostics. Targeted drugs without robust profiling are, as Luca Quagliata (Thermo Fisher Scientific) notes, a missed opportunity.


Expanding Scope: Beyond SNVs

CGP has evolved beyond single nucleotide variants. Over the past decade, hybrid panels now capture:

  • Insertions/deletions (indels)

  • Copy number variants

  • Structural variants

  • RNA fusions and splice alterations

  • MSI and TMB

  • HRD and genomic-scar signatures

RNA sequencing is highlighted as particularly impactful. Baehner notes that adding RNA increases fusion detection by 15–20% and provides a more direct readout of tumor biology: “DNA tells you what could happen. RNA tells you what is happening.”

Ezra Cohen (Tempus) argues that combining tumor DNA, RNA, and germline testing improves diagnostic accuracy. Notably, Tempus data suggest approximately 9% of alterations are detectable only in ctDNA, supporting simultaneous tumor and blood testing.

For genomics readers, this section reinforces a critical theme: CGP is increasingly multi-analyte and multi-compartment. Tissue-only approaches may miss clinically relevant events.


Composite Biomarkers and Therapeutic Expansion

The article places special emphasis on HRD and PARP inhibitor expansion. Initially confined to ovarian cancer, PARP inhibitors are now used in breast and prostate cancer and are expanding further. HRD status—not just BRCA1/2 mutations—predicts benefit, meaning CGP must capture broader homologous recombination repair pathways (PALB2, ATM, genomic scars).

This reflects a broader evolution: integration of single genes into composite molecular signatures. The field is moving from binary mutation calls toward systems-level interpretation.


Early CGP: A Timing Paradigm Shift

The most compelling section for clinical policy readers is the Providence Health study.

Providence implemented pathologist-directed, upfront CGP at diagnosis using a 523-gene DNA/RNA hybrid panel, compared to a conventional 50-gene panel ordered later in the care pathway.

Key findings (first two years, 3,216 patients with advanced solid tumors):

  • 67% actionable alterations detected with broad CGP vs 33% with the 50-gene panel

  • TMB-high identified in 22% (not captured by smaller panels)

  • 52% received biomarker-informed therapy

  • Median overall survival:

    • 25 months (CGP-guided targeted therapy)

    • 17 months (chemotherapy)

  • In NSCLC:

    • 16 months median survival (CGP) vs 7 months (small panel)

Importantly, results were available 12 days before initial oncology consultation, meaning therapeutic decisions were made with genomic data in hand.

Carlo Bifulco frames this as correcting a timing flaw in oncology: genomic testing traditionally ordered too late, after treatment trajectories are set.

For genomics policy experts, this is significant. It reframes CGP not merely as a technical enhancement, but as a workflow redesign. It also strengthens arguments for early reimbursement parity.


Barriers: Reimbursement and Community Adoption

Despite compelling data, reimbursement remains the central obstacle. In the Providence study, testing was free. Real-world adoption hinges on payer coverage.

Community oncology presents additional friction:

  • 80% of cancer patients are treated outside academic centers.

  • Oncologists struggle to keep pace with rapidly evolving biomarkers.

  • Out-of-pocket cost concerns deter adoption.

NeoGenomics’ Warren Stone highlights peer-driven education as critical for broader uptake.

For reimbursement strategists, this section reinforces the familiar tension: clinical utility evidence is accumulating, but payer consensus lags behind guideline evolution.


CGP as Foundation for MRD

The article closes by linking early CGP to tumor-informed minimal residual disease (MRD) testing.

Broad initial profiling provides the mutation set required to design personalized ctDNA assays. Detectable ctDNA post-treatment signals high recurrence risk. This transforms CGP into the foundational “first pass” for chronic disease management rather than a one-time diagnostic.

This integration of baseline CGP and longitudinal liquid biopsy moves oncology toward:

  • Earlier detection of recurrence

  • Less invasive surveillance

  • Chronic disease framing of cancer care

For genomics readers tracking MRD reimbursement battles, this positioning is strategic: CGP is no longer just about initial therapy matching—it seeds future monitoring.


So What Is “CGP”?

Although the article does not provide a numeric definition (50 vs 500 genes), it implicitly defines CGP functionally:

CGP is:

  • Broad, multi-gene DNA profiling

  • Increasingly paired with RNA

  • Capable of composite biomarker signatures

  • Often integrated with germline and ctDNA

  • Delivered early in the care pathway

  • Linked to targeted therapy and immunotherapy decisions

  • Foundational for tumor-informed MRD

It is not simply a large panel; it is a clinical strategy centered on comprehensive molecular context.


Final Takeaway for Genomics Stakeholders

Anderson’s piece reflects where the field now stands:

  1. Technically mature hybrid panels exist.

  2. Clinical evidence for early use is accumulating.

  3. MRD integration strengthens longitudinal value.

  4. Reimbursement and community adoption remain bottlenecks.

  5. The definition of CGP is expanding toward multiomic integration.

For those of us debating whether CGP requires RNA, germline comparison, ctDNA, HRD signatures, or composite biomarkers—the article suggests the answer is evolutionary rather than binary. CGP is becoming less about panel size and more about biological completeness at clinically actionable depth.

In that sense, the “definition” of CGP may continue to expand as oncology shifts from mutation detection to full molecular state characterization.

Monday, February 23, 2026

Multiple Articles: FDA Launches Plans for "One Pivotal Trial, Not Two"

FDA drug studies have traditionally required two pivotal randomized controlled trials.   This makes the chance of a false positive very tiny: two independent trials, each significant at p<0.05 with the effect in the right direction, works out to roughly 1 in 1,600.  (I also thought it might be because statute requires approval after well-controlled trialS, with an S).

We now have a packet of 5 FDA documents on scientific evaluations based on plausibility and drug approvals based on single trials.  

  • See the Feb 23 rare disease press release here.
  • See the Feb 23 rare disease draft guidance document here. (Comment to ~ Apr 23).
    • See also a 2025 NEJM article by FDA on the general "plausible evidence" theme and how it might be applied in rare diseases here.

  • See the brand-new NEJM article on 1 trial approvals, here.  (Prasad & Makary Feb 18).
  • See a trade journal regarding the  new 1-trial NEJM article here.
  • Implications for Dx? See a GeneDx article on the "plausible" framework by Trish Brown here.
  • Coverage at Genomeweb here.

Making policy decisions often relies on a framework of facts.  (See also, how policy decisions are made for some diagnostics.)

Relevance to Diagnostics:

[Chat GPT] 
This evidentiary shift also has relevance for molecular diagnostics, where randomized controlled trials are often neither feasible nor ethically appropriate. In genomic testing, clinical utility frequently rests on strong biologic rationale, risk stratification accuracy, and concordance with outcomes — not on forcing artificial randomization. For example, it would be ethically untenable today to randomize women with a very low recurrence score on Oncotype DX to receive chemotherapy, or to deny chemotherapy to women with a very high score, simply to generate RCT purity. 

Instead, validation relied on retrospective-prospective analyses, biologic coherence, and concordance with clinical outcomes. In that sense, molecular diagnostics have long operated within a “plausible mechanism plus confirmatory evidence” paradigm. The FDA’s broader embrace of mechanism-anchored inference and one-trial flexibility (NEJM 2025; NEJM 2026) brings drug regulation somewhat closer to the evidentiary norms that diagnostics have navigated for years — particularly in oncology, where risk stratification tools guide treatment decisions without mandating ethically problematic randomization.

See also Nature on AI for rare diseases here.



Friday, February 20, 2026

AMA CPT Publishes "Summary of Panel Actions" for February Palm Springs Meeting

The most recent AMA CPT Editorial Panel meeting was February 5-6 in Palm Springs, and on day 14, February 20, they released the Summary of Panel Actions.

Dozens of PLA codes were voted on, but for PLA, specific results and final text edits won't appear until April 1.

Here is the home page for Panel Actions:

https://www.ama-assn.org/about/cpt-editorial-panel/summary-panel-actions

And here is the February 2026 edition:

https://www.ama-assn.org/system/files/feb-2026-summary-of-panel-actions.pdf

####

DEEP DIVE [Chat GPT 5.2]

CPT February 2026: AI Is Still Getting Through — Carefully

The AMA CPT Editorial Panel met February 5-6 in Palm Springs, and the Summary of Panel Actions is now public. The agenda included 69 items. By my count, 27 were withdrawn, 5 postponed, and 4 rejected outright.

That distribution is typical of a mature coding environment. Withdrawals are not defeats; they are usually strategic pauses. Applicants often pull proposals to redesign descriptors, align specialty societies, clarify units of service, or reconsider whether PLA, Category III, or Category I is the correct pathway.  

Whole Slide Imaging AI: Category III Accepted

The most consequential action for digital pathology is Tab 45. Valar Labs received two new Category III codes (X568T, X569T) for AI-based prediction of immunotherapy response in non-muscle invasive bladder cancer.

This is a meaningful development. Over the past couple years, digital pathology-AI has been rejected from both PLA and MAAA categories. Some observers worried whether CPT would find a way to codify slide-based algorithmic services at all. This action demonstrates forward motion.

Category III status does not confer coverage, pricing, or payment certainty. But it does provide several critical foundations: a defined reportable service, a recognized unit of service, and a pathway to accumulate utilization and outcomes evidence. 

There is historical precedent for slide-based or analytic services being placed on the CLFS (about 8 PLA codes for digital pathology WSI), and I believe the same precedent ought to apply to the same kind of service placed as Cat III codes. We'll see how CMS handles these codes in the summer lab fee schedule "crosswalk-gapfill" meetings.

AI in Radiology: PET Heterogeneity Survives Debate

For different reasons, Tab 64 is equally instructive. Code X567T, describing AI analysis of heterogeneity of treatment response in metastatic cancer using serial PET imaging, was accepted after prolonged discussion.

The FDA-approved software uses AI to perform algorithmic tracking of multiple metastatic lesions across time, quantifying growth, shrinkage, and metabolic activity longitudinally. Radiologists do routinely compare scans to prior imaging. But they do not typically track dozens of lesions with systematic quantitative uniformity across timepoints. That distinction likely helped establish this service as separately reportable rather than merely an automated enhancement of existing physician work.  Potentially, this level of AI analysis makes the PET scans themselves more valuable to clinicians. 

On AI-assisted tumor assessment, see new FOCR white paper.

Pattern Behind the Withdrawals?

Several AI-related proposals were withdrawn during this cycle. That clustering suggests recurring friction points in the CPT process. Possible issues could include:

  • Unclear unit of service (per slide, per case, per lesion, per episode?)

  • Ambiguity about who performs or reports the service

  • Insufficient differentiation from existing physician interpretation

  • Framing the value proposition around improved sensitivity rather than decision impact

The services that succeeded this cycle were tightly framed around defined analytic outputs with specific clinical contexts. That appears to be the emerging template.

Some of these issues are now put into a topic called "codafiability."   Use it in a sentence.  "I don't like this service, so it lacks codafiability."

PLA Activity Continues

The Q1 PLA consent agenda added 29 PLA codes and revised or deleted several others. PLA remains active, but algorithmic or software-driven analytic services may start showing up in Cat III.

Accepted Category III codes in this cycle are scheduled for July 1, 2026 release and January 1, 2027 effectiveness. That timing will affect any effort to coordinate with CMS annual pricing cycles.

Strategic Implications

Watch for more activity at upcoming AMA meetings on "Appendix S," the AI appendix, which is getting intensive rewriting month by month, and track the possibility of a new code category, "CMAA," for software-intensive services.   I've argued that services like genomics + AI should simply remain on the CLFS, as they have been for several years.