Friday, March 27, 2026

Korie et al. 2026: What Drives Next Gen Sequencing Denials at Yale Pathology?

Header:  A Yale pathology study presented at USCAP 2026 shows that NGS reimbursement denials are less about overuse and more about administrative failure—especially ICD-10 miscoding. Only 20% of cases were denied (275/1,392), and most denials occurred despite guideline-concordant testing. The authors conclude that the fix is operational, not clinical.




Reimbursement Denials for NGS:
A Systems Problem, Not a Clinical One

[By Chat GPT 5.4]

At the March 2026 USCAP meeting, Korie et al. (Yale Pathology) presented a timely analysis of reimbursement denials for next-generation sequencing (NGS) in solid tumors:

Link (abstract PDF):
https://www.laboratoryinvestigation.org/action/showPdf?pii=S0023-6837%2825%2901936-1

The study evaluated 1,392 NGS tests performed between 2022–2023 at a large academic center. Of these, 275 cases (20%) were denied—a meaningful but not overwhelming fraction. That denominator matters: the system is not broadly failing, but the failures are highly patterned and correctable.


Key Findings

1. Denials cluster in common cancers and standard workflows

  • Lung (30%), head & neck (11%), colorectal (8%)
  • 80% ordered by treating physicians, not reflex testing
  • Most cases were metastatic (47%) or initial diagnosis (31%)

This is not fringe utilization. These are core oncology use cases.

2. Guideline concordance is high—even among denials

  • Lung: 88% met NCCN guidelines
  • Colorectal: 91% met NCCN guidelines

This sharply undercuts any narrative that denials reflect inappropriate testing.

3. The dominant failure: ICD-10 coding errors

  • 78% of denied cases involved incorrect ICD-10 codes
  • Despite:
    • 96% correct coding by ordering clinicians, and
    • 100% correct coding in pathology reports

This suggests a breakdown in downstream billing workflows, not physician error.

4. Classic payer reasons still appear—but are secondary

  • No prior authorization: 34%
  • Not medically necessary: 25%

However, given the high NCCN concordance, these categories likely overlap with documentation or coding failures.


Interpretation

This is a textbook example of revenue cycle friction overwhelming clinical intent.

Three observations stand out:

First, the denial rate (20%) is real but not catastrophic. The system is functioning—but inefficiently.

Second, the disconnect between correct clinical documentation and incorrect billed ICD-10 codes is striking. This points to:

  • Interface issues between EMR → LIS → billing systems
  • Manual coding steps or mapping errors
  • Possibly third-party billing workflows that degrade data fidelity

Third, payers are denying claims that—on retrospective review—clearly meet NCCN criteria. This creates a paradox:

The clinical system is aligned with guidelines; the reimbursement system is not aligned with the clinical system.


Operational Implications

For labs, pathology groups, and oncology programs:

1. Audit the ICD-10 transmission chain

  • Where exactly does the correct code become incorrect?
  • Is it:
    • Order entry → LIS?
    • LIS → billing export?
    • Billing vendor transformation?
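A minimal sketch of the audit these bullets describe: compare the ICD-10 code captured at each stage against the clinician's original order and report the first stage where it diverges. Everything here is hypothetical; the stage names, the sample claim, and the truncated code are invented for illustration and are not from the Yale study.

```python
# Hypothetical pipeline stages, in order; names are illustrative only.
STAGES = ["order_entry", "lis", "billing_export", "billing_vendor"]

def first_divergence(claim):
    """Return the first stage whose ICD-10 code differs from the ordered code."""
    ordered = claim["order_entry"]
    for stage in STAGES[1:]:
        if claim[stage] != ordered:
            return stage
    return None  # code survived intact end to end

# Invented example: a valid code (C34.90) truncated downstream to a
# non-billable header code (C34.9) at the billing-export step.
claim = {"order_entry": "C34.90", "lis": "C34.90",
         "billing_export": "C34.9", "billing_vendor": "C34.9"}
print(first_divergence(claim))  # billing_export
```

Run over a sample of denied claims, a tally of divergence points would localize whether the failure sits in the LIS handoff, the billing export, or the vendor transformation.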

2. Treat coding as a “last-mile” quality metric

  • Just as labs validate assays, they may need to validate coding integrity
  • Random audits of denied vs paid claims can be highly informative

3. Reframe denial management

  • These are not “medical necessity disputes” in the traditional sense
  • They are often data integrity failures

4. Prior authorization remains relevant—but not dominant

  • At 34%, it matters—but it is not the core story

Policy Angle

For those watching MolDx and broader Medicare/commercial alignment:

  • The study reinforces that utilization management is not the central issue for NGS in common cancers
  • Instead, administrative complexity creates artificial access barriers

This aligns with a broader pattern in precision oncology:

Coverage may exist on paper, but execution gaps determine real-world access.


Bottom Line

The Yale data suggest a simple but powerful conclusion:

NGS denials are often not about whether the test should be done—but whether the system can correctly describe what was done.

Fixing ICD-10 coding fidelity may yield more immediate reimbursement gains than any change in clinical policy or guideline alignment.

Register for AMA Meeting on Coding & AI: "Appendix S Revisions" - April 16

 AMA has big, big plans for changing how it handles AI services (potentially affecting digital pathology and genomics) in terms of policy and coding, possibly even with whole new classes of codes.  

These come under the headline of "Revising Appendix S," which has been a topic for several AMA CPT meetings in a row.   

You can register with AMA to view and comment on Appendix S plans, under the heading "Tab 67" of the next AMA CPT meeting.  Instructions here.  

New News: April 16:

AMA has just announced a special public meeting on Thursday, April 16, from 4:30-6:00 pm Central Time (5:30-7:00 ET, 2:30-4:00 PT).

Here's the AMA text and links.   Further below, I give you a very short AI summary of Appendix S.

See an essay from AMA policy participant Richard Frank MD - here.

###

Greetings,

 

The AMA will be hosting a virtual meeting to discuss Tab 67-Appendix S Revisions with Interested Parties and CPT Advisors ahead of the May 2026 CPT Editorial Panel Meeting. Due to interest in this tab, this session will serve as an opportunity for stakeholders to comment prior to the Panel meeting. While time may not permit all participants to speak, it is our goal to hear a broad range of perspectives.

 

Join us on Thursday, April 16th from 4:30 – 6:00 PM CT. Registration for this meeting is required.

 

Participants will be required to comply with the CPT confidentiality policy.  Meeting details will be sent to your registration email prior to the start of the meeting.

 

Register Now

Background

The CPT Editorial Panel created the Digital Medicine Coding Committee in 2024. The committee responds to requests from the Panel and supports the Panel on coding issues involving digital medicine or artificial intelligence. It does this by providing or obtaining subject matter expertise and advice.


Since its formation, the Digital Medicine Coding Committee (DMCC) has been planning an update to clarify and strengthen Appendix S: AI Taxonomy for Medical Services & Procedures, based on stakeholder feedback, applicant responses to AI-specific questions in the code change application (CCA), and needs identified by the Panel. The DMCC is actively seeking stakeholder feedback to ensure that revisions to the taxonomy in this code change application address longstanding areas in Appendix S that require clarification.

 

If you would like to attend, please register for this meeting here.  Your input and thoughts are very important to the CPT Process.


###


AI Mini Notes

From 2025 Publication to 0326 Proposal: summary (about 150 words):


The March 26, 2026 revision of Appendix S is not just an edit of the 2025 version; it is a substantial effort to turn Appendix S from a simple taxonomy into a more operational CPT policy framework for software-intensive services. The 2025 version mainly defined assistive, augmentative, and autonomous services at a high level. The 0326 version keeps those categories but adds much more about software outputs, reference services in current clinical practice, and the types of evidence needed to justify each category. It narrows assistive by warning that terms like “risk for” or “suggestive of” may require clinical validation. It raises the threshold for augmentative by demanding outputs that are not merely statistical but clinically meaningful, clinically important, and pertinent to the CPT descriptor. It also tightens autonomous claims by emphasizing transparency, guidelines, and clinical utility, suggesting the drafters want stricter boundaries and stronger evidentiary discipline.


From Feb 04 Version to 0326 Proposal: summary (about 150 words):

The March 26, 2026 version is best seen as a tightening and sharpening of the February 4, 2026 draft rather than a wholesale rewrite. By February, Appendix S had already begun evolving beyond a simple AI taxonomy toward a framework about software outputs, evidence, and coding boundaries. The March draft pushes this further. It drops more of the device-oriented/FDA-style language and speaks more clearly in CPT terms, focusing on software outputs and their relationship to a reference service in current clinical practice. It more carefully restricts assistive status, especially for outputs using predictive language like “likelihood of” or “risk for.” It makes augmentative more demanding by tying clinical meaningfulness directly to the CPT code characteristics. It also removes February language that gave Category III applicants a more permissive developmental pathway. Overall, March appears more conservative, more evidence-calibrated, and more focused on preventing applicants from overclaiming sophistication or autonomy.

Thursday, March 26, 2026

AI, Advanced Software, and AMA CPT Policy: Deadline March 31: Appendix S for Upcoming CPT Meeting

 For several quarters, AMA CPT has been debating major amendments to the AMA CPT "Appendix S," which may have enormous implications for how AI- or software-dominant healthcare services are reimbursed.

At the upcoming AMA CPT meeting in Chicago [virtual registration still available], a new round of revisions to Appendix S will be debated.  You can sign up now to read the current revisions and make public comment.  Debate was vigorous at the AMA CPT last September and this past February.  

Revisions to Appendix S may be followed by creating a new coding section called "CMAA," Clinically Meaningful Algorithmic Analyses.   

The deadline to comment is Tuesday, March 31.

My main concern is that they'll bring in policies adapted for radiology, cardiology, and similar fields, and these may be a poor fit for genomics, which makes universally heavy use of extremely sophisticated software (including AI) and which already does not require "physician work" as its main input.

Here's how to comment:

First, go to the online PDF agenda for the May CPT meeting:

https://www.ama-assn.org/system/files/cpt-panel-may-2026-agenda.pdf

Click on the boldface link for INTERESTED PARTY COMMENT. This should take you to the AMA website, here; if the link below doesn't work, click the link inside the PDF.

https://cptsmartapp.ama-assn.org/ipdashboard

You may need to register with AMA (via email) to access AMA functions like this comment dashboard.

When you get to the AMA CPT Smart App, be sure to click the tab near the top for the "INTERESTED PARTY" view. Scroll down to the bottom.


Note that for Tab 67 (Appendix S), the "Ballot" option (far-right column) is where you find the actual markup version of the new Appendix S.

Use the progress button near the bottom to scroll ahead to Tab 67 (Appendix S).


So you've opened the Interested Party Portal and advanced to Tab 67. There are four columns:

  • IP Interested Party Access (to CPT application and supporting documents like publications)
  • IP Comment (you get a fixed form on which to write your comments)
  • View Comments
  • BALLOT (in the case of Appendix S, this is the one you need: the actual 4-page appendix)

Appendix S is damn hard to read - it's nearly entirely fields of struck-out text and inserted text from beginning to end.  But it's important.

See snapshot of the heavy edits throughout:





Comment on CRUSH, CMS Policy, Genomic Testing: Due Monday, March 30

On February 27, 2026, CMS announced a vigorous plan for anti-waste, anti-fraud measures in Medicare, with strong highlighting given to two areas: (1) durable medical equipment (DME), and (2) genomic testing. The initiative, abbreviated CRUSH ("crush fraud"), is open for public comment until Monday, March 30, 2026.

See my blog and links here:

https://www.discoveriesinhealthpolicy.com/2026/02/cms-issues-rfi-on-fraud-highlighting.html

Genomics fraud includes highly improper billing of hundreds of millions of dollars for genetic testing that is impossible or pointless in a Medicare population. (These occurrences were vastly dominated by the states of Florida and Texas, where Medicare payment controls were amazingly weak for years.) An example of a $52M genetic test scheme is here.

There have been 183 comments to date, but comments often pour in on the final day.   See the policy discussion here and find a "submit a public comment" checkbox.  Anti-fraud options discussed include nationalizing MolDx.

https://www.federalregister.gov/documents/2026/02/27/2026-03968/request-for-information-rfi-related-to-comprehensive-regulations-to-uncover-suspicious-healthcare



Tuesday, March 24, 2026

Nerd Note: 2017 PAMA Raw Data File is Still Posted

Header:  CMS still stores publicly available cloud data on lab test pricing surveyed in 2017 and representing CY2016.

###

Congress and CMS are re-activating the PAMA reporting process. Reporting laboratories that had >$12,500 in Medicare payments in 1H2025 will report data on all claims paid by commercial payors in 1H2025, with reporting in May-June-July 2026. See websites and announcements at CMS.

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule/clfs-reporting

The prior survey covered 1H2016, with data reported and posted in 2017. This set a new fee schedule from 2018 forward.

See the 2016/2017 Cloud Data 

At the time, CMS published a gigantic cloud database of reported prices.  I thought that was no longer available, but it seems it is.   The data can be pretty interesting.

On this page:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs

see "CLFS Applicable Raw Data File from 2017 Reporting."


That sends you here:

https://data.cms.gov/provider-characteristics/hospitals-and-other-facilities/medicare-clinical-laboratory-fee-schedule-private-payer-rates-and-volumes


For example, if you search 81211 (a popular BRCA code in 2016), you get 374 rows of data.   

I wasn't sure how that squares with a contemporaneous 2017 data file I have for PAMA 81211, which has 2,364 rows of pricing data for 81211. From 2,364 rows (my file) to 374 (online file today), about 85% of the rows are missing. This is because the current cloud data leaves out any price line with fewer than 10 units, while my old file with 2,364 lines includes many lines with only 1 or 2 payments at that price. The overall shape of the data would be the same, just scaled down.
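A toy illustration of that suppression effect, with the row split invented so the totals match the 2,364 and 374 row counts above (the real file's price/volume mix is of course different):

```python
# Each row is (allowed price, paid claim count at that price) for one payer line.
# Invented data: 1,990 low-volume rows plus 374 rows with >= 10 claims each.
rows = [(2900, 1)] * 1990 + [(2900, 50)] * 200 + [(1800, 40)] * 174

# The public cloud file keeps only price lines with at least 10 units.
public = [(price, n) for price, n in rows if n >= 10]

dropped = 1 - len(public) / len(rows)
print(len(public), f"{dropped:.0%}")  # 374 rows survive; 84% of rows suppressed
```

Note the asymmetry: most *rows* vanish, but the high-volume price peaks (and hence the overall shape of the distribution) survive.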

For example, the 2016 data I have (with 2,364 rows) for BRCA 81211 shows a large price peak around $2,900, which I assume reflects Myriad payments (this was not long after the BRCA Supreme Court case). There were relatively few commercial payments at the then-current CMS rate (around $2,200), but another peak of payments around $1,800, which was 85% of the CLFS at the time. So I inferred (this is just armchair guessing) that Myriad was cruising along with numerous legacy contracts in the $3,000 range for BRCA 81211, while new entrants were entering the newly opened BRCA market, often at 85% of the CLFS.



It was also notable that the thin tail of cheapest payments went below $100, and the thin top end had the occasional payment over $6,000.

81455

AMA CPT created codes 81445 (5-50 tumor genes) and 81455 (51+ tumor genes) at the same time. 81445 was gapfill-priced to about $600, but 81455 was not priced by MACs. However, for 2016 PAMA claims reported in 2017, the (current) database shows 53 claims. (Recall it excludes any line with 9 or fewer units.)

For 81455, there were 14 claims at $9,900, 11 claims at $1,600, 18 claims at $3,579, and 10 claims at $4,500. From this, you'd expect a median of $3,579, but it was actually $2,916 when all claims (including those at prices used 9 times or fewer) were counted. We learned from BRCA claims for 81211 that the exclusion of price levels with 9 or fewer paid claims omitted over 80% of the total data.
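The "expected" median from the surviving lines can be checked with a small weighted-median sketch. This is a generic claim-level median over the visible price lines, not CMS's full PAMA methodology, which runs over the unsuppressed data:

```python
# Price lines surviving suppression for 81455: (price, claim count) = 53 claims.
lines = [(9900, 14), (1600, 11), (3579, 18), (4500, 10)]

def weighted_median(price_counts):
    """Median over individual claims, expanding each price by its claim count."""
    claims = sorted(p for p, n in price_counts for _ in range(n))
    mid = len(claims) // 2
    if len(claims) % 2:
        return claims[mid]
    return (claims[mid - 1] + claims[mid]) / 2

print(weighted_median(lines))  # 3579, vs. the actual $2,916 on unsuppressed data
```

With 53 claims, the 27th sorted claim falls in the $3,579 block (11 claims at $1,600 come first, then 18 at $3,579), which is why the visible data alone overstates the true median.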

Monday, March 23, 2026

AI Experiment: How Alex Dickinson Describes the Caris MCED ACHIEVE Report

In March, Caris released top-line results of its ACHIEVE study, testing its MCED test in real cases. Press release here. Active LinkedIn author Alex Dickinson wrote a set of 5 articles about the results. One, two, three, four, five.

Out of curiosity, I asked what Chat GPT could make of the six documents.



AI CORNER

###

Overview

Caris reports striking interim performance for its Detect MCED assay using deep whole-genome sequencing, with unexpectedly strong early-stage sensitivity in common cancers. However, enriched cohorts, limited follow-up, and incomplete blinded validation constrain interpretation. Dickinson’s analyses highlight a differentiated WGS multi-signal strategy with potential advantages over methylation-first approaches.


Consolidated Article (Caris + Dickinson)

Focusing first on the press release, the key point is that Caris reported an interim analysis, not a completed prospective screening validation. The Achieve 1 dataset includes 2,122 subjects (1,505 undiagnosed; 617 cancers), but the undiagnosed group is enriched, not general-population screening. 

Only 22.5% had ~1-year follow-up, with ~7% later diagnosed with cancer—again indicating high-risk enrichment. About 865 samples remain in blinded validation, so current results are signal-generating, not definitive.

The reported performance is notable. Stage-specific sensitivity was 56.8% (I), 70.1% (II), 77.1% (III), 99.1% (IV), with 61.3% for stage I–II. Early-stage sensitivity in key cancers included 53% breast, 78.9% prostate, 86.7% lung, and 62.2% colorectal. Specificity was 99.1% in a small asymptomatic subset (n=121) and 95.3% in the broader undiagnosed cohort. These are the central empirical results.

An Expert Discusses The Data

Dickinson’s posts provide useful context. He frames Caris as entering MCED from a position of scale and infrastructure—large tumor databases, clinical profiling, and sequencing capacity—suggesting Detect is an extension of an existing oncology data platform rather than a stand-alone assay.

Scientifically, Dickinson highlights the assay design: ~250x whole-genome sequencing of plasma with paired buffy coat sequencing to remove CHIP, extracting mutational, fragmentomic, and nucleosome-positioning signals for ML classification. This multi-signal WGS framework plausibly explains the observed sensitivity pattern.

His most provocative point concerns tumor-type performance, especially breast and prostate. He argues methylation-first MCED approaches may underperform in these high-incidence cancers, making Caris’s relatively strong early-stage sensitivity clinically meaningful. This reframes MCED evaluation toward incidence-weighted performance, not aggregate sensitivity alone.

Key Limitations

However, key limitations remain. The 99.1% specificity estimate rests on only 121 asymptomatic subjects. The broader 95.3% specificity is more relevant operationally but less reassuring for screening. The cohort is enriched, follow-up incomplete, and blinded validation pending—raising the risk of optimistic interim estimates.
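The fragility of that 99.1% figure is easy to quantify. Assuming it corresponds to roughly 120 of 121 true negatives (an inference; the abstract reports only the percentage), a standard Wilson score interval shows how wide the plausible range is:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_ci(120, 121)  # ~99.1% specificity observed on n=121
print(f"95% CI: {lo:.3f} to {hi:.3f}")  # lower bound lands near 0.95
```

On these assumptions the lower bound falls to roughly 95%, which is one reason the broader cohort's 95.3% specificity is arguably the more relevant operating estimate for screening.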

Caris also claims superiority over methylation approaches, but provides no head-to-head comparison, making this an interpretive rather than evidentiary claim. Cross-study comparisons, including Dickinson’s, remain hypothesis-generating.

Strategically, Dickinson emphasizes Caris’s ability to pursue a sequencing-intensive design due to its scale. He notes current costs may be high (e.g., overlapping 2x150 reads), but could fall with alternative platforms. This underscores that economic feasibility will be as important as analytical performance.

Finally, Caris signals future expansion to whole transcriptome integration, reinforcing its view of MCED as a multimodal inference problem, though at the cost of added complexity.

Bottom line: Detect represents a biologically coherent and potentially important MCED approach with strong interim early-stage signals. But given cohort design, small screening subsets, and pending validation, the appropriate stance is serious interest with disciplined caution, not definitive conclusion.



CMS Reports on First Rural Health Summit under Tech Transformation

One of the lesser-known outcomes of last summer's budget bill was $50B for rural health transformation (RHT) focused on technology.  Last fall CMS established the Office of RHT and in December $50B of awards were announced.  Home page here.

On March 18, CMS convened leaders from all 50 states to discuss.  Find the online report here:

https://www.cms.gov/newsroom/press-releases/readout-cms-convenes-first-rural-health-transformation-summit-advance-state-led-innovation



AI CORNER

Here's a 100 word summary.

CMS convened its first Rural Health Transformation Summit on March 18, 2026, bringing leaders from all 50 states together to advance a $50 billion initiative created under recent federal legislation. The program aims to strengthen rural health systems through 

  • expanded access to care, 
  • workforce development, 
  • technology modernization, and 
  • innovative payment models. 

Discussions focused on five priorities: 

  • prevention and chronic disease, 
  • sustainable access, 
  • workforce pipelines, 
  • value-based care, and 
  • health IT. 
States shared early strategies such as telehealth, mobile units, and regional partnerships. CMS emphasized aligning efforts with Medicare and Medicaid financing and embedding long-term workforce and infrastructure solutions to ensure durable improvements in rural health outcomes.

Saturday, March 21, 2026

Journal Club: Value of WGS in Real-World Cancers (Van Putten, Nat Med)

What's the value of going upscale to whole genome sequencing (WGS) in solid cancers? Van Putten et al. assemble data from their experience with 888 solid cancers. The work is from the Hartwig Medical Foundation / Netherlands Cancer Institute.

Find the paper here, a LinkedIn essay here by Joseph Steward, and another here by Alex Dickinson. Dr. Cuppen, scientific director of the Hartwig Foundation, here.


Most samples in this study were frozen tissue (89% success rate), but the authors remark that when archived samples were used, the success rate was essentially the same (90%).

Chat GPT Discusses the Paper:

Friday, March 20, 2026

Can AI Re-Think Health Policy? Example Using WSJ Policy Essay (& MolDx)

Can AI read an article and project its possible applications into a different field?  That's today's question. 

Starting point: WSJ runs an essay by Harvard economics professor and Manhattan Institute authority Roland Fryer.  Fryer here, essay here.   


While his article was on "regulating AI," it clearly had ramifications or applications in other policy domains.  I asked Chat GPT 5 to read the essay and discuss its projection onto healthcare policy such as CMS.   I deliberately left my main initial request vague.   

At bottom, I ask it some Q&A, including how this applies to MolDx.

Here comes the initial response to my request, "apply Fryer's thinking to healthcare policy."

Thursday, March 19, 2026

NCCN Recommends NGS in All Stages of Pancreatic Cancer: Direct Conflict with Outdated Medicare NCD

Tuesday, I was in a webinar where stakeholders were discussing the badly-outdated Medicare NCD for NGS testing in cancer.  Thursday of the same week, more proof of the problem hit my inbox.

See the March 18, 2026, release of new pancreatic cancer management guidelines from the National Comprehensive Cancer Network (NCCN). Here. See coverage in Genomeweb, here.



Tuesday, March 17, 2026

Waiv (Former Owkin Dx) Brings In $33M For AI Precision Diagnostics

It's not every day you hear of a new $33M going into spatial diagnostics - but it happened this week.

WAIV Diagnostics, Paris (former Owkin Dx), is now a spinout from Owkin, pulling in its own investors and funding.


https://wearewaiv.com/news/waiv-secures-33-million-to-scale-ai-precision-testing

See also a post from Meriem Sefta, WAIV CEO.

https://www.linkedin.com/posts/meriemsefta_waiv-waiv-secures-33-million-to-scale-share-7437851429651935232-WHmk/

"We exist to catalyze precision medicine with clinical-grade, AI-powered tests designed to detect biomarkers, predict outcomes, and better understand treatment response in oncology."

###

It's a fit with the BLOODPAC webinar and white paper, "it's not just DNA mutations anymore" - here.

https://www.discoveriesinhealthpolicy.com/2026/03/bloodpac-releases-webinar-and-white.html


Illumina Publishes "Diagnostics Year in Review" CY2025 (48pp)

Once again Illumina has published its Diagnostics Year in Review. See CY2025 in a bird's-eye view, edited by Mara Aspinall, 48pp.

https://www.illuminaventures.com/wp-content/uploads/2026/02/Diagnostics-Year-in-Review-2025-Version-for-Distribution.pdf


##
AI CORNER
##

TL;DR: Should you read this 48-page "Diagnostics Year in Review"?

Yes—if you care about where genomics, AI diagnostics, and reimbursement policy are actually heading, not just headlines. This is one of the clearest “industry synthesis” decks of the year, and it quietly encodes several policy-relevant signals that CMS/FDA watchers will recognize immediately.

The report argues that 2025 marks a shift from post-COVID recovery to a true “diagnostics renaissance.” That’s not hype—it’s grounded in three converging forces:

  • Regulatory relief (LDT rule vacated → innovation unlocked)

  • Platform shift (tests → data/AI-driven clinical decision engines)

  • Capital re-consolidation (massive M&A + “Terrific Ten” winners emerging)


BLOODPAC Releases Webinar and White Paper: New Frontiers in Therapy Selection / Beyond DNA Mutations

BLOODPAC offered a multi-speaker seminar last fall on the topic, New Frontiers in Therapy Selection: Beyond DNA Mutations. Find the online resources now.



Here's the home page: link.

Find the two-hour YouTube webinar here: https://www.youtube.com/watch?v=yuYdhbdVcpU

As you scroll the home page, you'll also reach the 37 page white paper.

###

Saturday, March 14, 2026

Chris Klomp Now Near Top of HHS; See His One-Hour Recent Interview

The Wall Street Journal, Politico, and the Washington Post have all been covering the shake-up in senior management at HHS - here. Chris Klomp rises to #2 at HHS. Here's a profile of Klomp drawn from several news articles.

Which gives extra importance to a one-hour interview that Paragon Institute posted just a few weeks ago.  

  • Find the text here
  • the YouTube archive here.  
  • He's interviewed by Brian Blase, President of Paragon Institute, and policymaker Demetrios Kouzoukas.

Here's an AI article based on the interview transcript. [Chat GPT 5.4]

###

Chris Klomp’s Policy Playbook: 

Markets, Incentives, and the Power to Convene at CMS

In a wide-ranging Paragon interview, new HHS deputy Chris Klomp outlines a Medicare strategy built on incentives, market signals, and stakeholder convening rather than regulation, offering insight into emerging federal health policy direction. (January 27, 2026).



The Strange Place of FIT Testing Between FDA Label and Medicare Screening

Header:  CMS is reviewing its coverage standards for CRC screening biomarkers - stool, blood, etc. But CMS will explicitly leave FIT testing out of the new system. (FIT testing will remain untouched, as-is.)

There's quite a story there - FIT testing is FDA-regulated much differently than Cologuard, Shield, etc.  Here's an essay planned by me, but written by Chat GPT 4 in a few seconds.

###

FIT Testing: Why It Has a Special, Hands-Off Status in the Screening NCD

Chat GPT 5.4

CMS’s current proposal on non-invasive colorectal cancer biomarker tests is notable not only for what it addresses, but also for what it leaves untouched. CMS is proposing new evidence standards for emerging biomarker tests, but it is not reopening its longstanding coverage of FIT and guaiac FOBT under the colorectal cancer screening benefit. That omission is interesting—at least to CMS policy nerds—because FIT looks simple from a distance but becomes surprisingly complicated if one tries to revisit the benefit rigorously. CMS may have decided it is better not to stir that particular hornet’s nest. [1][2]

The key issue is regulatory. FIT is not regulated like Cologuard.