The CGS MAC has issued a final LCD, L40224, on "Automated Detection and Quantification of Brain MRI's." The proposed version appeared on September 25, with a comment period through November 8. The final appeared December 4, with a "notice period" running until January 18. The LCD was "MAC initiated."
I check new LCD postings every Thursday, and I don't recall seeing DL40224. The non-coverage statement is simple:
- This is a non-coverage policy for artificial intelligence assistive software tool for automated detection and quantification of the brain.
I would note that an author who thinks this way might also write "non coverage...automated detection...whole slide imaging."
The LCD has 54 citations, some (by no means all) on ARIA, the amyloid-related imaging abnormalities seen with anti-amyloid therapies.
The MAC notes that "no comments were received." (?!)
This is the concluding rationale for non-coverage:
While investigations have been exploring the potential of automated quantification technology for evaluation of ARIA, MS, TBI, epilepsy, brain tumors and other neurological conditions, this has been challenged by lack of established standards for measurements and access to large datasets to train the devices. While expert radiologists read the images based on visual patterns these programs quantify the brain volumes. While this is promising there is a lack of standards to establish what the normal values for brain volumes should be and each program has proprietary data so it is not interchangeable. There is not sufficient diversity within the data sets used to train the models to ensure changes based on age, gender, or ethnicity are accounted for. This is especially pertinent in the Medicare population as there are changes to brain volume related to age and with lack of standardized data it is challenging at this time to ensure subtle changes represent pathology and not variations of normal. At this time there is not sufficient clinical utility or validity data and use of this technology is considered investigational and not covered. CGS will continue to monitor the progression of research for these devices.
Not sure what to make of this. See a sidebar humor essay, "Medicare non-coverage of newfangled stethoscope, microscope, and x-ray," here.
####
The billing article (A60245) is brief and says that the new codes 0865T and 0866T are not covered, but the primary related MRI codes (e.g., 70551, 70552, 70553) can be covered.
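For readers who think in claims-edit terms, here is a minimal sketch of the rule as the billing article describes it. The function name and claim structure are my own illustration, not anything published in A60245; only the code lists come from the article.

```python
# Hypothetical sketch of the A60245 coverage rule: the quantitative brain
# MRI analysis codes are denied, while the underlying diagnostic brain MRI
# codes remain payable. Code lists per the article; logic is illustrative.

NONCOVERED_AI_CODES = {"0865T", "0866T"}          # quantitative MRI analysis codes
PAYABLE_MRI_CODES = {"70551", "70552", "70553"}   # primary brain MRI codes

def adjudicate(cpt_code: str) -> str:
    """Return a coverage disposition for a single claim line (illustrative only)."""
    if cpt_code in NONCOVERED_AI_CODES:
        return "DENY: non-covered per LCD L40224 / article A60245"
    if cpt_code in PAYABLE_MRI_CODES:
        return "PAYABLE: primary MRI code, coverage unaffected"
    return "NOT ADDRESSED by this article"

# Example: a claim with a brain MRI plus the AI analysis code
for code in ["70553", "0866T"]:
    print(code, "->", adjudicate(code))
```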
###
Here is an online summary of 0865T. 0865T – Quantitative MRI analysis without a same-session MRI. This code is used when the analysis is performed on a previously acquired MRI; in other words, 0865T applies when no diagnostic MRI exam of the brain is performed during that same session. A typical scenario is analyzing a previously acquired MRI (from an earlier date or an outside facility) using NeuroQuant or a similar tool to obtain volumetric data and lesion metrics. The code's full descriptor emphasizes that it includes lesion identification, characterization, and quantification, brain volume quantification, and/or a severity score (when performed), along with data preparation, transmission, interpretation, and a report. The code is reported only when no concurrent brain MRI exam is done in that session.
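The code-selection logic in that summary reduces to a single question: was a diagnostic brain MRI performed in the same session? A toy sketch, assuming the 0865T/0866T split works as described above (0866T being the add-on reported alongside a same-session exam):

```python
# Illustrative-only sketch of the selection logic described above:
# 0865T when the quantitative analysis runs WITHOUT a diagnostic brain MRI
# in the same session (e.g., on a prior or outside study); the add-on
# 0866T when it accompanies a same-session brain MRI.

def select_qmri_code(same_session_brain_mri: bool) -> str:
    """Pick the Category III code for quantitative brain MRI analysis."""
    return "0866T" if same_session_brain_mri else "0865T"

# Analyzing last year's outside MRI with a volumetric tool -> 0865T
print(select_qmri_code(same_session_brain_mri=False))  # 0865T
# Running the analysis alongside today's 70553 exam -> 0866T (add-on)
print(select_qmri_code(same_session_brain_mri=True))   # 0866T
```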
###
AI CORNER
###
ChatGPT 5
CGS’s LCD L40224 is a fairly unambiguous “not yet” on separately billable AI quantitation of brain MRI, including ARIA tools used with anti-amyloid mAbs, for J15 (KY and OH) effective January 19, 2026 (CMS). The scope is broad: although the policy text opens with ARIA and Alzheimer’s disease, the evidence section ranges across AD, MS, TBI, epilepsy, brain tumors, and generic volumetric tools. Taken as a whole, the LCD is written to block Medicare payment for stand-alone “automated detection and quantification” services on brain MRI, whether billed via a specific AI code, a Category III code, or an unlisted / add-on construct.
The contractor explicitly catalogs the current ecosystem: general volumetry products (NeuroQuant, icobrain, DeepBrain, Siemens morphometry analysis) and ARIA-specific tools such as icobrain aria, all cleared as Class II, adjunctive radiology software via 510(k). The LCD is careful to note that this software is labeled as a concurrent reading aid, not a replacement for radiologists, and is constrained in what it can and cannot segment (for example, exclusion of macro-hemorrhages ≥10 mm). That cataloging sets up the key policy move: FDA clearance is treated as a starting point, not a coverage entitlement. CGS frames its task as deciding whether paying extra for these adjunctive tools is “reasonable and necessary” under §1862(a)(1)(A). The answer, for now, is no.
The evidentiary story the LCD tells is actually quite coherent. First, CGS more or less concedes that there is technical validity in specific niches. For ARIA, they highlight Sima et al. (JAMA Network Open 2024), where AI assistance materially improves AUC, sensitivity, and inter-reader agreement for ARIA-E and ARIA-H versus unaided radiologists, particularly for mild ARIA. For volumetrics and related indications, they note multiple studies showing high correlations between tools (NeuroQuant vs FreeSurfer vs FIRST vs volBrain) and reasonable discrimination of disease vs control, whether in AD, MS, or temporal lobe epilepsy cohorts. In other words, they are not denying that the algorithms can segment structures and detect patterns.
However, the LCD repeatedly attacks generalizability. CGS leans on Scarpazza et al. and multiple systematic reviews in MS to argue that most tools have been trained and tested on small, homogeneous datasets, often with a single scanner or limited acquisition protocols. Scanner-to-scanner and protocol-to-protocol variation are not robustly handled, and training cohorts are not representative of the U.S. Medicare population with respect to age, race/ethnicity, comorbidity burden, and mixed pathology. The message here is very much “we see signal, but you haven’t proven that it survives real-world heterogeneity.” They are particularly concerned about age-related brain volume changes in older adults; without standardized, demographically stratified reference data, subtle volumetric deviations may not reliably distinguish pathology from normal aging.
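To make the normative-data point concrete: volumetric tools generally flag atrophy by comparing a patient's measured volume against a reference distribution, roughly a z-score. A toy sketch (all numbers invented for illustration) shows why the reference cohort drives the answer, which is exactly CGS's worry:

```python
# Toy illustration (invented numbers): the same measured volume can look
# "pathologic" or "normal" depending entirely on the normative cohort used.

def volume_z_score(volume_ml: float, norm_mean_ml: float, norm_sd_ml: float) -> float:
    """Standardized deviation of a measured volume from a reference cohort."""
    return (volume_ml - norm_mean_ml) / norm_sd_ml

patient_volume = 2.9  # mL, hypothetical hippocampal volume in an 80-year-old

# Against a reference built from younger, homogeneous research subjects:
print(volume_z_score(patient_volume, norm_mean_ml=3.4, norm_sd_ml=0.2))   # -2.5, looks pathologic
# Against a reference stratified for a Medicare-age cohort:
print(volume_z_score(patient_volume, norm_mean_ml=3.0, norm_sd_ml=0.25))  # -0.4, normal aging
```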
The decisive line is drawn at clinical utility. For ARIA, CGS acknowledges that AI improves detection of mild radiographic abnormalities, but points out that the management algorithms in the appropriate use recommendations for aducanumab, lecanemab, and donanemab already allow continued dosing with mild ARIA under closer monitoring. There is no evidence that detecting an even subtler grade of mild ARIA-E or ARIA-H meaningfully improves patient outcomes: fewer symptomatic events, fewer catastrophic hemorrhages, better cognitive trajectories, or safer drug utilization. For other indications—MS, TBI, epilepsy, tumors—the situation is similar. The tools may match or occasionally exceed expert performance on reader-level metrics, but there are no RCTs or strong prospective data showing that their use changes treatment decisions or improves outcomes compared to standard expert reads. In MAC language, they remain “investigational” and therefore not “reasonable and necessary.”
NCD 200.3 on anti-amyloid mAbs and its Coverage with Evidence Development (CED) structure is the policy backdrop. Under that NCD, monoclonal antibodies directed against amyloid are covered only in CMS-approved studies that must meet fairly strict criteria, including robust imaging infrastructure and neuroradiology expertise to monitor ARIA. CGS’s rationale is straightforward: if you are in a CMS-approved CED protocol, you already have specialist radiologists and standard protocols for ARIA surveillance; if you are outside of CED, the drugs themselves are not covered, so ARIA-monitoring AI tools cannot be “necessary” for a covered treatment pathway. AI may be used within trials at sponsor expense, but that does not create a separate Part B benefit. The LCD thus blocks the use of local coverage to bootstrap a one-off AI reimbursement regime on top of a national CED construct that did not ask for that service.
An important subtext is the explicit separation of device clearance vs. payment. CGS spends a fair bit of space enumerating 510(k) clearances, then concludes that while these tools can be marketed as adjuncts, nothing in the evidentiary record compels Medicare to pay a separate fee for them. This echoes the broader CMS/MAC stance on radiology AI, CAD, and tools like FFR-CT: performance improvements at the reader level, absent clear effects on management or outcomes, are insufficient for a distinct payable service. The LCD is also clearly aware of potential automation bias and the risk of diffusing AI tools into low-expertise settings without adequate training; it repeatedly points readers back to neuroradiology education products from ACR, RSNA, and the Alzheimer’s Association as the preferred way to improve ARIA detection today.
The practical implications are sharp for imaging providers and vendors operating in J15. Imaging centers will not be able to bill Medicare separately for brain MRI AI quantitation within the scope of this LCD. If they deploy such software, the cost will need to be absorbed as part of providing an MRI (i.e., effectively bundled) or funded through other channels such as commercial payer contracts, research support, or pharma partnerships tied to CED studies. Radiologist training becomes, by policy definition, the default solution for improving ARIA detection and volumetric interpretation. For vendors, L40224 removes any realistic near-term path to a J15 Medicare fee schedule line item for stand-alone brain MRI AI; the commercial strategy must emphasize enterprise licensing, integration into research and trial operations, and long-term evidence generation rather than transactional per-scan charges.
From a broader AI policy standpoint, L40224 is a useful marker of how at least one large MAC is thinking. The policy implicitly accepts that “AI improves diagnostic metrics” is often true, but insists on “AI improves patient management or outcomes” as the bar for coverage. It shows acute sensitivity to non-representative training datasets, lack of standardization across proprietary tools, and the risk of over-reliance in non-expert settings. For ARIA specifically, it frames current AI tools as potentially helpful but still investigational, and places the burden of proof on vendors to show that improved detection or grading of ARIA translates into safer, more effective anti-amyloid therapy in real Medicare populations.
If one were designing a response strategy, the natural pressure points would be to reframe “utility” around safety and adherence to appropriate use criteria, generate multi-center, scanner-agnostic performance data in Medicare-like populations, and explicitly link AI use to changes in management (e.g., dose holds, discontinuations, avoidance of severe ARIA events), even before hard outcomes mature. In the interim, however, the business reality in J15 is that brain MRI AI must be sold as part of delivering better MR imaging, not as a separately reimbursed Medicare service.
