Sunday, March 16, 2025

CMS Releases NCD Proposals with CED; Calls it TCET; Relevant Article by Hernán et al

Background

Last year, the Biden administration released a roadmap for coverage with evidence development (CED) called TCET - Transitional Coverage for Emerging Technologies. The administration proposed 5 NCDs of this type per year, with a review of each 2 years out. At scale, this means 10 NCD actions per year under TCET (5 new NCDs plus 5 two-year reviews). How CMS will pull this off has puzzled me, since CMS typically completes only 2-3 NCDs per year.

Some initial data has emerged. CMS has recently released 3 CED proposals that refer to TCET. One is for transcatheter cardiac valves, a topic where CMS has done numerous CED NCDs for a decade. In short, some types of projects already being done under NCD-CED will now be tallied in the TCET column.

  • See CED for renal denervation, proposed 1-14-2025.
  • See CED for cardiac contractility, proposed 1-10-2025.
  • See CED for transcatheter tricuspid valve, proposed 12-19-2024.
  • These all have CED, and all refer to TCET.
  • For example, we read, "CMS received a complete, formal request to provide coverage for the EVOQUE tricuspid valve replacement system (EVOQUE system). This is a Transitional Coverage of Emerging Technology (TCET) pilot. The manufacturer of this device tested the processes and concepts of TCET."
    • (CMS also proposed, on March 11, an NCD for home ventilation, a DME-like product, with no CED.)   
    • (CMS also finalized, on February 11, an NCD on pulmonary heart failure centers, that has CED, but doesn't mention TCET).

A Structure for CED:

See Hernán et al, 2025, "The Target Trial Framework for Causal Inference From Observational Data: Why and When Is It Helpful?"



While it's subscription-based at Annals of Internal Medicine, I was struck by the high-quality thinking in Hernán et al. (here). Most CED studies have been based on registries rather than detailed RCTs. Hernán et al. describe an important approach to thinking about observational studies. First, lay out, in detail, the randomized controlled trial that would answer the question at hand. (This is the "target trial.") Once you have done that, look closely at whether observational data (including a de facto control or comparison) can address the underlying question. Hernán et al. argue that when this is done, and done successfully, the observational data are very likely to be valid. When the observational data fall short of the key elements an RCT would have provided, conclusions (if any) drawn from them are likely to be weak or unconfirmable. That's a summary; the full article lays out the logic and uses many examples.
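The two-step logic above can be sketched in code, purely as an illustration; the protocol elements loosely follow the target trial framework, but every field name and example value here is hypothetical, not drawn from Hernán et al.:

```python
# Illustrative sketch only: the target trial framework as a structured checklist.
# Step 1 specifies the hypothetical randomized trial; step 2 asks which protocol
# elements the observational dataset can actually emulate.
from dataclasses import dataclass, fields

@dataclass
class TargetTrialProtocol:
    # Step 1: spell out the hypothetical RCT, element by element.
    eligibility_criteria: str
    treatment_strategies: str
    assignment_procedure: str
    follow_up_period: str
    outcome: str
    causal_contrast: str

def emulation_gaps(protocol: TargetTrialProtocol, data_support: dict) -> list:
    """Step 2: return the protocol elements the observational data cannot emulate.

    `data_support` maps each protocol element name to True if the dataset
    supports it (e.g., a de facto comparison group exists), else False.
    """
    return [f.name for f in fields(protocol) if not data_support.get(f.name, False)]

# Hypothetical usage: a single-arm registry cannot emulate the assignment step.
protocol = TargetTrialProtocol(
    eligibility_criteria="adults with severe tricuspid regurgitation",
    treatment_strategies="valve replacement vs. medical management",
    assignment_procedure="randomized 1:1",
    follow_up_period="24 months",
    outcome="all-cause mortality",
    causal_contrast="intention-to-treat effect",
)
registry_support = {f.name: True for f in fields(protocol)}
registry_support["assignment_procedure"] = False  # no comparison arm in registry
print(emulation_gaps(protocol, registry_support))  # -> ['assignment_procedure']
```

The point of the sketch is only that the gaps become explicit: you know in advance which RCT-like conclusions the registry can and cannot support.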

For my money, the level of thinking in Hernán et al. goes beyond the level of logic brought to most discussions of CED.

Diagnostic Tests

Hernán et al. focus on interventional trials of therapies - you get a drug or a placebo; you get a surgery or you don't. With diagnostic tests, we are more likely to have accuracy data, clinical context, superiority to standard-of-care diagnostics, and decision impact. You don't, for example, take a woman with a very low Oncotype score and give her chemotherapy, or take a woman with a very high Oncotype score and deny her chemotherapy.

I've argued for years that simply evaluating diagnostics under the rubric "analytical validity, clinical validity, and clinical utility" is too vague, and more logic is required. (Otherwise, all the thinking and logic must simply be recreated from scratch for each new test assessment, pegged to AV, CV, CU.) In 2014, Frueh and I (here) wrote a paper on "defining clinical utility" in which we argued that about six or seven questions are enough. (One question is too few - "do you have clinical utility?" - and 30 or 40 questions are too many.) These were:

  1. What is the population?
  2. What is the standard of care test?
  3. What is the new test?
  4. What is the improvement obtained with the new test (i.e., #3 minus #2 = the delta)?
  5. How much COULD that improvement affect clinical outcomes?
  6. How much DOES that improvement affect clinical outcomes?
  7. Some measure of cost-effectiveness or efficiency.

While we published this in 2014, and it's sometimes quoted, my experience remains that if you badly fail some of these questions, a technology assessment (T.A.) probably won't go well, regardless of which method the T.A. uses. If you can give a tight, logical answer in 2-3 sentences to each of the questions, you'll probably do OK.

Similarly, for therapeutic questions with observational data in hand, Hernán et al. provide a logical framework that should give decision-makers a lot of traction.

While I'm not at the level of Hernán et al., both their article and mine on diagnostics aimed to shine a light on ways of making data plans more sound and logical.

###
###
###
Hernán et al., Abstract

When randomized trials are not available to answer a causal question about the comparative effectiveness or safety of interventions, causal inferences are drawn using observational data. 

A helpful 2-step framework for causal inference from observational data is 1) specifying the protocol of the hypothetical randomized pragmatic trial that would answer the causal question of interest (the target trial), and 2) using the observational data to attempt to emulate that trial. The target trial framework can improve the quality of observational analyses by preventing some common biases. 

In this article, we discuss the utility and scope of applications of the framework. We clarify that target trial emulation resolves problems related to incorrect design but not those related to data limitations. [BQ - And it highlights which is which.]  

We also describe some settings in which adopting this approach is advantageous to generate effect estimates that can close the gaps that randomized trials have not filled. In these settings, the target trial framework helps reduce the ambiguity of causal questions.

###
###
AI CORNER
###
###
See a ChatGPT 4o summary of the article.

###
Final point. This article discusses CED going forward as part of NCDs. However, during the first Trump administration, HHS General Counsel took the position that CMS should not be doing CED at all, for legal reasons. Here, here (Charrow 2021).

See a 2025 update on CMS coverage, Tunis et al.