Background
Last year the Biden administration released a roadmap for coverage with evidence development (CED) called TCET - Transitional Coverage for Emerging Technologies. The administration proposed opening 5 NCDs of this type per year, with a scheduled review of each about 2 years later. At steady state, that means roughly 10 NCD actions per year under TCET (5 new NCDs plus 5 revisits). How CMS will pull this off has puzzled me, since CMS historically completes only 2-3 NCDs per year.
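As a sanity check on that arithmetic, here is a back-of-the-envelope sketch (my own, in Python; the only inputs are the 5-per-year and 2-year figures from the roadmap, and the tally is illustrative, not a CMS projection):

```python
# Back-of-the-envelope arithmetic for the TCET workload described above.
# Assumptions (mine, for illustration): 5 new TCET NCDs open each year, and
# each one comes back for a scheduled review 2 years after it opens.

NEW_TCET_NCDS_PER_YEAR = 5
YEARS_TO_SCHEDULED_REVIEW = 2

def tcet_actions_in_year(year: int) -> int:
    """NCD actions CMS must handle in a given program year (year 1 = launch)."""
    new_ncds = NEW_TCET_NCDS_PER_YEAR
    # Reviews come due for the cohort opened YEARS_TO_SCHEDULED_REVIEW years earlier.
    reviews_due = NEW_TCET_NCDS_PER_YEAR if year > YEARS_TO_SCHEDULED_REVIEW else 0
    return new_ncds + reviews_due

for y in range(1, 6):
    print(f"Year {y}: {tcet_actions_in_year(y)} TCET NCD actions")

# Years 1-2: 5 actions each; from year 3 onward, 10 actions per year -
# against the 2-3 NCDs per year CMS has historically produced.
```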
Some initial data has emerged. CMS has recently released 3 CED proposals that refer to TCET. One is for transcatheter cardiac valves, a topic where CMS has issued numerous CED NCDs over the past decade. In short, some of the project types already being handled under NCD-CED will now be tallied in the TCET column.
- See the CED proposals for renal denervation (proposed 1-14-2025), cardiac contractility (proposed 1-10-2025), and transcatheter tricuspid valve (proposed 12-19-2024).
- These all have CED, and all refer to TCET.
- For example, we read, "CMS received a complete, formal request to provide coverage for the EVOQUE tricuspid valve replacement system (EVOQUE system). This is a Transitional Coverage of Emerging Technology (TCET) pilot. The manufacturer of this device tested the processes and concepts of TCET."
- (CMS also proposed, on March 11, an NCD for home ventilation, a DME-like product, with no CED.)
- (CMS also finalized, on February 11, an NCD on pulmonary heart failure centers, which has CED but doesn't mention TCET.)
Structure for CED
See Hernán et al., 2025, "The Target Trial Framework for Causal Inference From Observational Data: Why and When Is It Helpful?"
While it's subscription-based at Annals of Internal Medicine, I was struck by the high-quality thinking in Hernán et al. (here). Most CED studies have been based on registries rather than full RCTs. Hernán et al. describe an important approach to thinking about observational studies. First, lay out, in detail, the randomized controlled trial that would answer the question at hand (this is the "target trial"). Then look closely at whether the observational data (including a de facto control or comparison group) can emulate that trial and address the underlying question. Hernán et al. argue that when this can be done, and done successfully, the observational findings are very likely to be valid. When the observational data fall short of key elements an RCT would have provided, any conclusions drawn from them are likely to be weak or unconfirmable. That's a summary; the full article lays out the logic and works through many examples.
For my money, the thinking in Hernán et al. goes beyond the logic brought to most discussions of CED.
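To make the exercise concrete, here is a rough sketch (my own illustration in Python, not code or wording from Hernán et al.) of how a registry-based CED study might be checked, component by component, against the target trial it is trying to emulate; the components and entries are hypothetical placeholders:

```python
from dataclasses import dataclass

# A minimal sketch of the target-trial exercise: specify the RCT you wish you
# could run, then ask whether the registry data can emulate each component.
# All entries below are hypothetical illustrations, not from Hernán et al.

@dataclass
class ProtocolComponent:
    name: str
    target_trial: str        # how the ideal RCT would handle this component
    registry_emulation: str  # what the observational data can actually support

protocol = [
    ProtocolComponent("Eligibility", "Explicit inclusion/exclusion criteria at enrollment",
                      "Apply the same criteria at a defined 'time zero' in the registry"),
    ProtocolComponent("Treatment strategies", "Device vs. standard care, assigned at baseline",
                      "Classify patients by the therapy actually received at time zero"),
    ProtocolComponent("Assignment", "Randomized",
                      "Not randomized; adjust for measured confounders and state that assumption"),
    ProtocolComponent("Outcome", "Prespecified endpoint with blinded adjudication",
                      "Endpoint as captured in registry follow-up"),
    ProtocolComponent("Analysis plan", "Intention-to-treat and per-protocol analyses",
                      "Analogous analyses, plus sensitivity analyses for missing data"),
]

# The useful output is the gap list: components the registry cannot emulate
# are where conclusions are likely to be weak or unconfirmable.
for c in protocol:
    print(f"{c.name}\n  ideal RCT: {c.target_trial}\n  emulation: {c.registry_emulation}\n")
```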
Diagnostic Tests
Hernán et al. focus on interventional trials of therapies - you get a drug or a placebo; you get a surgery or you don't. With diagnostic tests, we are more likely to have accuracy data, clinical context, superiority to the standard-of-care diagnostic, and decision impact. You don't, for example, take a woman with a very low Oncotype score and give her chemotherapy, or take a woman with a very high Oncotype score and deny her chemotherapy.
I've argued for years that simply evaluating diagnostics under the rubric "analytical validity, clinical validity, and clinical utility" is too vague, and that more structured logic is required. (Otherwise, it is implicitly assumed that all the thinking and logic will just be recreated from scratch for each new test assessment, pegged loosely to AV, CV, CU.) In 2014, Frueh and I (here) wrote a paper on "defining clinical utility" in which we argued that about six or seven questions are enough. (One question is too few - "does the test have clinical utility?" - and 30 or 40 questions are too many.) The questions, rendered as a checklist in the sketch after this list, were:
1. What is the population?
2. What is the standard of care test?
3. What is the new test?
4. What is the improvement obtained with the new test? (#3 minus #2 = the delta.)
5. How much COULD that improvement affect clinical outcomes?
6. How much DOES that improvement affect clinical outcomes?
7. Some measure of cost effectiveness or efficiency.
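For illustration only, here is that question set as a simple checklist a reviewer might fill in for an imaginary test (my own sketch in Python; the test, the answers, and the gaps are hypothetical and are not drawn from the 2014 paper):

```python
# The six-or-seven-question framework as a fill-in checklist.
# Every example answer below is a hypothetical placeholder.

QUESTIONS = [
    "1. What is the population?",
    "2. What is the standard of care test?",
    "3. What is the new test?",
    "4. What is the improvement with the new test (#3 minus #2 = the delta)?",
    "5. How much COULD that improvement affect clinical outcomes?",
    "6. How much DOES that improvement affect clinical outcomes?",
    "7. Some measure of cost effectiveness or efficiency.",
]

# Dossier for an imaginary assay; None marks a question with no answer yet.
answers = {
    QUESTIONS[0]: "Adults with early-stage disease X facing a treatment decision",
    QUESTIONS[1]: "Existing clinical/pathologic risk score",
    QUESTIONS[2]: "Hypothetical molecular assay Z",
    QUESTIONS[3]: "Reclassifies some patients relative to the standard score",
    QUESTIONS[4]: "Reclassified patients would plausibly receive different therapy",
    QUESTIONS[5]: None,   # the gap a CED study is usually designed to fill
    QUESTIONS[6]: None,
}

for q in QUESTIONS:
    status = answers.get(q) or "UNANSWERED - a candidate focus for CED"
    print(f"{q}\n    -> {status}")
```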