Tuesday, September 13, 2022

Very Brief Blog: CMS Announces MEDCAC on "CED" - Pivots Off AHRQ Report on CED

In what I believe is part of the run-up to announcements about "TCET - Transitional Coverage for Emerging Technology," CMS has scheduled an advisory meeting on "CED/Coverage with Evidence Development" for December 7, 2022.

See a UCSF Transpers policy article by Kathryn Phillips on CED, at JAMA, here.


I've clipped the CMS announcement below.


See the MEDCAC home page and membership roster here.  

CMS has a panel of about 100 potential advisors from which, a couple times a year, they convene a meeting of about 10 selected members on a pre-arranged topic.  CMS holds an all-day workshop with presentations on the topic of interest, discussion by its panelists, and voting by the panelists on pre-arranged CMS questions.  (Questions like: "Does Service X have a net health benefit for Medicare patients?")

The December 7, 2022 webpage is here.  Upcoming and past meetings are archived here.  


A few days ago I posted a blog that AHRQ had released a draft evaluation of CED as practiced by CMS, and potential improvements.  See that blog for more information - September 7, 2022, here.  

For an alternate viewpoint on CMS CED, I also recommend the academic paper by Ziegler et al. at Dartmouth, published in early 2022 - here.   On September 7, I put the AHRQ document in context of other initiatives like "TCET."

The AHRQ report is written by somebody, on contract, for AHRQ.  Usually there is a fine-print footnote somewhere stating the source (e.g., University of Oregon, University of Connecticut, etc.).  I haven't seen that for this document.

My Comment:

I have some concerns that the format and analysis are so vague that they do not really address core issues. 

The questions from AHRQ simply focus on "the criteria for CED" (a set of 13 pretty boring bullet points, e.g. "The trial should be well-designed.")   You can analyze bullet points like that all day, and it doesn't really tell you whether the whole CED paradigm is being applied to effective and insightful ends. 

There are no questions like, "Does what we are doing make any sense?"   I mean, there is nothing wrong with a bullet point like "CED studies should not be unnecessarily duplicative" - of course they shouldn't be - but does CMS really have a good way to assess when and whether studies are unnecessarily duplicative, so it can actually implement the bullet point?   You won't figure that out by reading and re-reading the words of the bullet point.

AHRQ announces up front that "CMS is confident that the CED NCD Process is sound," whereas an alternate stance might be, "The CED NCD Process desperately needs a hard-hitting, bottom-up evaluation of its overall value."   

For example, you can have 12 carefully written-down rules for good road building (the pavement is this many inches thick, this smooth; intersections are so many feet wide), and examining the rules on the page would never tell you whether people were driving well or terribly, or going places that were useful. 

To make this more concrete, for one recent example: the CED proposed in 11/2017 for cancer sequencing was absolutely ridiculous and un-doable, but CMS staff must have felt that, no matter how "nuts" it seemed, it carefully passed each of the 13 criteria, such as "[checkmark] the study is registered at Clinicaltrials.gov."  Earlier in 2022, I suggested, for example, that CED could reduce the NPV or ROI of typical medical device products by 2/3 or more, making them unfundable (un-developable) - here

Meeting Structure:

CMS will post questions in advance for December 7 and will expect the panelists to have read the 35-page AHRQ report, which the public can read as well.   CMS hasn't posted questions yet, but they may parallel the questions already used to frame the AHRQ report.  Bonus points if you can tell the difference between the "Guiding Questions" and the "Key Questions" - I'd get as much value if they were simply listed as 4 questions. 

Guiding Questions.  (1) What are the strengths and limitations of the current CED criteria?  (2) What criteria are used by similar decision-making bodies?

Key Questions:  (1) What revisions to CED criteria may best address limitations while preserving strengths?  (2)  How might the revised criteria be evaluated in the future?

Note, to my point, that the first question is not about the strengths and limitations of current CED efforts, which is a really important question, but rather about the strengths and weaknesses of the 13 bullet points.  It's like asking, "What are the strengths and weaknesses of the Senate's rules of order?" (knock the gavel once for X and twice for Y) rather than, "What is the value of the current legislative output?"



On December 7, 2022, the Centers for Medicare & Medicaid Services (CMS) will convene a panel of the Medicare Evidence Development and Coverage Advisory Committee (MEDCAC).  

National Coverage Determinations (NCDs) resulting in Coverage with Evidence Development (CED) can expedite earlier Medicare beneficiary access to innovative technology while developing evidence and ensuring that systematic patient safeguards are in place to reduce the risks inherent to new technologies, or to new applications of older technologies. This MEDCAC meeting will examine the minimum CED criteria for clinical studies submitted for CMS approval.

The AHRQ “Analysis of Requirements for Coverage with Evidence Development (CED)” draft report was released on September 7, 2022 and is open for public comment until September 28, 2022. The report and a link to submit a public comment may be found at https://effectivehealthcare.ahrq.gov/products/coverage-evidence-development/draft-comment. The AHRQ report will be discussed during the December 7, 2022 MEDCAC.