Monday, July 24, 2023

Pair of Papers Shines Light on Mysteries of MRD Test Coverage and Clinical Utility Design


  • We look at the challenges and requirements for obtaining Medicare coverage for minimal residual disease (MRD) testing, particularly in the MolDx program. 
  • MolDx emphasizes the need for MRD tests to meet certain criteria, including identifying molecular recurrence before clinical evidence and demonstrating sensitivity and specificity comparable to standard care methods. 
  • We'll look at two papers in JAMA journals, one on colon cancer MRD testing and the other on an HPV biomarker. Taken together, they provide valuable insights for developers aiming to design effective studies for MRD testing. 
  • The Mo et al. paper gets praise from an expert for its systematic approach, while the Ferrandino et al. study is criticized for its less systematic design and irregular test frequency. This "RWE" (real-world evidence) design loosens the statistical conclusions that can be drawn.



One of the most frequent questions I get is what kind of trials must be undertaken to get "minimal residual disease" or MRD coverage from Medicare, e.g., via the MolDx program.  The field is moving fast, for sure, as witnessed by a new review article on LBX/MRD by Cohen et al. in Nature (here). 

MolDx has a sort of omnibus LCD for MRD (e.g., L38835).  It can stretch to cover MRD in hematopoietic cancers as well as solid cancers, and applications from postsurgical (a single test), to surveillance (e.g., quarterly for two years), to drug response (especially for I-O cancer drugs).   But MolDx reviews and parcels out each test and indication individually.   

MolDx provides several rules, but most companies find there is a gap between the rules and making actual black-and-white trial decisions.  The MolDx rules for MRD are here:

  1. The test is demonstrated to identify molecular recurrence or progression before there is clinical, biological, or radiographical evidence of recurrence or progression 
    1. AND 
    2. demonstrates sensitivity and specificity of subsequent recurrence or progression comparable with or superior to radiographical or other evidence (as per the standard of care for monitoring a given cancer type) of recurrence or progression.
  2. To be reasonable and necessary, it must also be medically acceptable that the test being utilized precludes other surveillance or monitoring tests intended to provide the same or similar information unless they either (a) are required to follow-up or confirm the findings of this test or (b) are medically required for further assessment and management of the patient.
  3. If the test is to be used for monitoring a specific therapeutic response, it must demonstrate the clinical validity of its results in published literature for the explicit management or therapy indication (allowing for the use of different drugs within the same therapeutic class, so long as they are considered ‘equivalent and interchangeable’ for the purpose of MRD testing, as determined by national or society guidelines).
  4. Clinical validity (CV) of any analytes (or expression profiles) measured must be established through a study published in the peer-reviewed literature 
    1. for the intended use of the test and
    2. in the intended population.
Very often, after reading this, the company is asking, "Whaddawe do next?"
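As a concrete illustration of criterion 1b, the required sensitivity and specificity of a molecular-recurrence call against confirmed clinical or radiographic recurrence come down to simple 2x2-table arithmetic. The counts below are invented for illustration; they are not drawn from either paper or from any MolDx submission:

```python
# Hypothetical 2x2 table: molecular recurrence call vs. confirmed
# clinical/radiographic recurrence (counts are illustrative only).
tp, fp, fn, tn = 45, 5, 10, 240

sensitivity = tp / (tp + fn)   # recurrences the test caught
specificity = tn / (tn + fp)   # non-recurrences correctly called negative
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sens={sensitivity:.2f}  spec={specificity:.2f}  "
      f"ppv={ppv:.2f}  npv={npv:.2f}")
```

The point of a systematic, prespecified testing schedule (as in Mo et al.) is that every cell of this table is cleanly defined; with ad hoc real-world testing intervals, the denominators get murky.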

Two almost simultaneous papers in JAMA journals provide a guidepost for developers.   

In the first, Mo et al. (JAMA Oncol) very systematically assess a fixed six-marker methylation test for MRD in colon cancer.  An Op Ed by Ruiz-Banobre praises the systematic data in 299 patients.   

In the second, Ferrandino et al. use real-world data (available data) for observational statistics on an HPV biomarker for MRD, in 399 patients (JAMA Otolaryngol).  Although this study has more patients, the Op Ed by Lango et al. homes right in on the disparate and difficult-to-study intervals of the available tests, both biomarker and imaging.  





Mo et al., JAMA Oncol 9:770
  • 299 patients; fixed panel of six methylation biomarkers for MRD; very systematic test schedules.
  • Op Ed (Ruiz-Banobre, JAMA Oncol 9:763) praised the paper for clear results and a systematic, carefully followed study design.

Ferrandino et al., JAMA Otolaryngol (July 9)
  • 399 patients; high-sensitivity HPV test in plasma; tests and schedules as clinically available.
  • Op Ed (Lango, JAMA Otolaryngol, July 9) clearly remarked that the test intervals were not systematically timed or repeated, and that available evidence was used.  (See quotes at bottom of blog.)

Take-home lesson: if you want a template for a study design that maximizes the precision of your key statistical outcome measurements, use Mo et al., not Ferrandino et al., even though both papers were publishable in JAMA journals and the Ferrandino study had "more patients" (399 vs 299).

In a Nutshell

Let's close by looking back at the MolDx standards, for example, the requirement to show the test performs comparably to or better than standard-of-care imaging.  Let's show this as a hypothetical comparison:
  • If SOC imaging picks up cancer at 9 months plus or minus 1 month, and your test picks it up at 7 months plus or minus 4 months, you haven't shown your test is better.  I mean, c'mon.  
  • If SOC imaging picks up cancer at 9 months plus or minus 1 month, and your test picks it up at 7 months plus or minus 1 month, you have shown yours is better.   
  • QED.  It's not hand-waving, and it's not rocket science.
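The hypothetical above can be put in rough numbers. Assuming (purely for illustration) that detection times are independent and normally distributed with the stated means and standard deviations, a few lines of Python show how much the wider variance erodes the claim of earlier detection:

```python
from math import erf, sqrt

def p_detects_earlier(mu_test, sd_test, mu_soc, sd_soc):
    """P(test detects before SOC imaging), assuming independent
    normal detection-time distributions (illustrative sketch only)."""
    mu_diff = mu_soc - mu_test
    sd_diff = sqrt(sd_test ** 2 + sd_soc ** 2)
    # standard normal CDF evaluated at mu_diff / sd_diff
    return 0.5 * (1 + erf(mu_diff / (sd_diff * sqrt(2))))

# The blog's hypothetical: SOC imaging detects at 9 +/- 1 months.
print(p_detects_earlier(7, 4, 9, 1))  # noisy test (7 +/- 4): ~0.69
print(p_detects_earlier(7, 1, 9, 1))  # tight test (7 +/- 1): ~0.92
```

With a plus-or-minus-4-month spread, the "earlier" test actually beats imaging only about two times in three; tightening the spread to plus or minus 1 month raises that to over 90 percent, which is the gap the bullets above are gesturing at.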

Mo et al., MRD in CRC [systematic evidence collection]
  • Noteworthy for strong results using the non-bespoke (non-tumor-informed) paradigm.
  • Ruiz-Banobre Op Ed.

Ferrandino et al., MRD in H&N cancer [available evidence collection]
  • Lango et al. Op Ed.
  • (Ferrandino coverage in 360Dx here.)

The summary "In a Nutshell" is adapted from an AI summary of the blog.

I tested both Op Eds in ChatGPT 3.5, and it also agreed the CRC editorial was more favorable!  Here:  "Overall, both editorials recognize the potential of liquid biopsy tests, but the second editorial (Ruiz-Banobre) seems more confident in recommending ColonAiQ as a promising tool for managing localized CRC. The first editorial (Lango) is more cautious in its approach, acknowledging the potential of the NavDx test but highlighting the need for further research and validation before widespread adoption."

As a supplement to this blog, find a detailed AI assessment of the two Op Eds, here.

This is the passage where Lango suggests the Ferrandino study was less systematic:

The reasons for these differences are not entirely clear, but a comparison of the 2 studies is instructive.

In the study by Chera et al,1 patients had frequent blood draws during and after treatment—10 per patient over the course of the study. Testing was completed in conjunction with deintensification protocols in which imaging and laboratory surveillance was performed according to a prespecified schedule. All patients with persistently negative tests remained disease free. The authors determined that 2 positive tests were suggestive of recurrence, improving the positive predictive value to 94%.

In contrast, Ferrandino et al2 used data that were collected during the course of routine clinical care and at the discretion of the treating clinician. On average, patients had 1 to 2 TTMV-HPV DNA tests after treatment (591 tests in 290 patients), and it does not appear that testing was obtained ac-

Furthermore, it is not clear if standardized surveillance imaging was used, which might have affected the time at which recurrences were clinically detected.