Friday, April 19, 2019

Systematic Review Pummels Diagnostic RCTs - Because Docs Know If They See Test Results

From time to time everyone sees "Hierarchies of Evidence" that start with case reports at the bottom and rise through RCTs to the highest point of evidence, meta-analyses of RCTs.

Everyone should be aware that RCTs pivoting on a diagnostic test can be an inefficient way to study the impact of diagnostics.  For example, in a drug trial between Arms A and B, everyone in Arm A gets Drug A and everyone in Arm B gets Drug B.  In a diagnostic trial, if you use the diagnostic only in Arm B, maybe 20% of patients get a management change because of an especially low result, and another 20% because of an especially high result.  That leaves 60% of the patients in Arm B treated exactly the same way as those in Arm A, and they should have exactly the same outcomes in both arms.  This dilutes the apparent impact of using the diagnostic.  (See longer viewpoint here.)
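To make the dilution concrete, here is a minimal back-of-envelope sketch in Python.  All of the numbers are hypothetical, mirroring the 20%/20%/60% example above; none of them come from any real trial.

```python
# Hypothetical illustration of effect dilution in a diagnostic RCT.
# Numbers mirror the 20%/20%/60% example above; nothing here is real trial data.

frac_low  = 0.20   # patients whose management changes due to an especially low result
frac_high = 0.20   # patients whose management changes due to an especially high result
frac_changed = frac_low + frac_high          # 40% of the diagnostic arm

effect_among_changed = 5.0                   # assumed absolute benefit (e.g., percentage
                                             # points) among patients whose management changed

# The remaining 60% are treated identically in both arms, so they contribute
# zero between-arm difference.  The arm-level effect is therefore diluted:
observed_arm_effect = frac_changed * effect_among_changed

print(f"Fraction with management changed: {frac_changed:.0%}")    # 40%
print(f"Observed arm-level effect:        {observed_arm_effect}") # 2.0, vs 5.0 per changed patient
```

In other words, the trial must be powered to detect an arm-level effect of 2.0, not 5.0, even though the test works exactly as intended for every patient it actually affects.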

Along with others, I've pointed out for years that an RCT can't get a perfect score with a diagnostic, because it can never be blinded: doctors in the diagnostic arm necessarily know the result of the diagnostic.  (For example, you can't hand out fictional placebo PET scan reports, randomly positive or negative, in one arm and real cancer PET scan reports in the other.  Shiver.)

However, I just ran across a meta-analysis from NIH and Harvard that dings diagnostic test RCTs because the doctors weren't blinded to the fact that a diagnostic was used and that its result became part of their decisions.  Good lord.

The article is Pepper et al., which looks at RCT results across more than a dozen studies of critical care sepsis patients in which procalcitonin was used to monitor infection and guide management.  (Procalcitonin rises and falls with the severity of bacterial infection.)  The meta-analysis concludes that procalcitonin added to standard of care reduces antibiotic dosing by one to two full days, and may also reduce mortality, though not with a strong effect.

The abstract and body of the study repeatedly discuss the scientific problem of bias in the included trials, but only at one point do they clearly describe what that bias is: one of the main charges is that doctors were not blinded and knew the procalcitonin results in the intervention arm.  How on earth else would you design a diagnostic study??  The authors state at five different points that the results were marred by "high risk of bias," but only once that this bias consisted primarily of non-blinded doctors being given PCT test results.  Some formal estimators of bias (e.g., funnel plots for publication bias) were negative.
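For readers unfamiliar with funnel plots: each study's effect is plotted against its precision, and asymmetry in the resulting "funnel" hints that small unfavorable studies went unpublished.  Here is a minimal sketch with made-up effects - not the Pepper et al. data; the study count, effect sizes, and pooled estimate are all assumptions for illustration.

```python
# Illustrative only: a generic funnel plot for publication-bias checking,
# using made-up study effects (not the actual meta-analysis data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_studies = 14                              # "more than a dozen" RCTs
se = rng.uniform(0.05, 0.40, n_studies)     # hypothetical standard errors
effects = rng.normal(-0.15, se)             # hypothetical log risk ratios around an assumed -0.15

plt.scatter(effects, se)
plt.gca().invert_yaxis()                    # convention: most precise studies at the top
plt.axvline(-0.15, linestyle="--")          # assumed pooled estimate
plt.xlabel("Study effect (log risk ratio)")
plt.ylabel("Standard error")
plt.title("Funnel plot: symmetry suggests no publication bias")
plt.show()
```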

___

Extra credit...

Recent papers on predictive and prognostic test evaluations.

Wolff et al. (2019)  PROBAST: A tool to assess the risk of bias and applicability of prediction model studies.  Ann Intern Med 170:51-58.

Moons et al. (2019)  PROBAST: ...Explanation and Elaboration.  Ann Intern Med 170:W1-W33.

And

Riley et al. (2019)  A guide to systematic review and meta-analysis of prognostic factor studies.  BMJ 364:k4597.