Wednesday, October 12, 2016

Should We Break "Decision Impact Studies" Into Two Clear Categories?

For decades, we have been told that diagnostic tests are only important if they change care, which begins by changing a decision.   Decision impact studies (the before and after of a physician's recommendation) are important and have been conducted with increasing rigor.  On the other hand, from time to time you hear a payer say, "We don't give any credit to decision impact studies" or "That's only a decision impact study, not clinical utility."

Since decision impact IS so pivotal to a test's clinical utility, we should continue to use the idea, but maybe we should divide it into two parts:
  • Recommendation impact study.
  • Management impact study.
One study can report both, but payers are probably more influenced by management impact studies.

Let's say there is a new test for Cancer X that helps choose between Therapy A and Therapy B.  (For example, prostate cancer might be treated with either surgery or radiation based on the cancer's genomic profile.)  A decision impact study, which is usually prospective, asks the physician to record the planned management, then receive the test result, and then record whether the planned management (i.e., the recommendation) has changed.  What actually happens to the patient is not recorded.  That is a "decision impact study," but I would call it a decision impact study of the recommendation impact type.
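To make the bookkeeping concrete, here is a minimal sketch in Python of what a recommendation impact analysis tabulates.  The record layout and field names are hypothetical illustrations, not the design of any actual registry; the headline statistic is simply the fraction of cases in which the recommendation changed.

from dataclasses import dataclass

@dataclass
class CaseRecord:
    # Hypothetical fields for illustration only.
    patient_id: str
    pre_test_plan: str   # recommendation recorded before the test result
    post_test_plan: str  # recommendation recorded after the test result

def recommendation_change_rate(records):
    """Fraction of cases where the post-test recommendation differs
    from the pre-test one: the core recommendation impact statistic."""
    changed = sum(1 for r in records if r.post_test_plan != r.pre_test_plan)
    return changed / len(records)

# Toy example: the recommendation changes in two of three cases (67%).
records = [
    CaseRecord("p1", "Therapy A", "Therapy B"),
    CaseRecord("p2", "Therapy A", "Therapy A"),
    CaseRecord("p3", "Therapy B", "Therapy A"),
]
print(f"Recommendation change rate: {recommendation_change_rate(records):.0%}")

Note what is missing: nothing in this structure records what the patient actually received.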

Now let's add management impact.  We will track patients for some agreed-upon period that depends on the context, probably at least one month, maybe three or six months.  We can then ask three questions: (1) what was the actual management, (2) did it concord with the test's indication, and (3) did it match the post-test recommendation (which we recorded)?  This is now a management impact study.
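Extending the same hypothetical sketch, the management impact version adds one follow-up record per patient and answers questions (2) and (3) as simple concordance rates (question (1) is just the recorded field itself).  Again, every name here is an illustrative assumption, not the design of any actual study.

from dataclasses import dataclass

@dataclass
class FollowUp:
    # Hypothetical follow-up fields for illustration only.
    patient_id: str
    actual_management: str  # (1) the management actually delivered
    test_indication: str    # the therapy the test result pointed toward
    post_test_plan: str     # the post-test recommendation we recorded

def management_impact_summary(followups):
    """Answers questions (2) and (3): how often actual management
    concorded with the test indication and with the post-test plan."""
    n = len(followups)
    return {
        "concordant_with_test": sum(
            1 for f in followups if f.actual_management == f.test_indication) / n,
        "matched_post_test_plan": sum(
            1 for f in followups if f.actual_management == f.post_test_plan) / n,
    }

# Toy example: both rates come out to 2 of 3 patients.
followups = [
    FollowUp("p1", "Therapy B", "Therapy B", "Therapy B"),
    FollowUp("p2", "Therapy A", "Therapy A", "Therapy A"),
    FollowUp("p3", "Therapy B", "Therapy A", "Therapy A"),
]
print(management_impact_summary(followups))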

Let's consider this framework in the context of two classic PET scan studies: one recommendation impact, one management impact.

Decision Impact: Recommendation Impact Type (NOPR)

In the Medicare-funded National Oncologic PET Registry (NOPR), patients with a wide range of cancers were given diagnostic PET scans in an open-access setting (i.e., a pragmatic, real-world registry).

Physicians were required to record their reason for ordering the PET scan and their planned recommendation (before the PET scan).  After the PET scan, physicians recorded any change in the management plan.

The first series of studies reported impact on intended management (e.g., Hillner et al., 2008, here).  Later studies added a post hoc look at claims databases to assess "intended management vs [claims] inferred clinical management" (e.g., Hillner et al., 2013, here).

Decision Impact: Management Impact Type (Fischer Lung Cancer PET RCT)

In a study of the impact of PET-CT scans on real-world lung cancer surgical decisions, Fischer et al. (2009, here) randomized patients to then-conventional staging or to PET-CT staging.

Patients found to have high-stage disease were deselected for thoracotomy.

Use of PET-CT halved the rate of thoracotomies that were "futile" (futile because excessive disease spread was found only during surgery).  This was a decision impact study - surgeons made decisions not to operate based on PET-CT results.  It was obviously also a management impact study - by changing management, patients received the clinical utility of avoiding futile surgery.

Contemporary Genomics Example: Management Impact Type

In a study of the impact of the Myriad Prolaris test on prostate cancer management, Crawford et al. (2014, here) first assessed decision impact (the recommendation before and after the test result).  In addition, a third-party auditor went back to each patient's medical records and confirmed the actual management delivered.  This is a management impact study: there was an actual, documented change in patients' medical management, such as fewer prostatectomies.

Terminology and Why It Matters

When terminology isn't clear, mistakes result.  I see this all the time in my career as a medtech strategy consultant: I watch different parties talk past each other (and, hopefully, help them fix it).

In the past year, I recall seeing what I clearly considered a "management impact study" referred to by a major payer as "only a decision impact study."

Well, it was a decision impact study, but the decisions involved major documented management changes, leading to less patient morbidity.  

It would be better if we had a ready-to-go framework that classed some studies as (A) "recommendation impact" (as in, "that's only a recommendation impact study") and others as (B) "management impact."

If these two mental categories were ready and familiar, it would be harder to make the mental glitch of dismissing a study with the cliché "only a decision impact study."  For example, I've seen journal article titles say "decision impact study" when "management impact study" would be more powerful and more descriptive, because large changes in real patient management were shown.