Friday, November 16, 2018

CMS Releases CY2017 SEP-1 Sepsis Management Data; Academics Perform Badly

On October 31, 2018, CMS released hospital performance data on quality measures for CY2017.   This gave the first full year of data for SEP-1, a measure of how well hospitals meet CMS management standards after patients have been diagnosed with sepsis.   (The measure requires several actions within six hours, such as drawing blood for culture, starting fluids, and starting antibiotics.)

I reviewed earlier partial-year SEP-1 data on August 4; the full original article is here.  The median hospital performance is 49%, meaning 49 of every 100 patients meeting sepsis criteria get the CMS protocol of interventions.   CMS adopted the measure after the agency established that protocol performance resulted in a 30% relative reduction in death rate in patients diagnosed with sepsis (79 Fed Reg 50236).
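For readers outside health statistics, here is a quick sketch of what a "relative" reduction means, in Python. The 30% figure is from the Federal Register citation above; the baseline mortality rate is purely hypothetical, chosen only to make the arithmetic visible:

    # Illustration of a relative (not absolute) reduction in death rate.
    # The 30% relative reduction is from 79 Fed Reg 50236; the baseline
    # mortality below is hypothetical, chosen only to show the arithmetic.
    baseline_mortality = 0.30      # hypothetical: 30 deaths per 100 septic patients
    relative_reduction = 0.30      # per the Federal Register citation
    protocol_mortality = baseline_mortality * (1 - relative_reduction)
    print(protocol_mortality)      # 0.21 -> 21 deaths per 100, a 9-point absolute drop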

About 500,000 patients were under the measure, and it's a very complicated chart-reviewed metric that may require up to an hour per patient to classify.*  (That means, at 2,000 hours of work per year, about 250 FTEs were squirreling away at their statistics on this CMS measure.  So much for "patients over paperwork.")
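The FTE figure is back-of-envelope arithmetic; a minimal sketch, assuming roughly one hour of chart abstraction per patient (see the footnote):

    # Back-of-envelope FTE estimate for SEP-1 chart abstraction.
    patients_measured = 500_000   # approximate number of patients under the measure
    hours_per_chart = 1           # assumed upper bound: one hour of abstraction per patient
    hours_per_fte_year = 2_000    # one full-time employee's annual hours
    ftes = patients_measured * hours_per_chart / hours_per_fte_year
    print(ftes)                   # 250.0 full-time staff, nationwide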

Rather than repeat all the findings in the August blog, let me highlight a few here.  For one, academic hospitals (per the US News & World Report ranking) did terribly.  Exactly half of them scored below 40% (remember that a perfect score is 100%), and three actually scored below 25%, the lowest being Vanderbilt at 15%.   (That's very close to the performance of the lowest-ranked state or territory, Puerto Rico, at 11%.)  The highest academic-center score was 78%, at Baylor, followed by 72%, at NYU Langone.

Update:
For academic articles on recent SEP-1 data, see


  • Barbash et al., 2018, national performance on SEP-1.  Here.
  • Truong et al., 2019, should we drop one-size-fits-all SEP-1?  Here.
  • Rhee et al., 2018, does SEP-1 really correlate with outcomes?  Here.  Similarly, Sanghvi et al., 2018, here.


How Did Henry Ford Hospital Do?

Remarkably, Henry Ford Hospital in Detroit, which is the measure steward of record for the National Quality Forum for this metric, performed below the national median, at 48%, meaning less than half of its own patients got care that passed the CMS quality metric.   For two press releases by Henry Ford touting its role in creating the metric, now required at all hospitals seeing Medicare's 40M patients, here (2013) and here (2018).  For a Forbes article on Henry Ford and sepsis measures, here.  As far as I can tell, there is no Henry Ford press release touting its actual, below-median performance.

NIH faculty criticized SEP-1 publicly earlier this year, prompting a rebuttal from Henry Ford authors defending use of SEP-1 in patients, although Henry Ford is at best below-median in actually using SEP-1 in patients.  Additionally, when CMS rolled out SEP-1, it asked the public to "recommend any changes in this measure" to "the measure steward" (HFF) rather than to CMS (79 Fed Reg 50240).

Unlike some Michigan hospitals, such as Covenant in Saginaw, which reported data on all of its patients (with a higher score than Henry Ford), Henry Ford Hospital reported to CMS on only a sampling of patients who met sepsis criteria, so its exact performance in Medicare patients can't be discerned.

The SEP-1 measure is bafflingly complex, with hundreds of pages of CMS guidance documents and - wait for it - a 254-question "Q&A" document.  See examples here.

How Did Mayo Clinic Do?

Mayo Clinic was also well below the national median.

How Did Vanderbilt Do?

I mention Vanderbilt because it has one of the highest-profile Centers for Precision Medicine in the U.S.   With a score of 15, Vanderbilt ranked at the very bottom of US News' Top-20 medical schools for CMS-measured management quality after patients are diagnosed with sepsis, and also near the bottom of all US hospitals reporting to Medicare.   This placed Vanderbilt at about #2900 of about 3000 reporting U.S. hospitals.

How did Emory do?

Emory University Hospital reported a low score of 25.  Emory has an internationally known critical care center and its director edits the journal Critical Care Medicine.  This ranked Emory at about #2740 of about 3000 reporting U.S. hospitals.

Recent Literature

Use of the measure has been criticized in some circles (e.g., here for a summary and here for a recent article in Annals of Internal Medicine by Pepper et al.)   For a current article in JAMA by Klompas et al. at Harvard, here.  Rhee at Harvard has published some of the first articles on SEP-1.  Rhee et al. report that, in early data, meeting or not meeting the SEP-1 measure was not associated with any impact on outcomes after correction; here (trade press here).  She discusses other limitations of SEP-1 for measuring quality, here.  (Rhee has also written about biomarkers to help manage antibiotics in sepsis; here.  SEP-1 lacks any biomarkers except lactic acid, a marker of organ failure.)

Wide variation in SEP-1 performance was reported as early as January 2018 by Venkatesh, here, on which an op-ed commented, "SEP-1: A measure in need of resuscitation?" (here).

The background of creating SEP-1 was reviewed by Faust & Weingart (here).  For a critique in MedPage Today, here.  For a strong article on the challenges and dilemmas of implementing sepsis metrics across populations and subpopulations, here.

Does CMS Get What It Pays For?

CMS pays for diagnosing sepsis, as a sepsis diagnosis can upgrade DRG payments markedly, driven by electronic medical records keyed to billing systems.  Other than posting SEP-1 performance, as discussed in this blog, CMS does not penalize hospitals if they don't do things like draw blood cultures or start antibiotics.

The Rankings

Of the top 20 medical schools' medical centers in the US, only 4 even ranked in the top quarter of the roughly 3000 community and other hospitals reporting to CMS.


For additional links and sources for the actual CMS data, the entry point is here.

Making It Personal

I was cheered that the hospital closest to me, Olympia Medical Center in Los Angeles, next to the preschool my kids attended, tied for the top 6 nationally, at 99%.

Bringing back memories, I logged time at four of the top twenty.  Three of the places I trained or worked - Stanford, NYU, and Northwestern - were above the median; one (UCLA) was not.

___

*  CMS estimated the work burden of 3 chart-based measures at 1.6M hours for US hospitals, or about 500,000 hours per chart measure.  For two chart measures, CMS estimated the work burden at 741,074 hours, or about 370,000 hours per measure.   For one other chart measure, 858,000 hours of work.  Exact per-measure numbers are unavailable, but given SEP-1's extremely complex rules, a 500,000-hour estimate of its work burden is conservative.  See CMS, FY2019 final rule, 83 Fed Reg pages 41689-90, August 17, 2018.
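The division behind those per-measure figures, as a quick sketch (only the Federal Register totals are from the source; the averages are simple division):

    # Per-measure averages implied by CMS's own burden estimates
    # (83 Fed Reg 41689-90); only the totals come from the source.
    print(1_600_000 / 3)  # ~533,333 hours/measure (rounded in the text to ~500,000)
    print(741_074 / 2)    # ~370,537 hours/measure
    print(858_000 / 1)    # 858,000 hours for the single remaining chart measure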

___

It could be that patients at academic centers are on research protocols different from SEP-1; more recently, CMS made an allowance for this.  However, it seems unlikely that research protocols would randomize away from the basic aspects of urgent clinical care; and if they did, that would imply clinical equipoise in the community for the alternative treatment being used instead, in which case hospitals should not be penalized for that alternative.