Thursday, April 25, 2019

Very Brief Blog: Nature Reviews Cancer Publishes Go-To Review on "Cancer Over-Screening"

Screening for cancer is crucial to intercept cancers early and allow early, successful treatment.  Some of the most successful screening methods are well-established (mammography, Pap smear, FIT or colonoscopy), but new methods are constantly coming online (Exact Sciences Cologuard, Epigenomics Epi proColon, low-dose CT for high-risk lung cancer (LDCT), enhanced digital or MRI mammography, 4KScore, Prolaris, Decipher and others in prostate cancer, etc.).

Even with the oldest methods (mammography), the best rules and practices remain contentious (including start age, stop age, role of family history, role of genetic risk burden, etc.).  Today in 2019, Medicare defines high-risk colon cancer screening solely by "family history" somewhere amongst your relatives, entirely ignoring whether you actually inherited the at-risk gene (here).

Srivastava et al., 2019

Although it's not open access, Nature Reviews Cancer has published a comprehensive 10-page review that will probably be the go-to reference for the next couple of years.   Find Srivastava et al., 2019, here.  The authors are from NIH, MD Anderson, the University of California, and Johns Hopkins.

I've clipped the abstract below the break.   The authors note that overdiagnosis rates vary widely: essentially not an issue in cervical or colon cancer, about 25% in breast cancer, and about 60% in prostate cancer.   In the conclusion, they view better molecular determinants as a path forward for better public health in this area.

Wednesday, April 24, 2019

Very Brief Blog: CMS Launches Proposed Inpatient Rule (FY2020); Tweaks NTAP Payments

On April 23, 2019, CMS released its annual spring proposed rules for Inpatient Hospitals, including new technology add-on payments.

  • CMS press release here.
  • Kaiser Health News here.
  • Boston STAT (subscription) here.
  • The home page for the rule is here.  
  • The early-access typescript version is here (1,824 pp.).

For the NTAP program, devices must currently meet several criteria: a device must be not just newly approved but, more broadly, a "new" form of technology; it must offer substantial clinical improvement for Medicare patients; and it must carry substantial cost relative to its DRG.  Currently, "new" means being literally new plus - so to speak - failing the test of being substantially similar to any existing device.

CMS proposes to deem a device "new" if it comes out of an FDA expedited approval pathway (II.H.8).  CMS also proposes that an expedited-pathway device be exempted from the need to demonstrate "substantial" clinical improvement (p. 21 and H.8, 730ff).

Separately, CMS is also proposing changes to clarify what counts as "substantial" improvement.  (Current phrasing uses examples such as reduced mortality, decreased hospitalization or physician visits, or reduced recovery time.)  See p. 724ff, and see notes at the bottom of this blog.

NTAP payments generally last 3 years from device introduction, then stop.

In addition, CMS proposed to boost the add-on payment from 50% to 65% of the additional cost.   This is relatively easy for CMS to do, since NTAP is budget neutral: the (tiny) pool of NTAP payments is debited against the hundreds of billions of dollars CMS spends on inpatient care.
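As a rough sketch of the payment arithmetic (the function and dollar figures below are illustrative, not from the rule; the NTAP add-on is generally the lesser of the add-on percentage applied to the technology's cost or to the case's costs above the DRG payment):

```python
def ntap_add_on(case_cost, drg_payment, tech_cost, pct=0.65):
    # Add-on is the lesser of pct * technology cost and
    # pct * (case costs in excess of the DRG payment)
    excess = max(case_cost - drg_payment, 0)
    return min(pct * tech_cost, pct * excess)

# Hypothetical case: $30,000 case cost, $20,000 DRG payment, $12,000 device
proposed = ntap_add_on(30_000, 20_000, 12_000)             # 65% proposal
current = ntap_add_on(30_000, 20_000, 12_000, pct=0.50)    # today's 50%
```

On these hypothetical numbers, the proposed 65% rate pays $6,500 versus $5,000 at today's 50%.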

Diagnostics - T2 Molecular Microbiology Test

For those interested in diagnostics, a discussion of add-on payment for the T2 Biosystems rapid molecular bacterial panel is at pp. 675-696.  For example, CMS discusses whether it is "new" or "substantially similar" to prior microbiology systems.   "We note that the T2 test panel uses DNA to identify bacterial species...standard of care blood cultures a DNA test is also required...we invite public comments whether T2Bacteria Test Panel is "substantially similar..."

For those who track FDA vs LDT issues, I don't think an LDT has ever even been proposed for an NTAP.

Footnote: Substantial Clinical Improvement

Substantial clinical improvement (SCI) has been open to CMS interpretation, with some landmarks being "reduces mortality, decreases hospitalization or physician visits, or reduces recovery time."

At H.6 (p. 714ff) and H.7 (p. 724ff), CMS proposes some additional options.   H.6 is an open-ended request for information.  H.7 requests feedback on specific written proposals from CMS.   These are:

  • SCI could include evidence of broad adoption.  If so, how would adoption be defined?
  • Positive clinical outcomes against existing technologies.  This would provide firmer agreement on what the comparison outcome is.
  • Evidence can include real-world evidence and does not necessarily have to be published in a peer reviewed journal before review.
  • Improvement may be defined more specifically to subsets of beneficiaries with certain preconditions, co-morbidities, etc.  However, since ICD-10 categories are crude, this could be hard for CMS to define (there's no code for ALK-positive DRG patients).  
  • SCI is possible without regard to FDA approval criteria; a device might be 510(k) for FDA but be different enough to have SCI.

Saturday, April 20, 2019

CMS NCD on NGS in Cancer Doesn't Fit FDA Approvals and Blocks Healthy Platform Migration

In March 2018, CMS finalized a National Coverage Determination on Next Generation Sequencing when used in cancer patients.  Although the effective part of the NCD is just a few sentences out of the 80-page total text, those several sentences have powerful implications for the development of cancer care and precision medicine as a whole. 

This essay highlights two key issues. 

On-Label Uses of NGS Testing

First, there are already examples where the NCD for NGS blocks use of on-label FDA-approved testing for drug management. 

These include blocking use of Her-2 genomics for on-label adjuvant therapy in breast cancer with on-label uses of Herceptin and related drugs.   Another problem area is on-label, on-guideline use of leukemia/lymphoma chemotherapy (or decisions for bone marrow transplant), when NGS platforms are used for minimal residual disease detection. 

These blockades - which will multiply with time on a rolling basis - are not seen in the proposed NCD for CAR-T therapy, which allows coverage for FDA-approved uses and on a future basis for NCCN-endorsed uses of CAR-T as they appear.

Validated Platform Transition to NGS Platforms

The second key issue is blocking the transition of tests that are approved on one platform, from use with equivalent or better results on a new platform (NGS).

For example, Agilent and others now have high-efficiency, accurate RNASeq platforms that many qRNA MAAA tests could transition to (see example here and here).  However, if the RNASeq is viewed as a form of NGS, then coverage stops as soon as the transition or bridging has been fully validated. 

Blocking these transitions is a really bad idea.  It's as if NIH said, you can have this grant, but you have to use rotary-dial telephones, not push-button telephones.

Sanger (Left), NGS (Right)


Snapshot: The NCD Rules

In the 1940s, the prolific author Isaac Asimov introduced the "Three Laws of Robotics," which were very simple but led to many ramifications explored in subsequent short stories and novels.   In brief: First Law, a robot may not harm a human being.  Second Law, a robot must obey orders, except when they conflict with the First Law.  Third Law, a robot must protect its own existence, except when that conflicts with the First or Second Laws.

The NCD can also be summarized quickly along the same lines as Asimov's science fiction.   First Law, an NGS test shall only be used in patients with recurrent or metastatic cancer.  Second Law, an NGS test shall only be used once per patient.  Third Law, an NGS test is covered if it is FDA-approved as a CDx and used for an on-label patient indication, but only if that doesn't conflict with the First or Second Laws.   Fourth Law, an NGS test may be covered by LCDs, but only if that doesn't conflict with the First or Second Laws.

The First Law and On-Label Uses

Here, I've extracted the coverage rules in the NCD so that the First Law is that the NGS tests shall only be used in recurrent or metastatic cancer (e.g. stage III, IV cancer).   There are already examples where FDA approvals overrun this rule.

The most prominent is for Herceptin-class biologicals used as adjuvant therapy in breast cancer.   The FDA has approved the Foundation Medicine test for Her-2-neu genomics, based on an FDA-validated bridging study to an FDA-approved FISH test.  See here.    FMI presented bridging accuracy studies on 317 blocks (125 positive, 192 negative); the various analyses (TP, FP, etc.) showed 80%-96% agreement.   Herceptin is approved as a major adjuvant therapy for early-stage breast cancer that has, or has not, spread to lymph nodes (here).   The 2017 labeling is here.   Hint: that labeling is nowhere cited by the NCD.
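For readers who want the arithmetic behind such bridging statistics, here is a minimal sketch of positive, negative, and overall percent agreement against the reference FISH test.  The 2x2 cell counts below are invented (only the 125/192 marginals are given above), chosen to land in the reported 80%-96% range:

```python
def agreement(tp, fp, fn, tn):
    # Percent agreement of the new test against a reference test
    ppa = tp / (tp + fn)                   # positive percent agreement
    npa = tn / (tn + fp)                   # negative percent agreement
    opa = (tp + tn) / (tp + fp + fn + tn)  # overall percent agreement
    return ppa, npa, opa

# Hypothetical counts consistent with 125 reference-positive and
# 192 reference-negative blocks (317 total)
ppa, npa, opa = agreement(tp=110, fp=10, fn=15, tn=182)
```

With these invented counts, PPA is 88% and NPA about 95%, both within the 80%-96% band reported.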

The adjuvant studies were specifically based on gene amplification, to which the FMI labeling and approval are bridged by the FDA.  Tumors were equal to (or larger than) T1c, e.g. T1 tumors in the 10-20mm range.  They weren't stage 3 or 4.  These fall outside the tumors allowed by the NCD's First Law requiring advanced-stage disease. 

FDA labeling for the drug specifically refers to use of "FDA-approved tests" for Her-2 overexpression, thus clearly including the FMI test by reference and providing labeled uses in Stage 1-2 cancer patients.

Adding a targeted biological may improve adjuvant therapy for early-stage resected cancer, but not always (e.g. not for cetuximab and colorectal cancer, here).  However, adding targeted drugs can have a substantial value for clinical outcomes in early-stage disease, including on-label uses, as shown in the case of breast cancer.  And MSI-family mutations (e.g. MLH-1, MSH-2 status that are accurately diagnosed by NGS including report on the FMI test) can be used in stage 1-2 adjuvant therapy decisions (see UpToDate here; Tougeron et al. here.)

The Second Law and Multiple Uses

One of the most important concepts in hematopoietic cancers is the detection of minimal residual disease (MRD).   This was an early use of flow cytometry, but it has migrated to molecular methods (e.g. serial molecular BCR-ABL tests) and now to FDA-authorized NGS methods (the ClonoSEQ test, here).   FDA has increasingly moved toward using MRD as a fundamental outcome (here and here), but it is already universally used as a decision point for leukemia management.

Within the Medicare MACs, the MolDx program covers ClonoSEQ for Medicare patients as a single one-time test cycle with up to four MRD assays (A56322, here).

As more tests in more geographies migrate to NGS platforms for MRD, it is imperative the NCD be updated to allow more than one test per lifetime per patient in these leukemia and lymphoma patients.   This would bring the NCD into consistency with new and constantly evolving FDA labeling for cancer care.

Platform Migration

It's easier to see what is blocked by the NCD than what is prevented from ever being developed.

With advances by Agilent and others, it's now directly possible to migrate qRNA tests (most MAAA tests) onto RNASeq platforms (here and here).   Almost the same day that the NCD was released controlling NGS test use in the US, such tests were being released with approval of European authorities (here).   But the First and Second Laws of the NCD block use of the tests solely because they have migrated onto NGS platforms.   This may prevent tests from being migrated to more efficient, faster, or more accurate technology platforms.  (Hence my earlier example comparing rotary-dial and push-button phones.)

Tests should be valued for their impact on care - when they are reasonable and necessary - not what platform they are run on.

A Note on Platforms

The NCD seems to treat NGS platforms as "a device" rather than as a tool or modality (like light, pumps, or wheels).     

While there is one FDA category for FMI-like tests - Class II NGS tumor-profiling platforms that give physicians reports (via 510(k)) that are not drug-specific - most FDA sequencing test categories are not method-specific.    For example, FDA device category 21 CFR 866.6080 is NGS-specific (and is a 510(k) category, not a CDx or PMA category); more importantly, many FDA sequencing test categories, such as 866.5940, 866.5900, 866.6100, and 866.3365, are not sequencing-technology-specific.

NGS is no more an FDA device category than are "pumps" or "things that use light."  The former includes heart pumps, bedside saline pumps, pain pumps, and so on.   A bedside drug pump has no inherent medical necessity separate from what it is pumping, and an NGS platform has no inherent medical necessity (or approval path) separate from what it is sequencing (MRSA bugs, BRCA genes, KRAS, etc.)   Nor are "things that use light" an FDA category - see ophthalmoscopes, colonoscopy devices, laparoscopic devices, AI retinal imaging devices, psoriasis UV therapy lights, and so on.   There is no reason for an NCD on the FMI test in oncology to act so broadly as to even block the use of microbiology NGS tests in septic cancer patients, although that's how the NCD was written.


FDA doesn't attempt to cover all uses of "pumps" or "light" in one guidance document, nor is there a single payer or CMS NCD policy written to handle "pumps" or "light."  FDA doesn't remotely attempt to cover all uses of NGS testing in healthcare in any one guidance document, and CMS shouldn't try to do so in one NCD either.   CMS ends up with something like the "Four Laws of NGS Testing" I've described here, and quickly runs into contradictions or insufficiencies.  It's like trying to write "The Three Laws of Chess" - it won't ever work.

Friday, April 19, 2019

Systematic Review Pummels Diagnostics RCTs - Because Docs Know If They See Test Results

From time to time everyone sees "Hierarchies of Evidence" that start with case reports at the bottom and rise through RCTs to the highest point of evidence, meta-analyses of RCTs.

Everyone should be aware that RCTs that are pivoted on a diagnostic test can be an inefficient way to study the impact of diagnostics.  For example, in a drug trial between arms A and B, everyone in Arm A gets Drug A and everyone in Arm B gets Drug B.  In a diagnostic trial, if you use the diagnostic only in Arm B, maybe 20% get a change because of an especially low result, and 20% get a change because of an especially high result.  But in this example, 60% of the patients in arms A and B of the diagnostic trial are treated exactly the same way and should have exactly the same result in both arms.  This dilutes the apparent impact of using the diagnostic.  (See longer viewpoint here.)
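The dilution above reduces to one line of arithmetic; a sketch (the function is mine, using the example's 20%/20% split):

```python
def observed_arm_difference(frac_low, frac_high, per_patient_benefit):
    # Only patients whose management actually changes can contribute to
    # the between-arm difference; the rest are treated identically.
    frac_changed = frac_low + frac_high
    return frac_changed * per_patient_benefit

# 20% change on low results + 20% on high results = 40% affected;
# a 10-point per-patient benefit appears as only a 4-point arm difference
effect = observed_arm_difference(0.20, 0.20, 10.0)
```

So even a genuinely useful test shows only 40% of its per-patient benefit at the arm level in this example.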

Along with others, I've pointed out for years that an RCT can't get a perfect score with a diagnostic, because it can never be blinded as to whether doctors in the diagnostic arm know the result of the diagnostic.   (For example, you can't give fictional placebo randomly positive or negative cancer PET scan reports in one arm, and real cancer PET scans in the other arm.  Shiver.)

However, I just ran across a meta-analysis from NIH and Harvard that dings diagnostic-test RCTs because the doctors weren't blinded to the fact that a diagnostic was used and its results became part of their decisions.   Good lord.

The article is Pepper et al. and looks at RCT results across more than a dozen studies in critical care sepsis patients where procalcitonin is used to monitor infection and assist management.  (Procalcitonin rises and falls with the severity of bacterial infection.)    The meta-analysis concludes that procalcitonin added to standard of care reduces antibiotic dosing days by one to two full days, and may also reduce mortality, but not with a strong effect. 

The abstract and body of the study repeatedly discuss the scientific problem of bias in the studies.  However, at only one point do they clearly describe what that bias is, and one of the main charges is that doctors were not blinded and knew the procalcitonin results in the intervention arm.  How on earth else would you design a diagnostic study?   The authors state at five different points that the results were marred by "high risk of bias," but only at one point that this primarily meant non-blinded doctors being given PCT test results.  Some formal estimators of bias (e.g. funnel plots for publication bias) were negative.


Extra credit...

Recent papers on predictive and prognostic test evaluations.

Wolff et al. (2019)  PROBAST: A tool to assess the risk of bias and applicability of Prediction Model studies.  Annals Intern Med 170:51-8.

Moons et al. (2019)  PROBAST: ...Explanation and Elaboration.  Annals Intern Med 170:W1-W33.


Riley et al. (2019)  A guide to systematic review and meta-analysis of prognostic factor studies.  BMJ 364:k4597.

Flurry of News: AI in Medicine and Creative Destruction

German economist Joseph Schumpeter coined the term Creative Destruction (schöpferische Zerstörung) to describe the growth and collapse of businesses and industries under capitalism.  (Today "creative destruction" is sometimes used enthusiastically to describe dynamic change and progress.  In his day, Schumpeter saw this less as upward growth and more as an eventual downward spiral, a "fall-of-the-west" or "implosion-of-capitalism" viewpoint.) 

Today with AI (and other new technologies in healthcare) we often construe "creative destruction" as an interesting idea, investment, bubble, hype, and then disappointment a la the Gartner Hype Cycle.   Is IBM Watson becoming an example?

First Example:  Derek Lowe's Assessment of IBM Watson in Drug Discovery

See an open-access article by Derek Lowe at his Science blog, In the Pipeline.  On April 18, he reported that STAT says IBM has canned its much-vaunted "Watson for Drug Discovery."   But here, Watson for Drug Discovery can probably stand in, almost with canned text, for many efforts to turn big data into useful health outcomes.   IBM wrote...
Watson for Drug Discovery reveals connections and relationships among genes, drugs, diseases and other entities by analyzing multiple sets of life sciences knowledge. Researchers can generate new hypotheses using the resulting dynamic visualizations and evidence-backed predictions. . .Pharmaceutical companies, biotech and academic institutions use Watson for Drug Discovery to assist with new drug target identification and drug repurposing. Connect your in-house data with public data for a rich set of life sciences knowledge. Shorten the drug discovery process and increase the likelihood of your scientific breakthroughs.
Clearly, this could be easily edited into "Harvard for Drug Discovery" or "Mayo for Health Outcomes" or "Optum for Public Health" -- to give some wholly hypothetical names to applications for AI where the existence of real projects wouldn't surprise you.

Lowe's blog has a lot of detail about IBM Watson for Drug Discovery, and if you don't subscribe to STAT, he also links to a recent open access article at IEEE Spectrum "How IBM Watson Overpromised and Underdelivered on AI Health Care," here.   (IEEE first chimed this note in 2015, here.)

"Overpromised and Underdelivered" is a truly impressive deep dive piece by IEEE senior editor Eliza Strickland, who also manages the May 2018 IEEE series, "Hacking the Human OS," which is a wealth of interesting open access articles on similar themes.   Find it here and track their running medtech blog here.

For additional IBM Watson bad news, see Forbes on a fiasco with MD Anderson, 2017, here.  See a 2017 negative STAT article on IBM Watson, here.  See a 2018 article on layoffs, here, and a 2018 article on scaling back the IBM Watson hospital services business here.  Over 5 years, IBM stock has slipped from $190 down to $140.  Microsoft is up from $45 to $120, and Apple up from around $100 to around $200.   During that time, the Dow Jones is up from 16,000 to 26,000.  (GE has slipped from $25 to $10.  GE spun out its biopharma business to Danaher (here) but is working with the ACR on AI in the next story.)

Second Example:  AI in Imaging

Creative destruction is a dynamic mix of upswings and downswings.   Here's an upswing: at the same time, we read that AI medical imaging startup AIDOC has just raised a new $27M round, here.

And institutions are buying in, including prestigious bodies like NIH and American College of Radiology.   See an article in MedTechDive here, that links to a press release from Radiological Society of North America, that leads to a new NIH/ACR position paper on the expected rapid and important growth of AI in imaging.  That last document is here, Langlotz et al., and it's $30.  (As mentioned earlier, GE is tying in with ACR re AI R&D, here.)

See also the recent trade press here, about a new 20 page FDA white paper and guidance document on AI in devices, here, which was released with a press release by Scott Gottlieb, here.

FDA has begun approving AI-driven devices, such as for retinopathy diagnostics (here).  Those approvals have been for locked-software devices; the new guidance moves FDA toward self-updating AI systems. 

Digital Pathology, Too

CAP Today ran a long cover story on digital pathology in February 2019 - by Karen Titus, here.   (Admittedly, focused more on digital storage and interpretation than machine learning or AI.)  But see a very interesting article that we don't need "digital pathology," but rather "intelligent digital pathology," by Acs and Rimm in JAMA Oncology in March 2018 (here).   See some additional autumn 2018 headlines in digital pathology at blog, here.

See a 2018 review article on AI and digital pathology by Tizhoosh and Pantanowitz, open access, here or here.  See a 2018 article in IEEE Spectrum, "The First Frontier for Medical AI is the Pathology Lab," here.  Similarly, see a trade press article in Healthcare-In-Europe, October 2018, here.

In another area of medical technology, Edwards Lifesciences inked a deal in December 2018 with Bay Labs for AI-assisted product development in cardiac devices (here).

If It's in NEJM, It's Probably True

For another institutional endorsement, see the brand-new review on machine learning in medicine by Rajkomar et al. in the New England Journal, April 4, 2019, here.

For recent webinars from Rock Health about "How to Exit [financially] in Digital Health" and from Accenture/Medtronic on "How AI Can Change the Future of Healthcare" - see here and here, respectively.

If the UK National Health Service Is Doing It, We Hope the Economics Are Sound

For the "Topol Report," a 50-page roadmap to digital health plans for the NHS over the next decade, see here.

Wednesday, April 17, 2019

Update: CMS Posts First Code List for June CLFS Crosswalk Meeting

In late March 2019, CMS announced its annual summer new lab test pricing meeting for June 24.   See the full discussion at my post a few days ago, here.

On April 15, CMS posted the list of codes to be discussed.   Comments and participant registration are due by June 10 (see prior link).

3+3+41=47 Codes To Be Discussed

CMS posted three codes under reconsideration - BRCA1-2 code 81163, BRCA1 sequence code 81165, and 0046U, a tyrosine kinase gene test.

CMS posted only three new regular pathology codes: 813X1 (PALB2), 813X2 (PALB2 familial variant), and 8XX01 (PIK3CA, targeted analysis).   There seem to be no new non-molecular lab codes, if this list is complete.

Finally, CMS posted a bonanza of 41 PLA codes, including those approved in Q1 2019.

Will CMS Add May 2019 PLA Codes?  Like Last Year?  Past as Prologue?

Last year, the late June CMS CLFS meeting included those PLA codes approved in early May 2018 by AMA. 

We expect that AMA will be approving about 30 PLA codes this year in early May, so it is quite possible, but not certain, that in late May CMS will add them to the June 24, 2019 agenda.   CMS needs to receive the May 2019 codes from AMA in time to post 30 days in advance of June 24, 2019.

Download the Spreadsheet

Go to the CMS CLFS Public Meeting Page and find the CY2020 code list near the bottom.

2019 Gapfill Process

CMS has about 18 codes in the CY2019 gapfill process, with prices being set this spring by the MACs.  CMS should post proposed gapfill prices in April, May, or June.  See here (bottom section of that blog) for the gapfill roster.

Friday, April 12, 2019

Contractor Snapshot: MolDx Rules for Open LCD Meetings

On March 28, MolDx released a host of new proposed LCDs.  The public has a chance to comment during a public meeting to be held May 6 in Columbia, SC.   The deadline for meeting submissions is today, April 12.

The public meeting website is here.  Since this event webpage is probably temporary, I've put a cloud copy of the meeting rules here.

Some tidbits:

  • Comments are due 2 weeks after the LCDs were released, and about 3 weeks ahead of the meeting.
  • Advance registration is required just to attend, though this may become optional if seats are still available on May 6.
  • The meeting will be recorded and posted on the MolDx website.  This is a new CMS rule regarding transparency.
  • They have a specific remark, if a stakeholder has negative comments on an LCD, please be specific and cite literature.
  • BYOL - bring your own laptop if you want to project PowerPoint.
  • There are 12 LCDs and 120 minutes, suggesting 10 minutes per LCD. 
MolDx LCDs are also released in many other jurisdictions, WPS, Noridian, etc, which have their own public meetings on their own cycles. 

Some of the rules, such as posting of video or transcript, follow new LCD rules released a few months ago by CMS - find them here.

Thursday, April 11, 2019

AMA Posts Over 30 Proposed PLA Codes (May CPT Vote)

On April 10, 2019, AMA posted over 30 proposed new PLA codes for public comment.  Public comment is brief; apply to AMA right away to get a copy of an application of interest to you and comment to the committee by April 18.   The AMA PLA committee will deliberate until April 24, and the codes will be voted on by the AMA CPT Editorial Panel in Chicago on May 10.   The final codes will be posted July 1 and active October 1.

  • PLA PDF agenda here.
  • PLA home page here.
  • PLA calendar page here.
  • The web page for the May CPT meeting is here, including the agenda of all regular (non PLA) codes proposed.

What are PLA Codes?

PLA codes are special rapidly issued AMA CPT codes for lab tests that are offered in the USA and are either proprietary, FDA-approved, or both.   

Will these codes enter the June 2019 CMS pricing meeting?

If last year is precedent, PLA codes from this cycle will be included in the June 2019 CMS crosswalk/gapfill meeting for pricing new lab codes.  (Here).  

How Many Recent PLA Codes?

From August 2018 to March 2019, AMA created approximately 52 PLA codes, and if all 35 new codes are created in May, as many as 87 PLA codes will enter the June crosswalk/gapfill meeting.

Monday, April 8, 2019

PAC CARB Votes: HHS Should Tell CMS to Finalize Antibiotic Stewardship Rules for Hospitals

We live in a world where antibiotic stewardship and antibiotic resistance are major public health topics.  Just this weekend the New York Times ran two stories on the antibiotic resistance crisis (here, here).

One major federal effort was announced by CMS in 2016 with some fanfare: a proposed rule requiring all hospitals serving Medicare to have official Antibiotic Stewardship Programs in operation.

However, the rule has never been finalized, and it will expire if not finalized by June 2019.   Stakeholders who think this requirement would be a good idea have a major forum:  PAC CARB, the President's Advisory Commission on Combating Antibiotic Resistant Bacteria.

Today, April 8, 2019, the PAC CARB held a special session and its commissioners voted that CMS should move forward with the ASP requirement.   All the commissioners voted "yea."   The letter (as currently drafted) is online here.

Several government bodies, like FDA, NIH, CDC, BARDA, and CMS, are assigned to the committee.  However, in today's meeting, as at the January 30 meeting, the CMS delegate to PAC CARB did not attend.
  • See an unofficial meeting transcript in the cloud here.  

Get a window into stakeholder positions and politics here, at the website of APIC, the Association of Professionals in Infection Control.


I made a personal professional comment to the committee; here.


Some additional entry points into the topic of Antibiotic Stewardship Programs for hospitals...

  • In 2014, CDC released "Core Elements of ASP" (24pp), here, web here.
  • In 2016, Joint Commission released standards for ASP, see short summary 4pp here.  Register for 96pp PDF "Toolkit," here.  (Note - cobranded JC/Janssen).
  • In 2016, National Quality Partnership (NQP) of National Quality Forum (NQF) released a strategic consensus viewpoint, the 38pp "Playbook" for ASP, here.
  • IDSA has numerous subject-specific guidelines and has an accreditation for Centers of Excellence in ASP, here.

Friday, March 29, 2019

CMS Lab Pricing Public Meetings: June 24, 2019 (Public), July 22-23 (Advisory Panel)

CMS has published, in the Federal Register, advance notice of its annual summer meeting for the pricing of new laboratory CPT codes.   The public meetings will be Monday, June 24, 2019.  The Advisory Panel will be Monday-Tuesday, July 22-23.

Here are the key dates in full:
  • Circa May 20, 2019, CMS releases agenda 30 days in advance of meeting (list of all codes)
  • June 10, 2019, Presentations due
    • Same date for public registration for non-presenters
  • June 24, 2019 (Monday), Public Meeting at CMS, Baltimore
    • Location: CMS Main Auditorium
  • July 8, 2019, Comments due (within 2 weeks of the public meeting)
  • July 22, 23, 2019, Advisory Panel on Clinical Lab Tests
    • July 1, 2019, deadline to register to attend Advisory Panel
  • Early September - Proposed Prices
  • Early October - Deadline for comment on proposed prices
  • November - Final Fee Schedule for CY2020
    • 60 Days after Final Fee Schedule, deadline to submit "reconsideration"
    • Reconsideration simply brings the code back for the July 2020 meeting
See the CLFS public meeting announcement here.  See the July Advisory Panel announcement here.

Track further announcements at the CLFS public meeting webpage here, and the Advisory Panel homepage here.

Foreign Nationals

It's not always clear in these announcements, but if you are a foreign national, you must reach out to CMS much farther in advance and provide additional materials.  Otherwise, you'll be stopped at the front gate ... even though CMS has otherwise accepted your online registration.  I've seen it happen.

CMS Preferences (Stated August 2018)

Last year, in program materials released in August, CMS announced that in general it prefers single crosswalks over stacked crosswalks, and 1x crosswalks over fractional crosswalks (e.g. 1.5x or 2.5x).   These are not absolute rules, but they are now stated CMS preferences.  Codes that can't easily be crosswalked under these preferences are more likely to be turfed to the MAC gapfill process, which spans the following calendar year.
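A toy illustration of the three approaches (the CPT code numbers are real, but the prices and functions here are invented for illustration, not CMS's actual method or fee schedule):

```python
# Hypothetical CLFS prices, for illustration only
clfs = {"81445": 600.00, "81210": 175.00, "81275": 175.00}

def crosswalk(code, multiplier=1.0):
    # CMS's stated preference: a single crosswalk at 1x an existing code
    return multiplier * clfs[code]

def stacked_crosswalk(codes):
    # A stacked crosswalk sums several existing codes - now disfavored
    return sum(clfs[c] for c in codes)

single = crosswalk("81445")                      # preferred: 1x crosswalk
fractional = crosswalk("81445", multiplier=1.5)  # disfavored
stacked = stacked_crosswalk(["81210", "81275"])  # disfavored
```

Under the stated preferences, a new panel code would ideally get the single 1x crosswalk; the fractional and stacked variants are the patterns CMS now discourages.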

PLA Codes

On March 21, 2019, AMA updated its PLA code list (here).  I believe last year's meetings included all PLA codes released to AMA's website up to June 1, and this new meeting will cover all PLA codes released between August 31, 2018 and June 1, 2019.   (For the latter codes, the submission deadline is April 3, and the finalization will be at the May 8-9 CPT meeting).  Roughly, this is PLA codes 0062U to 0104U (42 codes), plus let's guess another 20-30 codes for May 2019.  That means there will be 60, 70, or more PLA codes in the June 2019 summer meeting, plus the dozen or more "regular" CPT lab codes generated this year.   

PLA codes become active for use up to several quarters before their CMS pricing cycle.  In contrast, regular lab and other CPT codes generally aren't active for use at all, until after the CMS pricing cycle completes.  

An Additional Date to Watch!

Since pricing arguments and PowerPoints are due by June 10, applicants should get started on that process by early May: understanding the rules, picking crosswalks or arguments, and considering alternatives.  If you want to consult with stakeholder groups (like lab associations) for advice, you should have draft materials written by the beginning of May.  

Thursday, March 28, 2019

MolDx Releases New LCDs, May 2019

MolDx has released a host of new LCDs, to be discussed at a May 6, 2019 open meeting (for the Palmetto MAC).   Over the coming weeks, the LCDs will be introduced in each of the several MACs around the U.S. that participate in MolDx consensus policies.  For Palmetto itself, the 45-day comment period runs May 6-June 20, and May 6 is a public comment meeting in Columbia, SC.

See All LCDs in ZIP File in Cloud:  HERE

DL38043:  G360 Liquid Biopsy in Solid Tumors

Re-release of the Guardant G360 LCD, now expanded from lung cancer to broader coverage in solid tumors, with notes that test use must ALSO be consistent with CMS NCD 90.2 for use of NGS tests in advanced cancers.
DL38045: General LCD for NGS in Solid Tumors
This is a general LCD about next gen sequencing in cancer, regarding when it will be viewed as compliant, or non-compliant, with NGS NCD 90.2, as well as local implementation rules.  This particular policy is stated to concern only solid tumors, not hematologic cancers, ctDNA, or germline testing.   It doesn't say those other areas are outside the NCD; it simply says they are outside this particular LCD's discussion of the NCD. 
LDT NGS tumor panels are covered *after* completing a MolDx technical assessment (the LCD refers to the MolDx website for details).  For example, if I read this correctly, the Sloan Kettering IMPACT gene panel test under MolDx would have the same coverage as the FMI F1 CDx test under the FDA-approval-oriented parallel review NCD, assuming only that IMPACT or a similar test passed the in-house tech assessment by the MolDx team.  I believe that the LCD won't list test names, but that a MolDx article would need to list test names so that Medicare Advantage plans could implement corresponding coverage.    
The only CPT code listed for NGS testing is 81479, not 81445/81455. 
DL38047: General LCD for NGS in Myeloid Cancers
This is a general LCD about the use of NGS LDT tests (non-FDA-approved tests) for actual or suspected myeloid malignancies.  This appears to be a "class" LCD, providing general rules, but requiring that any particular test ALSO successfully pass a MolDx test-specific technology assessment.  
Leukemias had some difficulty under the CMS NCD, which allowed payment only for advanced cancers that must be "recurrent, refractory, relapsed, or stage 3/4."   To resolve this, the LCD explicitly declares AML, MDS, MPN as "refractory or metastatic cancers" by definition.  Gene panel testing is also covered where a myeloid malignancy is "suspected," with cytopenia over six months and "other possible causes have been reasonably excluded."

DL38041: Natera Prospera Test for dd-cfDNA in Renal Rejection

Provides coverage for the Natera Prospera test for donor-derived cell free DNA (dd-cfDNA) in management of renal graft rejection.

Noridian LCD L37358 covers the CareDx AlloSure test for cfDNA.

DL38039:  TruGraf Gene Expression for Renal Rejection

Provides coverage for the TruGraf Blood Gene Expression Test (Transplant Genomics, Inc), which uses gene expression to identify renal graft rejection. (Last year, MolDx covered a different type of renal rejection test, from CareDx, which picks up donor graft DNA in patient blood).

DL38029, DL38035: Decipher Prostate - Intermediate Risk

Decipher Biopsy Prostate Classifier to be covered for men with intermediate risk disease.  This helps determine next clinical steps after an intermediate-risk biopsy.

Assay covered with separate criteria in two separate LCDs (!) - one for patients with "unfavorable" versus one for patients with "favorable" intermediate-risk biopsy.

DL38051: DermTech Pigmented Lesion Assay PLA

The DermTech Pigmented Lesion Assay (PLA) assesses RNA expression in patient skin to determine the need for a melanoma biopsy in an atypical lesion.

DL38037: InterAct Drug Interaction

Although not billed as a "MolDx" LCD, an interesting additional LCD does NOT provide coverage for separately coded and billed drug interaction testing.  The LCD concerns the InterACT Rx software system, which is paired with a blood assay for interacting (including non-Rx) substances.  See the Aegis website here.

That totals nine different new LCDs.


Coverage of the Guardant LCD at Genomeweb, here.  The LCD provides LBx coverage for patients with a solid tumor which could require a drug with a genetic biomarker on the G360 panel.  Note, though, that larotrectinib is approved in any solid cancer with an NTRK fusion.

For a recent blog on the overall structure and clarity of MolDx LCDs, here.
One thing I noted in that blog is already fixed.   I was concerned that MolDx LCDs ended, as required by CMS, with an "Analysis of Evidence," but previously MolDx limited this to a few words, like "Quality Moderate, Weight Low."  Nothing else.  That's not "analysis."  
These new LCDs now end with a one- or two-paragraph analytic discussion of the quality and meaning of the evidence vis-a-vis the coverage decision.  I think this is a big improvement.  My only remaining wish is that they'd provide a short description of what the heck they mean by their three terms ("Quality," "Weight," "Strength") and how each specifically differs from the others.  When can evidence have "high weight, low strength," for example?
LCDs Timed with PLA Codes

Two LCDs are timed with new April 2019 AMA PLA codes.  These are the Pigmented Lesion Assay from DermTech (PLA 0089U) and MyPath Melanoma from Myriad (PLA 0090U).

Formatting of Indication: One Place or Two?

In DL38029 and DL38035, the Decipher Prostate Biopsy Assay LCDs open with a short description of general coverage (e.g., "to inform treatment decisions...for men with unfavorable intermediate risk prostate cancer").   Detailed coverage criteria, with multiple bullet points and "AND" logic statements, appear at the end of the LCD body.

In contrast, the DL38051 PLA melanoma assay has a brief coverage description at the beginning, followed immediately by detailed coverage criteria.

LCD Data

The LCDs average 11 pages (including coding lists and boilerplate) and 23 citations, with a citation range of 7-39.

Three Renal Transplant Tests - Indications

The two new, and one prior, molecular renal graft tests are compared in the table below:


Administrative Details: Open Meeting, Columbia, May 6

(Read the fine print at bottom of LCDs.)

Wednesday, March 27, 2019

1965, Meet 2025: Medicare Launches AI Challenge for $1.6M

You've been waiting since November 2018 (here).  The Innovation Center, or CMMI, at CMS wants to support new ways through which AI can improve the quality of healthcare.

See the "AI Health Outcomes Challenge Homepage" here.  It's in partnership with the Laura and John Arnold Foundation and the American Academy of Family Physicians (CMS contributes $1M and the other two, together, $600,000).

Here's what CMS says:
CMS is calling on developers from all industries to create new predictive AI applications to help providers participating in CMS Innovation Center models to deliver better care and make quality measures more impactful. 
"The Artificial Intelligence Health Outcomes Challenge is a three stage competition that will begin with the Launch Stage, in which participants will submit an application at," officials explain. "Up to 20 participants will be selected to participate in Stage 1 of the Challenge. We anticipate that more information about Stage 1 and Stage 2 will be announced later this year." 
As much as $1.65 million in total will be awarded to participants during Stage 1 and Stage 2.
"If selected for Stage 1, participants will develop algorithms that predict health outcomes from Medicare fee-for-service data, and strategies and methodologies to explain the artificial intelligence-driven predictions to frontline clinicians and physicians while building trust in the data," according to CMS. "Participants in Stages 1 and 2 of the competition will use Medicare claims data sets provided by CMS to develop their algorithms and solutions."

  • See the full 11-page notice online here.  Entries are due June 18.
  • See coverage at Healthcare IT News here, Fierce Healthcare here, HealthLeaders here.  

In the past day, the Trump Administration has announced that it would not fight a lawsuit (running in Texas and higher courts) that would wipe out the whole Affordable Care Act, including the CMS Innovation Center which runs the AI Challenge.  CMMI was created by Section 3021 of the ACA.


In 2016, the Arnold Foundation awarded $7M to four groups working on drug price policy, including one to Peter Bach of Memorial Sloan Kettering - here.

For an article on more mundane uses of AI, such as matching Medicaid-eligible patients to their Medicare Advantage plans, here.

The UK also has been issuing upbeat reports about how AI and DHealth can improve the NHS - entry point here.

For a June 2019 conference on AI and healthcare in Boston, here.

For a March 2019 PWC report on digital health, pharma, and FDA, here.

HHS FY2020 Plan Mentions CMS Parallel Review, CMS Payment for Breakthrough Devices

A couple weeks ago, the Trump Administration released the President's overall budget for FY2020.  A few days ago, CMS released its 350-page budget and strategic plan for FY2020. 

Now, the other shoe, or the third shoe, has dropped, which is the 162-page HHS budget and strategic plan for FY2020.

There are three interesting references to Medicare policy, on page 84.

Strengthen the Parallel Review Process to Streamline Medicare Coverage
The Parallel Review program is a collaborative effort between the Food and Drug Administration (FDA) and CMS that is intended to reduce the time between FDA approval of a drug or device and Medicare coverage of that item. This proposal strengthens the existing parallel review process to improve device manufacturer participation and increase transparency.

    Takeaway: It's not clear exactly what this would look like.  CMS has issued Parallel Review NCDs twice, for Exact Sciences Cologuard in 2014 and for Foundation Medicine F1 CDx in 2018.   In speaking to ACLA's annual meeting on March 26, CMS coverage director Tamara Syrek Jensen didn't add any color to this HHS remark.   She noted that there is more happening under the umbrella of Parallel Review than just the two NCDs the public has seen.

    Debbie Downer:  Since CMS can't release in advance the date or content of its decisions, and, it already is in extensive Q&A dialog with Parallel Review participants, it's unclear what the "increased transparency" would be.

Improve Clarity and Transparency around Medicare Coverage Process
Some stakeholders find the process and standards for the Medicare coverage determination process lack clarity. This proposal requires CMS to issue additional guidance around the Medicare coverage process, including sub-regulatory guidance on the evidence standards that CMS utilizes in assessing coverage and the process to appeal coverage determinations, in an effort to improve clarity around Medicare coverage.

    Takeaway:  I have worked on many coverage issues for 15 years, and it's hard to give general rules for "what justifies coverage."   The situations, alternatives, and comparators vary greatly from one scenario to another.   Very, very few coverage decisions are NCDs, so this would mostly apply to LCDs.   CMS markedly updated the LCD process in October 2018, with changes rolling out now in 2019.   
    • I reviewed a group of very recent MolDx LCDs, and found the coverage analysis and clarity was quite high.  Here.
    • But right now, a device can have FDA Breakthrough Status, rigorous CMS Inpatient New Tech approval for added payment for confirmed new benefit, and still not get LCD coverage.  That's just insane.
    • As far as NCDs, I reviewed the quite recent NCD production process for the Foundation Medicine NCD, and found it was just pretty awful (white paper here, some other notes here).

   Debbie Downer:  This HHS paragraph might refer mostly to the LCD improved processes already released a few months ago.

Improve Medicare Beneficiary Access to Breakthrough Devices
There is currently no expedited pathway for Medicare beneficiaries to access innovative devices that have received FDA breakthrough designation. This proposal provides Medicare coverage of devices approved through the Breakthrough Device Program for beneficiaries participating in clinical trials for up to four years from the date of FDA approval.

    Takeaway:  While legislation was floated in 2018 to improve CMS coverage of "breakthrough devices," there's also potential regulation lodged in 2018 with OMB (here), and a history on this topic dating back a full two years with monikers like EXCITE and PACER (here, here).  

    Debbie Downer:  The quote here is about coverage of approved Breakthrough Devices just for patients "in clinical trials."  Really?  Medicare normally already covers many devices in clinical trials anyway, with or without the breakthrough status (webpage here).  

Tuesday, March 26, 2019

CMS Announces It Will Soon Issue "Reopening" of Next-Gen NCD

In March 2018, CMS released the final version of its National Coverage Determination (NCD) for the Foundation Medicine F1 CDx FDA-approved test and similar FDA-approved tumor gene panel tests. 

From November 2018 to February 2019, an unexpected series of events began with a CMS transmittal of the NCD that inserted new text sections, raising alarm that the NCD would be interpreted to block NGS-method (but not other-method) BRCA and other germline testing.  This would apply to all Medicare patients, regardless of age.   For a protest letter signed by about 60 clinical organizations in January 2019, here.

Reopening Announced on March 26

  • On March 26, 2019, at the American Clinical Laboratory Association, the director of the CMS Coverage Group, Tamara Syrek Jensen, announced that CMS plans to "reopen" the NCD within weeks. 

What Does This Mean?

CMS New Action #1
In my understanding, CMS could issue a revised NCD directly, allowing 30 days of public comment before finalization.   For example, CMS could redline the text to confirm the overriding effect of existing sentences "buried" in the NCD's body that state it applies only to tumor testing, not germline testing.

CMS New Action #2
Alternatively, and I believe more likely, CMS could simply issue a statement that the NCD is being "reopened," much like a request for information. 

Through this "reopening," CMS could request public comment on the best ways to modify the existing NCD.   Such a comment period would run for 30 days, after which, in some period of weeks or longer, CMS could issue a proposed NCD, take another 30 days of comment, then finalize it.

Bizarre Situation 

CMS is unlikely to make any "retroactive" changes to the NCD, which by law is effective the date it was released, one year ago. 

If the NCD is interpreted to "ban" NGS BRCA or other NGS testing in stage 1-2 cancer patients, then potentially CMS could seek recoupments.  But given the messy situation, it's unlikely that CMS itself would try this. 

Some other entity could seek recoupments (such as a for-profit Recovery Audit Contractor or a for-profit qui tam case).  In my personal opinion this would likely fail sooner or later, as the NCD contains clear, mutually contradictory statements about its own scope.    Like Schrödinger's cat, the NCD taken as a whole currently both blocks and allows germline testing by NGS at the same time. 

For a detailed white paper on the saga since 2017, see here.

From the white paper, a chart comparing how CMS met statutory NCD requirements for analysis and justification of its position on tumor testing, versus its failure to meet any of these requirements regarding germline testing:

Sunday, March 24, 2019

Should Evaluation of Psychiatric PGx Focus on the "Red Drug Patients?" (Yes.)

Both in the US and Europe, there has been a marked uptick in the interest in pharmacogenetics in the past five years. 

Benchmarks include the growth of the CPIC - Clinical Pharmacogenetics Implementation Consortium, and a range of research and investments over the past several years by NIH-IGNITE (Implementing Genomics in Practice). 

There is an increasing consensus that implementation science is key, such as providing integrated PGx data in electronic health records, coordinated with pharmacies/pharmacists, and providing the right tools for the physician at point of care and across a health system.  This is equally discussed in the U.S. and in Europe (see e.g. Blagec et al. 2018, open access, here.)  I discussed this emerging consensus around implementation and patient selection in a white paper for Thermo Fisher in mid 2018 (here).

One of the most discussed RCTs in pharmacogenomics, certainly in the financial community, is the Greden et al. "GUIDED" trial, using the Myriad GeneSight test and supported by Myriad.  (Not open access, but here.)   This study (Genomics Used to Improve Depression Decisions) compared treatment as usual (TAU) to a pharmacogenomics-guided arm in which physicians could utilize information from a PGx test panel.    In the test arm, using PGx, response and remission rates were improved, but another metric, mean symptom improvement, was not (p=0.10).   Assessments were performed at weeks 4, 8, 12, and 24 (1, 2, 3, 6 months).  The final per-protocol cohort was 1,398 patients (717 in TAU, 681 with PGx). 

Higher-Risk Patients

One of the emerging concepts in PGx trials is to use higher-risk patients.

Here, in GUIDED, patients had "treatment resistant depression," failing at least one medication in the current episode of care.  Other research, not specific to depression, also looks to flag high-risk patients for preemptive pharmacogenomic testing (e.g. see the firm YouScript, here, which can screen population health system records for patients at highest risk of PGx-mediated adverse effects). 

Largest Impact Seen on "Red Drug" Patients

Although not a primary endpoint, Myriad has pointed out that the largest impact on health outcomes was in a population created not by the a priori entrance criteria (treatment-resistant depression), but by the gateway test itself.  This is the population of patients who are currently on a "red" or "not-recommended" drug.   In these patients, the point of testing would be to convert them to a higher-chance response on a "green" or "recommended" drug. 

In an online presentation, Myriad emphasizes this data analysis (here).   Basically, this sorts out 30% of patients who are already on GeneSight "Green" medications and have no significant expected GeneSight benefit.*  When you do this, you get larger impacts on the 70% of remaining patients, which you would expect, and all the p-values are below 0.05:

You Would Expect GeneSight's Result, And Its Absence Would Be Worrisome

Note that you would expect the GeneSight data to show larger favorable impact after removing the "already genetically appropriate" group of 30%.   Meaning, if this did NOT happen - if the clinical benefit occurred but it was somehow exactly level between green-drug and red-drug patients - it would be pretty worrisome that green/red identifications were actually irrelevant.  You might worry it was a nonspecific placebo effect, for example.  That doesn't happen: the "red-drug" patients get bigger effects and the green-drug patients don't. 

Expect Test Impacts to Be Positive and Real...but Finite

Impacts of PGx testing may be real and useful, but modest per 100 patients.  For example, if 11/100 patients remit in the control group, and 18/100 in the test group, that is 7/100 who benefit.   Discussion can then shift not to "whether" there is an impact, but to whether that impact is or is not "enough" to merit testing.   Generally, one background fact is that a PGx test has a pretty low risk profile, making it easier for the clinical risk/benefit to be positive.   And effects on the order of one in ten are impactful on practice: if one drug benefits 30 patients per 100 and another 40 per 100, the second drug will be favored, while the net impact is 1 patient per 10.  Similarly, if you run a Herceptin test and 2 patients in 10 are test-positive, the 2 get the drug, and one of them lives a year longer, that is a one-year benefit for 1 patient of 10 originally tested.  (See also Marquart, Chen, Prasad, 2018, here.)
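A toy calculation, using the illustrative numbers above (not trial data), makes the "real but finite" arithmetic concrete:

```python
# Toy arithmetic for "real but finite" test impact (all numbers illustrative).

def net_benefit_per_100(control_events: int, test_events: int) -> int:
    """Absolute benefit: extra favorable outcomes per 100 patients tested."""
    return test_events - control_events

# Remission example: 11/100 remit under usual care, 18/100 with testing.
extra_remissions = net_benefit_per_100(11, 18)  # 7 extra remissions per 100

# Herceptin-style example: 2 of 10 test positive, both get the drug,
# and 1 of those gains a year -- a benefit for 1 of 10 originally tested.
benefit_per_tested = 1 / 10

print(extra_remissions, benefit_per_tested)
```

The same subtraction underlies "number needed to test" framings: an absolute difference of 7 per 100 means roughly 14 patients must be tested for one additional remission.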

Fitting PGx To Other Accepted Principles in Precision Medicine Analysis

To me, this could have been a study design in itself. 

The whole era of precision medicine, since Herceptin, has been to look at the impact of a precision medicine intervention in a defined subgroup.  This is how we evaluated Herceptin - see the book Her2, by Robert Bazell, 1998.   About 20% of cases are HER2-positive.  We then look at the impact of Herceptin on these 20% of patients; we don't dilute the impact across the 80% of patients who are test-negative and don't get the intervention.

For example, in a 2012 Cochrane review, Herceptin-treated women were (within 3 years) about 40% less likely to have cancer recurrence and about 40% less likely to die.

If you diluted that effect over all women who "got the Her2neu test," including the 80% who don't test positive and don't get the drug, the effect would be diluted by about 80% (to roughly an 8% net effect once negative patients are included) and might not be statistically significant at all.   Nobody suggests running the analysis this way - nor for EGFR in lung cancer, for ALK in lung cancer, and so on.   There are some reasons for looking at the whole population.  For example, the Her2neu, EGFR, and ALK tests are run on the whole population and contribute to population costs (you have to run five $150 Her2neu tests, on five patients, to find one positive patient, so the test cost per positive patient is $750, not $150).  And you can ask whether there are adverse impacts on women who test negative (usually a minor consideration, and one that applies to other tests, like estrogen receptor).
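A back-of-envelope sketch, using the approximate figures in the example above, shows both the dilution effect and the per-positive test cost:

```python
# Dilution of a subgroup effect across everyone tested (approximate figures).

positive_rate = 0.20        # ~20% of patients test positive (e.g., Her2neu)
subgroup_effect = 0.40      # ~40% relative reduction among treated positives

# Effect averaged over the whole tested population (negatives get no drug):
diluted_effect = subgroup_effect * positive_rate
print(f"diluted effect: {diluted_effect:.0%}")     # ~8% across all tested

# Test cost per positive patient found:
test_cost = 150             # illustrative $150 per test
cost_per_positive = test_cost / positive_rate
print(f"cost per positive: ${cost_per_positive:.0f}")  # $750, not $150
```

The two lines of arithmetic capture the blog's point: the clinical effect should be analyzed in the test-positive subgroup, while the test cost is legitimately a whole-population figure.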

Consistent Impact for Sicker and/or "Red Drug" Patients

Greden et al. 2019 is not the only study with this type of finding.  In a large, 700-patient RCT that also looked at treatment-resistant patients, the test's impact was higher in the "severe depression" patients (although also significant in the moderate/severe cohort).  See Bradley et al., 2018 (open access, here):

Bradley et al. 2018

Results also trend higher in the group not previously on a green drug, and trend higher in a decade-by-decade age analysis within the cohort (about 15% were >65). 

Another RCT, with 150 patients randomized, was similarly favorable for panel PGx use in difficult patients, Singh et al., 2015.  Open access here.  It was discussed favorably in a 2018 review (see Zeier et al., below).

Can We Reduce the Boil On This?

Focusing on the population who actually qualify for a test-directed intervention would reduce the temperature of current debates on psychiatric pharmacogenetics.   In the past year, FDA opined on the subject in a confusing way, and JAMA carried a negative-toned news article.   There was a fairly balanced review in the APA journal, by Zeier, Nemeroff, and colleagues (article here, APA trade press here).   Zeier has a publication date of September but was e-pub in April.   Unfortunately, while creating a document for the APA position, due to timing, Zeier et al. were not able to review either the 700-patient RCT by Bradley et al. or the 1,400-patient RCT by Greden et al.

While these were appearing, there was what I read as a somewhat "just throw anything at the wall" op-ed in JAMA.   To me, many of the concerns therein about internal controls and validity are resolved by the precision-medicine paradigm of analysis: looking at more severe patients or the "non-green-drug" patients, the same way we assess the value of other precision medicine tests. 

There is a famous quotation to the effect, "Statistics is common sense applied to numbers."   It seems like common sense not to dilute the impacts of a Herceptin intervention on the 80% of patients who test away from that intervention and don't get it.  By the same token, in these several psychiatric PGx trials, the actionable intervention is shifting from a red to a green drug, and "already-on-green-drug patients" just don't receive that intervention. 


*To my eye, at most there would be a slight benefit in that the "green drug" ill patients might be switched to another green drug while avoiding a "red" drug.