Friday, April 29, 2022

Very Brief Blog: 3 Articles Critical of Quality System Campaigns

A couple of weeks ago, I highlighted a Rosenbaum article in NEJM on "Metric Myopia" (do metrics pull attention to a few things while other issues go unattended?).  Blog here.

Let me quickly tie together three articles from the last couple of weeks.

  • In NEJM April 20, Lisa Rosenbaum writes, "Metric Myopia - Trading Away our Clinical Judgement."  Here.
  • Pair that this week with an NEJM April 28 article, also by Lisa Rosenbaum, "Reassessing Quality Assessment - A Flawed System for Fixing a Flawed System."  Here.
  • And I'd make this a triple play, yet again in NEJM, by adding Elizabeth Rourke's April 7 article, "Ten Years of Choosing Wisely to Reduce Low-Value Care," which in part argues that nominated topics are too often figureheads - either low-cost, rare, or practiced by somebody else.  Here.
    • See similarly a January 6 article in HEALIO, here.
    • HEALIO pivots off a December 2021 article in JAMA IM by Ganguli et al. here.
Regarding "Reassessing Quality Assessment," see complaints by major organizations like IDSA (Infectious Disease Society of America) that review of a quality measure, SEP1 or NQF-500, was poorly handled by NQF - here.  Coverage also at MedPageToday, here

Regarding the Rosenbaum series, see part 3 also, "Peers and Professionalism," here.

Thursday, April 28, 2022

More Data on Medicare Advantage Denials: OIG Weighs In with 2022 Report

Direct instructions from CMS require that Medicare Advantage plans cover all services covered by traditional Medicare.  So the simple rule is: if a service gets coverage from traditional Medicare, you get Medicare Advantage coverage for that service as well.

(And, normally, that Medicare Advantage payment rate should be the same as traditional Medicare, unless you have negotiated a lower-fee contract - details here).

However, as a consultant, I also warn people that getting that coverage and payment from Medicare Advantage may take a lot of hand-holding or appeals.  And the problem seems concentrated in labs.  A Health Affairs report earlier this year by Schwartz et al. (here) found that MA plans denied just 1.4% of all claims - but, buried deep in the paper, a huge proportion of those denials, something like 80-90%, were lab claims.

Now we have a new 61-page OIG report on MA denials of medically necessary care, as adjudicated by physicians hired by OIG.  See the OIG report here, and see coverage at the New York Times here.

OIG made three recommendations, with which CMS concurred.

  1. Issue guidance on the appropriate use of clinical criteria by MA plans,
  2. Update audits to quantify and enforce action on this problem,
  3. Direct MA plans to address their "vulnerabilities" that lead to the excess denials.

______

Below, from Schwartz et al. 2022: lab denials (whether under MA-specific rules, in yellow, or under LCD rules enforced by the MA plan, in blue) were the vast majority of all types of denials.



____

For labs, MolDx has announced it is contracting to share edit protocols with MA plans, which should tighten the linkage between what FFS Medicare covers and what an MA plan covers.


ASCO Updates Guidance for Breast Cancer Prognostics (Adjuvant Chemotherapy)

One of the early landmark tests in precision medicine was the Oncotype DX test for breast cancer prognosis, first published in NEJM in 2004.  Today, a half-dozen tests are recommended, with a few variations, across society guidelines.

ASCO has updated its guidance, published April 19, 2022, as André et al., "Biomarkers for Adjuvant Endocrine and Chemotherapy in Early-Stage Breast Cancer" (30 pp).  Tests include Oncotype, MammaPrint (FDA), Breast Cancer Index, EndoPredict, Prosigna (FDA), and IHC-4.  When access to these is not available, they recommend Ki67 (a single immunostain).  Find André et al. here, open access:

https://ascopubs.org/doi/full/10.1200/JCO.22.00069

The prior version was Harris et al. 2016.

Flow chart from André 2022

See coverage at Genomeweb here.

The André guidance has a significant section on ctDNA as a biomarker for the need for adjuvant therapy, but falls back to a 2018 ASCO/CAP guidance (Merker et al.) that stated this approach was not yet ready for routine clinical use.

History Lesson

Today, MolDx has several LCDs that are keyed directly to relevant extramural guidelines, such as tying PGx coverage to CPIC and FDA PGx recommendations.   

Tidbit:  In May 2016, MolDx briefly proposed, then withdrew, a plan to lock Medicare coverage directly to current ASCO recommendations.  (My blog here.)  (One community objection, at the time, was that guidelines are only updated every 5-6 years, as we see here with Harris et al. 2016 versus André et al. 2022.)

The Brief Noisy History of "CDD" and CED

Not directly related, but my search for the above link led me first to a December 2016 blog where MolDx proposed withdrawing coverage for the Vectra test (here).  That blog also talks about how MolDx seemed to be canceling or withdrawing references to its short-lived local "coverage with data development" or CDD effort.  MolDx later dropped CDD paragraphs from the MolDx handbook in 2018, here.  

A 2022 article on Medicare and risk-sharing arrangements, including CMS CED, by Chen & Carlson tallied up a large number of those long-defunct MolDx local CDD policies (here).  Not touching on MolDx CDD, but taking a deep dive into CMS NCD CED, see Zeitler et al. (entry point here).

Next: AI for Breast Cancer Prognostics

By keyword search, artificial intelligence and machine learning are not mentioned in André et al.  However, there are already not just original studies but quite extensive review articles on using machine learning for breast cancer prognostics (see the review by Li et al., PLOS ONE, 2021, here; see also Yousif 2022 here, Wang 2021 here, Fitzgerald 2021 here).

A Sociologist's View

See a book on precision medicine from the sociology and anthropology viewpoint, with a chapter on the adoption of, and changing views about, the Oncotype Dx test.  Chapter 2, "Genomic techniques in standard care: Gene-expression profiling in early-stage breast cancer," in Personalized Cancer Medicine, Kerr et al., Manchester Univ Press, 2021.  (Hardcover only).




Wednesday, April 27, 2022

Case Study: When Do Evaluators Think a Test's Incremental Accuracy is Too Small?

When do evaluators, like tech assessment committees, think an increase in test accuracy is "too small" to be impactful?  This is something modern molecular diagnostics run into all the time.   Does a molecular expression test in breast cancer or prostate cancer provide enough added value, over pathology grade and tumor size?    

We have a case study, although from radiology not pathology, this week in JAMA Internal Medicine. 

Ominously, the research article and the op-ed run under the banner "LESS IS MORE"; see also the home page for this series here.  If your new technology is reviewed under a "LESS IS MORE" logo, it's probably never a good sign.

The two articles are Bell et al., a systematic review and meta-analysis of adding the Coronary Artery Calcium Score (CACS) to a "traditional CV risk assessment," e.g. things like BP and cholesterol.  There is an op-ed by Gallo & Brown.  Note that the op-ed banner title includes "PRIMUM NON NOCERE" (first, do no harm), which is probably never a good sign in an article about your product.

I think the key point is summarized by Gallo & Brown.  Existing predictors have a C statistic of 0.70 to 0.80, and adding CACS raises it by about 0.03 (e.g., from 0.70-0.80 to 0.73-0.83).  They find this unimpressive and give some reasons why.
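To make that concrete, here is a small simulation of my own - purely illustrative, not the Bell data - in which a new marker with a respectable standalone AUC of about 0.65 adds only about 0.03 to the C statistic of an existing risk score once the two are combined.  The event rate, effect sizes, and logistic-regression combination are all assumptions for the sketch.

# Toy simulation (my own, not the Bell et al. data): a new marker that looks
# respectable on its own may add only a few hundredths to the C statistic
# once it is combined with an existing risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 40000
y = rng.binomial(1, 0.10, n)                     # assumed 10% event rate

base   = rng.normal(loc=1.00 * y, scale=1.0)     # existing risk score (AUC ~0.76)
marker = rng.normal(loc=0.55 * y, scale=1.0)     # new marker (AUC ~0.65 alone)

X = np.column_stack([base, marker])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def auc(cols):
    # Fit a logistic model on the chosen columns, score AUC on held-out data.
    model = LogisticRegression().fit(X_tr[:, cols], y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te[:, cols])[:, 1])

print("AUC, existing score alone: %.3f" % auc([0]))
print("AUC, new marker alone:     %.3f" % auc([1]))
print("AUC, combined model:       %.3f" % auc([0, 1]))
# Typical run: roughly 0.76, 0.65, 0.79 -- an increment of about 0.03.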

Sources

BELL (Article) here

GALLO (Op Ed) here


Discussion

Usually, the popular statistic "area under the curve" (AUC) is a hard way to show "added clinical value."  If the AUC of the standard of care is 0.75, and the AUC of your molecular test is 0.79, what does that mean for outcomes and care?  First, AUC is an abstract concept.  Second, the ROC framework treats each cutoff as a strict binary call - positive or negative, no middle ground - and it is built from pure sensitivity (measured only in known-positive patients) and pure specificity (measured only in known-negative patients), so the base rate never enters and is hard to extrapolate back in.  The same AUC means very different things clinically if there are 10 affected patients per 20 tests given, versus 5 affected patients per 100 tests given.

Usually it's better to translate into terms like: "The current standard has 20 false positives per 100 patients, but our test cuts that in half.  This means 10 fewer patients per 100 get a needless biopsy."
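Here's a minimal sketch of that translation, using hypothetical numbers (an 80%-sensitive, 80%-specific test), showing how the same operating point produces very different per-100-patient counts at the two base rates mentioned above.

# Minimal sketch (hypothetical numbers): translating sensitivity/specificity
# into "per 100 patients tested" terms, at two different base rates.
def per_100_tested(sensitivity, specificity, prevalence):
    tp = 100 * prevalence * sensitivity                 # cancers caught
    fn = 100 * prevalence * (1 - sensitivity)           # cancers missed
    fp = 100 * (1 - prevalence) * (1 - specificity)     # needless work-ups
    tn = 100 * (1 - prevalence) * specificity
    return round(tp, 1), round(fn, 1), round(fp, 1), round(tn, 1)

# Same test (80% sensitive, 80% specific), two different populations:
for prev in (0.50, 0.05):                               # 50 vs 5 affected patients per 100
    tp, fn, fp, tn = per_100_tested(0.80, 0.80, prev)
    print(f"prevalence {prev:.0%}: {tp} caught, {fn} missed, {fp} false positives per 100 tested")
# At 50% prevalence: 40 caught, 10 missed, 10 false positives.
# At  5% prevalence:  4 caught,  1 missed, 19 false positives.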

Friday, April 22, 2022

Very Brief Blog: CMS Publishes Notice, Annual Lab Pricing Meeting

CMS holds public meetings each summer for pricing new lab codes.

CMS has published the announcement for this year's annual public meeting, "For CY2023 Codes," which will be held June 23, 2022.  See 87 Fed Reg 22897, April 18, 2022, here:

https://www.govinfo.gov/content/pkg/FR-2022-04-18/pdf/2022-08259.pdf

CMS will publish the code list about 30 days before the meeting.  As in past years, new codes approved at the early-May 2022 AMA CPT meeting will be included in the June CMS meeting.  Also, the meeting "rolls up" and includes all PLA codes from the prior three quarters, meaning PLA codes applied for on July 1, 2021, or later.

Presentations: June 2

Presentations must be submitted by June 2.  

Expert Panel: July 18-19

The Medicare Advisory Panel (of about 12 experts) will meet on July 18-19, 2022.  It has its own announcement, at 87 FR 22895, here:

https://www.govinfo.gov/content/pkg/FR-2022-04-18/pdf/2022-08253.pdf



Thursday, April 21, 2022

Very Brief Blog: Two Medical Policy Articles in NEJM: TCET/CESTI and Metric Myopia

Highlighting two medical policy articles in NEJM for weekend reading.

Metric Myopia

Rosenbaum discusses "Metric Myopia": are we setting up, and excessively emphasizing, single rewarded metrics for measurement... in a way that is, on net, counterproductive for overall health outcomes?  (And it's one more in a long history of articles on exactly this topic.)  Find it here.

From MCIT to TCET to CESTI!

Mathews et al. discuss "Governance of Emerging Technologies in Health and Medicine" - something that has been front and center at CMS, with debates about policies like "MCIT, Medicare Coverage for Innovative Technology" and "TCET, Transitional Coverage for Emerging Technologies."  The NEJM article has a fairly long discussion, then pivots to pointing the reader to a new emerging-technology workgroup at the National Academy of Medicine.

See NEJM here.  

See "CESTI" at National Academy of Medicine, here.  (Committee on Emerging [medical] Science Technology & Innovation."  The initial announcement of CESTI goes back to January 2020.  It may have been derailed by COVID.   Getting back in the spotlight, CESTI has just held a workshop on April 14-15, 2022.  And, there's a current public comment page, a little vague, but open til about May 1, here.




Wednesday, April 20, 2022

Very Brief Blog: Link to Discussion of Mask Mandate Court Ruling, What It Says

There's been a flurry of journalism over Judge Mizelle's strike-down of the CDC mask mandate.  

Here is the original ruling:

https://pacer-documents.s3.amazonaws.com/40/391798/047124235804.pdf

Here is the best discussion of exactly what she said and why (second half of article):

https://davidlat.substack.com/p/musings-on-mizelles-mask-mandate?s=r


___


I think the ruling is sort of goofy in some ways, but it's certainly well-written and readable on its own terms.

First, the judge has a long discussion of the exact statutory authority for public health measures (looking up dictionary definitions of terms like "sanitation" and using 1940s dictionaries for the 1940s law.)  

Second, she rules that the CDC violated the Administrative Procedure Act (APA), which can be enough to void a regulation.  The APA requires at least 30 days of public comment, which the agency can skip in exceptional, urgent circumstances.  She says the CDC did not do nearly enough to justify, in writing, the facts and circumstances behind the urgency.  In her telling, the CDC merely remarked that the matter was urgent, so it was skipping the APA.  She also notes that the CDC could have, but did not, use an easy workaround: issue an interim final rule (IFR), immediately open 30 days of comment, and then close the rule (say, on day 35) as a non-interim, aka "final," rule.  CMS sometimes does IFRs with comment in this way.  The 35-day period hypothesized here would have run in February 2021 or so, long before the case reached Judge Mizelle in recent weeks.

Finally, she says the CDC's rule was arbitrary because it did not discuss alternatives (e.g. temperature checks instead of masks).  She doesn't claim that temperature checks were as good as masks - only that, in her view, the CDC was obligated to type out a discussion of alternatives, and failing to do so was enough to be fatal ("arbitrary").

There's also a reference or two to "interstate" conditions.  This reflects the historical grounding of many federal laws in the "interstate" powers of the federal government.  She only touches on this and doesn't delve into the implication that the CDC could regulate interstate flights but not within-state ones.  Recall that in the Constitution, states were like independent countries that could do nearly everything, except for powers (like declaring war) given over to the federal government by the Constitution.  Thus, current conservative justices might view state mask mandates as falling within states' broad and original rights to regulate themselves (the broad term of art is "police power"), but federal mask mandates require a clear and particular line of delegated constitutional authority (such as the power to regulate interstate commerce).



Tuesday, April 19, 2022

The Exception for Pathologists Ordering a Test: Medicare Policy

Generally in Medicare, only the physician who is treating the patient can order a diagnostic test, a rule enshrined at 42 CFR 410.32.  (See also the 1997 Federal Register where the rule appeared; more here.)

However, there are some exceptions for radiologists and pathologists, and I frequently forget where they are and have to dig them up.  They are enshrined in the Medicare Benefit Policy Manual, Chapter 15 - here.  See sections like 80.6.4, Rules for Testing Facility Interpreting Physician to Furnish Different or Additional Diagnostic Tests.

For example, in Radiology, the radiologist may determine things like "the number of radiographic views obtained, use or non use of contrast media."

In Pathology, the pathologist may do additional special stains if "the services are medically necessary so that a complete and accurate diagnosis can be reported to the treating physician."  And the pathologist uses the results and documents their use.  

  • An example is given of a lung biopsy, which on microscopy shows a "granuloma" (a microscopic finding) suspicious for tuberculosis.  The pathologist may then order a stain for tuberculosis.

While the current manual 15:80.6.4 lists this section as revised in 2008, effective 2003, implemented 2007 (!!), I've seen the language in old online documents as far back as 2002 (here, CR2167). [*]

__

The examples haven't changed in decades.  But pathology has.  It's not just $10 special stains any more.  Today, can this pathologist order a $3500 prostate prognostic test or $3500 breast cancer prognostic test "to finish the case" and provide a complete diagnosis?  Seems unlikely.  But exactly where do you draw the line between the $10 TB stain (enshrined in the manual and A-OK) and the wide gamut of modern tests (e.g. an NGS-based TB test for $150 instead of a TB stain for $10?)  Can a pathologist or radiologist order a supplemental AI-based analysis?  A puzzle for another day.



__

[*] For a real 'nerd note,' I think this section of the manual was deleted in a big document restructuring around 2003-2005, then rediscovered and restored around 2007-2008.

For my uber-nerdy 2012 discussion of the regulations for placing a test on the pathology/RVU fee schedule versus the clinical lab/CLFS fee schedule, see here.  It has to do with 42 CFR 415.130, with reference to additional rules at 415.102(a), which also apply before 415.130 is applied.  The test has to be ordinarily performed by a physician, defined as more than 50% of the time - such as reading a glass slide as breast cancer.  That glass-slide diagnosis is ordinarily performed by a physician, not by a lab expert PhD, so it is a "pathologist service."  In contrast, the lab work and report that states you have an Oncotype Dx breast cancer score of "35" is not ordinarily performed personally by a physician, so it is not a pathology test but a clinical laboratory test.

Another area of confusion is "molecular pathology," used without definition in the date-of-service rule.  In practice, such as in Appendix B of the outpatient fee schedule for hospitals (OPPS APC), CMS defines "molecular pathology" as only human DNA-RNA tests (not microbiology).  On PubMed or Google, "molecular pathology" often refers to human DNA-RNA tests, but sometimes it includes molecular microbiology in scope.  And while "pathology" is on the physician RVU schedule, "molecular pathology" is on the CLFS.



Brief Blog: Novitas Publishes Draft LCD for GI Pathogens

Novitas has just published a draft LCD for GI pathogens and multiplex testing.  

Find it at DL38229 at CMS.  Comments to May 28.

It's a busy area for MACs.  

  • Novitas published an LCD for pathogens last fall that I found maddeningly vague (tests are covered when medically necessary and timely).  
  • MolDx published a pathogens LCD this spring which is extremely detailed.   
  • The MolDx tech assessment guide for pathogen panels, though, is a generation ahead in clarity and completeness (here).

The new GI LCD proposal from Novitas is in-between, like the middle bowl of porridge in the Goldilocks story.


Nerd Note: LCDs that Replicate Existing NCDs or are Badly Written

CMS tells its MACs not to write LCDs that simply replicate coverage rules already in NCDs.  In part, this is because administrative law judges can override language found in LCDs, but can't override language found in NCDs.  It's confusing to the ALJ (and causes errors) if he thinks he can override language in an LCD, but that language is actually a quotation from some other source that he is forbidden to override.

Two offenders are at the top of the "New Draft LCD" report at CMS.

Redundant LCD for Bone Mass

Palmetto LCD DL39268 is for "bone mass measurement" and does virtually nothing but replicate language already in the NCD (and its claims manual instructions) for bone mass measurement.  While they do put the benefit manual and claims manual citations in the header section, the body of the LCD is almost wholesale plagiarism from those existing sources - and you can't really tell that from reading it.

Palmetto then goes on with a "summary of the evidence" for BMM, quoting USPSTF and so on, but this is wholly unnecessary, because the LCD is simply restating NCD-level, statutory, and regulatory text - the laws and regulations that define coverage for BMM.  See SSA 1861(rr) and 42 CFR 410.31.

If in fact MACs are far behind and backlogged with LCD work, efforts like this seem like a pure unfiltered waste of time.  Comments open to May 14.

LCD for Stem Cell Transplants

And another one.  Also from Palmetto, LCD DL39270 covers allogeneic stem cell transplants for B and T cell Hodgkin's and Non-Hodgkin's lymphoma.  Again, this is an area covered by an NCD.   While the text states, "this policy describes locally covered indications," it doesn't do so very clearly.  

The format for the LCD is standardized by CMS: "Coverage Indications and Limitations," followed by "Summary of Evidence," and "Analysis of Evidence."   

Sounds simple, but, nope.  Here, the coverage section merely repeats the NCD coverage, and then states that this LCD will cover additional indications.  And abruptly, with that remark, the coverage section stops cold.

The LCD goes on to the "summary of evidence" and "analysis of evidence" sections, but this isn't where statements of covered and non-covered criteria go; they belong in the section labeled "indications and limitations for coverage."

The "summary of evidence" is supposed to be an objective description of published evidence, but it is not.  It makes some broad statements and re-quotes the NCD again, which is not "a summary of the evidence" at all.   

Then the LCD jumps to the "analysis of the evidence" (where you draw conclusions and assess the pros and cons of the evidence against the need for coverage), and, if anything, that section reads more like a dry "summary of the evidence."

To convey this idea, for comparison, a medical record may have a "summary" of the patient which is what you see (fever, clouded chest x-ray, hacking cough, O2 sat of 85%, and household COVID exposure) and an "analysis" which is what that evidence makes you think (the patient has pneumonia and merits admission due to severity.)

In short, despite CMS providing a simple template, the LCD is confusing and has a willy-nilly disregard for the several headers and their expected content.  Comments open to May 14.


Brief Blog: MolDx Publishes Draft LCD on GI Dysplasia

Gastric reflux can cause precancerous changes (dysplasia) in the esophagus, and these are traditionally monitored by endoscopy and biopsy.  As in so many areas of surveillance, molecular tests have been entering the space.

MolDx has released a draft LCD and article on the topic, with comment to May 14.  See Medicare documents DL39256, DA59015.

MolDX: Molecular Testing for Detection of Upper Gastrointestinal Metaplasia, Dysplasia, and Neoplasia

https://www.cms.gov/medicare-coverage-database/view/lcd.aspx?lcdid=39255&ver=2&proposedStatus=all&sortBy=commentStart&bc=9

https://www.cms.gov/medicare-coverage-database/view/article.aspx?articleId=59014&ver=2

What It Says

The first sentence doesn't sound promising:

Current molecular diagnostic tests that identify individuals with upper gastrointestinal metaplasia, dysplasia, and neoplasia are non-covered by this contractor.

The LCD provides ten bullet points that must be met for coverage.  For example,

The test demonstrates analytical validity (AV) including an analytical and clinical validation for any given measured analytes, and has demonstrated equivalence or superiority for sensitivity or specificity of detecting dysplasia to other already accepted methods for the same intended use measuring the same or comparable analytes.

Not in these bullet points, but later, they remark that tests will "ideally" (quote unquote) have 95% sensitivity and 95% specificity.

While the LCD states that current molecular tests are non-covered, the billing article lists code 81479 (which could refer to any number of unknown, unstated tests) and code 0114U, methylation analysis for Barrett's.

But don't get your hopes up: these codes don't appear to be there for "coverage," since the billing article opens with the unpromising statement, "To receive a denial, please submit the following."

Analogy to CMS Colorectal Screening Blood Test NCD

Famously, in 2021, CMS released an NCD for blood tests for colorectal cancer screening, promising to cover them if FDA-approved and if 74% sensitive (picking up about 3 cancers in 4) and 90% specific (about 1 false positive in 10 people without cancer).  No current tests meet that bar (!), although several probably will within a few years. 
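As a back-of-the-envelope sketch (my own arithmetic, with an assumed screening prevalence - not figures from the NCD), here is what those minimum thresholds of 74% sensitivity and 90% specificity would look like per 1,000 people screened.

# Back-of-envelope sketch: the NCD's minimum thresholds (74% sensitivity,
# 90% specificity) applied to an assumed screening population.
sens, spec = 0.74, 0.90
prevalence = 0.005            # assumed ~5 colorectal cancers per 1,000 screened
n = 1000

cancers = n * prevalence                      # 5 cancers in the group
caught  = cancers * sens                      # ~3.7 detected ("about 3 in 4")
missed  = cancers * (1 - sens)                # ~1.3 missed
false_p = (n - cancers) * (1 - spec)          # ~99.5 false positives

print(f"Per {n} screened: {caught:.1f} cancers caught, "
      f"{missed:.1f} missed, {false_p:.1f} false positives")
# At screening prevalence, most positives are false positives -- which is why
# the specificity floor (and the follow-up colonoscopy) matters so much.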

Similarly, MolDx opens by saying that no molecular Barrett's tests are covered yet, but provides performance and publication criteria that would trigger coverage in the future, on a rolling basis.

TissueCypher - Barrett's

Very recently, as I have blogged, CMS declared the Cernostics/Castle Biosciences test "TissueCypher," 0108U, to be an ADLT (price $2350).  Since it is an ADLT, by definition it must be "covered" - probably by Novitas, since TissueCypher is performed in Pittsburgh.  However, I couldn't find any coverage info on 0108U in the CMS coverage database.  Code 0108U is not mentioned in the new MolDx LCD or article - which it wouldn't necessarily be, since Cernostics doesn't bill into MolDx.  However, MolDx has written coverage (or non-coverage) for non-MolDx tests before, such as the MolDx LCD for 4KScore.

Vagueness

Some features are still cryptic in these LCDs.  For example, at some future point, MolDx might have reviewed 6 Barrett's tests, passing 2 and failing 4, all under code 81479.  All you see in the article is a bare reference to "code 81479," and you don't know about the reviews, the passes, or the fails.  Technically, that information could be found in the MolDx DEX database, but only if you knew in advance the brand names of the labs and tests, which you don't.

And the LCD doesn't spell out some criteria labs may debate during development.  How many patients need to be studied?  Is a single-center study OK, or not?  Do there have to be two separate studies, or is one enough?  What proportion of patients need to be age 65 or older?  These are key planning questions that aren't mentioned or alluded to in the LCD evidence criteria, but they are something that might be gleaned in pre-meetings with MolDx.
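As one example of why the how-many-patients question matters, here is a hedged back-of-the-envelope calculation (my own assumptions, not MolDx criteria): if a test truly runs around 95% sensitive, the number of confirmed dysplasia cases enrolled determines how tight a confidence interval the lab can report.

# Illustrative only (my assumptions, not MolDx criteria): how the number of
# confirmed-positive cases affects the confidence interval around an observed
# sensitivity of ~95%, using an exact (Clopper-Pearson) interval.
from scipy.stats import beta

def exact_ci(successes, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

for n_cases in (40, 100, 300):                # confirmed dysplasia cases enrolled
    detected = round(0.95 * n_cases)          # observed sensitivity ~95%
    lo, hi = exact_ci(detected, n_cases)
    print(f"{detected}/{n_cases} detected: 95% CI {lo:.2f} to {hi:.2f}")
# Roughly: 38/40 gives ~0.83-0.99; 95/100 gives ~0.89-0.98; 285/300 gives ~0.92-0.97.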







Brief Blog: CMS Releases Annual Inpatient Proposed Rule (FY2023)

As usual for the second half of April, CMS has released the proposed inpatient rule for the upcoming fiscal year.

  • Find the press release here.
  • Find the detailed Fact Sheet here.
  • Find the home page for the rule here.
  • Find the pre-publication (typescript) PDF here.  1768pp.
    • Typeset rule in Federal Register, May 10.
    • Comments to June 17.
See a quick ten-point summary at Becker's, here.

Last year, there was quite a bit of policymaking around what level of detail to require in ICD-10 codes.  At the same time (Spring 2021) that CMS was proposing reasonable changes for DRG coding, CMS had proposed just-plain-crazy changes to coding for the Next Gen Sequencing NCD, 90.2 (story here).

NTAP

Find the always-interesting sections on add-on payments for new technologies (NTAP), beginning at typescript page 247 and running to page 629.  It looks like there are 13 products ("a" through "m").  (Fun fact - they are discussed in alphabetical order.)  There's a discussion of "alternative" NTAP pathways at page 547, of whether to post applications (p. 623), and of whether to use NDC codes to identify NTAP drugs (p. 616).  

Several recent NTAP products have used artificial intelligence (e.g. screening CT images for pulmonary embolism or stroke).  Some have won NTAP payment - interesting, since a radiology AI system for bone mass and fractures is classified as unpayable in the hospital outpatient system (see the CMS OPPS rule for code 0691T).   

This year, the NELLI Seizure Monitoring system is discussed "to enable detection of epileptic events using pretrained artificial intelligence." (P. 585.)  

Also discussed is the TAVI Coronary Obstruction Module (p. 592), described as using "intelligent decision support powered by AI and machine learning."  Those look like the only two mentions of AI or ML products in this year's rule.

SEP-1

For those who track debates around the sepsis measure SEP-1 (NQF 0500), it's in the rule in its usual place among quality measures, and it's scheduled to stay there through at least FY2028 (p. 1207).  It's one of only two laboriously chart-abstracted measures, along with "elective delivery" (NQF 0469), which also requires manual abstraction but involves an event quite rare among Medicare patients.







Brief Blog: White Paper on CMS Innovation - 4 Minute Video Also Available

Earlier this month, I was excited to be able to publish a 3-page article on CMS and innovation.  What are some ways CMS actually supports innovation well?  What are some of the key problems?

Find the article here.  Find the trade journal, Inside Precision Medicine, here.

______

If you like a multi-media approach, I've also produced a snappy four-minute video summary of the article.  Find it at YouTube:

https://www.youtube.com/watch?v=waJY6fXe1N0

  



Monday, April 18, 2022

Very Brief Blog: AMA CPT's "APPENDIX S" - Artificial Intelligence Taxonomy

I think this may have slipped by me in the fall of 2021.

The CPT 2022 code book has "Appendix R," which is a two-page, 10-column table of Digital Medicine code categories.

Too late for the 2022 code book, but online at the AMA website, see "Appendix S," which is a two-page, multi-category taxonomy of "artificial intelligence for medical services and procedures."

Find the AMA page for the project here.  This webpage includes a brief summary of the aim and result.

https://www.ama-assn.org/practice-management/cpt/cpt-appendix-s-ai-taxonomy-medical-services-procedures

Find the 2-page "AI" PDF taxonomy here:

https://www.ama-assn.org/system/files/cpt-appendix-s.pdf



Related - as part of a project today, I learned that CPT code 0691T, which is for autonomous classification of bone mass and fractures, is classified by CMS as "not payable" in the hospital outpatient setting (OPPS APC Appendix B, for insiders).   

0691T is an unpriced code in the 2022 physician fee schedule tables for Part B; there, 0691T is set up to take -TC and -26 modifiers, and pricing policy appears to be left to contractors.  See a July 2021 trade journal article about 0691T here.

Friday, April 15, 2022

CMS Creates New ADLT Code: 0108U, Cernostics TissueCypher

 CMS has updated its ADLT test page to show a new test, 0108U, the Castle Biosciences/Cernostics "TISSUECYPHER" test for Barrett's esophagus.  Castle press release here.

Find the listing here:

https://www.cms.gov/files/document/advanced-diagnostic-laboratory-tests-under-medicare-clfs.pdf

The test is priced at $2350.

Code 0108U is for "Gastroenterology, Barrett's esophagus, whole slide digital imaging, including morphometric analysis, computer assisted quantitative immunohistochemistry of 9 protein biomarkers, and morphology, FFPE tissue, algorithm as risk of progression to high grade dysplasia or cancer."

I had not been aware of 0108U as a "covered test."  (CMS prices tests with codes, whether or not they're covered.)  However, it has to be a "covered test" to win ADLT status, so it must be.   

Concentration of ADLTs

I tally 12 ADLT codes, with this ownership:

  • 4 for Castle, 2 each for Biodesix and FMI, and one each for Guardant, Natera, Myriad, and Veracyte.  (Note that the CMS ADLT page shows a melanoma test, 0090U, as pertaining to "Myriad," but it was traded to Castle.)

Two of the tests are a special case (FMI liquid biopsy panel 0239U, Guardant liquid biopsy panel 0242U): they are ADLTs conferred via the FDA route to ADLT status, rather than by the MAAA route, which adds a "uniqueness" criterion.  Those two tests are also automatically covered under NCD 90.2, which covers FDA-approved NGS CDx tests.

_________________

I initially found the conversion of 0108U to an ADLT puzzling.  Creation of a new ADLT code is pretty cut and dried and follows ADLT regulations.  The test has to be covered by CMS, and be a MAAA test of DNA, RNA, or protein, and be "unique" as interpreted by CMS.   Then, the test is initially priced at the list price, which is carefully defined in regulation as the lowest publicly available price on the first day the test is offered to the public.  OK.

Code 0108U is not so new (we're approaching 400 PLA codes), and it already had a fee schedule price of $2513.  I had previously thought that if a test was already priced on the CLFS, becoming an ADLT was still possible but wouldn't trigger a new-test pricing period.  I may have misunderstood.  A rule like that may have applied only prior to 1/1/2018 (see the footnote on the ADLT PDF page linked at top); that is, denying a new ADLT pricing period to an already-priced code may apply only if "payment was made on CLFS before 1/1/2018."

Back to the MAAA definition.  An ADLT of the MAAA type must be based on analytes of DNA, RNA, or protein (quoting the statute).  CMS must have determined that the immunohistochemical measurements in the Cernostics test are a measurement of "protein" (intensity of staining measures protein).  

_________

Cernostics isn't listed in the MolDx DEX registry; Castle has four tests listed there, but not TissueCypher.  I looked in the CMS Coverage Database for 0108U and didn't find an entry for it.  Cernostics is based in Pittsburgh, so it would bill into Novitas from that location.  Cernostics was acquired by Castle in December 2021.

Above image from: https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/ClinicalLabFeeSched/Clinical-Laboratory-Fee-Schedule-Files