Saturday, September 23, 2023

Lecture Notes: German Health Minister Sets Goal for Fast Digital Advances in German Healthcare

Some readers may have noticed that I am a lifelong German-language hobbyist, and I know the health system in Germany better than that of any other European country.  Sometimes this pays off, as it did this week, when I attended a fascinating lecture by Germany's Health Minister on dramatic new goals for a digital push forward in the German healthcare system.  The event was a joint Q&A with Dr. Karl Lauterbach and Dr. Micky Tripathi, the National Coordinator for Health Information Technology at the US HHS.  It was held at the Kennedy School at Harvard on September 22.

Dr. Lauterbach, who holds a PhD in health policy from Harvard, became well known in Germany during the COVID crisis but did not ascend to the position of Health Minister until the change of government in late 2021.  As prep, I ran across a 30-minute interview that Lauterbach gave a few days ago with the German newspaper DIE ZEIT, on AI in healthcare (German; here).  He recapitulated many of the same points in Boston this week.

Lauterbach focused on four or more pieces of legislation just now moving through the drafting, Cabinet-approval, and parliamentary process in Berlin.  The result would connect doctors' offices, hospitals, and pharmacies throughout Germany with a unified electronic health record.

Moreover, there would be an "outer ring" and an "inner ring," per Lauterbach.  The outer ring would be encrypted but patient-specific, and patients (who could opt out entirely) could easily allow transfer of clinic visits and history to new providers.  The inner ring would be anonymized and would constitute a regulated, secure "Federal Research Space" serving as a learning health system and supporting research.  AI could help retrospectively create structured records from the original free-text narratives.

Other features mentioned by Lauterbach included a reform of the research approval process, so that high standards of patient safety could be maintained while shrinking the bureaucratic approval timeline to 30 days.  Both in DIE ZEIT and at Harvard, he referred several times to 2025 as the goal for putting major pieces in place (this seemed remarkable to me).

In the DIE ZEIT interview, countries like Israel and Sweden were held up as models and goals for Germany's great leap forward in electronic health records.  The interviewer also noted that Germany was not only working with, or learning from, nations like Israel and Sweden, but that Microsoft and Google were also at the forefront of digital health and AI.  Echoing this exactly at Harvard, Dr. Tripathi closed by noting that in the last few days he had been keynoting at meetings where he was preceded or followed by Microsoft or Google.

##

Context.  It's difficult for me to entirely place this "new plan" in context, as a quick Google search of the last five years shows a plethora of articles about prior legislation and prior goals and deadlines to help Germany "leap forward" from its backward position in digital health records and interoperability ("Digitalisierung").  E.g., this page is dated 2020.  Compare a 2023 essay here.  Other articles discuss the shaky state of German hospital economics, and Covington just published an article updating us on pending major hospital DRG reforms in Germany, here.

AI Corner

I left the Lauterbach-Tripathi session with a page of fragmented phrases, keywords, and roughly typed notes.  I dumped them into ChatGPT and asked it to edit them into an elegant, journalist-style essay.  Here.
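For readers who would rather do this kind of notes-to-essay cleanup programmatically than in the chat window, here is a minimal sketch assuming the OpenAI Python client; the model name, prompt wording, and snippet of notes are illustrative placeholders, not the exact inputs I used.

```python
# Minimal sketch of turning rough lecture notes into an essay via the API,
# assuming the OpenAI Python client (pip install openai) and an OPENAI_API_KEY
# in the environment. Model name, prompt, and notes are illustrative only.
from openai import OpenAI

client = OpenAI()

raw_notes = """Lauterbach / Tripathi, Kennedy School 9/22
unified EHR for offices, hospitals, pharmacies
outer ring (encrypted, patient-specific) vs inner ring (anonymized research space)
research approvals cut to 30 days; target 2025"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "Edit these fragmented lecture notes into an elegant, journalist-style essay."},
        {"role": "user", "content": raw_notes},
    ],
)

print(response.choices[0].message.content)
```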

Bonus Book

Health care change, let alone health care reform, has always been messy, chaotic, and confusing.  See a 2021 book by a Yale professor, Peter Swenson, covering 150 years of zig-zagging US health policy - Disorder.

Journal Club: New JAMA Essay on AI in Healthcare (Gottlieb & Silvis)

In a new, open-access JAMA Health Policy essay, "How to Safely Integrate Large Language Models Into Health Care," two authors with both industry and FDA experience discuss pathways for integrating large language models (LLMs) into healthcare. Dr. Scott Gottlieb served as FDA Commissioner, and Lauren Silvis is a former senior FDA regulator now at TEMPUS.

The essay highlights the potential of LLMs to enhance patient interactions and healthcare delivery.  Three takeaways:

LLM Potential: The essay underscores LLMs' transformative potential in healthcare, enabling human-like text generation, aiding in diagnosis, and providing ongoing patient support.

Staged Integration: The authors advocate a cautious and phased approach, starting with well-understood conditions, to help ensure safe LLM integration and boost confidence.

Data and Bias: Effective LLM use in healthcare demands improved data sharing, bias detection, and inter-institutional collaboration for accurate and unbiased results.



Wednesday, September 20, 2023

Very Brief Note: 153 Comments Posted on "TCET" Proposal

CMS has posted (after a few weeks' delay) the 153 comments on its TCET (emerging technology) proposal:

https://www.regulations.gov/docket/CMS-2023-0107

TCET was discussed multiple times at a September 19, 2023, House Energy & Commerce hearing on Medicare and innovation.

https://www.discoveriesinhealthpolicy.com/2023/09/september-19-2023-congress-holds.html


Tuesday, September 19, 2023

September 19, 2023: Congress Holds Hearing on CMS and Innovation

On September 19, 2023, the House Energy and Commerce Committee held a hearing on "Examining policies to improve seniors’ access to innovative drugs, medical devices, and technology."

Find the home page here.  The agenda included Dr. Dora Hughes, chief medical officer for CMS. The E&C website links to the YouTube archive video stream.  Her ten-page testimony is here.  (Most of it was a fairly dry review of LCD, NCD, and CED processes.)

Chair's remarks here.  

The most interesting document is the six-page hearing memo here.  Remember, for Dora Hughes this is essentially a hostile audience - the Republican House vs. the Democratic HHS administration.  The hearing memo reviews 10 or 12 "legislative fixes" that have been proposed to improve the NCD and other CMS processes.



AI CORNER

I've posted an unofficial auto-transcript, beginning with an unofficial detailed summary, in the cloud here.



MolDxology: DEX Registry Allows Categorical Browsing

I'm not sure when this started, but I'm pretty sure it wasn't here before.  The MolDx online Z-code registry, DEX (which shows everything about a registered test EXCEPT its actual secret Z code), has always allowed searching by test name and lab name.

(Test name can be a real wild west, and lab name can be complicated too, in cases of acquisitions and subsidiaries.)

Now Palmetto DEX allows searching by five different drop-down categories.  At the left-hand side, these are disease, FDA status, medical specialty (e.g., cardiology), method, and test type.  (Test type is "diagnostic, confirmatory, predictive," etc.)  Find DEX here:  https://app.dexzcodes.com/

There appears to be a built-in "and" logic.  For example, here I searched for the Test Type category "DIAGNOSTIC" while I happened to have left the test name "RaDaR" in the name search field.  This excluded quite a few variants of the RaDaR test, showing only the one classed as "diagnostic."
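As an aside, the combined filters appear to behave as a simple logical AND across whatever fields you fill in.  Here is a toy sketch of that logic; the records and field names below are made up purely for illustration and are not the actual DEX data model.

```python
# Toy illustration of the apparent "AND" logic across DEX search fields.
# The records and field names below are hypothetical, for illustration only.
tests = [
    {"test_name": "RaDaR", "test_type": "Monitoring"},
    {"test_name": "RaDaR", "test_type": "Diagnostic"},
    {"test_name": "SomeOtherTest", "test_type": "Diagnostic"},
]

# Whatever the user fills in acts as a conjunction of filters.
filters = {"test_name": "RaDaR", "test_type": "Diagnostic"}

matches = [t for t in tests
           if all(t.get(field) == value for field, value in filters.items())]
print(matches)  # only the RaDaR entry classed as "Diagnostic" passes both filters
```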

The DEX registry is available with free email registration.


##

Bonus

The New York State "Wadsworth" system allows searching for lab tests endorsed by New York State, here.  See an article in 360Dx.

##

AI Corner:


This blog post, in French.

ACLA, Others, Comment on Verbose and Confusing Novitas LCD for Oncology

Last year, Novitas issued a draft LCD for oncology that primarily deferred coverage judgments to compendia like NCCN.  They finalized a far longer, more confusing version of the LCD this past June, withdrew it in July, and reissued it as a "draft," also in July.  It is accompanied by an extremely long and confusing billing article.  See my July blog about the "re-issue" here.

ACLA, in a press release, flags some of the public comments, including one by a multi-stakeholder group and one by ACLA itself.  Stakeholders quarrel with the deferral of coverage decisions to NCCN and other guidelines (except those made in a one-off way for a few tests).

See the ACLA press release here.  It has a number of links in it, including the multi-stakeholder letter, the ACLA letter, and its supplement.

The comment letters are worth reading, but frankly, I'm surprised they all simply take the very long, verbose, repetitive, and confusing policy and article at face value and try to respond rationally to them.

For the AMP response, not yet posted, check here.

##

Open Meeting Transcripts Available.

See the LCD open meeting page here.  See the transcript (and audio) listing here.  The transcript runs 30,000 words.

##

NCCN Compendia for Drugs - As a Model

If you wanted to use compendia as a guide to coverage (more a floor than a ceiling, though), there's an excellent precedent.  SSA 1861(t)(2) refers to cancer compendia, and CMS rulemaking provides process guidance.  See the CMS webpage here.  The regulatory discussion, in 2009, was just a few pages long (here), and the process regulation at 414.930 (here) runs just a few paragraphs of normal, clear, professional English.  (For NCCN's 10-page request letter to be endorsed, here.  For CMS's review of it, here.)


AI Corner #1.

Here is a summary by ChatGPT of the multi-stakeholder letter.

The letter, dated September 8, 2023, is addressed to Dr. Patrick Mann and Dr. Alicia Campbell regarding concerns about the draft local coverage determinations (LCDs) for "Genetic Testing for Oncology" by Novitas and First Coast Service Options. The letter highlights fears that the LCDs, if finalized as drafted, would limit Medicare beneficiaries' access to genetic testing for cancer, potentially violating Social Security Act and Medicare Program Integrity Manual requirements. Concerns include default non-coverage for tests not in knowledgebases, ICD-10 code issues, documentation requests, and restrictions on hereditary cancer syndrome testing. 

The letter requests collaboration with stakeholders to address these concerns and ensure access to essential genetic tests for cancer diagnosis and management. Signed by numerous healthcare organizations.

AI Corner #2

A summary by ChatGPT in the style of Lewis Black.  Here.


Friday, September 15, 2023

Brief Blog: Medicare Fiasco News: Patient Bowled Down by Obscure SAD List

BULLET.  Patient trapped with sudden $176,000 drug bill due to an obscure Medicare MAC policy change. +Lawsuit.

####

Background

Medicare policy makes extensive use of "incident to" services.  For example, there is a benefit for durable medical equipment like drug pumps, and the drugs they pump are actually secondary tag-alongs to the DME benefit for the pump (weird!).  Drugs that are administered in physician offices, like chemotherapy infusions, are covered "incident to" the service of a physician.

There are some rules around this.  The drug must NOT be self-administered HALF the time or more.  MACs are required to keep and update lists of injectable drugs that are NOT self-administered, and thus eligible for office payment.  CMS defines this across all patients; if one patient is quadriplegic, for example, that doesn't matter when 51% of all patients self-administer the drug (such as insulin).
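A minimal sketch of that population-level logic, as I understand it (the drug labels and percentages below are hypothetical, not taken from any actual SAD list):

```python
# Sketch of the population-level "self-administered drug" (SAD) rule described
# above, as I understand it. Drug labels and percentages are hypothetical.

def excluded_from_incident_to(pct_patients_self_administering: float) -> bool:
    """A drug is treated as self-administered (and excluded from 'incident to'
    office payment) when more than 50% of all patients using it self-administer,
    regardless of any individual patient's circumstances."""
    return pct_patients_self_administering > 0.50

for drug, pct in [("hypothetical_drug_A", 0.51), ("hypothetical_drug_B", 0.30)]:
    status = "on SAD exclusion list" if excluded_from_incident_to(pct) else "payable in office"
    print(f"{drug}: {pct:.0%} self-administer -> {status}")
```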

MACs don't always agree (SAD lists can differ) and big debates sometimes occur.  See a sample "excluded" list here.

Today's News

MEDPAGE TODAY has a detailed article (by Cheryl Clark) about a patient who faces a major crisis because his drug was switched to "self-administered, not payable in office" status by the MAC for the region where he and his doctor live.  (The article may require email registration.)  He got a bill for $176,000, whereas up to that point his 80% payments were covered by CMS and his 20% payments were covered by his Medigap plan.  A lawsuit by the Center for Medicare Advocacy is in flight.

Find the article here:

https://www.medpagetoday.com/special-reports/exclusives/106338



###

Another example of a 50% rule separates physician services from other services.  Medicare classifies a service as a physician service IF AND ONLY IF a physician performs the service more than 50% of the time.  (42 CFR 415.102.  The rule doesn't state 51% numerically, but this is how CMS implements the phrase "ordinarily" performed by a physician.)

The pathology rule is even tougher: the test/service must REQUIRE (quote-unquote) the service of a pathologist (415.130).  Back in 2012-2013, some stakeholders wanted the then-new genetic test codes to be on the physician fee schedule, but CMS determined that genetic tests did not "require" physician signout (the lab director can be a PhD).  (My 2012 white paper is still downloadable here.)

###

Article summary.

Medicare unexpectedly changed its policy on the drug Stelara, classifying it as "self-administered" (SAD) on October 15, 2021. This led to retirees like George Beitzel, suffering from Crohn's and Parkinson's diseases, facing unexpected bills of up to $176,000 for previously covered injections. The Center for Medicare Advocacy filed a class-action lawsuit against this policy shift, arguing for notice, cost waivers, and professional administration options for patients unable to self-administer. Thousands of Medicare beneficiaries may be affected, raising concerns about the impact of such changes on patients' health and financial well-being.

As Haiku:

Medicare's surprise,
Stelara reclassified,
Burdens patients' lives.

##

AI Corner.

I fed a CMS MAC SAD article to ChatGPT and asked it to figure it out and explain it.  Here.

Very Brief Blog: MAC CAC on BOTULINUM

While we think of botulinum toxin first for cosmetic uses, it has a range of medical uses as well, such as in neuromuscular conditions (e.g., blepharospasm).  An example of a MAC LCD is here.

The MACs, all of them, have a "multi-jurisdictional" CAC (public expert advisory) meeting on botulinum coming up on Thursday, October 19.

FCSO link here (entry point).

NGS MAC runs the show here.

A 17-page background and question list is here:

https://www.ngsmedicare.com/documents/d/ngs/2118_0923_questions_for_botulinum_toxins_sme_panel_508-pdf



Very Brief Blog: Interesting Posts from Center for Genomic Interpretation (CGI)

While I don't agree with every position that the Center for Genomic Interpretation (CGI) takes, they held my attention for half an hour today as I looked through their blog posts and LinkedIn posts from recent months.

Find their blog posts here:

https://www.genomicinterpretation.org/blog/

And find their LinkedIn article feed here:

https://www.linkedin.com/company/center-for-genomic-interpretation/posts/?feedView=all

For example, a recent blog here points to a peer-reviewed article here: an August 2023 paper by Patel et al. titled "Genomic Data Heterogeneity across Molecular Diagnostic Laboratories: A Real-World Connect Myeloid Disease Registry Perspective on Variabilities in Genomic Assay Methodology and Reporting."

For another recent example, they linked to the CLIA Advisory Committee (CLIAC) with regard to an August workgroup on NGS in CLIA labs.  Find the blog here.  You can link through to a 29-page CLIAC summary document here, or the direct PDF at CDC here.

See a CGI blog from several months ago on what they see as weak points or flaws in recent state-level biomarker legislation - here.

Fun fact - the laws vary across the 10 states where they have been passed.  There's a website that details the differences, but note that the table is *very* large and long and hard to read.  (I didn't even realize the table was there for a couple of minutes; scroll down and down.)

See a link to Pfeifer et al. 2022, who discuss reference samples for inter-lab comparisons of NGS, here.



Thursday, September 14, 2023

Multiple Articles: Concerns about Quest LDT Alzheimer Blood Test?

FDA-approved Amyvid (PET Scan) advertised next to LDT article

Multiple news articles this week quote experts expressing concern about an LDT (lab-developed test) being marketed as an Alzheimer's diagnostic.  The test is a blood test; the only FDA-validated tests for Alzheimer's so far have been PET scans and CSF tests.

See an article from Reuters here.  With remarks such as, "Dr. Sarah Kremen, a neurologist at Cedars-Sinai in Los Angeles, was concerned that people who test positive but have no symptoms will come in seeking further testing."

See an article at MedPage here.  With remarks such as, "There are no large-scale, long-term clinical trials that support the idea that the AD-Detect test can predict whether a cognitively unimpaired person will transition to cognitively impaired," said Rebecca Edelmayer, PhD, senior director of scientific engagement at the Alzheimer's Association in Chicago. "As a result, it is unclear what the results of this test may mean about your Alzheimer's risk or your health status."

The longest article, by Adam Bonislawski, is by subscription at 360Dx.  It notes that mass spec-based assays for the Aβ 42/40 ratio have higher performance ... than do immunoassays, but "you have to be very rigorous about" your measurements, said Suzanne Schindler, associate professor of neurology at the Washington University School of Medicine. "There's only about a 10 percent difference between positives and negatives, and so if you are off by a little bit or your assay drifts, then you can really misclassify a lot of people," she said, adding that Quest has released little data on the analytical and clinical performance of its test. [The article continues: "At the 2022 Alzheimer's Association International Conference, Quest presented a poster on AD-Detect, and said it aims to publish data on the test in a peer-reviewed publication."]

##

The test is NY State approved; search for analyte amyloid and facility Quest, here.

The test is stated in an article above to be 71% specific, which could mean about 1 in 3 unaffected patients would get a false positive, although the negative and positive predictive values are highly dependent on the test population (population spectrum).  One historical problem - not necessarily relevant here - is Alzheimer's tests over the decades that were validated on 100 perfect controls and 100 perfect Alzheimer's cases, but then performed much worse on the real-world and borderline patients who actually need testing.
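To make the arithmetic concrete, here is a minimal sketch of how specificity and prevalence drive false positives and predictive values.  The 71% specificity figure is from the article; the sensitivity and prevalence values below are illustrative assumptions, not Quest's published performance.

```python
# Sketch: how specificity and prevalence drive predictive values.
# The 71% specificity is from the article; sensitivity and prevalence
# values are illustrative assumptions, not Quest's published figures.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) for a test applied to a population."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# 71% specificity means ~29% of truly negative patients test positive (about 1 in 3).
for prev in (0.05, 0.20, 0.50):
    ppv, npv = predictive_values(sensitivity=0.85, specificity=0.71, prevalence=prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```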

##

AI Corner:

ChatGPT reviews the three articles, then describes them in the satirical voice of Lewis Black, here.


Wednesday, September 13, 2023

Journal Club: (1) Scenarios for Whole Genome Seq Adoption, (2) Scenarios for Any MedTech Adoption

This week I ran across two excellent papers on medtech adoption.  

One is recent: van de Ven 2021, on adoption of whole genome sequencing.  The other (found via a citation within van de Ven) turns out to be a classic with 1,200 citations: Greenhalgh 2017, on principles for the adoption of any medtech.


van de Ven 2021: 

Whole genome sequencing in oncology: Using scenario drafting to explore future developments.  BMC Cancer 2021.

The van de Ven paper explores the use of scenario drafting and expert elicitation to anticipate future developments in the implementation of whole genome sequencing (WGS) in clinical oncology.

It identifies potential barriers and facilitators, highlighting the importance of factors such as price, clinical utility, and turnaround time in determining the likelihood of WGS adoption, offering valuable insights for policymakers and stakeholders in genomics.

Open access here.

For further reading see Ellis 2023 or Fleck 2023.


Greenhalgh 2017:

Beyond Adoption: A new framework for theorizing and evaluating nonadoption, abandonment, and challenges to the scale-up, spread, and sustainability of healthcare technologies.  J Med Internet Res 2017.

The Greenhalgh paper introduces the NASSS framework, which offers a comprehensive and nuanced approach to understanding the adoption, scale-up, spread, and sustainability of complex medical technologies. The authors emphasize the importance of recognizing the non-linear and context-dependent nature of technology adoption and highlight the need to address factors related to the technology, the organization, the wider system, and the individual in healthcare settings.

Open access here.

For more about NASSS, see an implementation for a cardio health tracking device, open access, Abimbola 2019, here.  See a shorter overview of NASSS ("Cliff notes"), Greenhalgh & Abimbola 2019, here.  See a 2020 paper that introduces a "NASSS Toolkit" here.


Bonus 1:

There's a new 2023 paper by Greenhalgh et al. on the complexity of sorting out differences in values among stakeholders in healthcare.  Find it open access at Milbank Quarterly, here.

Bonus 2:

Greenhalgh et al. 2023 wrote a letter to the editor of Annals of Internal Medicine about a review of N95 masks.  It contains this great quote:

  • "Some members of the evidence-based medicine community seem to assume that a randomized controlled trial, however imperfectly and illogically designed, is necessarily superior to other forms of evidence. This is not the case."
  • Find it open access here.
  • See a lively video by Greenhalgh on flaws of evidence based medicine here.

AI Corner x 3

Find a ChatGPT discussion of Greenhalgh 2017 here.  For example, ChatGPT applies the framework to genomics and to digital pathology.

Find a ChatGPT discussion of van de Ven 2021, futurism in WGS, here.   

Find a ChatGPT discussion of Greenhalgh 2023, on complex values in healthcare, here.





Tuesday, September 12, 2023

Stranger Things: USPSTF Launches, Then Halts, Lynch Syndrome Guidance

Currently, at the USPSTF website, there's an announcement that they launched an evidence review plan for Lynch syndrome in February 2023.  Lynch syndrome is a hereditary condition, tied to a set of genes, that raises the risk of colorectal and some other cancers.  USPSTF here:

https://www.uspreventiveservicestaskforce.org/uspstf/announcements/public-comment-draft-research-plan-prevention-lynch-syndrome-related-cancer

FORCE (Facing Our Risk of Cancer Empowered), which represents familial cancer stakeholders, just announced that the USPSTF Lynch effort has been nixed.


At its website, FORCE has an article describing a two-year process of getting USPSTF to take on Lynch as a topic.

Newly, at the top of that webpage, FORCE has posted a notice that on August 30 USPSTF notified stakeholders like FORCE that the Lynch review "cannot move directly" forward at this time.  FORCE speculates this could be due to understaffing, but USPSTF provided no reason.


Thursday, September 7, 2023

Brief Blog: CMS Updates NCD Wait List

A few weeks ago, I gave a talk on CMS's plans for "TCET," Transitional Coverage for Emerging Technology.  I noted that even if you think it sounds good, CMS has a record of underperforming on coverage innovations.  I listed three things -

1.  Parallel Review was announced with great fanfare and gusto, but was used ultra rarely.

2.  Coverage with Evidence Development is perennially praised, yet its accomplishments are few (not zero, but few).  (CED is a pillar of the proposed TCET).

3.  CMS some years ago announced a national NCD dashboard, but updated it once or twice and then forgot about it (like a child or puppy wandering into a different room and leaving a forlorn toy behind).  Called the NCD Dashboard (on this page), until a few days ago its last annual update (so-called) was from 2019 or so.

It's Updated

CMS updated the NCD Dashboard on August 23, 2023.  It's here:

https://www.cms.gov/files/document/ncd-dashboard.pdf


Finals

In recent years, two NCDs were finalized, for cochlear implantation and for seat elevation (both are also related to benefit category issues).

In Flight

Four NCDs are in process right now.  The beta amyloid PET NCD is proposed for deletion.  A USPSTF prevention benefit is being created as an NCD (PrEP therapy).  There is an umpteenth update of a carotid stenting NCD.  Then there's the fairly recent opening of the stem cell transplantation NCD, opened June 7, with a draft decision expected December 7, 2023.  Myelodysplastic syndrome (MDS) is covered only under CED, and organizations have asked for the CED to be put to rest.  See the nine-page request from ASH and others.

Note - MolDx recently covered whole genome sequencing for MDS therapeutics.  Blog.

Backlogged

Unlike finished or in-process NCDs, backlogged future topics are represented only by a title phrase and are not always penetrable.  Backlog topics include 1) diaphragm pacemakers for neuromuscular disease [please non-cover], 2) hepatitis C screening [USPSTF-endorsed 5/2020], 3) subcutaneous insulin infusion pumps, 4) "power standing systems" (e.g., stand-up?), 5) a pulmonary pressure sensor, 6) ventilators for COPD [a DME issue?], and finally 7) HPV for cervical screening.

The USPSTF recommendation for HPV and cervical screening is a few years old and will be revised shortly.  I suspect there may be some revision issue, like coverage every 3 vs. every 5 years.  Right now, CMS covers annual Pap smears plus one HPV test per 5 years on top of that.  USPSTF is somewhat laissez-faire, allowing any of the following: 1) annual Pap, 2) HPV every 5 years, or 3) both.

CMS has coverage for insulin pumps now, and they must be subcu (not IV or IM), so there is likely some revision in play on that one, too.  

##

TCET Fun Fact

The comment period for TCET closed in late August.  As of September 7, 153 comments had been received, but 149 were still on hold and not yet posted.


Very Brief Blog: Guardant Health Publishes 165-Page Deck for Investor Day

Guardant Health published a 165-page deck in connection with its September 7, 2023, investor day.  Find a webcast, and the PDF deck, online at its investor relations page.

https://investors.guardanthealth.com/events-and-presentations/events/event-details/2023/Guardant-Health-Investor-Day-2023/default.aspx 

https://s26.q4cdn.com/594050615/files/doc_presentations/2023/09/GH-2023-Investor-Day-Deck.pdf







AI Corner: Large Study of AI Test Performance, 32 Courses, Compared to Humans


In a large-scale study published in the Nature-family journal Scientific Reports, essay-question grades for ChatGPT were compared to those of human subjects in 32 classes spread over 8 subject-matter areas.  For example, in the Psychology course "Biopsychology," ChatGPT outperformed the humans, while in the Psychology course "Social Psychology," it did slightly worse.  In the chart below, the AI score is shown in green, the human score in blue.

Find Ibrahim et al. here.
See a news report about the study here.

In each topic area, three real student answers and three ChatGPT answers (a total of six answers) to each of 10 questions were graded by three graders.  (The inter-rater reliability of grading the six answers varied by subject area.)
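For readers curious how inter-rater reliability might be quantified in a design like this, here is a rough sketch using average pairwise correlation among graders; the scores below are hypothetical, not data from Ibrahim et al.

```python
# Rough sketch of quantifying agreement among three graders scoring the six
# answers to one question. Scores are hypothetical, not from Ibrahim et al.
import numpy as np

# rows = graders, columns = the six answers to one question (0-100 scale)
scores = np.array([
    [78, 85, 90, 62, 88, 74],   # grader 1
    [80, 82, 93, 60, 85, 70],   # grader 2
    [75, 88, 89, 65, 90, 72],   # grader 3
])

# Average pairwise Pearson correlation as one simple agreement measure.
corr = np.corrcoef(scores)
pairwise = corr[np.triu_indices_from(corr, k=1)]
print(f"mean pairwise grader correlation: {pairwise.mean():.2f}")
```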

__
The AI classification programs GPTZero and AI Text Classifier were quite imperfect at detecting which answers were AI-generated and which were student-written, making many errors in both directions.  Detection performance fell even further when AI answers were "processed" through a rephrasing program called Quillbot.
___
Some of the computer/math classes required math or coding answers only, not essays.