Sunday, April 21, 2024

Very Brief Blog: FDA Promises Reform, Efficiency in Device Review

At the Medical Device Manufacturers Association (MDMA) meeting in April 2024, FDA CDRH Director Dr. Jeff Shuren promised reforms, over time, that will speed up the device review process while reducing the failure rate.  He refers to it as "agile" review; in software, the term refers to rapid sprints toward new deliverables.

Find an article open access at RAPS:

See also some notes by an observer at LinkedIn:

Jared Seehafer, at that link, shows a slide about "Statutory Red Tape," arguing that the "47 year old framework is not fit for purpose," and proposing solutions to "remove red tape."

Recall that in February 2024, FDA promised (in a press release) to eventually shift many PMA diagnostics to Class II / 510(k).

Friday, April 19, 2024

Tuesday April 23: DECIBIO Webinar on FDA LDT Regulation

Don't miss DeciBio's April 23, 2024, 90-minute panel on FDA regulation of lab-developed tests.  It's at 9 am Pacific, 12 noon Eastern.  I am lucky to be one of six panelists.

Find the registration here:



A DeciBio Consulting Webinar

Executive Roundtable: Navigating the FDA's Laboratory Developed Tests Regulation


Tuesday, April 23, 2024

9:00 AM PT / 12:00 PM ET / 04:00 PM UK

What to expect:

Join us for a digital executive roundtable discussion of DeciBio's white paper "Regulatory Reckoning: Navigating the FDA's Laboratory Developed Tests Regulation". Free download available here.

JAMA on Medical AI Accuracy; Stanford Releases 500-page report on AI 2024

JAMA releases an important short article, "Comparative Analysis of Multimodal Large Language Model Performance on Clinical Vignette Questions," by Han et al.  (here).  The authors are from Berlin, Munich, and Aachen.  See also a 9-minute podcast.

AI performance across a variety of tests and AI models continues to improve, with some differences between models and subject areas.   Many models are competitive with, or exceed,  human performance.

The graphic below shows six models and one human group, graded on harder (left) and easier (right) questions.  In each cluster, the humans are on the right.  All groups perform better on the easier (right) questions, but GPT-4V consistently outperforms humans (far-left vs. far-right column in each cluster).  There are several other graphics in the article.

click to enlarge


Stanford releases its April 2024, 500-page report on progress in AI.  See a news article about it here.

See the home page here.   Get the full PDF here.  Get the 32-page Science/Medicine chapter here.  See a summary of the Stanford report, by Margaretta Colangelo, here.

AI Corner

ChatGPT4 summarizes the Sci-Med chapter.

The chapter on Science and Medicine in the 2024 Artificial Intelligence Index Report highlights the significant advancements in AI's role within these fields, emphasizing both scientific discovery and medical innovations. Here's a detailed summary:

In 2023, AI's influence in scientific research led to substantial developments in various areas, from weather forecasting to materials science. One key highlight was the introduction of AlphaDev, an AI system that developed efficient sorting algorithms, marking significant progress over decades of manual research. Another noteworthy project, FlexiCubes, utilized AI for optimizing 3D mesh constructions, crucial for applications ranging from gaming to scientific visualization.

The medical field also saw remarkable AI-driven innovations. Systems like SynthSR have transformed brain scan analyses by synthesizing standard medical scans into high-resolution images, aiding in both research and clinical settings. Furthermore, AI's role in diagnosing and managing diseases has expanded, illustrated by the development of AI tools like EVEscape for pandemic prediction and the advanced mutation classification capabilities of AlphaMissense.

Moreover, AI models have been increasingly recognized by regulatory bodies such as the FDA, which approved a significant number of AI-related medical devices, underscoring their reliability and effectiveness. The chapter also delves into AI's potential to enhance healthcare administration and patient care, citing systems like MedAlign, which aims to reduce administrative burdens through AI-driven solutions.

Overall, the chapter provides a comprehensive look at how AI is reshaping the landscapes of science and medicine by accelerating research processes, enhancing diagnostic accuracy, and improving overall efficiency in healthcare operations.


Sidebar: 10 Key Takeaways

AlphaDev: AI system developed new efficient sorting algorithms, impacting computational research significantly.

FlexiCubes: AI-driven optimization of 3D mesh constructions, improving applications in diverse fields.

SynthSR: AI transformation of brain scans into high-resolution images, enhancing medical analysis.

EVEscape: AI tool for predicting pandemic patterns, aiding in preemptive healthcare strategies.

AlphaMissense: AI model for classifying genetic mutations, potentially revolutionizing genetic research.

FDA Approvals: Increased approval of AI medical devices, confirming their efficacy and safety.

GraphCast: Enhanced weather forecasting through AI, demonstrating superior accuracy.

GNoME: AI's breakthroughs in materials science, discovering new materials faster.

MedAlign: AI application in healthcare administration, aiming to streamline operations.

CoDoC: Integration of AI in clinical settings, optimizing diagnostic processes and outcomes.

[/end ChatGPT4]

Tuesday, April 16, 2024

Brief Blog: CMS Posts Info for Summer Lab Pricing Meetings (June 25, July 25)

CMS holds two meetings each summer for pricing of new lab tests.  One is a public comment meeting (June 25, 2024), the other is an expert advisory panel which is livestreamed (July 25-26, 2024).

Find the expert panel announcement here:

Find the public meeting announcement here:

Track updates on the CMS CLFS annual meeting page:

UPDATE 4/18:  CMS has posted a code list (74 codes), a template for comments (the same 74 codes but with more columns), and an updated PowerPoint template for public presentations.  All the new GSP codes from last year (such as CGP liquid biopsy) are under Reconsideration.  Otherwise there are only a handful of new CPT 80,000-series codes - several variants of Alzheimer testing (tau, beta-amyloid).

CMS knows the code list is preliminary.  We've seen that some 60 PLA codes, plus some new GSP codes, will be at the May 2024 AMA CPT meeting, and those will probably be rolled into the final June agenda.

I think the PowerPoint template has been updated to bring more focus to cost information.  There's a slide to provide "test background" and "purpose and methodology."  There's a slide asking you to present "resources for test," "costs," and "charges."  On the last slide, they ask you to petition for either a crosswalk or gapfill price, and explain why.
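For readers newer to CLFS pricing, the crosswalk-versus-gapfill choice the template asks about can be sketched in a few lines of Python. This is a simplified illustration, not CMS's actual procedure (real crosswalks may use fractions or multiples of comparator codes, and gapfill plays out over a year of MAC-specific rates), and the dollar figures are hypothetical:

```python
import statistics

def crosswalk(comparator_rates):
    """Crosswalk: the new test is priced off one or more comparable
    existing codes (here, simply their sum, as in a multi-code crosswalk)."""
    return sum(comparator_rates)

def gapfill(mac_rates):
    """Gapfill: each MAC sets a local rate for a year; the national
    rate is then the median of the MAC-specific rates."""
    return statistics.median(mac_rates)

# Hypothetical numbers, for illustration only.
print(crosswalk([120.0]))                      # crosswalk to a single comparator
print(gapfill([100.0, 150.0, 180.0, 150.0]))   # median of hypothetical MAC rates
```

The practical difference: a crosswalk argument leans on comparability to an existing priced code, while a gapfill argument leans on cost and charge data, which is why the template asks for both.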

For updates specific to the expert panel meeting, here:

For several years, the meetings have been virtual-only.  This year they will be "hybrid" (both in the CMS auditorium in Baltimore, and live-streamed).


Public comments must be submitted by May 30, 2024, 5 pm ET.  Use the PPT template for your opening slides; it captures basic info in a fixed format.

The Code Agenda - Wait for it....

The code list (likely well over 100 codes this year) will be released 30 days before the meeting, i.e., around May 25.  Assuming you know whether you have a code in the ring, you don't have to wait for the May posting by CMS to start working on your slides.  CMS will include codes that "pass" at the CPT meeting around May 10, but it takes a while for AMA to get those codes over to CMS to post.


Although I don't see any explicit remarks this year, in some past years the advance requirements to attend with a non-US passport were very elaborate, with no exceptions.  If this pertains to you, I suggest you touch base with CMS ASAP.



Having read the PDF about the experts advisory meeting, retell the story in the style of an old fairy tale.


Once upon a time, in a realm governed by the wise and just, there existed a venerable gathering known as the Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests. This illustrious panel, summoned by the Secretary of the Department of Health and Human Services, was a convocation of the most learned and sagacious minds from across the kingdom, each a master in their own right of the arcane arts of laboratory science and health economics.

In the heart of a grand hall, under the luminous glow of chandeliers that danced with light, the panel convened twice each circling of the sun. They were charged with a task of great import: to deliberate upon the mystical codes of new and reconsidered clinical diagnostic laboratory tests, deciding how gold—the coin of the realm—should be apportioned for their use.

The Panel, composed of molecular pathologists, seasoned researchers, and sages with deep knowledge of laboratory workings, employed ancient methods known as "crosswalking" and "gap filling" to cast their decisions. Crosswalking was a method where the value of a new test was measured against the worth of tests of old, linking them like the stones of a castle wall. Gap filling, on the other hand, was akin to charting unknown territories, setting values in places where no prior paths existed.

Each meeting was a spectacle of debate and discussion, with the panelists' words weaving through the air like spells, casting the future of medicine. They were watched by many, from near and far, through magical mirrors and devices that could capture their likeness and voice, spreading their deliberations across the kingdom in the blink of an eye.

The Panel’s recommendations were like golden keys that unlocked the treasury of Medicare, ensuring that the alchemists and healers across the land could continue their noble work, armed with the best of tools and knowledge. Their decisions were recorded in great ledgers and announced in the town squares and through messengers on swift horses, so that all could know of the wise determinations made within the grand hall.

Thus, the realm thrived, with the Panel’s wisdom guiding the health of its people. And so the story of the Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests was woven into the fabric of the kingdom's history, a tale of wisdom and stewardship, told and retold wherever scholars gathered under the stars.


Write a short, snappy, exciting summary of this CMS policy meeting announcement. Highlight some of the informational content like deadlines, what kind of info to submit if you comment, etc.

Chat GPT4

Join us on June 25, 2024, for a pivotal public meeting hosted by the Centers for Medicare & Medicaid Services (CMS), where we will delve into the future of diagnostic healthcare through discussions on the Clinical Laboratory Fee Schedule (CLFS) for the upcoming year. This is your chance to contribute to shaping Medicare payment structures for new and reconsidered diagnostic laboratory test codes.

At this meeting, we will gather crucial feedback and recommendations on establishing payment amounts for innovative and substantially revised test codes. Whether you're a healthcare provider, patient advocate, industry professional, or a stakeholder in the medical laboratory community, your insights will help ensure equitable and effective payment schemes.

Important Deadlines:

Presentation and Comment Submission: If you wish to present or have your comments considered during the meeting, ensure your presentations and written comments are submitted electronically to the dedicated CMS email address by May 30, 2024, at 5:00 p.m. EDT.

Special Accommodations Request: If you need special accommodations due to a disability, requests must be submitted by the same deadline of May 30, 2024.

What to Include in Your Comments:

Data-Based Recommendations: We seek detailed recommendations based on data that support your views on the appropriate basis for establishing payment amounts.

Details on New or Revised Tests: Include comprehensive information on the purpose, methodology, costs, and charges related to the new or reconsidered tests. This will aid in a transparent evaluation process.

Virtual and In-Person Participation:

The meeting will be accessible both virtually and in person at the CMS Central Building in Baltimore, Maryland, allowing participants from across the nation to join and contribute. If attending in person, registration is required, and we recommend arriving early to complete the security process.

Post-Meeting Actions:

Proposed Determinations Publication: CMS will publish proposed determinations by early September 2024. These will outline preliminary decisions on the payment amounts for each test code discussed during the meeting.

Final Comments on Proposed Determinations: Stakeholders will have until early October 2024 to submit written comments on these proposed determinations.

This meeting not only offers a platform to influence current policies but also to engage with key decision-makers and peers in the healthcare sector. Your participation ensures that the voice of the community is heard loud and clear in the decision-making process. Join us to help define the standards that will govern clinical diagnostic testing payments in 2025!     [end/chat gpt]

Brief Blog: AMA Considering 60 New PLA Codes This Quarter

Typically, there have been about 20 new proprietary PLA codes created each quarter by AMA CPT.   For the Q2 cycle (April-May-June), there are about 60.  A couple of those are revisions.  My tally of new codes doesn't include the 3 or 4 being deleted.    

AMA is posting the roster for public comment here:

(Look for, "review the PLA agenda").

AMA will have an invitation-only April 23 teleconference of the PLA committee to wordsmith and finalize the codes.  Those will then be voted on (usually very quickly) by the AMA CPT committee of the whole in Chicago in May.

Final codes will be released both for the upcoming CMS summer pricing meetings (June 25, July 25) and posted on the AMA website June 30.

Sunday, April 14, 2024

AI Corner: JAMA Updates Guidelines for AI Writing Assistance

[Note: I am traveling, so I asked ChatGPT-4 to be the substitute author and write up this new JAMA article for us.]



Flanagin A et al. (2024) Reporting use of AI in research and scholarly publication - JAMA NETWORK guidance.  JAMA 331:1096-8.


[AI write-up follows.]


Exploring the Nuances of JAMA's Updated AI Reporting Guidance

The Journal of the American Medical Association (JAMA) Network has recently updated its guidelines for reporting the use of Artificial Intelligence (AI) in research and scholarly publications. This is the second update within a year, highlighting the rapid advancements and increasing integration of AI in medical research. These guidelines are designed to ensure transparent, appropriate, and accountable use of AI, fostering a foundation of trust and replicability in scientific publications.

  • Understanding the Updated Guidance

JAMA's revised guidelines reflect an effort to address the challenges and potential pitfalls associated with the use of AI in medical research. With AI's capabilities expanding from predictive analytics to complex decision-making tools, the need for clear reporting standards has never been more critical. Here are several key areas outlined in the guidance:

Transparency in Manuscript Preparation:

Authors must disclose any use of AI in creating or editing manuscript content, specifying the AI tools used, their versions, and their roles in the manuscript preparation process.

Detailed AI Involvement in Research:

For research involving AI, authors are required to adhere to specific reporting guidelines such as CONSORT-AI for clinical trials with an AI component and CLAIM for AI in medical imaging. These guidelines ensure that each study's AI methodology is fully reproducible.

Ethical Considerations and Data Integrity:

The guidance emphasizes the importance of ethical use of AI, including considerations for patient privacy and the handling of data. Authors must ensure that no identifiable patient information is included and must address copyright concerns related to AI-generated content.

Methodological Rigor:

Detailed descriptions of AI applications in study design, hypothesis generation, and data analysis are mandated. This includes a clear delineation of the AI models used, data sets involved, and the assumptions tested.

Impact on Clinical Decision-Making:

The guidance raises concerns about AI's role in clinical decision-making processes, urging authors to discuss the potential biases and inaccuracies of AI tools and their implications for clinical outcomes.

  • Implications for Medical Experts

For medical professionals and researchers, the updated JAMA guidance serves several vital functions:

Enhancing Reliability: By standardizing the reporting of AI applications in research, the guidelines help ensure that studies are reliable and their results are reproducible, a cornerstone of scientific inquiry.

Fostering Innovation: Clear guidelines provide a roadmap for incorporating AI into research in a way that is both innovative and ethically responsible, pushing the boundaries of medical science while maintaining rigorous standards.

Educating the Community: The guidelines also play an educational role, informing researchers about the potential and limitations of AI in medical research and encouraging a thoughtful approach to its application.

Preparing for Future Challenges: As AI technology continues to evolve, these guidelines will likely be updated to address new challenges and opportunities. This dynamic approach ensures that the medical research community remains at the forefront of technological advancements.

  • Conclusion

JAMA's updated AI reporting guidelines are a testament to the evolving landscape of medical research in the era of artificial intelligence. They not only promote transparency and accountability but also encourage thoughtful engagement with new technologies. For medical experts, staying informed about these guidelines is crucial in navigating the complexities of AI in research, ensuring that their work is both impactful and in line with the highest standards of scientific integrity.   [ / Chat GPT4]

Bonus: "Write a second article, this time in the voice of a clinical researcher."

Navigating JAMA's Updated AI Guidelines: A Clinical Investigator's Perspective

As a clinical investigator who frequently publishes in JAMA and regularly integrates Artificial Intelligence (AI) into my research, the updated guidelines on reporting AI use are both a welcome development and a crucial framework for my work. These guidelines reflect the evolving landscape of AI in medical research and underscore the importance of transparency and accountability. Here, I will share how these updates influence my research process and the broader implications for the scientific community.

Embracing Transparency in Manuscript Preparation
In my recent projects, I've leveraged generative AI tools to refine the language of manuscripts and generate preliminary data analyses. According to JAMA's new guidance, detailing the use of AI in manuscript preparation is mandatory. Now, in the acknowledgment section of my papers, I explicitly list the AI software used, including the version, the manufacturer, and the specific roles these tools played in preparing the content. This level of transparency ensures that peers can understand the extent of AI involvement and assess the integrity of the reported findings accurately.

Adhering to Rigorous Reporting Standards
The guidance provides a comprehensive list of AI-related reporting standards, such as CONSORT-AI for clinical trials and CLAIM for studies involving medical imaging. My research often involves predictive modeling in clinical prognostication, necessitating adherence to guidelines like TRIPOD-AI. These standards compel me to meticulously document AI methodologies, from the datasets used for training and validation to the assumptions tested and the model's performance metrics. This discipline not only enhances the reproducibility of my work but also elevates the scientific rigor of the studies.

Ethical Considerations and Patient Privacy
One of the paramount concerns in using AI in medical research is the ethical handling of data, particularly patient information. The updated guidelines reinforce the necessity of omitting identifiable patient data and of being vigilant about copyright and intellectual property rights when using AI-generated content. In my practice, I ensure that all AI-generated data, texts, or images are rigorously checked for compliance with these ethical standards, a practice that not only safeguards patient privacy but also aligns with JAMA's emphasis on ethical responsibility.

Methodological Innovations and Challenges
The guidelines also push me to constantly innovate and critically evaluate the AI technologies I employ. For instance, when using AI to generate hypotheses or assist in data analysis, I now provide a thorough description of the AI's role, detailing the specific tasks it performed and the rationale behind using AI for these purposes. This careful documentation helps in addressing potential methodological biases and inaccuracies that AI tools might introduce, ensuring that the research outcomes are robust and reliable.

The Future of AI in Clinical Research
Looking ahead, the rapid development of AI technologies suggests that these guidelines will continue to evolve. Staying abreast of these changes is not just about compliance; it's about actively contributing to the dialogue around AI in medical research. By adhering to JAMA's guidelines, I not only ensure the integrity of my research but also help shape the standards that will govern future innovations.

The updated JAMA guidelines on reporting AI use are more than just procedural necessities; they are the pillars upon which we can build a transparent, ethical, and scientifically rigorous future for AI in medical research. As someone at the forefront of integrating AI into clinical investigations, I view these guidelines as essential tools that guide my work and enhance the credibility and utility of our scientific contributions.

Thursday, April 11, 2024

JAMA Weighs In: How to View Endpoints for Cancer-Screening Technologies

Header:  This week, JAMA offers a large meta-analysis of different endpoints in cancer screening trials, and Bach offers a conservative op-ed with regard to endpoints other than mortality itself.


Find Feng et al here.  

To oversimplify, imagine this is a classic surrogate biomarker study.  You have the best endpoint, [say] mortality, at 5 years, and you have an early endpoint, incidence of late-stage cancer, at 1 year.  You ask: if you made decisions on "late stage" at 1 year, how well does that predict what you would have found by waiting for "mortality" at 5 years?  It's the same logic as asking, if you have only PFS today, how well does that predict OS later?

Find Bach here.

AI Corner

The FENG systematic review and meta-analysis explores the use of late-stage cancer incidence as an alternative endpoint to cancer-specific mortality in randomized clinical trials for cancer screening. Analyzing 41 trials, it finds mixed results: while incidence of late-stage cancer correlates highly with mortality for ovarian and lung cancers, this is less so for breast, colorectal, and prostate cancers. This suggests that while late-stage cancer could be a suitable endpoint for some cancer types, it is not universally applicable across all cancers, highlighting the need for cancer-type specific trial endpoints.

The BACH opinion piece critically evaluates the use of late-stage cancer incidence as an alternative endpoint to cancer-specific mortality in screening trials, as discussed in the FENG study. BACH emphasizes the inconsistencies in correlation between late-stage cancer and mortality across different cancers, highlighting risks of misinterpreting screening benefits. He argues that despite the shorter trial durations offered by using late-stage cancer as an endpoint, cancer-specific mortality remains the more reliable measure for assessing the true efficacy of cancer screening tests.

[AI, Chat GPT 4]

Brief Blog: CMS Issues Inpatient Draft Rule FY2025; SEP-1 noted

The summer rulemaking season kicked off this week with release of the Inpatient Proposed Rule for FY2025 (October 1).  It's logged as CMS-1808-P [proposed].    

This will be followed by Physician and Hospital Outpatient rules circa July 1, for CY2025 (January 1).

Find the detailed fact sheet here.

The press release (which is allowed more spin) is here.

As always, CMS has released a typescript "inspection copy" (1,902 pp), opened a 60-day comment period, and will publish the typeset Fed Reg rule on May 2.  Here.


Improved "New Tech Add On Payment" Evaluations

The clarity with which CMS handles the new tech add-on payment applications (10-20 each year) continues to improve.  CMS offers key questions, presented in rows in a tabular format, where the applicant provides an answer and justification.  For example: "Is the technology 'new'?"  Answer: Yes.  Reason: (three sentences explaining why it is new).  Some of this clarity would probably help in other areas, like dossiers to MolDx or other MACs and payers, or ADLT applications.

CMS provides these summary tables in the Fed Reg while directing the reader to online sources for back up detail.  Snippet here:

click to enlarge (IPPS pdf p 285)


For those who missed it, the Medicare Advantage final rule came out April 4, 2024, and can be found here.

Inpatient Rule - SEP 1

Last year, CMS determined a timetable to elevate SEP-1 from a reportable measure to a financially impactful value-based purchasing measure.  This is shown on page 729, Table V-L-02 (2026-2030).  Later, on page 852, CMS cites a PCAST report, "Transformational Effort on Patient Safety," September 2023, which had four goals, e.g., research and deployment of safe-care measures.  CMS then states that specific-condition or specific-procedure measures, like SEP-1, "are not sufficient to measure and incentivize investment in a resilient safety culture or [investment in] the infrastructure necessary for sustainable high performance within the broad and complex domain of patient safety."  And, "There is strong alignment among experts to shift to a more holistic, proactive, systems-based approach to safety."  But existing measures focus on improving "patient safety for a specific condition..[not] the overall culture in which care is provided."  This leads up (p 854) to proposing a Patient Safety Structural Measure, with 5 complementary domains, which will be "foundational."  There are no changes at this time in the policy forecast of chart-based SEP-1 measurement out to 2030.

There are some concerns and changes regarding the accuracy of chart-abstracted measures (p 986).

Monday, April 8, 2024

FDA and RUO: FDA Issues Public Warning Letters to RUO Mfgrs

Header:  It's commonplace to see RUO-labeled kits that are clinically validated within a CLIA lab for use in clinical tests.  It turns heads when FDA issues a warning letter to an RUO kit producer.


On the FDA website, see letters dated March 15, 2024 (to Beckman Coulter), and March 12, 2024 (to Agena), citing alleged cases of "misbranding" of non-approved kits as "intended for use in the diagnosis of disease."

Beckman letter here:

Agena letter here:

See subscription coverage at Genomeweb and at BioWorld. See also a detailed April 3 essay by the Mintz law firm, here.

Extracts from the public FDA letter to Agena include the following:

  • Verbal statements made to the FDA investigator during the inspection on behalf of your firm by your firm’s Director of Regulatory and Quality indicating that your firm no longer sells the Impact Dx™ Factor V Leiden and Factor II Genotyping Test on the IMPACT Dx™ System (K132978) and has instead replaced it with a new system, the MA4, and admitting that your firm sells the MA4 and reagent panels to companies who analyze patient samples that are used for clinical diagnosis.
  • Distribution records and a customer list obtained during inspection that indicates your firm shipped MA4 Systems and iPLEX HS Colon Panels to clinical testing laboratories.
  • A customer notification your firm sent to customers dated June 1, 2021, regarding distributed iPLEX HS Colon Panels that alerted customers of an increase in false negative results of certain reagent lots and stated that Agena was taking steps “to not impact your laboratory workflow.” This customer notification was sent to a clinical hospital in Texas and there is no evidence that these reagents were intended for research use only.
  • A copy of the iPLEX HS Panels User Guide obtained during inspection that included instructions for preparing and testing samples as well as instructions for interpreting iPLEX HS Reports including the iPLEX HS Colon Panel intended for use with the MA4.
  • ...iPLEX HS Colon Panel “enables tumor profiling studies of colorectal cancer specimens”, and that, with the iPLEX HS Colon Panel, users can “Detect more than 80 clinically relevant variants across BRAF, EGFR, KRAS, NRAS, and PIK3CA”
  • Your firm’s website also includes the statement that these devices are “For Research Use Only. Not for use in diagnostic procedures.” But, as discussed in this letter, on numerous occasions your iPLEX HS Colon Panel and your MA4 were not being shipped or delivered for an investigation (21 CFR 809.10(c)). And these devices were not for use in the laboratory research phase of development but rather were represented as effective IVDs (21 CFR 809.10(c)(2)(i)). Your disclaimers that the MA4 and your iPLEX HS Colon Panel are intended for research use only (RUO) are inconsistent with the evidence obtained...

Historical Note

In 2013 policymaking regarding RUO materials, FDA referred to its option to investigate manufacturers principally on the basis of sales to clinical labs or clinical centers (link, PDF page 9).  At least in my memory, FDA has rarely used shipping invoices as primary evidence of FDA violations.

Intended Use of Anything

In addition to the 2013 website guidance about RUO test kits, the FDA's actions are related to 2017 and 2021 Federal Register rulemaking about intended use and manufacturer relationships to off-label use.  (Of course, to have an "intended use" you have to be FDA-approved in the first place.)

So, tangential to the FDA's new warning letters and their underlying 2013 guidance, see also a 2021 regulation on when manufacturers are responsible for, or implicated in, any type of off-label use.  Essay from Hogan Lovells here.  Hogan writes:

The 2020 proposed rule expressed FDA’s intention to eliminate longstanding confusion created by the view that, in their then-current form, the regulations indicated that a manufacturer’s mere knowledge of an unapproved use triggers a requirement to update a product’s labeling. The final rule similarly clarifies that a manufacturer will not be deemed to have intended an off-label use based solely on knowledge that its approved product is being used or prescribed off-label. This also reflects the longstanding “practice of medicine exemption,” whereby FDA will not regulate off-label use of a lawfully marketed drug/device by a licensed health care practitioner in the context of caring for an individual patient.

The final rule language also addresses [?]another highly contested issue by preserving the ability of laboratories to develop laboratory-developed tests (LDTs) using either in vitro diagnostic (IVD) or research use only (RUO) components, as long as such LDTs comply with Clinical Laboratory Improvement Amendments (CLIA) regulations in their development and are offered on the order of a health care provider...
I can't find the IVD CLIA section that is "also addressed," per Hogan (discussing 86 FR 41383, 8/2/2021), in either the 2021 rule or its 2017 predecessor.  There is a lot of discussion of First Amendment issues and case law from the courts.  

There is a short discussion of the oft-perplexing difference between "intended use" and "indication for use" (p. 41395), with a comment that "indication for use" will not be defined in this rule.

Brief Blog: OIG Issues Report on How CMS Set its COVID Test Rates

OIG issues an annual report on the Clinical Lab Fee Schedule (CLFS) as a whole.  There was also a side report on inappropriate overbilling of code 81408 (a $2,000 miscellaneous genetic code that autopaid in some geographies).

Here's another special report.  OIG reviews how prices were set during the public health emergency (PHE) for COVID testing.

OIG asks, for example, whether principles for government procurement were followed (citing the Standards for Internal Control in the Federal Government, the "Green Book").

What OIG Found

Here's the summary by OIG of "What OIG Found."   

CMS’s procedures for CDLT rate setting could be improved for future PHEs. Specifically, CMS could improve its: (1) communication with laboratory associations and the MACs’ pricing coordinators, and (2) procedures to provide the MACs with additional flexibility when they set interim CDLT rates to respond to a PHE. Neither the Clinical Laboratory Fee Schedule statute (CLFS) nor its implementing regulations specifically address how pricing coordinators could quickly set rates for new CDLTs before the lengthy public consultation rate setting process. Normally, CMS fills that delay by using its longstanding MAC interim rate setting policy. Accordingly, in March 2020, MACs set rates for new COVID-19 viral tests through CMS’s interim MAC rate setting policy. However, CMS had to take additional action beyond its standard rate setting procedures to set and adjust rates for CDLTs.

As a result, CMS’s standard rate setting procedures did not allow the MACs to set rates that were adequate to cover the cost of conducting COVID-19 viral tests for all laboratories during a time when CMS was working to increase testing capacity. CMS may have missed opportunities to obtain important information that could have improved its response to the COVID-19 pandemic from laboratory associations and the MACs’ pricing coordinators when it made decisions about the new CDLT rates.

Here's another quote:

In a statement to USA Today, the CMS Administrator acknowledged that the payment rate initially set by the MACs may have played a role in testing shortages.[fn] The CMS Administrator also noted that a lot of laboratories were not performing the tests and acknowledged that the initial payment rate may have been set too low. 

Friday, April 5, 2024

AI Corner: Chat GPT Discusses CMS Policies vs CMS Hype

This week, an article by CMS policymakers in Health Affairs makes a familiar claim: that 100% of fee-for-service providers will be in accountable care settings by 2030.  Find the article here.  (This new one updates two earlier articles, from 2022, here and here.)

I asked Chat GPT 4 a skeptical question about the article, and it provided an answer.  That is, this blog is almost entirely AI-generated.



The article from Health Affairs says that CMS plans to have 100% of providers in accountable care relationships by 2030.   However, i think this is "hype" because they are accounting for fee for service providers who have MINIMAL or DE MINIMUS quality based risk payments, such as a couple percent.   Is my skepticism justified?   Public readers of the statement (100% in accountable care) would expect more and not understand the de minimus aspect.


Your skepticism highlights an important distinction in the discussion about accountable care relationships, especially in the context of Medicare's transition towards value-based care models. The statement from the CMS Innovation Center about aiming for 100% of Traditional Medicare beneficiaries and the vast majority of Medicaid beneficiaries to be in accountable care relationships by 2030 indeed sets a bold goal. However, your concern about what constitutes an "accountable care relationship" is valid, particularly when considering the depth and meaningfulness of these relationships in terms of risk-sharing and quality-based payments.

Accountable Care Organizations (ACOs) and other value-based care models are designed to incentivize providers to improve the quality of care and reduce unnecessary costs. These models typically involve some level of risk-sharing, where providers can earn more if they save money while meeting quality benchmarks but might also risk losing money if they fail to meet those benchmarks. However, the level of risk and the proportion of payments tied to quality and cost outcomes can vary significantly across different models and contracts.

In some cases, providers might be in arrangements that technically qualify them as participating in an accountable care relationship, but the financial incentives (or penalties) tied to performance metrics might be minimal (de minimis). For example, a provider might be part of a model that offers a small percentage of payments based on meeting certain quality measures, which might not significantly impact their overall revenue or care delivery practices.

Your observation that public readers might interpret the statement as indicating a more profound shift towards risk-based, quality-driven care than what might actually be happening is astute. While moving all providers into some form of accountable care relationship is a step towards broader adoption of value-based care, the impact of such a shift depends heavily on the depth of the risk-sharing and the strength of the incentives for improving care quality and efficiency.

Thus, your skepticism underscores the importance of understanding the nuances behind such ambitious goals. It suggests that while broad participation in accountable care relationships is a positive direction, the effectiveness of these relationships in driving meaningful improvements in care quality and cost efficiency will depend on the specifics of how risk and rewards are structured.


Go back to the article and provide us a 75 word summary please.

"The CMS Innovation Center’s Strategy To Support Person-Centered Value-Based Specialty Care: 2024 Update" by Fowler et al. details efforts to transition all Medicare and the majority of Medicaid beneficiaries into accountable care by 2030. The strategy focuses on enhancing specialty care by improving data transparency, fostering collaborations between primary and specialty care, and implementing financial incentives. It involves testing models that support high-quality integrated specialty care, aiming to reduce care fragmentation and improve health outcomes by enabling better coordination and access to specialty care services.

Medical policy as Potemkin village?
AI Corner #2

I asked Chat GPT to write a speech for the leader of CMMI, based on 4 press releases, and then the "leader" also answered a skeptical audience question.

Thursday, April 4, 2024

Rare Event: Experts Warn CMS and FDA Against a Test that FDA OK'd in December

In December, I wrote a blog about the FDA approval of a PGx-based opioid use test, AvertD (here).  I noted there were a lot of oddities, including an FDA panel voting 11-2 against approval.  I noted that company presentations seemed to ask for de novo 510(k), but when the approval came, a couple of years later, it was a PMA.

These particular chickens seem to have come home to roost.  A lengthy list of experts has submitted parallel letters to FDA and to CMS - asking FDA to revoke the approval, and asking CMS not to cover the test.

Coverage at Genomeweb is here:

Genomeweb notes that SOLVD is conducting post-approval studies, consults with thought leaders, and describes its path ahead as evidence-based solutions to the addiction crisis.

Coverage at STAT here.  Coverage at NBC here.  At HealthcareDive here.  At New York Times here.


The FDA letter is here:

The CMS letter is here:


AI Corner

Using only the two PDF letters, Claude.AI wrote a "news article" about the story.

FDA Approval of AvertD Genetic Test for Opioid Use Disorder Faces Backlash

In a highly unusual move, dozens of leading experts in genetics, addiction, psychiatry, and public health are calling on the U.S. Food and Drug Administration (FDA) to reverse its recent approval of AvertD, a genetic test that claims to predict risk for opioid use disorder (OUD). The experts have also urged the Centers for Medicare and Medicaid Services (CMS) to deny coverage for the controversial test.

The AvertD test, approved by the FDA in December 2023, analyzes 15 genetic variants to estimate a person's genetic risk for developing OUD. However, in an open letter to the FDA Commissioner and CMS Administrator, the 48 experts argue that the largest and most rigorous genetic studies show AvertD does not actually predict OUD risk.

"Current scientific knowledge about OUD genetics is strong enough for us to state that AvertD does not predict genetic risk of OUD," the experts write. They cite independent analyses showing that with proper controls for ancestry, the 15 variants provide no more predictive power than chance.

The experts raise serious concerns that false negative results could provide clinicians and patients with a "false sense of security" about opioid use, potentially exacerbating the opioid crisis through overprescribing. Conversely, they warn that false positive results could lead clinicians to withhold beneficial opioid treatment and cause patients to face discrimination due to being inaccurately labeled as prone to OUD.

Notably, the FDA's own advisory committee voted 11-2 against approving AvertD last year, with committee members expressing concerns about the test's lack of evidence and potential for harm from false results.

"FDA's decision to approve AvertD despite opposition from geneticists, other experts, and from its own advisory committee, was surprising," the letter states. "Ironically, FDA has touted its approval of AvertD as a step toward addressing the opioid crisis. We believe AvertD may make the opioid crisis worse."

Among the signatories are leaders in addiction medicine and genetics from prestigious institutions such as Johns Hopkins, Yale, Stanford, the University of Pennsylvania, and Columbia University. Also signing are the presidents of National Center for Health Research and Public Citizen's Health Research Group.

The strong pushback on AvertD's approval highlights ongoing concerns about insufficient scrutiny of genetic tests making unproven clinical claims. It remains to be seen whether the FDA or CMS will heed these experts' calls to reverse course on the controversial genetic test.

[Text auto-generated by Claude.AI]

White House: Meetings on the FDA LDT Final Rule

Header:  FDA finalized its LDT rule around March 1.  Many meetings have been scheduled via Office of Management and Budget.


FDA finalized its LDT rule around March 1 - original blog here.

OMB has scheduled fourteen publicly listed meetings.  Where available, the listings include collateral such as submitted PDFs.

March 18, 2024 
ADLM (Assoc Dx Lab Med; former AACC) (4 documents including a 58 page Hyman Phelps public comment that the LDT rule is on shaky legal ground)

March 19, 2024  
AMP (Assoc Molec Pathol) (2 documents, including a 19 page deck and a 40-page 2010 report on FDA and innovation)

March 20, 2024
ARUP (3 documents)

March 21, 2024
ACMG (Amer Coll Med Genet) (2 documents)

March 22, 2024
Center for Science in the Public Interest (CSPI), (six documents)

April 1, 2024
ACLA (Amer Clin Lab Assoc) (one document, letter to OMB, 4pp)

April 3, 2024
AHA (Amer Hosp Assoc)

April 3, 2024

April 4, 2024
Children's Hospital Association

April 4, 2024
Coalition for Innovative Laboratory Testing

(Also strongly opposed VALID Act)

April 5, 2024
Association of Pathology Chairs

April 8, 2024
Natera Inc

April 11, 2024
Amer Soc Microbiol (ASM)
(Many pathogen tests = LDTs)

April 12, 2024
Coalition for 21st Century Medicine (C21CM)

April 15, 2024
"CRD Associates"

April 18, 2024
College of American Pathologists

April 19, 2024
Duke University

AI Corner

I cut and pasted this rudimentary blog into Chat GPT 4 and asked for an illustration.   Below is the machine-generated text as it planned the image.

For a blog header about the FDA finalizing its LDT (Laboratory Developed Tests) rule and the series of meetings with various stakeholders scheduled via the Office of Management and Budget, let's go with a vibrant, engaging illustration. Imagine an illustration that visually represents the intersection of healthcare policy, scientific innovation, and regulatory oversight.

We could have a central motif featuring a stylized image of a magnifying glass focusing on a DNA double helix, symbolizing the LDT rule's focus on genetic testing and diagnostics. Surrounding this central image, we see a series of calendar pages with specific dates marked, representing the scheduled meetings. Each calendar page could subtly feature the logos or symbols representing the various stakeholders (like ADLM, AMP, ACLA, etc.) that have meetings scheduled, implying a broad and collaborative review process.

In the background, we'd have a faint overlay of document pages and PDF icons, hinting at the extensive documentation and public comments involved in the rule's consideration. This mix of imagery would convey a sense of meticulous examination, scientific inquiry, and the regulatory process, all crucial elements of the FDA's rule finalization and the subsequent discussions.

This illustration would serve not just as a visual summary of the blog's content but also as an inviting entry point for readers to delve into the complexities and implications of the FDA's actions and the healthcare community's engagement with these regulations. Let's create this image.

Webinar April 23, 2024: Navigating the FDA LDT Regulation (DECIBIO)

Sign up for an April 23, 2024, webinar from DECIBIO on "Navigating the FDA LDT Regulation."   It's at 9 PT, 12 ET.

Home page here:

Wednesday, April 3, 2024

AI Corner: Chat GPT 4 Offers Local Picture Editing

AI Corner

Header: Chat GPT 4 (the subscription version) has offered image creation since last fall.   Today I see a new feature: You can highlight something in the illustration, and change it locally. [A journalist writes on this topic here.]


Chat GPT 4 (the subscription version) offers DALLE3 image creation.  It can follow instructions ("a black and white film noir image of a man in a city alley") or even work from policy materials ("make an interesting illustration for this new FDA press release.")

Today, I see a new feature, a "highlight" tool that lets you give focused revision instructions.

Here is a picture for a story about a bee in a home office:

I used the highlight tool, highlighting the computer monitor, and asked it to add software.  It made no changes other than adding software.


I got software, but it looks washed out. 


I asked for bolder screen imagery, like a colorful bar chart.

And I got the on screen bar chart, but this time, it re-painted the whole perspective.


Bonus - bee story.