Monday, April 29, 2024

FDA LDT Rule: Quest, Labcorp Stocks Pop Up to 5%

FDA's final regulation for reviewing and controlling LDTs was released on April 29, 2024.  (Story here).

LabCorp and Quest reacted favorably.

Quest opened at $135 and popped as high as $142 before settling to a 4% gain. 

Labcorp opened at $200 and popped higher before settling to a 3% gain.

The two labs have market caps of about $15B (Quest) and $17B (Labcorp), so a 4% gain represents about $1.2B of market value across both.
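For readers who want to check the arithmetic, here is a quick back-of-the-envelope sketch in Python, using the approximate market caps and the ~4% pop cited above (rounded blog figures, not exact market data):

```python
# Back-of-the-envelope check of the market-value figure above.
# Inputs are the post's approximate numbers, not exact market data.
quest_cap_b = 15.0     # Quest market cap, ~$15B
labcorp_cap_b = 17.0   # Labcorp market cap, ~$17B
gain = 0.04            # the ~4% pop

added_value_b = (quest_cap_b + labcorp_cap_b) * gain
print(f"~${added_value_b:.2f}B added across both")  # ~$1.28B, i.e. "about $1.2B"
```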


___
For scale, LabCorp's range today was well inside its 365-day range of $180 to $230.

Guardant and Natera were each up several percent.  Exact Sciences floated up a couple percent but ended the day flat.  CareDx was up 8%, perhaps reflecting its experience with FDA approvals.

AI Corner: Chat GPT Discusses FDA LDT Final Rule, "Healthcare Systems" Tests for Their Own Patients

[Chat GPT; original article here.]

[The FDA exemption below is somewhat similar to a pathway for in-house hospital testing in the EU under the new IVDR].

[Begin Chat GPT]

The FDA's final rule regarding Laboratory Developed Tests (LDTs) specifically addresses LDTs manufactured and performed by laboratories integrated within a healthcare system. 

This policy focuses on tests designed to meet unmet needs of patients receiving care within the same healthcare system, reflecting a targeted approach to enforcement discretion. Here's a detailed summary based on the contents of the provided document:

Key Aspects of FDA's Policy on Local Healthcare System Tests

1. Enforcement Discretion

The FDA intends to exercise enforcement discretion, generally not enforcing premarket review and certain Quality System (QS) requirements for LDTs that are manufactured and performed within the same healthcare system, particularly for addressing unmet patient needs.

2. Conditions for Enforcement Discretion

This discretion applies only to LDTs where no FDA-authorized In Vitro Diagnostics (IVDs) meet specific patient needs, or where no such tests are available.

The policy does not cover all IVDs but is specific to those integrated within healthcare systems that demonstrate particular risk mitigations.

Full compliance with other applicable FDA regulations is still expected, excluding certain premarket and QS requirements.

3. Rationale for the Policy

Many healthcare systems develop unique LDTs for rare conditions or patient populations too small to be commercially viable, making the development and validation of such tests challenging.

The FDA acknowledges that the internal controls and patient feedback mechanisms within these healthcare systems can serve as effective mitigations against the risks associated with LDTs.

4. Feedback and Integration

Integrated healthcare systems often provide immediate feedback loops between laboratory professionals and clinicians, enhancing test validity and responsiveness to clinical needs.

This integration facilitates adjustments and improvements in LDTs based on direct healthcare provider inputs and patient outcomes.

5. Future Adjustments

The FDA remains open to revising this policy [through guidance not regulation] based on emerging evidence or changing circumstances, suggesting a dynamic approach to regulatory oversight.

6. Phase-out of General Enforcement Discretion

While specific premarket and QS requirements may be relaxed under this policy, the FDA plans a gradual phase-out of general enforcement discretion for these LDTs, aligning them more closely with broader FDA regulatory standards over time.

Implications for Stakeholders

This policy primarily benefits laboratories within healthcare systems that develop LDTs for specific, often underserved, patient needs. It allows these laboratories to focus on innovation and patient care without the immediate burden of stringent FDA premarket reviews. However, it also places responsibility on these systems to maintain high standards of internal review and patient safety monitoring, as they must still comply with other FDA regulations not covered by the enforcement discretion.

This summary outlines how the FDA's final rule aims to balance patient safety with the practical needs of healthcare systems, fostering an environment that encourages the development of necessary diagnostic tests while ensuring they are safe and effective.


User

Do they discuss or define what it means to be truly 'receiving care from' the same healthcare system?   I assume just getting a blood draw and test isn't a "patient" who gets care from "the healthcare system."  Does FDA get into that?  

ChatGPT

The FDA's approach to defining "receiving care from" the same healthcare system in the context of enforcement discretion for Laboratory Developed Tests (LDTs) is quite specific. The policy generally applies to situations where both the laboratory producing the LDT and the healthcare providers using the test are part of the same healthcare system. This ensures that there is an integrated approach to patient care, where the laboratory services are directly aligned with the clinical services provided to the patient.

Key Elements of the Definition:

Integration and Direct Care Relationship: The policy specifies that the laboratory and the healthcare providers must be part of the same healthcare system. This implies a level of integration where the laboratory services are not just adjunct but are an integral part of the patient care workflow.

Shared Healthcare Infrastructure: Typically, this means that the hospital or healthcare facilities and the laboratory share common governance, oversight, and a unified medical records system, which facilitates seamless communication and data sharing between laboratory professionals and healthcare providers.

The FDA emphasizes that this kind of integrated care setting enhances the quality and safety of LDTs by ensuring that there is immediate and effective communication about patient responses and test results, which can be promptly integrated into patient management strategies.

Not Covered:

Casual or One-Off Services: Merely getting a blood draw or a diagnostic test from a facility does not constitute receiving care from that healthcare system under this policy. The patient needs to be engaged in a broader, continuous healthcare relationship within the system.

FDA's Rationale:

The rationale behind this specific definition is to ensure that LDTs are used in environments where they can be most effectively monitored and where their impact on patient care can be directly observed and managed. This integration allows for better quality control, immediate response to any issues with test performance, and a more personalized approach to patient care based on comprehensive data integration and analysis.

The FDA has structured this policy to encourage and support the development of LDTs in a way that directly supports integrated patient care systems, recognizing that such systems are uniquely positioned to develop and utilize LDTs effectively due to their cohesive operational structure and shared focus on patient outcomes.

[End ChatGPT]

Bruce notes - See page 7.  The test must be ordered by providers who are credentialed or employed by the same health system that OWNS the laboratory.  If their hospital is affiliated but under separate ownership, the tests are not exempt.

The tests are described as LDTs "manufactured" by a laboratory within the healthcare system [e.g., ownership].

Bruce notes, there could also be equity issues, as wealthier patients, including children, might afford to travel to a health system offering the test.  Simply FedExing the sample to the AMC would not make the poor and geographically distant child a patient.  A workaround might be having the test ordered pro forma by an on-staff clinician as a consult to the remote clinician and child, since the FDA seems to focus on the staffing or employment status of the physician, plus whether such a physician 'ordered' the test per se.

AI Corner: Chat GPT Discusses FDA LDT Rule With Regard to New York State Review

In its final rule for regulation of LDTs, released April 29, 2024, FDA announced it would not require FDA review for LDTs reviewed and approved by New York State CLEP.

See the whole 175,000-word rule for yourself, here.

Below, find a Chat GPT summary of the key pages discussing NYS CLEP (pages 43-49 and 302-307).  Remember this is an AI-driven interpretation of what the FDA itself writes; it is not a critique of the FDA's position.

[start Chat GPT]

###

The New York State Department of Health Clinical Laboratory Evaluation Program (NYS CLEP) plays a significant role in the regulation and oversight of Laboratory Developed Tests (LDTs). The FDA acknowledges that NYS CLEP has established a robust system for evaluating the analytical and clinical validity of high and moderate risk LDTs. This system is designed to mitigate the risk of harm from inaccurate and unreliable LDTs. Based on this, the FDA intends to exercise enforcement discretion, meaning that it will not enforce certain premarket review requirements for LDTs that have either been fully approved by NYS CLEP or received conditional approval.

Key Aspects of NYS CLEP's Oversight:

High Risk LDTs: Require full technical review and approval before they can be used on specimens from New York State.

Moderate Risk LDTs: Also require full technical review and may receive conditional approval, allowing the test to be offered while the review is still pending.

Review Process: Includes an evaluation of the LDT's analytical and clinical validity, ensuring that the test is both accurate and reliable.

Conditional Approval: Allows tests to be used while under review. If a test ultimately does not receive full approval, it is removed from the enforcement discretion policy.

Exemptions: Laboratories with proven method validation can request exemptions from full technical documentation for future tests using the same methodology.

FDA's Enforcement Discretion:

The FDA's decision to not enforce premarket review requirements for NYS CLEP-approved LDTs allows the FDA to allocate its resources more effectively to other oversight activities. This policy applies regardless of whether the specimen comes from New York State or elsewhere, but only to the version of the LDT that is approved by NYS CLEP.

Impact and Implications:

Efficiency: By recognizing NYS CLEP’s review process, the FDA avoids duplicative efforts and reduces the regulatory burden on laboratories.

Public Health: This policy is expected to maintain high standards for LDT safety and effectiveness, as NYS CLEP’s review process aligns closely with the FDA’s standards.

Phase-out of General Enforcement Discretion: While premarket review requirements may be relaxed for NYS CLEP-approved LDTs, other FDA requirements such as adverse event reporting, registration, and listing will still be enforced according to the phased timeline established by the FDA.

Concerns and Considerations:

Some stakeholders have expressed concerns that leveraging NYS CLEP's program should not replace FDA regulation entirely. They emphasize that while NYS CLEP’s oversight is robust, FDA’s regulatory framework includes broader and potentially more stringent standards. The FDA acknowledges these concerns and clarifies that this policy is a form of enforcement discretion, not a complete handover of regulatory authority.

In summary, the FDA’s approach to utilizing NYS CLEP’s established programs for certain LDTs reflects a strategic use of regulatory resources to ensure safety and effectiveness while minimizing unnecessary duplication of regulatory efforts. This decision underscores the FDA's commitment to a risk-based regulatory framework that protects public health without stifling innovation in laboratory testing.

[end Chat GPT]


FDA Publishes Final Rule for LDT Regulation

On Monday, April 29, 2024, the FDA released the final rule for its regulation of LDTs.   The rule had been released in proposed form for public comment last fall.

FDA continues to plan an enforcement deadline of 3.5 years from now for high-risk LDTs and 4 years from now for moderate-risk LDTs.  However, as FDA had proposed, submitting a PMA (or 510(k) or de novo) application will "stay the clock" for the duration of its review.

In thinking about 510(k)s and PMAs, remember that (1) FDA has promised to down-classify many PMA tests to 510(k), and (2) it has early plans to completely revamp and speed the 510(k) process.  Consider also wider use of predetermined change plans.

FDA reserves the right to march in faster on "problematic" tests (the word problematic appears 46 times in the final rule, 6 in the draft rule).   

FDA creates several new flexibilities for itself:

  • NYS CLEP: Rule does not generally apply to NYS CLEP approved tests;
    • The term CLEP appears 145 times in the final rule, 1 time in the proposed rule.
    • See ChatGPT Sidebar here.
  • WITHIN SYSTEM: Generally not applying to LDTs "by a laboratory within a healthcare system to meet an unmet need of patients receiving care in the same healthcare system."   
    • (This is similar to a pathway in the new European IVDR system).
    • The term "same healthcare system" occurs 42 times in the final rule, 0 times in the proposed rule. 
    • See ChatGPT Sidebar here
  • GRANDFATHER: Generally enforcement discretion for LDTs marketed prior to the rule, and not altered.
  • VA-DOD: Rule does not apply to V.A. or DOD;
None of the above points is part of the regulation itself; they are part of the free-flowing and shape-shifting "discretion" that FDA may use or revisit as leadership changes, without returning to rule-making.  For example, the regulatory change (several words long) says it is "effective" 60 days after publication; that follows federal law for regulations.  But FDA says that its grandfather date is THE DAY of publication, chosen at its own fancy.

See ACLA comment here. ARUP here.  Congr. Rodgers (R) here.  Sen. Cassidy (R) here. AdvaMed here (wait for VALID!). Biopharma Dive here and here.  Endpoints here.  Deep dive day 2 article at Genomeweb here.  See a listing of more articles from FDA GROUP here.  Kevin Nichols here.   Deep dive from Hogan Lovells here.

###
Massive Down-Shift in Expected Costs

In the Fall 2023 projection, FDA predicted costs to industry of $40-50B in the first five years (reflecting an impossible workload in man-years).

In the Spring 2024 final projection, FDA expects costs no higher than $500M over the first 3 years, and ongoing costs of $1.5B per year thereafter.  That works out to about $5B, not $50B, in the first five years.  Using my own ballpark figure of $200,000 per regulatory professional (fully loaded overhead), that is still 25,000 man-years of regulatory time in the first five years (down from 250,000 man-years in the proposal).
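The man-year arithmetic above can be sketched in a few lines of Python, taking the post's ~$5B five-year ballpark and $200,000 fully loaded cost per professional at face value (these are blog approximations, not FDA's exact estimates):

```python
# Man-year arithmetic from the paragraph above, using the post's
# ballpark figures (blog approximations, not FDA's exact estimates).
five_year_cost = 5e9      # ~$5B projected industry cost, first five years
cost_per_fte = 200_000    # fully loaded annual cost per regulatory professional

man_years = five_year_cost / cost_per_fte
print(f"{man_years:,.0f} man-years")  # 25,000 man-years (vs ~250,000 under the proposal)
```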




click to enlarge (shows fall table 35, spring table 40, regulatory impact document)

Benefits are still calculated in value per statistical life year (VSLY) (new regulatory impact, page 68ff).  That is, costs continue to be real costs like professional labor, while benefits continue to be projected life-years added or projected deaths delayed by IVDs.

###
Final Rule Six Times Bigger

The inspection (typescript) copy of the proposed rule was 83 pages; the inspection copy of the final rule is 528 pages, or 6X.  The 528 pages add up to 175,000 words.  

###

Regulatory Change Tiny; Satellite Rules All FDA Discretion

Most of the important stuff is still "FDA discretion"; the actual rule change is only a few words long.  Everything else (such as the pivotal grandfathering proposal) relates to FDA internal plans and/or FDA discretion as time goes by.  (E.g., FDA remarks on its ability to change the rules by guidance, p. 42.)

###

WHERE TO FIND IT

See the FDA webpage for the rule's release here:

https://www.fda.gov/news-events/press-announcements/fda-takes-action-aimed-helping-ensure-safety-and-effectiveness-laboratory-developed-tests?utm_medium=email&utm_source=govdelivery

See the Fed Reg location for the final rule here. The typeset rule will appear May 6.

https://www.federalregister.gov/public-inspection/2024-08935/medical-devices-laboratory-developed-tests

See the typescript or pre-publication version of the actual final rule here--528 pages.

https://public-inspection.federalregister.gov/2024-08935.pdf

See the webpage for regulatory impact analysis here:

https://www.fda.gov/about-fda/economic-impact-analyses-fda-regulations/laboratory-developed-tests-regulatory-impact-analysis-final-rule

See the 178-page final impact analysis here:

https://www.fda.gov/media/178133/download?attachment

##

See a new guidance document for "public health response" issues absent an actual declared emergency:

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/enforcement-policy-certain-in-vitro-diagnostic-devices-immediate-public-health-response-absence

See a new guidance document for declared public health emergencies:

https://www.fda.gov/regulatory-information/search-fda-guidance-documents/consideration-enforcement-policies-tests-during-section-564-declared-emergency


##
More About NYS CLEP

The final rule mentions CLEP 145 times, vs. just once or twice in the proposal.  In my white paper on the proposed rule, I suggested that we should just require CLEP-like reviews - not 510(k) reviews by NYS CLEP, but the CLEP review itself.  ("I think the [FDA's] accuracy concern could be met by less drastic measures, such as having something roughly the equivalent of New York State review of LDTs.")

FDA notes (FN29) that certain tests exempted from NYS CLEP review are NOT exempted from this FDA rule.

FDA discusses CLEP processes at p. 43 ff.  Tests approved under CLEP are still subject to various listing and reporting rules, just like 510(k)/PMA tests (p. 47), including labeling requirements [claims].

AMCs are discussed at page 50ff.  See ChatGPT Sidebar here.

CLEP is discussed again, in the context of response to comments, at p.302ff.

FDA discusses phaseout policy at section V ("5") (p30ff), V.B.2 including CLEP (p. 41ff).

See a Chat GPT sidebar about the FDA's CLEP pages, here.

 



Wednesday, April 24, 2024

AI Corner: SCITE.AI Auto-Generates Short Medical Articles with Clear Citations

A year ago, the bad news around any AI-generated writing was hallucinations (such as made-up bibliographies).  The field moves fast, and some sources seem to have completely licked that problem.

I sometimes use SCITE.AI (subscription), which has two main modes.

In the first mode, you search for or input a key paper.  SCITE instantly gives you all the papers that have cited it, AND, if full text is available, the paragraph or sentences that cite the key paper.  If the key paper has been cited 15 times, at a glance you can see how it was cited, rather than having to download and/or buy all 15 PDFs and read them one by one.

But here's the second mode for SCITE, called SCITE Assistant (https://scite.ai/assistant).  You can give it a targeted question or request, and it instantly provides a correctly footnoted essay in response.

For example, I gave it this question:

  • "How important is minimal residual disease molecular testing today, in oncology?"

Here is the automated answer:

[START SCITE AI WRITING]

Minimal residual disease (MRD) molecular testing is a crucial aspect of oncology today. MRD refers to the small number of cancer cells that persist in a patient after treatment. It is a significant prognostic factor in various types of cancer, including acute lymphoblastic leukemia (ALL) (Tierens et al., 2021; Pui et al., 2015; Brüggemann & Kotrová, 2017). Detecting MRD is essential for identifying patients at high risk of relapse and shorter survival, aiding in refining risk stratification and treatment response assessment (Ravandi et al., 2018; Pui et al., 2015). 

Advancements in molecular testing techniques such as real-time quantitative polymerase chain reaction, digital polymerase chain reaction, and next-generation sequencing have enabled the detection of MRD at levels far below what is required for morphological remission (Ball & Stein, 2019). These techniques have revolutionized the field by providing highly sensitive methods to monitor MRD and predict the risk of relapse in cancer patients (Johnston et al., 2023). 

MRD testing is not limited to hematologic malignancies but extends to solid tumors as well. Studies have shown that assessing MRD in solid tumors post-surgery or treatment can guide clinical decisions related to adjuvant therapy, emphasizing the importance of MRD detection in improving patient outcomes (Larribere & Martens, 2021). 

Furthermore, MRD monitoring is essential in acute myeloid leukemia (AML), where it serves as a critical biomarker for prognostic, predictive, monitoring, and efficacy-response assessments (Heuser et al., 2021). The ability to quantify MRD using various molecular techniques, including whole-genome sequencing and immunoglobulin clonality assays, has become integral in tailoring treatment strategies for patients with leukemia and other cancers (Subhash et al., 2021; Lee et al., 2022; Fürstenau et al., 2019).

 In conclusion, MRD molecular testing plays a vital role in oncology today by providing clinicians with valuable information to guide treatment decisions, predict relapse risk, and improve patient outcomes across a spectrum of cancers.  [END SCITE AI WRITING]

##
Then, it provides a listing of all the papers it cited, AND, for each citation, it clips several sentences to show you what facts it was pulling from that citation.

Here's the top part of the footnotes section, which explains why it chose Tierens et al. as the first reference in the AI essay above.

[SCITE.AI Continues]

Reference #1 Tierens et al

“…Measurable (minimal) residual disease (MRD) is an established, key prognostic factor in adult B-cell acute lymphoblastic leukemia (B-ALL), and testing for MRD is known to be an important tool to help guide treatment decisions.…”

[click] See full context
Section quoted: Abstract

Paper:
Consensus Recommendations for MRD Testing in Adult B-Cell Acute Lymphoblastic Leukemia in Ontario

Anne Tierens, Tracy L. Stockley, Clinton J. V. Campbell, et al., 2021
Journal: Current Oncology

[Citation tally: 8 / 0 / 7 / 0]
The paper (Tierens) has been cited 8 times.  Of these, 7 have full text.  Of the 7 with full text, 0 "support" Tierens, 7 "mention" Tierens, and 0 "disagree" with Tierens.

[Continues bibliography #2-#11]   [End SCITE writing.]

####
I've also put the full MRD AI Essay, and the full listing of 11 annotated citations, in an open access cloud PDF file here:


##

  • I first mentioned SCITE in August 2023.
  • See JAMA 2024 on rules for use of AI assistance.
  • SCITE has full-text access to all public articles, plus licenses that allow it to present short quotes even from many firewalled articles.





Two Links: on Physician Income; on Physician Industry Payments

Two sources appeared almost at the same time, regarding physicians and finances.

INCOMES

Advisory Board issues a detailed report on 2023 physician income levels.

   https://www.advisory.com/daily-briefing/2024/04/15/physician-compensation

It traces back to a 21-page report at MedScape (registration required):

   https://www.medscape.com/slideshow/2024-compensation-overview-6017073

The report has break-outs by specialty, gender, racial/ethnic group, etc.


INDUSTRY DOLLARS

Over at JAMA, see a report by Sayed et al. on industry payments to US physicians, by specialty and product type.  There are also break-outs by company.  The highest-dollar drug was Xarelto; the highest-dollar device was the Da Vinci Surgical System.  This week in the Washington Post, former NIH director Francis Collins announced he was having a radical prostatectomy (his cancer took off after a period of surveillance), performed with the Da Vinci system.

   https://jamanetwork.com/journals/jama/fullarticle/2816900



Tuesday, April 23, 2024

Stanford Biodesign Addresses Coding vs the Innovation Ecosystem

Innovation policy experts from Stanford's Biodesign center have published a new paper about where the CPT coding ecosystem and the innovation ecosystem can come into opposition.

Find the paper here:

https://hmpi.org/2024/04/12/current-common-procedural-terminology-cpt-coding-process-challenges-impact-on-the-healthtech-innovation-ecosystem/

Here's the Stanford Byers Center's overview from LinkedIn:

“For medical innovators, the process of applying for and achieving a CAT I CPT code is a challenging and costly process that has become fundamental to unlocking reimbursement and patient access in the United States. The CAT I requirement of “widespread use,” in particular, has become a roadblock for innovators with novel products or services and the physicians that use them, that is having a negative impact on the innovation ecosystem and patient care. There is a strong and near-term need for a more transparent, predictable, and achievable CAT I CPT code process to ensure that healthcare innovation and patient access to FDA-cleared and clinically proven therapies can flourish and be preserved for future generations.”

Read our latest research on the challenges of the CPT coding process and its impact on the innovation ecosystem.

__________

AI Corner

A GPT4 summary:

The report explores significant challenges in obtaining Category I Current Procedural Terminology (CPT) codes, which are essential for medical innovators to secure reimbursement and patient access for new medical technologies in the U.S. The process is marked by stringent criteria including "widespread use" and documented clinical efficacy, which pose barriers particularly for novel technologies. Survey data from stakeholders highlighted issues such as the high costs, resource demands, and lack of transparency in the CPT code application process. Respondents advocated for a more transparent and predictable process to foster healthcare innovation and improve patient access to new therapies. Recommendations include refining the "widespread use" criteria and enhancing collaboration between innovators, physicians, and medical societies to streamline the CPT code application and approval processes.

Monday, April 22, 2024

Court Case: Agendia Loses Coverage Battle; Long Summary of its Legal Filings

 In late March 2024, Agendia got an unfavorable ruling regarding Medicare coverage, from the District of Columbia federal court.   The decision was covered by Bloomberg Law (subscription, here.)    

While that's firewalled, the judge's 30-page decision is not; it's open access here.

The government's winning 36pp brief is also open access, here.


AI CORNER

Here's a Chat GPT4 summary of the judge's decision.

Dalle3 via GPT4


The court case, Agendia, Inc. v. Becerra, deals with disputes over Medicare coverage for two molecular diagnostic tests, BluePrint and TargetPrint, developed by Agendia, Inc. These tests are used for the diagnosis and treatment of breast cancer. Agendia sought coverage from Medicare for these tests but faced repeated denials by Medicare contractors based on coverage determinations made under the Molecular Diagnostic Services (MolDX) Program.

Here's a summary of the key points and implications for CMS policy decision-makers:

Substantial Deference Scheme: The heart of the dispute is the "substantial deference" scheme, where HHS adjudicators must defer to the coverage determinations made by private Medicare contractors. Agendia challenged this scheme as unconstitutional and contrary to the Medicare statute. However, the courts have consistently upheld the scheme, emphasizing that it does not unconstitutionally delegate governmental power to private entities, as the final decision-making authority remains with the public administrators.

Coverage Denials Upheld: The court in this instance ruled against Agendia, stating that the decisions denying coverage were supported by substantial evidence and were in accordance with the law. This included evidence that the tests were considered investigational and not medically necessary under existing Medicare standards.

Legal Precedents and Claim Preclusion: The decision also discusses the application of legal doctrines such as claim preclusion, which prevents Agendia from re-litigating issues that were or could have been raised in prior litigation. This highlights the importance of the legal strategy in ongoing and repetitive litigation against government bodies.

Impact on Policy: For CMS policy decision-makers, this case underlines the strength of the current framework governing the adjudication of Medicare coverage, especially the role of MolDX in assessing clinical and analytical validity of tests. It also highlights the robustness of the deference given to contractor determinations, suggesting that significant evidence or legal changes would be required to challenge such determinations successfully.

Future Considerations: The continued validation of the "substantial deference" rule may encourage more precise and evidence-based submissions from providers seeking Medicare coverage for new or controversial tests. It also suggests a stable regulatory environment but raises questions about the balance of power between private contractors and public oversight.

This case serves as a significant precedent in understanding and navigating the complex landscape of Medicare coverage for innovative medical tests, and it reinforces the structured, evidence-based approach required to alter or influence existing coverage determinations under Medicare policies.

Here's an AI Summary of the government's winning position.

The Government Brief in the case Agendia, Inc. v. Xavier Becerra, Secretary of Health and Human Services, presents a detailed legal argument defending the role of Medicare Administrative Contractors (MACs) in issuing local coverage determinations (LCDs) for specific medical services under Medicare Part B. Here are the key points summarized for those engaged in CMS policy:

Statutory Authority and Medicare Part B Coverage: The brief emphasizes that under Medicare Part B, coverage is generally limited to items or services that are "reasonable and necessary for the diagnosis or treatment of illness or injury" (42 U.S.C. 1395y(a)(1)(A)). MACs are authorized to determine coverage for services by applying regulations and national coverage determinations set by CMS.

Role of Medicare Administrative Contractors: The Government argues that MACs play a crucial role in the initial determination of coverage through the issuance of LCDs, which are localized and not nationally binding. LCDs help in applying the statutory "reasonable and necessary" standard on a regional basis.

Constitutional and Procedural Legitimacy: The brief counters claims that the delegation of authority to MACs to issue LCDs is unconstitutional. It argues that MACs function under the supervision of the Secretary of Health and Human Services and do not have independent regulatory power. Thus, their actions are constitutionally valid.

Challenge to LCDs and Judicial Review: The brief addresses the process through which a party can challenge LCDs and seek judicial review, emphasizing that there are robust mechanisms for appeal and review within the administrative process.

Implications of the Case: For CMS policy makers, the case highlights the balance between maintaining flexibility in Medicare coverage determinations at a regional level while ensuring that these determinations are consistent with overarching federal standards and statutory requirements.

The decision in this case reinforces the established framework where MACs are key operational components in administering Medicare coverage determinations, maintaining that their role and the existing procedures for issuing LCDs are both legally valid and practically essential for the efficient functioning of the Medicare program. This framework supports nuanced local application of Medicare policies without overstepping legal or constitutional boundaries, which is crucial for adapting to regional medical practice variations and needs.

##

The case particularly stands out to me because the LCD at issue is MolDx's foundational LCD, which stipulates that tests are not covered until they have been reviewed. This LCD is concise and predates the development of the tests in question. Given that judges are statutorily required to defer only to "reasonable" findings in LCDs, it is surprising to see this LCD granted deference. It seems quirky to determine that an LCD written before a test's creation could contain reasonable findings about the test's necessity and appropriateness. Additionally, the judge's decision skims past this point.

Nerd note

Note also this sentence about LCDs and lab tests in PAMA (SSA 1834A), at subsection (g):

1834A(g) Coverage.—(1) Issuance of coverage policies.—(A) In general.—A medicare administrative contractor shall only issue a coverage policy with respect to a clinical diagnostic laboratory test in accordance with the process for making a local coverage determination (as defined in section 1869(f)(2)(B)), including the appeals and review process for local coverage determinations under part 426 of title 42, Code of Federal Regulations (or successor regulations).


FDA Offers Very Odd (I'm Being Polite) Assessment of LDTs

When FDA released its proposal to regulate LDTs on September 29, 2023, it provided a very large online library of supporting documents.  I was very annoyed by the online financial analysis, which buried the all-important projected costs to industry in an obscure cloud document, and even there, in tables far in the back.

Another online document is "Examples of IVDs Offered as Laboratory Developed Tests that Raise Public Health Concerns," part of Docket FDA-2023-N-2177 as item -0076.

https://downloads.regulations.gov/FDA-2023-N-2177-0076/content.pdf

To my eye, as a board-certified pathologist, the report deals strangely with false positives and false negatives.  FDA-approved (cleared) tests have false positives and false negatives too.  The report states that the Oncotype DX test has a "69% false positive rate."  But this is a breast cancer test where a high score indicates a 30-40% risk of recurrent breast cancer.  That is not a "69% false positive rate"; a positive score IS, by design, a report of roughly 30% recurrence risk.  The Agendia MammaPrint test is FDA-cleared and serves a similar function, where a positive score indicates a high risk of breast cancer recurrence, in the 30% range.  By the same logic, FDA would have to say this cleared IVD has a "70% false positive rate."  But the report is a risk category, not a diagnosis of (future) cancer.
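To make the arithmetic concrete, here is a minimal sketch of why a risk-stratification score cannot be read as a binary diagnosis.  The numbers (100 patients, 30% recurrence risk) are illustrative only, not taken from the FDA memo or the test's labeling.

```python
# Illustrative only: a risk-stratification score is not a binary diagnosis.
# Assume a "high" recurrence score corresponds to a 30% risk of recurrence.

high_score_patients = 100
recurrence_risk = 0.30  # the test's actual claim for a high score

recurrences = int(high_score_patients * recurrence_risk)  # 30 patients recur
no_recurrence = high_score_patients - recurrences         # 70 do not

# If you misread the high score as a diagnosis of (future) cancer, the 70
# non-recurring patients look like "false positives":
naive_false_positive_rate = no_recurrence / high_score_patients

# But the test never claimed "this patient will recur"; it claimed a 30% risk,
# and 30 recurrences out of 100 is exactly consistent with that claim.
print(f"Naive 'false positive rate': {naive_false_positive_rate:.0%}")  # 70%
print(f"Observed recurrence risk:    {recurrences / high_score_patients:.0%}")  # 30%
```

The "70% false positive rate" and the "30% recurrence risk" are the same numbers viewed through two different framings; only the second matches what the test actually reports.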

The same type of odd example occurs throughout the memo.   In other cases, single instances of a false positive or false negative are cited, but all FDA-authorized tests produce occasional false positives and false negatives too.



Sunday, April 21, 2024

Very Brief Blog: FDA Promises Reform, Efficiency in Device Review

At the Medical Device Manufacturers Association (MDMA) meeting in April 2024, FDA CDRH director Dr. Jeff Shuren promised reforms that, over time, will speed up the device review process while reducing the failure rate.  He refers to this as "agile" review; in software, the term refers to rapid sprints toward new deliverables.

Find an article open access at RAPS:

https://www.raps.org/News-and-Articles/News-Articles/2024/4/Shuren-CDRH-reimagining-medtech-premarket-review-p?utm_campaign=Regulatory-Focus&utm_source=linkedin&utm_medium=social

See also some notes by an observer at Linked In:

https://www.linkedin.com/feed/update/urn:li:activity:7056666727924842496/

Jared Seehafer, at that link, shows a slide about "Statutory Red Tape," the "47 year old framework is not fit for purpose," and solutions to "remove red tape."

Recall that in February 2024, FDA promised (in a press release) to eventually shift many PMA diagnostics to Class II / 510(k)...here.

Friday, April 19, 2024

Tuesday, April 23: DeciBio Webinar on FDA LDT Regulation

Don't miss DeciBio's April 23, 2024, 90-minute panel on FDA regulation of lab-developed tests.  It's at 9 am Pacific, 12 noon Eastern.   I am lucky to be one of six panelists.

Find the registration here:

https://data.decibio.com/navigating-fda-ldt-regulation-webinar

My presentation will largely recap findings in my October 10, 2023 white paper:

https://www.discoveriesinhealthpolicy.com/2023/10/online-ahead-of-print-white-paper-on.html

###

###

A DeciBio Consulting Webinar

Executive Roundtable: Navigating the FDA's Laboratory Developed Tests Regulation

When: 

Tuesday, April 23, 2024

9:00 AM PT / 12:00 PM ET / 04:00 PM UK

What to expect:

Join us for a digital executive roundtable discussion of DeciBio's white paper "Regulatory Reckoning: Navigating the FDA's Laboratory Developed Tests Regulation". Free download available here.

JAMA on Medical AI Accuracy; Stanford Releases 500-page report on AI 2024

JAMA releases an important short article, "Comparative Analysis of Multimodal Large Language Model Performance on Clinical Vignette Questions," by Han et al. (here).  Authors are from Berlin, Munich, and Aachen.  See also a 9-minute podcast.

AI performance across a variety of tests and AI models continues to improve, with some differences between models and subject areas.   Many models are competitive with, or exceed,  human performance.

The graphic below shows six models and one human group graded on harder (left) and easier (right) questions.  In each cluster, the humans are on the right.   All groups perform better on the easier (right) questions, but GPT-4V consistently outperforms humans (far-left vs far-right column in each cluster).  There are several other graphics in the article.


##

Stanford releases its April 2024, 500-page report on progress in AI.   See a news article about it here.

See the home page here.   Get the full PDF here.  Get the 32-page Science/Medicine chapter here.  See a summary of the Stanford report, by Margaretta Colangelo, here.



AI Corner

ChatGPT4 summarizes the Sci-Med chapter.

The chapter on Science and Medicine in the 2024 Artificial Intelligence Index Report highlights the significant advancements in AI's role within these fields, emphasizing both scientific discovery and medical innovations. Here's a detailed summary:

In 2023, AI's influence in scientific research led to substantial developments in various areas, from weather forecasting to materials science. One key highlight was the introduction of AlphaDev, an AI system that developed efficient sorting algorithms, marking significant progress over decades of manual research. Another noteworthy project, FlexiCubes, utilized AI for optimizing 3D mesh constructions, crucial for applications ranging from gaming to scientific visualization.

The medical field also saw remarkable AI-driven innovations. Systems like SynthSR have transformed brain scan analyses by synthesizing standard medical scans into high-resolution images, aiding in both research and clinical settings. Furthermore, AI's role in diagnosing and managing diseases has expanded, illustrated by the development of AI tools like EVEscape for pandemic prediction and the advanced mutation classification capabilities of AlphaMissense.

Moreover, AI models have been increasingly recognized by regulatory bodies such as the FDA, which approved a significant number of AI-related medical devices, underscoring their reliability and effectiveness. The chapter also delves into AI's potential to enhance healthcare administration and patient care, citing systems like MedAlign, which aims to reduce administrative burdens through AI-driven solutions.

Overall, the chapter provides a comprehensive look at how AI is reshaping the landscapes of science and medicine by accelerating research processes, enhancing diagnostic accuracy, and improving overall efficiency in healthcare operations.

Proteomics performance enhanced by AI interpretation of signals (Alzheimer's)


Sidebar: 10 Key Takeaways

AlphaDev: AI system developed new efficient sorting algorithms, impacting computational research significantly.

FlexiCubes: AI-driven optimization of 3D mesh constructions, improving applications in diverse fields.

SynthSR: AI transformation of brain scans into high-resolution images, enhancing medical analysis.

EVEscape: AI tool for predicting pandemic patterns, aiding in preemptive healthcare strategies.

AlphaMissense: AI model for classifying genetic mutations, potentially revolutionizing genetic research.

FDA Approvals: Increased approval of AI medical devices, confirming their efficacy and safety.

GraphCast: Enhanced weather forecasting through AI, demonstrating superior accuracy.

GNoME: AI's breakthroughs in materials science, discovering new materials faster.

MedAlign: AI application in healthcare administration, aiming to streamline operations.

CoDoC: Integration of AI in clinical settings, optimizing diagnostic processes and outcomes.


[/end ChatGPT4]

Tuesday, April 16, 2024

Brief Blog: CMS Posts Info for Summer Lab Pricing Meetings (June 25, July 25)

CMS holds two meetings each summer for pricing of new lab tests.  One is a public comment meeting (June 25, 2024); the other is an expert advisory panel, which is livestreamed (July 25-26, 2024).

Find the expert panel announcement here:

https://www.federalregister.gov/documents/2024/04/16/2024-08008/medicare-program-meeting-announcement-for-the-medicare-advisory-panel-on-clinical-diagnostic

Find the public meeting announcement here:

https://www.federalregister.gov/documents/2024/04/16/2024-08005/public-meeting-on-june-25-2024-regarding-new-and-reconsidered-clinical-diagnostic-laboratory-test

Track updates on the CMS CLFS annual meeting page:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs/annual-public-meetings

UPDATE 4/18:  CMS has posted a code list (74 codes), a template for comments (the same 74 codes but with more columns), and an updated PowerPoint template for public presentations.   All the new GSP codes from last year (such as CGP liquid biopsy) are under Reconsideration.  Otherwise, there are only a handful of new CPT 80,000-series codes: several variants of Alzheimer testing (tau, beta-amyloid).

CMS knows the code list is preliminary.  We've seen that some 60 PLA codes, plus some new GSP codes, will be at the May 2024 AMA CPT meeting, and those will probably be rolled into the final June agenda.

I think the PowerPoint template has been updated to bring more focus to cost information.  There's a slide for "test background" and "purpose and methodology."  There's a slide asking you to present "resources for test," "costs," and "charges."  On the last slide, they ask you to petition for either a crosswalk or a gapfill price, and explain why.


For updates specific to the expert panel meeting, see:

https://www.cms.gov/medicare/payment/fee-schedules/clinical-laboratory-fee-schedule-clfs/clfs-advisory-panel

For several years, the meetings have been virtual-only.  This year they will be "hybrid" (both in the CMS auditorium in Baltimore, and live-streamed).

PUBLIC COMMENTS

Public comments must be submitted by May 30, 2024, 5 pm ET.  Use the PPT template for your opening slides; it captures basic info in a fixed format.

The Code Agenda - Wait for it....

The code list (likely well over 100 codes this year) will be released 30 days before the meeting, i.e., around May 25.   Assuming you know whether you have a code in the ring, you don't have to wait for the May posting by CMS to start working on your slides.   CMS will include codes that "pass" at the AMA CPT meeting around May 10, but it takes a while for AMA to get those codes over to CMS to post.

International:

Although I don't see any explicit remarks this year, in some past years attendees without a US passport faced very elaborate advance requirements, with no exceptions.   If this pertains to you, I suggest you touch base with CMS ASAP.



###

AI CORNER

Having read the PDF about the experts advisory meeting, retell the story in the style of an old fairy tale.

ChatGPT

Once upon a time, in a realm governed by the wise and just, there existed a venerable gathering known as the Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests. This illustrious panel, summoned by the Secretary of the Department of Health and Human Services, was a convocation of the most learned and sagacious minds from across the kingdom, each a master in their own right of the arcane arts of laboratory science and health economics.

In the heart of a grand hall, under the luminous glow of chandeliers that danced with light, the panel convened twice each circling of the sun. They were charged with a task of great import: to deliberate upon the mystical codes of new and reconsidered clinical diagnostic laboratory tests, deciding how gold—the coin of the realm—should be apportioned for their use.

The Panel, composed of molecular pathologists, seasoned researchers, and sages with deep knowledge of laboratory workings, employed ancient methods known as "crosswalking" and "gap filling" to cast their decisions. Crosswalking was a method where the value of a new test was measured against the worth of tests of old, linking them like the stones of a castle wall. Gap filling, on the other hand, was akin to charting unknown territories, setting values in places where no prior paths existed.

Each meeting was a spectacle of debate and discussion, with the panelists' words weaving through the air like spells, casting the future of medicine. They were watched by many, from near and far, through magical mirrors and devices that could capture their likeness and voice, spreading their deliberations across the kingdom in the blink of an eye.

The Panel’s recommendations were like golden keys that unlocked the treasury of Medicare, ensuring that the alchemists and healers across the land could continue their noble work, armed with the best of tools and knowledge. Their decisions were recorded in great ledgers and announced in the town squares and through messengers on swift horses, so that all could know of the wise determinations made within the grand hall.

Thus, the realm thrived, with the Panel’s wisdom guiding the health of its people. And so the story of the Medicare Advisory Panel on Clinical Diagnostic Laboratory Tests was woven into the fabric of the kingdom's history, a tale of wisdom and stewardship, told and retold wherever scholars gathered under the stars.

##

Write a short, snappy, exciting summary of this CMS policy meeting announcement. Highlight some of the informational content, like deadlines and what kind of info to submit if you comment.

Chat GPT4

Join us on June 25, 2024, for a pivotal public meeting hosted by the Centers for Medicare & Medicaid Services (CMS), where we will delve into the future of diagnostic healthcare through discussions on the Clinical Laboratory Fee Schedule (CLFS) for the upcoming year. This is your chance to contribute to shaping Medicare payment structures for new and reconsidered diagnostic laboratory test codes.

At this meeting, we will gather crucial feedback and recommendations on establishing payment amounts for innovative and substantially revised test codes. Whether you're a healthcare provider, patient advocate, industry professional, or a stakeholder in the medical laboratory community, your insights will help ensure equitable and effective payment schemes.

Important Deadlines:

Presentation and Comment Submission: If you wish to present or have your comments considered during the meeting, ensure your presentations and written comments are submitted electronically to the dedicated CMS email (CLFS_Annual_Public_Meeting@cms.hhs.gov) by May 30, 2024, at 5:00 p.m. EDT.

Special Accommodations Request: If you need special accommodations due to a disability, requests must be submitted by the same deadline of May 30, 2024.

What to Include in Your Comments:

Data-Based Recommendations: We seek detailed recommendations based on data that support your views on the appropriate basis for establishing payment amounts.

Details on New or Revised Tests: Include comprehensive information on the purpose, methodology, costs, and charges related to the new or reconsidered tests. This will aid in a transparent evaluation process.

Virtual and In-Person Participation:

The meeting will be accessible both virtually and in person at the CMS Central Building in Baltimore, Maryland, allowing participants from across the nation to join and contribute. If attending in person, registration is required, and we recommend arriving early to complete the security process.

Post-Meeting Actions:

Proposed Determinations Publication: CMS will publish proposed determinations by early September 2024. These will outline preliminary decisions on the payment amounts for each test code discussed during the meeting.

Final Comments on Proposed Determinations: Stakeholders will have until early October 2024 to submit written comments on these proposed determinations.

This meeting not only offers a platform to influence current policies but also to engage with key decision-makers and peers in the healthcare sector. Your participation ensures that the voice of the community is heard loud and clear in the decision-making process. Join us to help define the standards that will govern clinical diagnostic testing payments in 2025!     [end/chat gpt]


Brief Blog: AMA Considering 60 New PLA Codes This Quarter

Typically, about 20 new proprietary PLA codes are created each quarter by AMA CPT.   For the Q2 cycle (April-May-June), there are about 60.  A couple of those are revisions.  My tally of new codes doesn't include the 3 or 4 being deleted.

AMA is posting the roster for public comment here:

https://www.ama-assn.org/practice-management/cpt/cpt-pla-codes

(Look for, "review the PLA agenda").

https://www.ama-assn.org/system/files/pla-public-agenda-may-2024.pdf

AMA will hold an invitation-only April 23 teleconference of the PLA committee to wordsmith and finalize the codes.  Those will then be voted on (usually very quickly) by the AMA CPT committee of the whole, in Chicago in May.

Final codes will be released in time for the upcoming CMS summer pricing meetings (June 25, July 25) and posted on the AMA website June 30.

Sunday, April 14, 2024

AI Corner: JAMA Updates Guidelines for AI Writing Assistance

[Note: I am traveling, so I asked ChatGPT-4 to be the substitute author and write up this blog about the new JAMA article.]

[Here is a blog about how Scite.AI generates biomedical writing.]

###

Input:

https://jamanetwork.com/journals/jama/fullarticle/2816213

Flanagin A et al. (2024) Reporting use of AI in research and scholarly publication - JAMA NETWORK guidance.  JAMA 331:1096-8.

###

[AI write-up follows.]

###

Exploring the Nuances of JAMA's Updated AI Reporting Guidance

The Journal of the American Medical Association (JAMA) Network has recently updated its guidelines for reporting the use of Artificial Intelligence (AI) in research and scholarly publications. This is the second update within a year, highlighting the rapid advancements and increasing integration of AI in medical research. These guidelines are designed to ensure transparent, appropriate, and accountable use of AI, fostering a foundation of trust and replicability in scientific publications.

  • Understanding the Updated Guidance

JAMA's revised guidelines reflect an effort to address the challenges and potential pitfalls associated with the use of AI in medical research. With AI's capabilities expanding from predictive analytics to complex decision-making tools, the need for clear reporting standards has never been more critical. Here are several key areas outlined in the guidance:

Transparency in Manuscript Preparation:

Authors must disclose any use of AI in creating or editing manuscript content, specifying the AI tools used, their versions, and their roles in the manuscript preparation process.

Detailed AI Involvement in Research:

For research involving AI, authors are required to adhere to specific reporting guidelines such as CONSORT-AI for clinical trials with an AI component and CLAIM for AI in medical imaging. These guidelines ensure that each study's AI methodology is fully reproducible.

Ethical Considerations and Data Integrity:

The guidance emphasizes the importance of ethical use of AI, including considerations for patient privacy and the handling of data. Authors must ensure that no identifiable patient information is included and must address copyright concerns related to AI-generated content.

Methodological Rigor:

Detailed descriptions of AI applications in study design, hypothesis generation, and data analysis are mandated. This includes a clear delineation of the AI models used, data sets involved, and the assumptions tested.

Impact on Clinical Decision-Making:

The guidance raises concerns about AI's role in clinical decision-making processes, urging authors to discuss the potential biases and inaccuracies of AI tools and their implications for clinical outcomes.


  • Implications for Medical Experts

For medical professionals and researchers, the updated JAMA guidance serves several vital functions:

Enhancing Reliability: By standardizing the reporting of AI applications in research, the guidelines help ensure that studies are reliable and their results are reproducible, a cornerstone of scientific inquiry.

Fostering Innovation: Clear guidelines provide a roadmap for incorporating AI into research in a way that is both innovative and ethically responsible, pushing the boundaries of medical science while maintaining rigorous standards.

Educating the Community: The guidelines also play an educational role, informing researchers about the potential and limitations of AI in medical research and encouraging a thoughtful approach to its application.

Preparing for Future Challenges: As AI technology continues to evolve, these guidelines will likely be updated to address new challenges and opportunities. This dynamic approach ensures that the medical research community remains at the forefront of technological advancements.

  • Conclusion

JAMA's updated AI reporting guidelines are a testament to the evolving landscape of medical research in the era of artificial intelligence. They not only promote transparency and accountability but also encourage thoughtful engagement with new technologies. For medical experts, staying informed about these guidelines is crucial in navigating the complexities of AI in research, ensuring that their work is both impactful and in line with the highest standards of scientific integrity.   [ / Chat GPT4]


Bonus: "Write a second article, this time in the voice of a clinical researcher."

###
 
ChatGPT
Navigating JAMA's Updated AI Guidelines: A Clinical Investigator's Perspective

As a clinical investigator who frequently publishes in JAMA and regularly integrates Artificial Intelligence (AI) into my research, the updated guidelines on reporting AI use are both a welcome development and a crucial framework for my work. These guidelines reflect the evolving landscape of AI in medical research and underscore the importance of transparency and accountability. Here, I will share how these updates influence my research process and the broader implications for the scientific community.

Embracing Transparency in Manuscript Preparation
In my recent projects, I've leveraged generative AI tools to refine the language of manuscripts and generate preliminary data analyses. According to JAMA's new guidance, detailing the use of AI in manuscript preparation is mandatory. Now, in the acknowledgment section of my papers, I explicitly list the AI software used, including the version, the manufacturer, and the specific roles these tools played in preparing the content. This level of transparency ensures that peers can understand the extent of AI involvement and assess the integrity of the reported findings accurately.

Adhering to Rigorous Reporting Standards
The guidance provides a comprehensive list of AI-related reporting standards, such as CONSORT-AI for clinical trials and CLAIM for studies involving medical imaging. My research often involves predictive modeling in clinical prognostication, necessitating adherence to guidelines like TRIPOD-AI. These standards compel me to meticulously document AI methodologies, from the datasets used for training and validation to the assumptions tested and the model's performance metrics. This discipline not only enhances the reproducibility of my work but also elevates the scientific rigor of the studies.

Ethical Considerations and Patient Privacy
One of the paramount concerns in using AI in medical research is the ethical handling of data, particularly patient information. The updated guidelines reinforce the necessity of omitting identifiable patient data and of being vigilant about copyright and intellectual property rights when using AI-generated content. In my practice, I ensure that all AI-generated data, texts, or images are rigorously checked for compliance with these ethical standards, a practice that not only safeguards patient privacy but also aligns with JAMA's emphasis on ethical responsibility.

Methodological Innovations and Challenges
The guidelines also push me to constantly innovate and critically evaluate the AI technologies I employ. For instance, when using AI to generate hypotheses or assist in data analysis, I now provide a thorough description of the AI's role, detailing the specific tasks it performed and the rationale behind using AI for these purposes. This careful documentation helps in addressing potential methodological biases and inaccuracies that AI tools might introduce, ensuring that the research outcomes are robust and reliable.

The Future of AI in Clinical Research
Looking ahead, the rapid development of AI technologies suggests that these guidelines will continue to evolve. Staying abreast of these changes is not just about compliance; it's about actively contributing to the dialogue around AI in medical research. By adhering to JAMA's guidelines, I not only ensure the integrity of my research but also help shape the standards that will govern future innovations.

Conclusion
The updated JAMA guidelines on reporting AI use are more than just procedural necessities; they are the pillars upon which we can build a transparent, ethical, and scientifically rigorous future for AI in medical research. As someone at the forefront of integrating AI into clinical investigations, I view these guidelines as essential tools that guide my work and enhance the credibility and utility of our scientific contributions.

Thursday, April 11, 2024

JAMA Weighs In: How to View Endpoints for Cancer-Screening Technologies

Header:  This week, JAMA offers a large meta-analysis of different endpoints in cancer-screening trials, and Bach offers a conservative op-ed on endpoints other than mortality itself.

##

Find Feng et al here.  

To oversimplify, imagine this as a classic surrogate-biomarker study.  You have the best endpoint, say mortality at 5 years, and you have an early endpoint, incidence of late-stage cancer at 1 year.  You ask: if you made decisions on "late-stage" at 1 year, how well would that predict the result had you waited for "mortality" at 5 years?   Or again, it's the same logic as asking, if you have only PFS today, how well does that predict OS later?
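The surrogate-endpoint logic can be sketched with a toy simulation.  This is a hedged illustration only: the "coupling" parameter, the effect sizes, and the noise model are all invented for the sketch, not taken from Feng et al.  The point is simply that when the early endpoint is tightly coupled to mortality (as reported for lung and ovarian cancers), decisions on the surrogate usually agree in direction with decisions on mortality; when coupling is weak (breast, colorectal, prostate), they agree much less often.

```python
import random

random.seed(0)

# Toy model: each synthetic "trial" has a true mortality effect, and a
# surrogate effect that partly tracks it. All numbers are invented.

def simulate_trial(coupling):
    """Return (surrogate_effect, mortality_effect) for one synthetic trial.
    'coupling' (0..1) is how strongly the true mortality effect drives the
    surrogate; the remainder is trial-to-trial noise."""
    mortality_effect = random.gauss(-0.10, 0.10)  # e.g., relative change in deaths
    noise = random.gauss(0.0, 0.10)
    surrogate_effect = coupling * mortality_effect + (1 - coupling) * noise
    return surrogate_effect, mortality_effect

def agreement_rate(coupling, n_trials=10_000):
    """Fraction of trials where the surrogate points the same direction
    (benefit vs harm) as mortality."""
    agree = 0
    for _ in range(n_trials):
        s, m = simulate_trial(coupling)
        if (s < 0) == (m < 0):
            agree += 1
    return agree / n_trials

print("agreement, strong coupling (0.9):", agreement_rate(0.9))
print("agreement, weak coupling (0.3):  ", agreement_rate(0.3))
```

With strong coupling the early endpoint is a usable stand-in; with weak coupling, acting on the 1-year surrogate would often get the 5-year answer wrong, which is essentially Bach's caution.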

Find Bach here.

AI Corner

The FENG systematic review and meta-analysis explores the use of late-stage cancer incidence as an alternative endpoint to cancer-specific mortality in randomized clinical trials for cancer screening. Analyzing 41 trials, it finds mixed results: while incidence of late-stage cancer correlates highly with mortality for ovarian and lung cancers, this is less so for breast, colorectal, and prostate cancers. This suggests that while late-stage cancer could be a suitable endpoint for some cancer types, it is not universally applicable across all cancers, highlighting the need for cancer-type specific trial endpoints.

The BACH opinion piece critically evaluates the use of late-stage cancer incidence as an alternative endpoint to cancer-specific mortality in screening trials, as discussed in the FENG study. BACH emphasizes the inconsistencies in correlation between late-stage cancer and mortality across different cancers, highlighting risks of misinterpreting screening benefits. He argues that despite the shorter trial durations offered by using late-stage cancer as an endpoint, cancer-specific mortality remains the more reliable measure for assessing the true efficacy of cancer screening tests.

[AI, Chat GPT 4]




Brief Blog: CMS Issues Inpatient Draft Rule FY2025; SEP-1 noted

The summer rulemaking season kicked off this week with the release of the Inpatient Proposed Rule for FY2025 (effective October 1).  It's logged as CMS-1808-P [proposed].

This will be followed by Physician and Hospital Outpatient rules circa July 1, for CY2025 (January 1).

Find the detailed fact sheet here.

The press release (which is allowed more spin) is here.

As always, CMS has released a typescript "inspection copy" (1,902 pp), opened a 60-day comment period, and will publish the typeset Federal Register rule on May 2.   Here.

###

Improved "New Tech Add On Payment" Evaluations

The clarity with which CMS handles new tech add-on payment applications (10-20 each year) continues to improve.   CMS poses key questions in a tabular format, with rows where the applicant provides an answer and justification.   For example: "Is the technology 'new'?"   Answer: Yes.   Reason: (three sentences explaining why it is new.)    Some of this clarity would probably help in other areas, such as dossiers to MolDx or other MACs and payers, or ADLT applications.

CMS provides these summary tables in the Fed Reg while directing the reader to online sources for backup detail.  Snippet here:

(Snippet: IPPS PDF, p. 285)


###

For those who missed it, the Medicare Advantage final rule came out April 4, 2024, and can be found here.

###
Inpatient Rule - SEP 1

Last year, CMS set a timetable to elevate SEP-1 from a reportable measure to a financially impactful value-based purchasing measure.   This is shown on page 729, Table V-L-02 (2026-2030).  Later, on page 852, CMS cites a PCAST report, "Transformational Effort on Patient Safety" (September 2023), which had four goals, e.g., research and deployment of safe-care measures.  CMS then states that condition-specific or procedure-specific measures, like SEP-1, "are not sufficient to measure and incentivize investment in a resilient safety culture or [investment in] the infrastructure necessary for sustainable high performance within the broad and complex domain of patient safety."  And, "There is strong alignment among experts to shift to a more holistic, proactive, systems-based approach to safety."  But existing measures focus on improving "patient safety for a specific condition..[not] the overall culture in which care is provided."  This leads up (p. 854) to proposing a Patient Safety Structural Measure with 5 complementary domains, which will be "foundational."  There are no changes at this time in the policy forecast of chart-based SEP-1 measurement out to 2030.

There are also some concerns and changes regarding the accuracy of chart-abstracted measures (p. 986).


Monday, April 8, 2024

FDA and RUO: FDA Issues Public Warning Letter to RUO Mfgr

Header:  It's commonplace to see RUO-labeled kits clinically validated within a CLIA lab for use in clinical tests.  It turns heads when FDA issues a warning letter to an RUO kit producer.

##

On the FDA website, see the letter of March 21, 2024, to Agena, citing alleged cases of "misbranding" of non-approved kits as "intended for use in the diagnosis of disease."

Agena letter here:

https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/warning-letters/agena-bioscience-inc-665159-03212024

NOTE:  An earlier version of this blog said a Beckman Coulter FDA letter was also about RUO kits; that was incorrect.  The Beckman letter was about various labeling and design-change concerns.

Beckman letter here:

https://www.fda.gov/inspections-compliance-enforcement-and-criminal-investigations/warning-letters/beckman-coulter-inc-678042-03152024

See subscription coverage at Genomeweb and at BioWorld. See also a detailed April 3 essay by the Mintz law firm, here.

Extracts from the public FDA letter to Agena include the following:

  • Verbal statements made to the FDA investigator during the inspection on behalf of your firm by your firm’s Director of Regulatory and Quality indicating that your firm no longer sells the Impact Dx™ Factor V Leiden and Factor II Genotyping Test on the IMPACT Dx™ System (K132978) and has instead replaced it with a new system, the MA4, and admitting that your firm sells the MA4 and reagent panels to companies who analyze patient samples that are used for clinical diagnosis.
  • Distribution records and a customer list obtained during inspection that indicates your firm shipped MA4 Systems and iPLEX HS Colon Panels to clinical testing laboratories.
  • A customer notification your firm sent to customers dated June 1, 2021, regarding distributed iPLEX HS Colon Panels that alerted customers of an increase in false negative results of certain reagent lots and stated that Agena was taking steps “to not impact your laboratory workflow.” This customer notification was sent to a clinical hospital in Texas and there is no evidence that these reagents were intended for research use only.
  • A copy of the iPLEX HS Panels User Guide obtained during inspection that included instructions for preparing and testing samples as well as instructions for interpreting iPLEX HS Reports including the iPLEX HS Colon Panel intended for use with the MA4.
  • ...iPLEX HS Colon Panel “enables tumor profiling studies of colorectal cancer specimens”, and that, with the iPLEX HS Colon Panel, users can “Detect more than 80 clinically relevant variants across BRAF, EGFR, KRAS, NRAS, and PIK3CA”
  • Your firm’s website also includes the statement that these devices are “For Research Use Only. Not for use in diagnostic procedures.” But, as discussed in this letter, on numerous occasions your iPLEX HS Colon Panel and your MA4 were not being shipped or delivered for an investigation (21 CFR 809.10(c)). And these devices were not for use in the laboratory research phase of development but rather were represented as effective IVDs (21 CFR 809.10(c)(2)(i)). Your disclaimers that the MA4 and your iPLEX HS Colon Panel are intended for research use only (RUO) are inconsistent with the evidence obtained...

Historical Note

In 2013 policymaking regarding RUO materials, FDA referred to its option to investigate manufacturers principally on the basis of sales to clinical labs or clinical centers (link, PDF page 9).   At least in my memory, FDA has rarely used shipping invoices as primary evidence of FDA violations.


Intended Use of Anything

In addition to the 2013 website guidance about RUO test kits, the FDA's actions relate to 2017 and 2021 Federal Register rulemaking on intended use and manufacturers' responsibility for off-label use.   (Of course, to have an "intended use" in the first place, a product must be FDA approved.)

So, tangential to the FDA's new warning letters and their underlying 2013 guidance, see also a 2021 regulation on when manufacturers are responsible for, or implicated in, off-label use. Essay from Hogan Lovells here. Hogan writes,

The 2020 proposed rule expressed FDA’s intention to eliminate longstanding confusion created by the view that, in their then-current form, the regulations indicated that a manufacturer’s mere knowledge of an unapproved use triggers a requirement to update a product’s labeling. The final rule similarly clarifies that a manufacturer will not be deemed to have intended an off-label use based solely on knowledge that its approved product is being used or prescribed off-label. This also reflects the longstanding “practice of medicine exemption,” whereby FDA will not regulate off-label use of a lawfully marketed drug/device by a licensed health care practitioner in the context of caring for an individual patient.

The final rule language also addresses another highly contested issue by preserving the ability of laboratories to develop laboratory-developed tests (LDTs) using either in vitro diagnostic (IVD) or research use only (RUO) components, as long as such LDTs comply with Clinical Laboratory Improvement Amendments (CLIA) regulations in their development and are offered on the order of a health care provider...
I can't find the IVD/CLIA section that is "also addressed," per Hogan (discussing 86 FR 41383, 8/2/2021), in either the 2021 rule or its 2017 predecessor.  There is a lot of discussion of the First Amendment and case law from the courts.

There is a short discussion of the oft-perplexing difference between "intended use" and "indication for use" (p 41395), with a comment that "indication for use" will not be defined in this rule.