Friday, September 5, 2025

Bipartisan Policy Center: Report: Rise of AI in Federal Health Agencies

On August 10, 2025, the Bipartisan Policy Center released a detailed report, with numerous graphics, on "Mapping the Rise of AI in Federal Health Agencies."

It's cited in the briefing memo for the September 3, 2025, House Health hearing on AI in healthcare.

Find the full report here:

https://bipartisanpolicy.org/blog/mapping-the-rise-of-ai-in-federal-health-agencies/


AI Corner

Federal health agencies are rapidly expanding their use of artificial intelligence, moving from pilot projects to core operations in areas like fraud detection, outbreak surveillance, and regulatory review. Under new White House and HHS mandates, agencies such as CDC, CMS, and FDA now track and disclose hundreds of AI use cases, with applications ranging from internal workforce support to high-impact public services. While tools like FDA’s new generative AI assistant Elsa promise major efficiency gains, they also raise questions about transparency, safeguards, and the role of AI in sensitive regulatory decisions.


Here’s a TL;DR of the Bipartisan Policy Center report "Mapping the Rise of AI in Federal Health Agencies" (August 10, 2025):


Federal AI Expansion

  • FDA launched Elsa (June 2025), a secure generative AI chatbot for staff productivity.

  • HHS's inventory includes AI use cases dating back to 1996, with cases quadrupling between 2022 and 2024 (270 cases across nine agencies).

  • Growth accelerated after generative AI tools (e.g., ChatGPT) emerged.

Policy & Oversight

  • 2020 executive order required agencies to inventory AI uses.

  • April 2025 OMB memoranda mandated Chief AI Officers, transparency, and standardized evaluation.

  • Agencies must now report AI tools’ purposes, safeguards, and metrics.

Agency Highlights

  • CDC: Uses AI for literature reviews, outbreak prediction, and internal operations (ChatCDC).

  • CMS: Applies AI to fraud detection, claims monitoring, and customer complaints; recent AI-supported fraud takedown blocked $4B in false claims.

  • FDA: Uses AI for data extraction from submissions and document review. Elsa supports writing and summarization but raises concerns about reviewer influence and legal liability. FDA also launched two AI councils: one for regulating AI-enabled products, another for internal oversight.

Key Applications (6 categories):

  1. Education & Workforce (training, chatbots).

  2. Emergency Management (social media monitoring, outbreak tracking).

  3. Government Services (benefits delivery, adverse event monitoring).

  4. Health & Medical (surveillance, real-world data).

  5. Law & Justice (fraud detection, case routing).

  6. Mission-Enabling (internal workflows like drug labeling review).

Takeaway
AI adoption in federal health agencies is accelerating, with a focus on transparency, fraud prevention, regulatory review, and workforce support. Success will depend on governance, safeguards, and clarity around AI’s role in high-stakes regulatory decisions.