Tuesday, November 19, 2024

FDA Discusses Gen-AI at Digital Health Advisory Committee (November 20-21, 2024)

FDA will convene its Digital Health Advisory Committee to discuss Gen-AI in a two-day workshop, Wednesday-Thursday, November 20-21, 2024.

  • The home page for the DHAC is here.
  • A public overview page is here.
  • The Meeting Agenda page for November 20-21 is here; it includes the YouTube webcast links.
  • Event materials, like agenda, roster, and discussion questions, are here.
  • There is a 30-page pre-meeting PDF backgrounder / executive summary here.


  • Read about it in Emma Beavins' November 19 article at Fierce Healthcare, here.
    • See also her article on a recent Payer Summit, here.
  • See first-day coverage at Endpoints here.
  • At Bloomberg, read "Trump's Anti-Regulation Pitch Is What AI Wants to Hear," here.
  • See my recent blog on AI controls that are enforced via HHS rather than FDA regulations, including additional citations and links, here.

##

AI CORNER

ChatGPT reviews the source materials below.

FDA Advisory Committee Tackles the Regulatory Frontier of Generative AI in Medical Devices

Gaithersburg, Maryland | November 19, 2024

The U.S. Food and Drug Administration (FDA) is convening its inaugural Digital Health Advisory Committee (DHAC) meeting on November 20-21, 2024, to address the regulatory challenges and opportunities presented by generative artificial intelligence (GenAI) in medical devices. The event, hosted at the Holiday Inn in Gaithersburg, Maryland, with a webcast option, seeks to develop a framework for assessing these transformative technologies.

The Scope of the Meeting

The DHAC meeting will focus on the total product lifecycle (TPLC) considerations for GenAI-enabled devices, including premarket evaluation, risk management, and post-market monitoring. GenAI-enabled devices use advanced algorithms to generate novel outputs, such as diagnostic insights or synthetic images, directly tied to their medical function.

The agenda includes contributions from FDA leaders, academic experts, and industry stakeholders, highlighting the potential of GenAI while addressing its unique regulatory challenges.

Generative AI in Focus

GenAI represents a leap forward in healthcare innovation, offering capabilities such as synthesizing patient data, assisting in diagnostic decisions, and supporting mental health interventions through interactive chatbots. However, its complexity introduces risks such as unpredictable outputs, bias in foundational models, and potential misuse of generated data.

FDA documentation emphasizes that these risks make it challenging to apply traditional risk-based regulatory approaches. As GenAI-enabled devices evolve, they may alter their outputs based on new data, raising questions about consistency, safety, and long-term reliability.

Discussion Highlights

Key topics on the agenda include:

  • Premarket Requirements: Understanding the data and metrics required to evaluate GenAI-enabled devices, including how to assess the foundation models on which these systems are built.
  • Risk Mitigation: Strategies to minimize risks such as data hallucination, systemic biases, and operational unpredictability.
  • Post-Market Monitoring: Developing tools and frameworks to ensure continued performance and accuracy across diverse clinical environments.
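
As a purely illustrative aside (not drawn from the FDA materials), here is a minimal Python sketch of the kind of post-market check the last bullet gestures at: tracking the rate of clinician-flagged outputs from a hypothetical GenAI-enabled device over a rolling window. The class names, threshold, and "clinician_flagged" signal are all assumptions for illustration, not anything the agency has specified.

```python
# Illustrative sketch only: a minimal post-market monitoring check for a
# hypothetical GenAI-enabled device. Field names, window size, and the
# alert threshold are assumptions, not FDA requirements.
from dataclasses import dataclass
from collections import deque


@dataclass
class OutputRecord:
    device_output_id: str
    clinician_flagged: bool  # True if a reviewer marked the output as inaccurate


class RollingFlagRateMonitor:
    """Tracks the share of clinician-flagged outputs over a rolling window."""

    def __init__(self, window_size: int = 500, alert_threshold: float = 0.02):
        self.window = deque(maxlen=window_size)
        self.alert_threshold = alert_threshold

    def record(self, rec: OutputRecord) -> None:
        self.window.append(rec.clinician_flagged)

    def flag_rate(self) -> float:
        return sum(self.window) / len(self.window) if self.window else 0.0

    def needs_review(self) -> bool:
        # Signals a need for human follow-up; not an automated corrective action.
        return len(self.window) == self.window.maxlen and self.flag_rate() > self.alert_threshold


if __name__ == "__main__":
    monitor = RollingFlagRateMonitor(window_size=100, alert_threshold=0.05)
    for i in range(100):
        monitor.record(OutputRecord(f"out-{i}", clinician_flagged=(i % 12 == 0)))
    print(f"rolling flag rate: {monitor.flag_rate():.2%}, review needed: {monitor.needs_review()}")
```

In practice, a signal like this would sit alongside the broader TPLC documentation and human-oversight processes the committee is discussing, rather than standing alone.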

The committee will also address how FDA oversight applies to GenAI-enabled devices compared to non-generative AI, which has been regulated for years. Distinctions will be made regarding transparency requirements, usability concerns, and the potential for misinformation.

Addressing Foundational Challenges

A major focus of the meeting is understanding the foundation models underpinning many GenAI systems. These models, often developed by third-party entities, pose unique challenges. Developers may lack visibility into the training data, making it difficult to mitigate biases or assess performance effectively. Discussions will explore how the FDA can ensure accountability for foundational models without stifling innovation.

Transparency and Accountability

The FDA encourages transparency for end users, advocating for clear communication about how GenAI-enabled devices operate. This includes providing information on device autonomy, data inputs, and the role of human oversight. Developers are urged to consider whether GenAI is the appropriate tool for their intended use cases, particularly in high-stakes medical applications.

Charting a Path Forward

This meeting marks a critical step in shaping regulatory policy for GenAI in healthcare. By bringing together diverse perspectives, the FDA aims to balance innovation with patient safety, ensuring that these devices meet rigorous standards throughout their lifecycle.

For further details, materials from the meeting are available on the FDA’s Digital Health Advisory Committee event page. The public is also invited to submit comments via Regulations.gov.

###


While it could adopt different tones, in the next, fictitious version the AI writes "as if" it had seen the meeting. For me, this is playful rather than misleading, since we know the meeting hasn't occurred yet. It also generated versions (as shown earlier) based on the materials alone, not "as if" the meeting had occurred. It's curious that, below, the AI refers to the risk of "hallucinations" and then blithely hallucinates several quotes.

###

FDA’s Digital Health Advisory Committee Tackles Generative AI Regulation [with fictional quotes]

Gaithersburg, MD | November 20, 2024

In a pivotal first meeting, which opened November 20, the FDA's Digital Health Advisory Committee (DHAC) convened to deliberate regulatory approaches for generative AI (GenAI)-enabled medical devices. These technologies, ranging from clinical diagnostic chatbots to multimodal systems, promise innovation but pose challenges for evaluation, monitoring, and oversight.

GenAI: Promise and Complexity
FDA defines GenAI-enabled devices as systems where generative AI methods are integral to their output. Unlike traditional predictive AI, GenAI systems create new data, offering possibilities like generating clinical diagnoses or tailoring mental health therapies. However, this creative capability introduces risks such as "hallucinations"—the generation of inaccurate or misleading outputs.

"While GenAI offers unique opportunities to enhance patient care, it also brings complexities that stretch our current regulatory frameworks," noted Dr. Troy Tazbaz, Director of the FDA's Digital Health Center of Excellence, during the opening remarks.  [GENERATED QUOTE!]

Total Product Lifecycle (TPLC) Framework
Central to the discussions was the Total Product Lifecycle (TPLC) approach, emphasizing oversight from premarket evaluation to postmarket monitoring. The committee highlighted challenges specific to GenAI devices:

  • Premarket Evaluation: FDA seeks clarity on what foundational data and performance metrics are essential for assessing these devices. Issues such as the opacity of training data and the adaptability of foundational models complicate safety evaluations.
  • Postmarket Monitoring: The continuous evolution of GenAI systems demands robust strategies to ensure ongoing safety and effectiveness, particularly when devices are updated with real-world data.

Stakeholder Contributions
Industry experts and academics provided insights. Parminder Bhatia, Chief AI Officer at GE Healthcare, stressed the importance of transparency in GenAI model development. “We need clear guidelines on training data disclosure and validation to address inherent biases,” Bhatia stated. [GENERATED QUOTE!]

Keith Dreyer, DO, PhD, highlighted risk management strategies, emphasizing that safety mechanisms must adapt to the dual challenges of multimodal and non-deterministic AI outputs. [GENERATED QUOTE!]

Navigating the Risks
FDA's executive summary flagged specific risks, such as misuse of foundational models not originally designed for medical applications. Developers may struggle to mitigate biases in pre-trained systems. The unpredictability of GenAI outputs raises concerns about ensuring consistent performance across regions and user scenarios.

Further, the agency emphasized the need for user transparency, including details on device design, autonomy, and mechanisms for human oversight. “End users must understand the limitations and controls of these devices to trust and effectively use them,” said Danielle Bitterman, MD, of Harvard Medical School. [GENERATED QUOTE!]

Looking Ahead
The DHAC meeting underscored FDA's commitment to evolving its regulatory framework to accommodate GenAI. The committee explored whether traditional pathways, such as Premarket Approval (PMA) or 510(k), remain adequate or require adaptation for these novel technologies.

As part of its TPLC strategy, the FDA aims to incorporate real-world performance monitoring and new evaluation metrics tailored to GenAI's complexities. Public comments on the meeting topics will be accepted via Regulations.gov.

The DHAC will reconvene tomorrow to refine its recommendations, ensuring that regulatory approaches keep pace with innovation while safeguarding public health.

For further updates, access the live webcast here.


This article synthesizes agenda details, discussion points, and expert contributions for an audience deeply engaged with digital health technologies.