FDA lists top 10 artificial intelligence regulatory concerns

Last week, U.S. Food and Drug Administration (FDA) Commissioner Robert Califf, M.D., and other senior FDA officials published a “Special Communication” in JAMA titled “FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine.” The article emphasizes the agency officials’ chief concerns with the use of AI in medical product development, clinical research, and clinical care, while also acknowledging AI’s “potentially transformative” applications. We have provided a brief summary of the article below and will publish more in-depth client alerts on the article in the coming weeks.

FDA published a JAMA article on October 15, 2024, describing several areas of focus for the regulation of AI used in medical products. The JAMA article follows a June 2024 FDA blog post by Troy Tazbaz, director of CDRH’s Digital Health Center of Excellence, titled “The Promise Artificial Intelligence Holds for Improving Health Care,” in which he provided insights on how FDA is working to ensure that AI innovations in health care are safe and effective for end-users. The new JAMA article further identifies FDA’s ten AI-related areas of concern:

  1. AI Regulation Within the U.S. Government and Global Context. FDA recognizes the importance of U.S. regulatory standards being compatible with global standards to the fullest extent possible and, to that end, notes its participation in the AI working group of the International Medical Device Regulators Forum and the International Council for Harmonisation.

  2. Keeping Up with the Pace of Change in AI. The JAMA article opines that AI products require an adaptive, science-based regulatory scheme and that successfully developing and implementing innovative programs for emerging technologies may require FDA to be granted additional resources and new statutory authorities.

  3. Flexible Approaches Across the Spectrum of AI Models. Explaining its risk-based approach to regulating medical devices, FDA highlights that AI models used for clinical decision support “fall in the middle,” where the degree of risk presented by the algorithm informs the FDA’s application of regulatory requirements, particularly when the basis for the algorithm’s output is not sufficiently transparent for the clinician to understand how the recommendation is derived. The article warns: “These risk-based regulatory approaches will need careful consideration and adaptation.”

  4. Use of AI in Medical Product Development. Dr. Califf and his co-authors write that “FDA sees great potential in the application of AI in drug development and clinical research.” Therefore, FDA reviewers “must have a deep understanding of the discipline to be able to appropriately review” the deluge of marketing applications being submitted to the agency.

  5. Preparing for LLM Unknowns and Generative AI. Although FDA has yet to authorize a large language model (LLM), many proposed LLM applications in health care will require FDA oversight given their intended use for diagnosis, treatment, or disease prevention. Thus, the FDA officials write that they see “a need for specialized tools that enable better assessment of LLMs in the contexts and settings in which they will be used” so as not to unduly burden individual clinicians.

  6. Central Importance of AI Life Cycle Management. The JAMA article asserts it is “increasingly evident that recurrent AI performance monitoring should occur in the environment in which it is being used,” given the potential for unlocked AI models to “evolve.” While praising the potential benefits of “external assurance laboratories” and “site-specific localized validation,” FDA notes that more approaches and tools are needed and admonishes that “an unmonitored AI system deployed in practice could do significant harm.”

  7. Responsibilities of Regulated Industries. Writing that at “its core, FDA regulation begins with voluntary compliance by the regulated industries themselves,” the agency’s JAMA article places the burden on industry to help evaluate whether AI applications deliver health benefits to patients. The FDA officials further emphasize that “the concept that regulation of AI in medical product development . . . begins with responsible conduct and quality management by sponsors does not fundamentally differ from the FDA’s general regulatory regime.” Still, the agency expresses concern that recurrent, local assessment of AI throughout its life cycle, while necessary to assure the safety and effectiveness of the product over time, may demand a scale of effort beyond any current regulatory scheme or the capabilities of the development and clinical communities.

  8. Maintaining Robust Supply Chains. FDA recognizes that AI models could play a critical role in managing supply chains, but warns that these models may be vulnerable to outages and shortages, highlighting the need to minimize susceptibility to cybersecurity threats and to develop “resilient backups.”

  9. Finding the Balance Between Big Tech, Start-Ups, and Academia. Although “big tech companies…have the capital, computational resources, and expertise needed for” AI development, the authors of the JAMA article express concern that start-ups, entrepreneurs, and academic institutions may need assistance to ensure that the AI models they implement are “safe and effective across the total product life cycle in diverse settings.”

  10. Tension Between Using AI to Optimize Financial Returns vs Improving Health Outcomes. The article concludes by emphasizing the importance of FDA’s public health safeguards as a bulwark against systemic pressures that may arise when AI innovations benefitting patients “may come at the price of traditional jobs, capital structures, and revenue streams in health care.” Here, FDA underscores the importance of keeping human clinicians involved, as they can advocate for high-quality evidence of health benefits to inform the clinical application of AI.

In addition, the JAMA article provides detailed lists of AI’s potential uses in early-stage drug development and in clinical trials. These examples include the following:

Potential Uses of AI in Early-Stage Drug Development and Manufacturing:

  • Drug target identification, selection, and prioritization
  • Screening and designing compounds
  • Modeling pharmacokinetics and pharmacodynamics
  • Advanced pharmaceutical manufacturing

Potential Uses of AI in Clinical Research:

  • Participant recruitment
  • Selection and stratification of trial participants and sites
  • Protocol adherence and subject retention
  • Clinical trial data collection, management, and analysis
  • Postmarket safety surveillance and evaluation

These examples provide useful insights into (a) some of the potential applications of AI in drug development and clinical trials as envisioned by senior FDA leaders, and (b) where the agency might start to place its regulatory focus on this technology. FDA previously described many of these examples in a May 2023 discussion paper on using AI and machine learning in the development of drug and biological products, which we wrote about here.

As additional background, we note that in March 2024, FDA published a discussion paper on AI aiming to provide greater transparency on how FDA’s medical product Centers are collaborating on the development of harmonized standards, guidelines, best practices, and tools; we summarized that paper at the time here.


As artificial intelligence becomes an increasingly significant focus area for FDA, we will continue to monitor and evaluate updates to agency guidance, legislative developments, industry trends, and emerging issues that may impact medical product manufacturers. In particular, FDA is expected to release a guidance before the end of the year on the use of AI to support regulatory decision-making for drugs and biological products.

In the coming weeks, we will publish additional in-depth client alerts on the JAMA article, addressing some of the more specific issues discussed therein, such as the impact on clinical trials. In the meantime, if you have any questions on AI-related products, feel free to contact any of the authors of this alert or the Hogan Lovells lawyer with whom you regularly work.


Authored by Robert Church, Melissa Bianchi, Bert Lao, Melissa Levine, Will Henderson, Jodi K. Scott, Blake Wilson, Ashley Grey, and Lauren Massie

