Evolution of FDA regulation of AI-based technology

Hogan Lovells partners Kelliann H. Payne and John J. Smith, M.D., J.D., recently joined Richard Frank, M.D., Ph.D., Chief Medical Officer of Siemens Healthineers, and other industry leaders in person and virtually at the Health Care AI Law and Policy Summit to discuss how FDA is addressing the creation and adoption of artificial intelligence (AI)/machine learning (ML) systems in the health care industry. Below, we summarize key takeaways from their panel discussion.

"Trust but verify"

John J. Smith, M.D., J.D., a partner in the Hogan Lovells Medical Device and Technology Regulatory practice, opened the discussion with an overview of how FDA regulates AI-based technologies. He explained that the agency's approach turns on whether the AI is used to augment or to displace the standard of care, with the latter requiring an increasing level of regulatory scrutiny and supporting data.

Kelliann H. Payne, also a partner in the Hogan Lovells Medical Device and Technology Regulatory practice, noted that many AI devices have gone through the De Novo classification pathway, and Dr. Smith pointed out that a combination of factors contributes to this result. The De Novo pathway is appropriate for low- to moderate-risk devices, and the key question for regulators is what level of disruption the AI-based device may pose to the clinical standard of care. Dr. Smith also noted that being the pioneer company to submit a De Novo request can be a daunting, but manageable, task.

Richard Frank, M.D., Ph.D., Chief Medical Officer at Siemens Healthineers, pointed out that regulation should be commensurate with the level of risk to the patient population, noting that the more autonomous the device, the less it relies on the user. In the current context, Dr. Frank observed, AI most commonly works to augment, not replace, what health care professionals do.

The panelists then discussed FDA's approach to stakeholders seeking clearances for multiple indications for use in one application. Dr. Smith noted that FDA is still very much in “trust but verify” mode with AI and wants to see indication-specific data. From a statistical and clinical standpoint, this means the agency prefers to proceed indication by indication, rather than considering clearance of a platform at the outset.

Dr. Frank explained that the “Predetermined Change Control Plan” (PCCP), a developing regulatory framework option for machine learning devices, can be a constructive way to add enhancements to a product without submitting burdensome levels of documentation to FDA. Stakeholders are, however, still eagerly awaiting official PCCP guidance, which the panelists thought might be available by mid-year.

How can RWE factor into AI devices?

The panel turned to the question of how real-world evidence (RWE) factors into AI development. Dr. Frank sees RWE as having great potential, although it is not yet obvious how it will be used. It could, for example, support a predetermined change control plan. Commentators have also contemplated using RWE to glean signals of bias, but determining the validity of RWE raises many issues, so we are still some way from deploying RWE to its real potential. Dr. Smith added that there are still questions about how to define real-world evidence and the related ground truth in various circumstances. Examining mortality as an outcome may be easy, but even with sophisticated data collection, if the outcome is not precisely defined, utilizing the RWE data set may be challenging.

Dr. Frank noted that transparency about compliance with standards and best practices goes a long way toward ensuring the data is fit for the AI's purpose and toward building user trust. Patients need reassurance from the regulatory approval process as applied to AI/ML devices. FDA has done a good job of accumulating expertise at its Digital Health Center of Excellence, and it is important to maintain users' trust by upholding standards for safety and efficacy.

Continuing on the theme of trust and transparency, Dr. Frank noted that users must be confident in the tools they have for clinical use. That confidence comes from proving the product's performance, providing accurate product labeling, and following standards and best practices for what constitutes good-quality data across development stages. This transparency includes making sure annotated and curated data are fit for purpose and adequate to support the intended use.

Solving the "black box" problem

Ms. Payne posed the question of what users should be able to understand about the logic of the AI algorithm itself; linking input to output could help solve this so-called black box problem. She also noted that we are seeing ever-evolving requests for additional data and subgroup analyses.

According to Dr. Smith, there is increasing emphasis on questioning whether data is truly global. FDA is also realizing that asking for wholesale data to retrain data sets is overly burdensome and has started to show some flexibility on the reuse of data. Subgroup analyses may be one way to see whether newer data behaves as expected. Dr. Frank noted that the relationship between the training data and the product labeling is essential; the two must match. Access to data will remain a key issue in the development of AI.

Asked for final thoughts, Dr. Smith noted that FDA has undergone a lot of change in recent years, so stakeholders should stay tuned for more progress. Dr. Frank brought industry leaders back to the “Quintuple Aim” as a reason to pursue the benefits of AI, which has the potential to advance the fourth aim, enhancing the provider experience, and the fifth, enhancing equitable access to care. The panelists agreed that the benefits of AI are worthwhile, and stakeholders should continue to work together to bring those benefits forward.

You can view video recordings and summaries of the other panels from the Health Care AI Law and Policy Summit online.
