Publicly Traded Life Sciences Companies and Artificial Intelligence: Disclosing Risk Factors

Although the rise of artificial intelligence (AI) has created many opportunities and breakthroughs for life sciences companies in preclinical and clinical development, risks associated with the use of AI cannot be ignored. Therefore, publicly traded life sciences companies that use AI are strongly encouraged to consider including detailed, company-specific risk factors in their filings with the U.S. Securities and Exchange Commission (SEC).

In October 2023, President Biden issued an Executive Order addressing the use of AI in drug development, directing the Secretary of Health and Human Services to develop a strategy for regulating the use of AI or AI-enabled tools in the drug development process by the end of October 2024. The U.S. Food and Drug Administration has also published two discussion papers requesting feedback: "Using Artificial Intelligence & Machine Learning in the Development of Drug & Biological Products" and "Artificial Intelligence in Drug Manufacturing."

Moreover, the SEC has focused increasingly on public companies' statements about, and the risks associated with, their use of AI. SEC Chair Gary Gensler has advised against "boilerplate" AI-related risk disclosures and recommended that issuers describe company-specific risks. This guidance is consistent with risk disclosure practice generally: if an AI-related risk has already materialized, the hypothetical risk disclosure should be supplemented with a description of the factual circumstances in which that risk occurred.

Since the Executive Order was issued in October 2023, public life sciences companies have increasingly disclosed risk factors in their SEC filings related to their use of AI in drug development and related activities. Whether a company uses AI to collect patient data for clinical trial recruitment and enrollment or to identify potential drug candidates and advance related drug development efforts, companies evidently view these risks as material enough to warrant specific discussion in their SEC filings.

These AI risk factors have addressed considerations such as: (i) reputational harm; (ii) competitive harm; (iii) patient harm; (iv) data quality and bias; (v) cybersecurity and data privacy; (vi) intellectual property infringement and misappropriation; (vii) legal liability; (viii) capital investment; and (ix) regulatory considerations. Public life sciences companies are including similar disclosures even when they do not use AI directly but engage vendors that utilize AI in the services they provide.

Public life sciences companies that use AI in any of the following activities (or that have partners who do) should consider including risk factors in their upcoming Quarterly Reports on Form 10-Q and Annual Reports on Form 10-K addressing those activities, among others:

  • Drug target identification;
  • Drug development and discovery;
  • Clinical trial design, recruitment or enrollment;
  • Clinical trial data analytics;
  • Medical imaging;
  • Research collaboration or partnership agreements with third parties;
  • Decision-making based on AI-generated recommendations; and
  • European Union business operations.


Authored by Stephen Nicolai, Amanda Brown, and Kayvon Paul

Contacts
Stephen Nicolai
Partner
Philadelphia

Amanda Brown
Senior Associate
Philadelphia

Kayvon Paul
Associate
Philadelphia


This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.

Attorney advertising. Prior results do not guarantee a similar outcome.