Texas AG uses consumer protection law to enforce against B2B clinical health GenAI company

Texas Attorney General (“AG”) Ken Paxton announced a first-for-Texas settlement with a generative AI company that uses patient data and provides products to health care facilities.

On September 18, 2024, the Texas AG announced that it had secured a settlement with Pieces Technologies (“Pieces”), a Dallas-based generative AI health care technology company, resolving allegations that the company deployed and marketed its AI-powered services in a false, misleading, or deceptive manner in violation of the Texas Deceptive Trade Practices – Consumer Protection Act (“DTPA”). According to its website, Pieces integrates AI solutions into health organizations’ electronic health record systems to support clinical care teams by automating tasks such as summarizing patient data and drafting clinical notes.

Although Pieces’s marketing materials and products were provided in a business-to-business (“B2B”) context, the AG’s investigation found that the company’s deceptive claims about the accuracy of its health care AI products put the public interest at risk. At least four major Texas hospitals had been providing their patients’ health care data to Pieces in real time, and the products were deployed at inpatient health care facilities. According to the Texas AG, Pieces’s deceptive accuracy claims included advertising that its products have a “critical” or “severe hallucination rate” of “<.001%” and “<1 per 100,000,” and suggesting that “Pieces summarizes, charts, and drafts clinical notes for your doctors and nurses . . . so they don’t have to.” To support these claims, Pieces developed metrics and benchmarks that were, as the Texas AG alleged, inaccurate and potentially deceptive to hospitals regarding the reliability or efficacy of the company’s products.

The settlement agreement resolving these allegations includes the following assurances, to which Pieces committed for a five-year term:

  • Clear and Conspicuous Disclosures – Marketing and Advertising. Pieces’s marketing statements regarding metrics or similar measurements related to the outputs of its generative AI products must contain disclosures that are easily noticeable and easily understandable. The disclosures must clearly and conspicuously explain the meaning of such metrics along with the method, procedure, or other process used to calculate them.
  • Prohibitions Against Misrepresentations. Pieces may not make any false, misleading, or unsubstantiated representations regarding any feature, characteristic, function, testing, or use of its products. This prohibition includes making misleading statements related to the accuracy, functionality, or purpose of its products, along with misrepresentations regarding the data and methodologies Pieces uses to test, monitor, and train them.
  • Clear and Conspicuous Disclosures – Customers. Pieces must provide all its customers, both current and future, with documentation that clearly and conspicuously discloses any known or reasonably knowable harmful or potentially harmful uses or misuses of its products. Requirements include disclosures regarding the type of data and models used to train its products, detailed explanations of the intended purpose and use of its products, relevant information to effectively train users, and known limitations of the products (including potential risks to patients and health care providers, such as physical or financial injury from an inaccurate output).

Takeaways for Businesses

Regulators are paying particularly close attention to the use of AI products in high-risk settings such as health care, even when such products are provided to businesses (i.e., not provided directly to consumers).

As to the Texas AG specifically, this investigation and settlement serve as the latest example of the office’s pursuit of companies it believes have harmed consumers by violating consumer protection laws, and they demonstrate an increased willingness to flex its enforcement muscle even in the B2B context. Earlier this year, the Texas AG’s office established a team within its Consumer Protection Division responsible for “aggressive enforcement” of Texas privacy laws, including the Data Privacy and Security Act, the Deceptive Trade Practices Act, and the Biometric Identifier Act. Soon after, the Texas AG announced that it had sued General Motors for allegedly unlawfully collecting and selling drivers’ private data in violation of the DTPA.

The settlement echoes recent FTC guidance on AI. Earlier this year, the FTC reminded companies to abide by their privacy commitments to users and customers and warned them against surreptitiously changing their terms. These developments emphasize the need for companies developing or deploying AI technologies to document that they are doing so responsibly and transparently, as regulators are paying close attention and have enforcement authority under a range of laws. Failing to provide appropriate notice, or making exaggerated or potentially misleading claims about a technology’s capabilities, can expose a company to regulatory risk at both the state and federal levels.

 

Authored by Marcy Wilder, Donald DePass, Alyssa Golay, Sophie Baum, and Pat Bruny.

 

Contacts
Marcy Wilder
Partner
Washington, D.C.
Donald DePass
Counsel
Washington, D.C.
Alyssa Golay
Senior Associate
Washington, D.C.
Sophie Baum
Senior Associate
Denver
Pat Bruny
Associate
Washington, D.C.

 


Attorney advertising. Prior results do not guarantee a similar outcome.