California signals intent to regulate AI broadly

The California Privacy Protection Agency’s proposal of new regulations for automated decisionmaking technology marks a significant step toward governing how businesses may leverage such automated tools. The new framework focuses on familiar privacy principles – transparency and choice – but broadens their application beyond data protection to technological innovation, including AI and other automated decision technologies. While formal rulemaking on the new framework is expected to begin in 2024, the Agency’s signals thus far suggest that the final rules may be as broad as the proposals.

The California Privacy Protection Agency (“CPPA”) proposed a new regulatory framework for automated decisionmaking technology (“ADMT”) that would implement significant consumer rights to opt out of and access information about businesses’ use of such technology. While the framework purports to focus on ADMT, it would have sweeping implications for AI technologies generally.

CPPA Board discussions indicate that more work is needed to refine the framework. Even without the final text, the proposed regulations articulate a future for ADMT and AI that requires immediate action from developers and users planning for the near term.

Important Updates Proposed by Draft Regulations

Much like the EU AI Act, this proposal is notable above all for its breadth, especially compared to analogous AI-focused regulatory and legislative efforts in the US. The draft proposed rules go beyond other state efforts that address the use of ADMT in specific contexts (e.g., housing, education, or insurance) and encompass settings such as profiling employees, contractors, applicants, students, and consumers in publicly accessible places. Further, the new framework contemplates extending its requirements, including the ability to opt out, to consumer profiling for behavioral advertising. If adopted, the CPPA’s proposed regulations would have wide-ranging operational impacts as businesses work to address California’s new and expanded notice, access, and opt-out rights.

Broad definitions seeking to align with related guidance

Several definitions are important to note because they highlight the Agency’s efforts to create broadly applicable rules. We flag two as examples. First, the proposed definition of ADMT, which sets the foundation for the framework, contains few limitations and covers “any system, software, or process … that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” This definition largely mirrors the term “automated system” in the White House’s AI Bill of Rights. It includes profiling but excludes networking, caching, and data storage, and appears to focus on human decisionmaking. Second, the Agency also broadly defines the term “publicly accessible place,” an important component of determining when opt-outs apply, to refer to places open to or serving the public, such as retail stores or hospitals. Other definitions, such as “profiling” and “decisions that produce legal or similarly significant effects,” will also be important to track, as these terms feature variations that further broaden the scope of the new framework.

Opt-out requirements

The draft regulations frame the opt-out right broadly, with limited exceptions. In particular, the new framework contemplates always giving consumers the ability to opt out of profiling for behavioral advertising. In other contexts, businesses would not need to provide an opt-out if the automated tool is used to provide the goods or services the consumer specifically requested. That exception is unavailable, however, if alternative methods of processing are or have been used in the business’s industry or similar industries to provide similar goods or services and the business cannot demonstrate futility, an impact to the validity, reliability, or fairness of the good or service, or extreme hardship to the business. While the draft regulations also include exceptions for safety and security, the contemplated opt-out right would significantly affect businesses’ compliance efforts if adopted.

Increased transparency with pre-use notices and access rights

The draft regulations also aim to help consumers better understand how businesses use ADMT for covered purposes through pre-use notices and an expanded access right. Under the proposed framework, businesses would need to provide consumers with detailed information, such as the technology’s logic, intended output, decision-making usage, and evaluations for validity, reliability, and fairness. Further, the proposed rules contemplate providing consumers, upon request, with information specific to each decision made. For example, the draft regulations would require businesses to explain how key parameters were applied to the consumer’s circumstances and provide a method for the consumer to obtain information on the range of possible outcomes.

Heightened requirements for individuals under 16

The new framework also extends California’s approach to sales and “sharing” involving individuals under the age of 16 to ADMT. Businesses would need opt-in consent before profiling individuals known to be under the age of 16 for behavioral advertising. For individuals known to be under the age of 13, businesses would also need parental consent, in addition to the verifiable parental consent required under COPPA.

CPPA Priorities and Potential Challenges for Regulating AI

During discussions of the draft proposed rules at a meeting held on December 8, 2023, the CPPA’s Board focused on several challenges in regulating this space, including accurately defining ADMT and future-proofing the regulations in a rapidly developing technological landscape. The Board’s conversations reflect the same issues other legislators and regulators are confronting. The Board recognizes it has an obligation to regulate ADMT to help protect Californians without stifling innovation. That trade-off is difficult, and public comment from industry will therefore be especially important to help guide revisions to the proposed regulations.

Here are some highlights of how the Board is weighing these competing priorities:

  • Defining ADMT proved to be a sticking point, especially where “profiling” may occur in “publicly accessible places.” One Board member expressed concern that the regulations as drafted would sweep in any instance in which consumers used an application on their phone (such as apps for navigation or ordering food) and proposed adding the term “systematic monitoring” to the profiling definition to help alleviate these concerns. Other Board members questioned the breadth of the ADMT definition, signaling a desire to regulate ADMT but not the use of other technologies such as non-ADMT software. Carefully crafting this definition, the Board argued, would give businesses more clarity about expected enforcement and compliance obligations.

  • Future-proofing was another topic of Board discussion, highlighting the difficulty of regulating a rapidly evolving technological landscape. The Board considered how to craft a regulation that is specific enough to avoid vagueness concerns while leaving room to apply to newly developing technologies that cannot yet be anticipated. One example was the Board’s discussion of the opt-out exception in proposed regulation 7030(m)(4), which allows businesses not to provide opt-out rights when using ADMT to provide goods or services specifically requested by the consumer. For businesses relying on this exception, the draft regulations include a rebuttable presumption that the business has a reasonable alternative method of processing if an alternative method of processing is or has been used in the business’s industry or similar industries to provide a similar good or perform a similar service. Although some Board members were concerned about the strictness of this standard, the Board appeared to settle on leaving the detailed language in the regulations, hoping that broad public comment from industry during formal rulemaking would help guide its thinking on this issue.

  • Profiling of employees at work also prompted a robust discussion from the Board. Special attention was paid to employees’ rights to opt out of surveillance by their employers, with one Board member encouraging staff to frame this right as “the right to opt out of intrusive surveillance.” Board members encouraged staff to tie requirements around employee surveillance to a “reasonable expectations” standard to ensure that businesses could still use necessary mechanisms, such as monitoring employee badge swipes, to build important operational metrics.

While Board discussions indicated a strong desire to regulate ADMT, members acknowledged that these technologies can be useful for both consumers and businesses. Despite this appetite for regulation, Board members all agreed to send the proposed regulations back to staff to incorporate edits. They will consider another draft of these regulations before submitting them for the formal rulemaking process.

The draft regulations will be edited by staff and presented to the Board in early 2024. From there, the regulations will likely be submitted for formal rulemaking, during which the public will be able to submit comments on the draft. Once the regulations are through formal rulemaking, the Board will vote on making them binding. Businesses would then be required to provide consumers with pre-use notice about ADMT, access rights to information about ADMT, and a broad right that allows consumers to opt out of all profiling related to ADMT.

Key Takeaways & Next Steps

While much of the US media focus on AI has been at the national level, states are also playing a role in shaping policy. Unfortunately, we know where this leads for businesses: potentially another patchwork of requirements across the US. The draft regulations and CPPA Board discussions further emphasize California’s desire to position itself as a leader in AI-related regulation. This builds upon prior actions such as the executive order California Governor Gavin Newsom issued in September directing state agencies to develop new reports, guidance, and requirements specifically for generative AI. In addition, the proposed rules work in tandem with the draft risk assessment regulations released earlier this year.

Now is the time to engage in the policymaking process. The CPPA is encouraging active engagement and feedback from industry on several key points within the draft regulations. Engaging with policymakers like the CPPA on AI regulation is essential given the breadth and potential impact of these rules and regulators’ limited understanding of how the rules will affect businesses and consumers in practice.

The proposed rules continue to emphasize the need for immediate action for developers and users of ADMT and AI products and services who are looking to proactively manage risk in this space by:

  • Documenting assessments conducted to evaluate the ADMT and supporting AI;

  • Implementing controls and appropriate oversight for use of ADMT in higher risk scenarios;

  • Confirming that existing opt-out and consumer rights processes can support current and potentially expanded rights;

  • Evaluating existing notices and disclosures regarding their use of ADMT; and

  • Identifying where operations may need to be adapted to account for heightened concerns over minors.

Given the wealth of activity in this space and the significant impact of these regulations, evaluating potential AI rules now can help businesses prepare for when new rules take effect. Waiting until rules are finalized to evaluate internal governance or implement a compliance program may magnify the resources needed to understand what AI technologies the business is using and how to operationalize new requirements.

Contacts
Scott Loughlin
Partner
Washington, D.C.
Mark Brennan
Partner
Washington, D.C.
Alyssa Golay
Senior Associate
Washington, D.C.
Sophie Baum
Senior Associate
Denver
Harsimar Dhanoa
Associate
Washington, D.C.
Rachel Parrish
Associate
Washington, D.C.

