AI and the EU – a proposal for regulatory reform

In February 2020, the European Commission announced its strategy for shaping the digital future of the bloc. This included the publication of its long-awaited white paper on the future of artificial intelligence, setting out proposals for a regulatory framework to govern the adoption and application of AI in both the commercial and public realms.

The reforms come in response to growing public and media concern about the potential harms that autonomous machines may cause, and follow on from work undertaken by the Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG).

An ecosystem of trust

Developing AI that EU citizens consider trustworthy is the foundation of the regulatory proposals, the objective being to build an ‘ecosystem of trust.’ As digital technology becomes an ever more central part of people’s lives, the Commission argues, citizens must be able to trust it, and that trust is a prerequisite to its uptake. The proposed solution is a proportionate, consistently applied regulatory framework that is fit for adoption across Europe.

The white paper identifies three primary categories of risk that its regulatory framework needs to address in order to prevent both material and immaterial harm to individuals: protecting the fundamental rights of individuals as laid down in the EU Charter (e.g., privacy and non-discrimination); ensuring the safety of AI applications; and addressing the allocation of liability and responsibility for the effects and consequences of autonomous machines.

Regulatory proposals

Proposing new regulation in this field presents various challenges. One is how existing laws that already govern areas such as data protection, product liability, and anti-discrimination will align with any new regulatory framework.

The Commission proposes to address this issue with a two-pronged approach: existing EU laws would be reviewed and, where necessary, modified to address issues specific to AI, and would then be supplemented by a new, dedicated law.

Perhaps surprisingly, the proposed new regulation is relatively limited in scope, particularly when compared with the more expansive ambitions put forward by the AI HLEG in a paper published in April 2019. The white paper advocates a risk-based approach, whereby each AI use-case is assessed on a case-by-case basis to determine the potential risks it poses to individuals and society. Only those applications deemed ‘high risk,’ taking into account the potential safety implications and threats to the fundamental rights of individuals, would become subject to the new regulation.

The six requirements for high-risk AI applications

Where the proposed law applies, the Commission proposes additional mandatory requirements, split into six fields:

  • Training data – data sets used to train machine learning algorithms would need to be sufficiently broad to cover all relevant scenarios the application may encounter in a live environment. Reasonable measures would need to be taken to ensure that AI systems do not produce biased or discriminatory outcomes, and the privacy and personal data of individuals whose data is used for training would need to be adequately protected.
  • Record-keeping & data – organisations would be expected to be able to demonstrate their compliance with the law in practice. This would include retaining records of how AI algorithms are developed and building traceability into those algorithms so that problematic decisions and determinations can be challenged and verified (see the illustrative sketch after this list).
  • Information provision – individuals would need to be provided with clear information about an AI system’s capabilities and limitations (including the expected level of accuracy).
  • Robustness & accuracy – AI applications would be expected to behave reliably and as intended, with an appropriate level of statistical accuracy. Systems would need to be designed so they are resilient against attacks and attempts to manipulate the underlying data and algorithms.
  • Human oversight – in order to avoid undermining human autonomy, the AI system would need to be subject to appropriate levels of human oversight, taking into account the circumstances.
  • Specific requirements for certain applications – for particular applications, such as facial recognition technologies, additional obligations or restrictions may be introduced.
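By way of illustration only, the record-keeping and traceability expectations above might translate in practice into something as simple as an append-only decision log. The following Python sketch is entirely hypothetical: the white paper does not prescribe any implementation, and names such as DecisionLog, record, and lookup are our own assumptions.

```python
import hashlib
import json
import time

# Hypothetical sketch of record-keeping for AI decisions. Nothing here is
# mandated or specified by the white paper; the class and method names are
# illustrative assumptions only.

class DecisionLog:
    """Append-only log of model decisions, kept for audit purposes."""

    def __init__(self):
        self._records = []

    def record(self, model_version: str, features: dict, prediction) -> str:
        # Hash the input features so a decision can be traced back to its
        # exact input later without storing personal data in the clear.
        payload = json.dumps(features, sort_keys=True).encode("utf-8")
        input_hash = hashlib.sha256(payload).hexdigest()
        self._records.append({
            "timestamp": time.time(),
            "model_version": model_version,
            "input_hash": input_hash,
            "prediction": prediction,
        })
        return input_hash

    def lookup(self, input_hash: str) -> list:
        # Retrieve every logged decision made on the same input, e.g. when
        # an affected individual challenges an outcome.
        return [r for r in self._records if r["input_hash"] == input_hash]


if __name__ == "__main__":
    log = DecisionLog()
    h = log.record("credit-model-v1.2", {"age": 41, "income": 52000}, "approve")
    print(log.lookup(h))
```

Hashing the input features rather than storing them verbatim is one possible way to reconcile the traceability expectation with the white paper’s parallel concern for privacy and the protection of personal data.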

What next?

The white paper is subject to open consultation until 31 May 2020, following which it is likely that the Commission will put forward revised proposals.

Authored by Dan Whitehead