The impact of AI on the insurance industry: novel loss scenarios

A UK government report highlights a "race to the bottom" among AI developers, where the rush to innovate may sideline safety, presenting several novel risks for the insurance industry.

Product Liability: The integration of AI into everyday products, such as autonomous cars and connected healthcare devices, can result in software errors leading to accidents.

Intellectual Property and Copyright: AI introduces new IP challenges, including the protectability and ownership of AI systems and AI-generated content. The existing IP framework struggles to address these issues, raising questions about patent eligibility and copyright protection. Liability concerns from unlicensed data use and potential infringements highlight the need for clearer regulations.

Data Protection and Cybersecurity Risks: AI enhances cybercriminal capabilities and poses significant risks, including cyberattacks leading to data destruction or ransom demands, and privacy breaches due to AI's extensive data requirements. Recent incidents in 2023 and 2024 highlight these vulnerabilities. AI enables sophisticated phishing and malware attacks, can be trained on cyberattack techniques, and can create adaptive viruses. AI-driven social engineering scams and advanced malware further complicate detection and prevention efforts.

Other Risks: AI can spread health misinformation, degrade online information quality, and increase authorised push payment (APP) fraud through deepfakes, burdening financial institutions. The shift in risk pools due to AI advancements, like enhanced surgical robots, requires insurers to adapt. Job displacement caused by AI may lead to sabotage, and malfunctioning AI tools in professional settings could result in negligence claims.

In summary, AI's rapid development introduces numerous risks that the insurance industry must address. From product liability and intellectual property issues to data protection and cybersecurity, insurers need to adapt to these emerging challenges to effectively mitigate potential losses.

Introduction

The AI market size is expected to reach $407 billion by 2027. AI offers significant opportunities across all sectors, including the insurance industry, which is rapidly adopting AI solutions to enhance business processes such as claims handling, risk modelling, pricing and reporting. AI has also opened the door to new insurance products which offer greater accessibility and choice for customers.

At the same time, AI poses risks which must be managed both at a business level and on a societal scale. According to a report on "Capabilities and Risks from Frontier AI" published by the UK government ahead of the AI Safety Summit in November 2023, risks from AI may arise from a "race to the bottom" among AI developers who are not incentivized to prioritize safety:

Individual companies may not be sufficiently incentivized to address all the potential harms of their systems. In recent years there has been an intense competition between AI developers to build products quickly. Competition on AI has raised concern about potential ‘race to the bottom’ scenarios, where actors compete to rapidly develop AI systems and under-invest in safety measures. In such scenarios, it could be challenging even for AI developers to commit unilaterally to stringent safety standards, lest their commitments put them at a competitive disadvantage. The risks from this ‘race’ dynamic will be exacerbated if it is technologically feasible to maintain or even accelerate the recent rapid pace of AI progress.

This environment presents several potential novel loss scenarios that could significantly impact the insurance industry, which are explored below.

Direct and Intermediary Liability

An Accenture report forecasts that the manufacturing sector will reap the greatest financial benefit from AI adoption, with a gain of $3.8 trillion expected by 2035. However, everyday consumer products and infrastructure that use AI could go ‘rogue’ and cause personal injury to consumers and the public. For example, a software error in a self-driving car could cause the car to hit pedestrians, or a software vulnerability in a smart health device could allow a hacker to injure a patient remotely. Generative AI tools can also produce AI hallucinations: inaccurate or biased content, such as output reflecting gender or race bias, that can cause harm to users and society.

At the EU level, there are key developments in product liability legislation which any insurer needs to bear in mind when insuring against risks that could arise in the EU in relation to AI products. For example, on 13 March 2024, the European Parliament adopted the consolidated EU AI Act, which prohibits AI systems that pose an “unacceptable risk” and increases requirements for “high-risk” AI systems. However, it does not contain specific liability provisions. The provisions of the AI Act will come into effect at different intervals from the end of 2024 to 2027. This runs in parallel with the proposed EU AI Liability Directive on adapting non-contractual civil liability rules to artificial intelligence.

In January 2024, the consolidated text for the Directive of the European Parliament and of the Council on liability for defective products was approved by the European Council. The proposal widens the scope of the existing EU Product Liability Directive and confirms that software and products containing AI are “products”. It seeks to ensure claimants can claim compensation when a defective AI-based product causes death, personal injury, property damage or data loss. It reduces and eases the burden of proof on consumers by including provisions requiring manufacturers to disclose evidence, as well as rebuttable presumptions of defect and causation where there is technical complexity. The aim is to maintain the balance between encouraging new technological innovation and ensuring the same level of protection for smart product users.

In the realm of intermediary liability, online intermediaries in the insurance space also need to carefully vet the impact of AI on their liability. These intermediaries process third party information and employ AI, for instance in the context of insurance price comparisons and brokerage, or to evaluate whether a person is eligible for insurance and at which premiums. The use of AI to evaluate large sets of third party data creates challenges for the hosting provider liability safe harbour: it is as yet unclear whether AI-generated knowledge of potentially infringing third party content hosted by the intermediary can amount to “actual knowledge” that would obligate the intermediary to take action to avoid liability.

Intellectual Property and Copyright

AI can also create new IP challenges. While AI-assisted cyberattacks can steal valuable intellectual property, IP law itself faces several difficulties when dealing with AI. First and foremost, the protectability of AI systems themselves is still largely unsettled. If an AI model is leaked, there is a real risk that its parameters can be copied, causing economic and intellectual property losses to the owner of the model. However, most IP rights are not tailored to AI, making it difficult to assign protection to a specific legal area; instead, various IP rights need to be considered. For example, whether a patent can be granted for an AI system is highly controversial and depends on numerous specifics of the particular system.

Equally difficult is the assessment of the protectability of AI-generated content under IP law. While the fundamental question (“who owns AI-generated content?”) initially seems straightforward, the answer is highly complex. For example, can AI-generated content be protected under copyright law? Protectability under copyright law often requires a human creation, so some experts reject protection for AI-generated content. Others believe that it is sufficient for a human to have created the AI, which then generated the content. This view raises another question in the area of copyright law. If AI content is protected, who owns it? An AI can hardly be the owner of rights itself, but the attribution to the programmer as the author is also by no means clear. While some countries have adopted this approach, the assessment in other countries is still completely open, which leads to legal uncertainties that have to be mitigated as well as possible.

Particularly relevant are the liability issues associated with AI. The race to create the best product, paired with a sense of acute FOMO (fear of missing out), means that risk mitigation and IP considerations, in addition to security measures, may be neglected. AI systems often use large volumes of data from the internet without explicit permission from the owners, leading to potential copyright infringements as well as “copyright laundering”, whereby works derivative of existing material are generated without formally breaking copyright. If AI systems are trained using unlicensed data or internet content, this may constitute a violation of the IP rights associated with the content, such as copyrights and trademarks, and, in the case of sensitive information, trade secrets, for which claims may be filed in the event of a violation. However, the international dimension can make it difficult for those affected to enforce their rights; a remedy could come from European regulations, which legal experts are calling for.
The legal status of AI-generated content is also unclear. Such content can itself affect intellectual property rights, for example by infringing the copyrights or trademarks of non-participating parties where products protected under IP law appear in AI-generated output. The effort to reduce the associated legal risks (risk mitigation) has often been given little weight in development so far, which can lead to liability risks that need to be mitigated.

Data Protection and Cybersecurity Risks

AI may also pose significant data protection risks and enhance the capabilities of cybercriminals if appropriate guardrails are not implemented for responsible AI development and use. AI systems can be used to carry out sophisticated cyberattacks, leading to data destruction, ransom demands, or privacy breaches. The potential to process large volumes of personal data raises concerns about data storage, usage, and access. AI's ability to infer sensitive information, such as a person's location or habits, also poses risks of unauthorised data dissemination and identity theft.

AI can create more effective and large-scale cyber intrusions through tailored phishing methods and advanced malware. At the same time, as AI has rapidly evolved, it has helped create adaptive computer viruses that avoid detection, a task previously requiring significant specialist expertise.

Techniques like voice impersonation, deepfakes and persuasive spear phishing have become increasingly sophisticated and also pose the risk of real harm to individuals.

Other Risks

AI's impact extends to various other areas. It can spread health misinformation online, potentially accelerating the spread of diseases. It may degrade the quality of online information, leading to harmful decisions and exposing vulnerable populations to dangerous content. For example, in a future pandemic comparable to Covid-19, misinformation could spread more quickly and cost more lives.

AI-driven authorised push payment (APP) fraud, particularly through deepfakes, will increase the burden on financial institutions. As AI changes industries, insurers must adapt to new risk and profit pools at a macro level. For instance, virtual reality applications may reduce certain risks but introduce new ones. As these pools shift, new lines of specialist insurance products will emerge, and how consumers interact with their insurers will also evolve significantly.

There is also the fear of AI systems replacing jobs as the technology becomes more and more sophisticated. Goldman Sachs recently predicted that AI technology could replace 300 million jobs; however, there is no consensus on the overall impact on employment in the long term.

Conclusion

AI holds significant potential to bring about a wide range of benefits for businesses, individuals and society. At the same time, the societal and business risks presented by AI introduce numerous novel loss scenarios that the insurance industry must address. From product liability and intellectual property issues to data protection and cybersecurity risks, insurers need to understand and adapt to these emerging challenges to mitigate potential losses effectively.

As AI continues to evolve, the insurance industry must remain vigilant and proactive in its approach to managing these complex risks. Acknowledging and addressing these challenges, and being able to measure and price the associated risks, will be essential. Insurers that invest in understanding these novel loss scenarios will have a significant opportunity to develop new product offerings that meet the needs of businesses and individuals as the world changes.


Next steps

Hogan Lovells is actively monitoring developments in this space - keep an eye out for our future updates.


Authored by Karishma Paroha.


This website is operated by Hogan Lovells International LLP, whose registered office is at Atlantic House, Holborn Viaduct, London, EC1A 2FG. For further details of Hogan Lovells International LLP and the international legal practice that comprises Hogan Lovells International LLP, Hogan Lovells US LLP and their affiliated businesses ("Hogan Lovells"), please see our Legal Notices page. © 2024 Hogan Lovells.

Attorney advertising. Prior results do not guarantee a similar outcome.