Digital health implications for insurers

The emergence of digital health has led to a dramatic increase in the use of medical health apps and has fostered a culture of self-care. Pivotal trends in 2024 that will affect medical device manufacturers include digitalization through the use of software, devices enabled by artificial intelligence (AI), and the adoption of telehealth and digital tools for purposes such as remote patient monitoring. Virtual hospitals offer expanded care and bed capacity, potentially allowing over 17% of total admissions to be treated remotely. This approach not only lowers costs but also improves the patient experience, particularly for those with chronic conditions who prefer receiving care at home.

This article delves into the future landscape of digital health, associated risks and vulnerabilities, and the evolving legal landscape surrounding smart medical devices.

The innovation of smart medical devices and digital health apps continues at pace, with a constant flow of new products to the market such as wearables, diagnostic monitors and implantable devices. Established products include wearable coin-size blood glucose monitors with an app to allow patients to monitor and scan their blood sugar readings in real time, as well as wearable blood pressure monitors and pulse oximeters that allow users to self-monitor vital conditions and minimize hospitalization. Virtual reality applications are also transforming health care, for example to treat mental health issues and to assist surgeons during procedures. Under UK medical device legislation, any device that uses patient data to make a diagnosis could be considered a medical device.

AI in health care is revolutionizing the field by analysing vast data sets to expedite diagnostic processes, aiding health care professionals in making timely decisions, and enabling predictive analytics for preventative care. Safety, efficacy, and regulation are key considerations in recommending digital health tools and smart medical devices.

Products of the future hitting the headlines include cutting-edge smart implantable devices that can personalize treatment for patients with movement disorders such as epilepsy and paralysis. Ground-breaking research is also under way for life-transforming smart patches, which automatically release necessary drug doses into the bloodstream as required. Wearable wireless smart contact lenses and nano-electronic smart chips for medical diagnosis and treatment are also being developed. The potential benefits of these products are vast: more cost-effective health care, increased service quality and access, interoperability, convenient mobile health, enhanced patient engagement, and personalized medical treatment in which real-time diagnostic data can be accessed by treating medical staff to improve patient outcomes.

Emerging risks

At the same time, insurers will be considering the types of risks that can arise. As with any technological advancement, there is the potential for allegedly defective design, inadequate warnings, or software interruptions and malfunctions. Risks also include, for example, whether patients input correct data (where inputs are required) and whether the treating doctor can accurately use and accommodate any new device. The risk of bias and differential performance across different patient sub-groups also needs to be considered in relation to AI-enabled medical devices.

Connectivity and prevention of harm from cyber-attacks are also relevant risks for the insurer to consider. For instance, hackers could potentially manipulate these devices, leading to severe health risks. Scale of risk is also a consideration: an error by a doctor is likely to affect a limited number of patients, whereas an error in a smart medical device or digital health app could potentially affect many more.

UK developments

The UK is updating its medical device legislation with an emphasis on post-market surveillance and cybersecurity. Historical product recalls, such as insulin pumps and pacemakers due to cybersecurity vulnerabilities, highlight the importance of robust design and vigilant post-market surveillance.

On 9 January 2024, the UK Medicines and Healthcare products Regulatory Agency (MHRA) updated its roadmap for revising existing UK medical device legislation via a series of new statutory instruments. The MHRA is prioritising measures to protect patient safety through enhanced post-market surveillance requirements, which will come into effect in 2024. Further measures will be introduced in 2025, including improvements for implantable medical devices with higher classification, resulting in more stringent pre- and post-market requirements for most devices, and unique device identifiers for all devices. There will be greater alignment of the requirements for medical devices placed on the UK market with those in the EU market (for example, cybersecurity requirements for software as a medical device and compliance with regulations and legislation relevant to the defence of product liability claims).

The MHRA also has a change programme tasked with delivering a clear regulatory framework that addresses the specific challenges and risks presented by software and AI medical devices. This runs alongside its broader work on updating medical device legislation. The UK government consulted on its AI white paper in 2023. The response to the consultation is expected to be published this year, setting out policies regarding AI and providing guidance to regulators on how to deploy their existing powers, which will feed into the MHRA’s software and AI as a medical device change programme.

EU developments

Looking at the EU, there are also key developments in product liability legislation that any insurer needs to have in mind if insuring risks that could arise in the EU in relation to smart medical devices and health apps. For example, on 13 March 2024, the European Parliament adopted the consolidated EU AI Act, which prohibits AI systems that pose an “unacceptable risk” and increases requirements for “high-risk” AI systems. However, it does not contain specific liability provisions. In July 2024, the EU published the final text of the AI Act in the Official Journal of the EU (OJEU) (available here), with the Act coming into force on 2 August. It will become fully applicable two years thereafter, on 2 August 2026, save for some requirements that are subject to longer transitional periods. We have discussed specific implications for medical device stakeholders here.

This is in parallel with the proposed EU AI Liability Directive on adapting non-contractual civil liability rules to artificial intelligence. In January 2024, the consolidated text of the Directive of the European Parliament and of the Council on liability for defective products was approved by the European Council. The proposal widens the scope of the existing EU Product Liability Directive and confirms that software and products containing AI are “products”. It seeks to ensure claimants can claim compensation when a defective AI-based product causes death, personal injury, property damage or data loss. It reduces and eases the burden of proof on consumers by including provisions requiring manufacturers to disclose evidence, as well as rebuttable presumptions of defect and causation where there is technical complexity. The hope is to maintain the balance between encouraging new technological innovation and ensuring the same level of protection for smart product users.

U.S. developments

In October 2023, the Biden Administration issued an Executive Order (EO) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The EO signals a clear interest in the impact of AI on the health care industry and stands to reshape AI use in the sector, as we have described here. The EO outlines dozens of actions, including many for which the U.S. Department of Health and Human Services (HHS) will be responsible. In addition, the Office of the National Coordinator (ONC) for Health Information Technology recently implemented a final rule (HTI-1) establishing new reporting obligations for developers of certified health IT who integrate AI tools. When used within certified health IT, such as electronic health records (EHR), AI and other predictive algorithms must comply with certain transparency requirements (source attributes) and risk management requirements (risk assessments, mitigations, and governance), consistent with the principles of Fairness, Appropriateness, Validity, Effectiveness, and Safety (FAVES). ONC has a stated goal of bringing greater transparency to decision support interventions (DSIs) in order to ensure trust in optimizing the use of AI algorithms. Additional agency activity to implement the EO will likely be forthcoming, with potential opportunities for stakeholder comment and involvement as the regulatory paradigms for AI continue to evolve.

We also anticipate continued scrutiny by the U.S. Congress. While members of Congress have expressed enthusiasm about the potential applications of AI in health care, they have also raised deep concerns regarding transparency, privacy, and bias, including AI potentially incorporating human biases and the safeguarding of patient information used to train AI models. We expect continued discussions around the incorporation of AI guardrails into existing data privacy legislation and discussion of appropriate Medicare coverage and payment for AI applications. While comprehensive legislation may be unlikely prior to the November election, we expect AI to remain a top issue on Capitol Hill, including as part of upcoming appropriations bills.

Establishing liability

Apportioning and determining liability in relation to the malfunction of smart health devices and apps is particularly complex. The chain of custody and contributory negligence are critical factors in apportioning liability. There could be multiple potential defendants, such as the system designer, the manufacturer, shipper, retailer, software or network provider and/or professional intermediaries such as doctors and hospital staff. Ultimately, much may depend on the contract between the treating hospital/doctor and those in the supply chain of the AI technology service/product, and the relevant contractual warranties, liability, indemnity and limitation clauses in place. Moreover, liability risks may vary by jurisdiction.

From a causation perspective, patients could also be found contributorily negligent for any harm suffered as a result of their own failure to care for their smart medical devices and/or to use them in accordance with the instructions. Overall, manufacturers are likely to bear the major share of any potential liability and could face huge reputational damage, costly product liability and data breach-related group actions, and product recalls.

As the landscape of digital health continues to evolve, smart medical device and digital health app stakeholders can look to mitigate their liability risks by staying abreast of evolving legal and regulatory requirements; maintaining robust software design, development protocols and effective post-market surveillance; reviewing labelling, warnings and instructions to ensure they accurately and clearly reflect risks and performance accuracy; and conducting comprehensive global risk management for cybersecurity and data privacy, including an incident response plan for a cyber-attack.

Next steps

Manufacturers, health care providers, and regulators must collaborate to address the associated risks and ensure these devices are safe, effective, and compliant with legal standards. This collaborative approach will be crucial in steering the future of digital health and smart medical devices. Stakeholders will need to ensure comprehensive compliance with existing and emerging legal and regulatory requirements, and to re-evaluate insurance coverage, bearing in mind that policies may not cover every consequence of a cyber-attack and that bespoke combined cover may be worth considering. These products promise significant benefits in terms of patient care and health care efficiency and can transform and improve patient outcomes. That said, they give rise to complex potential risks and liability issues that need to be navigated.

Hogan Lovells is actively monitoring developments in this space - keep an eye out for our future updates.

Authored by Karishma Paroha.
