In our first blog, which you can read here, we examined the exponential rise of AI in healthcare, its potential to revolutionise the sector, and the current regulatory landscape, including the EU's comprehensive AI Act, the UK's pro-innovation approach, and the anticipated change of direction in AI regulation under the new Labour Government. In our second blog on AI, we turn our attention to the legal issues associated with the use of AI in the health and care sectors, and look at some of the risk management implications for providers as they navigate this rapidly evolving field.
Legal issues
Aside from the regulatory challenges, the growing integration of AI in health and care raises numerous important and complex legal and liability issues. Determining liability for decisions made by AI, particularly in cases where AI errors lead to patient harm, is likely to be challenging and costly for service users, providers and insurers alike. Some of the key legal issues can be summarised as follows:
Data related risks
Information governance, data protection, and confidentiality (IG/DP) are critically important legal risks in this area. AI systems often process vast amounts of sensitive patient data, so maintaining patient confidentiality and ensuring GDPR compliance are key challenges for AI developers and end users, who face significant legal exposure for potential privacy and data security breaches. The legal threshold for compliance is very high in this area: an individual's health data is classified as 'special category data' under the UK GDPR and the Data Protection Act 2018. As such, the DPA and UK GDPR impose additional safeguards and conditions on controllers and processors to ensure that any processing is lawful, fair and transparent.
Discrimination and bias
Another significant concern is the potential for discrimination and bias within AI systems. These systems can inadvertently perpetuate or even exacerbate existing inequalities if the data they are trained on reflects biased patterns. This could lead to unequal treatment of service users based on race, gender, or socioeconomic status, posing serious ethical and legal challenges. Mitigating these legal risks involves thorough due diligence during AI procurement, including comprehensive vetting of AI developers and vendors, robust contractual agreements outlining compliance and liability, ongoing monitoring of AI system performance, and ethical guidelines to address bias and discrimination.
Other potential liability exposures
Myriad other potential liability exposures come into play when AI systems are embedded into health and care. These include contractual liability, product liability and intellectual property (IP) exposures. Again, due diligence will be critical to mitigate these risks. It is essential that contracts between healthcare providers and AI developers clearly delineate their respective roles and responsibilities to reduce the risks of contractual disputes.
Perhaps the greatest area of uncertainty surrounds potential negligence claims involving AI. Who will owe a duty of care, and who will be to blame if (or when) AI-driven decisions result in incorrect diagnoses or treatments that cause harm to patients or service users? This gives rise to more questions than answers at this stage. Will liability attach to the healthcare professional delivering the care (in conjunction with AI)? Will it attach to the health or care entity that employs or engages that individual? Or to the organisation that procured, purchased or embedded the AI? Or will liability fall on the IT developer that built the device, or the supplier that imported, marketed and sold it?
The answers to these questions are, of course, likely to be fact-specific – for example, was the harm due to user error, an alleged incorrect decision to use/not use AI in a particular treatment pathway, an error in the integration of AI into a clinical workflow, a failure to train staff properly, or a defect in the AI system/tool itself?
These claims are likely to be complicated (for which read expensive). They are also expected to be fertile territory for arguments about apportionment of liability, with an increased risk of satellite litigation as health and care defendants look to recover contributions and enforce contractual indemnities against partners, sub-contractors and suppliers. The lack of transparency in how AI systems make decisions will also complicate investigations of not just claims, but also service user complaints and adverse incidents. These may become more complex (and expensive) to investigate and will pose a significant challenge to health and care providers.
Risk management priorities
Given the range of exposures and the potential costs associated with claims involving AI, effective risk management is vital. Providers can take several critical risk management steps to enhance patient safety and system efficacy.
First, as stated above, comprehensive due diligence is essential. This involves thoroughly evaluating AI vendors and technologies, focusing on the quality and transparency of data sources, identifying potential biases and ensuring that the AI system complies with relevant regulations and standards.
Maintaining patient confidentiality and ensuring GDPR compliance are also key, so robust data privacy and cyber security measures need to be in place and regularly audited.
Having contracts and policies in place that establish clear lines of accountability and liability is also crucial. Contracts that clearly set out the responsibilities of all stakeholders, and that clarify roles and legal obligations, are particularly important in the event of AI errors or malfunctions.
Investing in comprehensive training for healthcare staff on AI use and limitations, including escalation procedures for when things do go awry, is essential, as is educating patients about AI applications, benefits and risks in order to win their trust and secure their informed consent.
Continuous monitoring and evaluation of AI system performance is vital. Healthcare providers should set up mechanisms to track incidents (including near misses) and assess outcomes against clinical standards, ensuring ongoing assessment and improvement to address any issues promptly and maintain high standards of care.
Insurance implications
Healthcare providers looking to implement AI should carefully review their existing indemnity insurance to ensure that they have cover for AI-related errors and omissions. It is crucial that providers engage fully with their brokers and insurers before they introduce AI systems into their service provision.
Reviewing limits of indemnity will also be essential to ensure these are sufficient to cover potential claims arising from AI use, particularly the maximum financial exposure in worst-case scenarios. Additionally, the insurance should cover regulatory compliance issues related to AI, including breaches of data protection laws such as the GDPR.
Healthcare providers should also assess the scope of cover to confirm it encompasses all aspects of AI implementation, including software, hardware, and third-party integrations. Comprehensive protection should extend to all AI applications within the provider's operations. Understanding the procedures for reporting AI-related incidents and making claims is vital. The process should be clear, efficient, and well-communicated to all relevant staff members, ensuring that any incidents are promptly addressed and managed to minimise disruption and financial impact.
"The information contained in this article does not represent a complete analysis of the topics presented and is provided for information purposes only. It is not intended as legal advice and no responsibility can be accepted by Altea Insurance or WTW for any reliance placed upon it. Legal advice should always be obtained before applying any information to particular circumstances."