Here at LS Law, we continue to examine the impact of AI on the life sciences sector, and we recently held a lively discussion forum with our consultants to explore the key risk factors of AI in life sciences organisations and the practical steps needed to manage those risks. What came out of that discussion was both illuminating and daunting, leaving us wondering whether AI governance and risk management is really being given sufficient priority at this time. As one of our consultants put it:
“It is really quite scary. I know there will be focus on compliance with the EU AI Act, as deadlines loom, much like with GDPR, but that isn’t the point. The reality is that the risks of AI exist now and pervade right across and to the heart of the business and responses to risk management are not joined up. There needs to be holistic governance on this issue. It isn’t enough just to be concerned about compliance with the legislation or regulatory guidance, or to believe the use of technical controls alone to address security risks will suffice, or to determine there are no issues with using AI in medicines research simply because an exemption in the AI Act for scientific research and development applies to medicines development. There needs to be an overriding understanding of how AI works and is being used right across the organisation and full consideration given to its impact now and in the future: the indirect, sometimes hidden risks that using AI can create, so that a coordinated approach and overarching framework can be created to fully manage those risks and protect the organisation. I really worry about how we are currently protecting our confidential information and intellectual property rights and the impact on privacy for instance.”
Much has been made of the AI Act, which aims to ensure that AI systems in the EU are safe and respect fundamental rights, whilst supporting innovation and investment in AI. The Act takes a risk-based approach, defining four categories of risk for AI systems and imposing a range of obligations on different operators depending on the level of risk. At a societal level, therefore, the AI Act provides a set of harmonised rules to protect safety and fundamental rights, but what it emphatically does not do is provide a one-stop shop for legal compliance and risk management around the deployment of AI.
Similarly, the UK's sector-focused, principles-based approach, whilst flexible and allowing regulators to develop specific tools and guidance, risks regulatory overlaps, gaps and complexity, with guidance only really focusing on the implementation of the five regulatory principles: (i) safety, security and robustness; (ii) appropriate transparency and explainability; (iii) fairness; (iv) accountability and governance; and (v) contestability and redress.
In our discussion forum we therefore looked beyond legislative and regulatory compliance and considered the key risk factors of AI deployment in life sciences organisations and the practical steps needed to manage those risks. We will cover other issues in greater detail, and provide practical next steps, in future articles, but here are a few key points from that discussion relating to life sciences R&D:
- In April, EFPIA issued a statement setting out its view that the scientific research and development exemption in the AI Act applies to AI-based drug development tools used in the research and development (R&D) of medicines. The statement received wide publicity and was welcomed by many, but it rather misses the point: very considerable risks of using AI in medicines research remain and require detailed consideration. These include the question of who owns the output of generative AI, which is not straightforward to answer and depends on the legal position in the relevant jurisdiction, the roles played by humans and the terms and conditions of the relevant AI platform. Where AI is used to create, or is used within, R&D processes, it may in fact be essential to ensure that AI acts only as a supplementary tool and does not contribute to invention conception, in order to retain ownership of intellectual property (IP) rights.
- Protection of confidential information is another significant area of concern. The terms and conditions of use of AI platforms may seek to limit or exclude IP protections and may even allow the provider of the AI platform to use your data and confidential information to improve its offerings. For this reason, it is important to review the terms and conditions of use and to implement contractual protections and safeguards with AI vendors to minimise the risk of disclosure and use of confidential company information. Some companies may be minded to go even further and restrict the use of generative AI altogether, given the difficulties of protecting data, IP and confidential information.
- Ensuring the quality and relevance of data generated by AI is a challenge, and rigorous testing and monitoring measures need to be implemented to guarantee data integrity. We envisage this will be an area where regulators may wish to become actively engaged to ensure medicines meet applicable standards for safety, quality and efficacy.
- The EU is proposing to implement an AI Liability Directive alongside the AI Act, which aims to modernise the current EU liability framework and make it easier for individuals to bring claims for harm caused by AI. Whilst companies will clearly need to implement measures to manage their AI liability risk, it is important to consider other potential liability risks associated with AI use, including IP infringement and breach of confidentiality. In the context of R&D, for instance, it is worth remembering that large AI models draw on considerable amounts of data from a wide range of sources. It is therefore essential for life sciences organisations to understand the origin of the data used in large AI models and to obtain contractual assurances (and indemnities where appropriate) as to its origin and the lawfulness of its collection and processing.
We have also been looking at some interesting questions affecting life sciences lawyers and compliance professionals and how they can best prepare themselves to address and manage the legal and compliance risks arising from the use of AI. Here are some of the questions we posed in our discussion forum. What are your views?
- Do lawyers need to become technical experts? How do we ensure lawyers/compliance professionals know enough about the potential risks of AI to be able to discuss key issues with staff?
- The challenges of AI governance and risk management are considerable. What should the lawyer’s/compliance professional’s role be in the governance process? How is it possible to ensure that complex legal issues and legal risk are appropriately managed?
Next month we will be looking in detail at what the right approach to governance of AI risk should be, as one of our senior compliance professionals answers questions and provides her thought leadership on the issue.
We are continuously monitoring developments in the governance of artificial intelligence and what they mean for our sector. Please do not hesitate to contact us if you require any assistance in preparing for the implementation of the AI Act or in developing and implementing AI governance and risk management procedures.