In this article we once again focus on artificial intelligence, this time looking at some of the key data privacy issues arising from the use of AI tools and what in-house lawyers need to consider to ensure data protection compliance.

The EU AI Act, which came into force on 1 August 2024, is essentially a product safety law, providing for the safe development, deployment and use of AI systems. The EU and UK General Data Protection Regulations (GDPR), on the other hand, give individuals fundamental rights in relation to the processing of their personal data. Although these two sets of laws are intended to dovetail with one another, there are inevitable tensions between the development and use of AI tools and the application of data privacy laws. Indeed, it is probably fair to say that data protection risks have increased considerably, and the application of the GDPR has become markedly more complex, as AI technology has advanced and is increasingly used to process personal data.

The opportunities and benefits of AI may be far-reaching, but as AI tools increasingly encroach on all areas of our lives, concerns about the potential loss of control over our personal data are real. AI tools use significant amounts of data to operate, learn and continuously improve, drawing on a vast array of sources, including social media, internet scraping and voice assistants, to name but a few. These sources inevitably contain large quantities of personal information and, given the broad definition of personal data under European data protection laws, the development and use of an AI system will most often result in the processing of personal data. This raises the inevitable question: how can AI tools lawfully process this personal information, and what is the legal basis for doing so?

AI tools process personal data in different ways and for different purposes, so it is essential, before any processing takes place, to determine what personal data is being used, the purpose for which it is being processed and the lawful basis upon which it is being handled. As AI systems have a broad range of potential applications (from recruitment screening and meeting transcription to clinical trial eligibility assessments), it may be difficult to determine the lawful basis upon which a given tool can be used. In particular, relying on ‘legitimate interests’ (where personal data is processed to pursue a legitimate interest, and the rights and interests of the data subject do not outweigh that interest) as a catch-all lawful basis may not be straightforward and will ultimately depend on the purpose for which the AI tool is to be used and the type of personal data being processed. Carrying out a legitimate interests assessment (LIA) prior to any processing is therefore essential; a minimal sketch of how such an assessment might be recorded follows below.
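
By way of illustration only, the sketch below shows one way the stages of an LIA (purpose, necessity and balancing) might be captured as a structured, auditable record; the field names and structure are our own assumptions rather than any prescribed format:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LegitimateInterestsAssessment:
    """Hypothetical record of the three-part legitimate interests test."""
    processing_activity: str   # what the AI tool is doing
    purpose: str               # purpose test: what interest is pursued?
    necessity: str             # necessity test: is this processing needed to achieve it?
    balancing_notes: str       # balancing test: data subjects' rights vs. the interest
    data_categories: list[str] = field(default_factory=list)
    outweighed_by_data_subject_rights: bool = False
    assessed_on: date = field(default_factory=date.today)

    def lia_passes(self) -> bool:
        # The interest can only be relied on if it is not outweighed
        # by the rights and freedoms of the data subjects.
        return not self.outweighed_by_data_subject_rights

lia = LegitimateInterestsAssessment(
    processing_activity="AI-assisted recruitment screening",
    purpose="Efficiently shortlist candidates for interview",
    necessity="Manual review of all applications is impractical at volume",
    balancing_notes="Candidates expect applications to be reviewed; no special category data used",
    data_categories=["name", "employment history", "qualifications"],
)
print("Proceed with processing:", lia.lia_passes())
```

Keeping the assessment in a dated, structured form also helps demonstrate accountability if the decision to rely on legitimate interests is later questioned.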

Further complexities arise when AI tools process personal data in ways that involve automated decision-making or profiling, or that involve special category data. In these scenarios it may be difficult to determine a lawful basis for processing, and the only solution may be to obtain the data subject’s explicit consent – a complicated task in itself.

As well as ensuring there is a lawful basis for processing the data, any processing must also be fair and transparent. The potential for bias in AI systems is well documented, and since fairness is a prerequisite for processing personal data, it is essential that an AI system is statistically accurate and avoids discriminatory outcomes. Additionally, controllers are obliged to inform data subjects what will be done with their data. Since AI tools use complex algorithms, explaining the decisions they make is not straightforward. Whilst transparency requirements have frequently been fulfilled by the provision of a privacy notice, it is likely that explainability statements, which seek to explain how an AI system makes decisions, will become more commonplace. In the UK, the Information Commissioner’s Office (ICO) has issued guidance on Explaining decisions made with AI, which is a useful reference in this regard. A simple illustration of the kind of per-decision explanation an explainability statement might draw on follows below.
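
As a purely illustrative sketch (the model, feature names and threshold are hypothetical assumptions, not an ICO-endorsed format), the snippet below shows how a simple linear scoring model can be turned into a plain-English, per-decision explanation:

```python
# Hypothetical linear scoring model used to shortlist loan applications.
# Real AI systems are far more complex; the point is that each feature's
# contribution to the decision can be surfaced in plain language.
WEIGHTS = {"years_employed": 0.6, "missed_payments": -1.2, "income_band": 0.8}
THRESHOLD = 1.0

def explain_decision(applicant: dict[str, float]) -> str:
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    outcome = "approved" if score >= THRESHOLD else "declined"
    # Rank features by how strongly they pushed the decision either way.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    reasons = ", ".join(
        f"{name} {'raised' if value > 0 else 'lowered'} the score by {abs(value):.1f}"
        for name, value in ranked
    )
    return f"Application {outcome} (score {score:.1f}): {reasons}."

print(explain_decision({"years_employed": 3, "missed_payments": 2, "income_band": 1}))
# -> Application declined (score 0.2): missed_payments lowered the score by 2.4, ...
```

For genuinely complex models the explanation step is far harder, which is precisely why dedicated explainability statements are expected to become more common.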

It is perhaps worth mentioning that, in addition to the requirements of the GDPR, the EU AI Act also imposes transparency requirements, most notably on providers of high-risk AI systems (some transparency requirements also apply to deployers). Regulators are increasingly proactive in taking steps against organisations for lack of transparency when processing data using AI tools (e.g. the Italian data protection authority’s investigation into ChatGPT). The direction of travel therefore seems clear: existing transparency requirements under the GDPR are strengthened by the EU AI Act, and data protection regulators are increasingly focusing their enforcement efforts where AI technologies are used to process personal data.

In addition to ensuring that the processing of personal data in an AI context is fair, lawful and transparent, other important data protection issues which arise with the use of AI tools include: 

  • Purpose limitation – where AI tools use data for multiple purposes, a major tension exists in ensuring that personal data is collected for “specified, explicit and legitimate purposes” only and not further processed in a way which is incompatible with those purposes.
  • Data minimisation – how can a vast and expanding data set adhere to the data minimisation principle, which requires you to identify the minimum amount of personal data you need for your purpose and to process that information and no more? (See the first sketch after this list.)
  • Storage limitation – since AI tools arguably become more effective the more data they are trained on, there is likely to be a tendency to hold on to data on a long-term basis. Indefinite retention without appropriate justification breaches the storage limitation principle in the GDPR, so it will be important to demonstrate compliance in this area with appropriate policies and audit trails that justify retention and/or evidence deletion (see the second sketch after this list).
  • Minimising the risk of privacy attacks on AI tools – AI systems can increase security risks and make them more difficult to manage. Given the heightened security risk profile that comes with AI system use, it is essential to review risk management practices and adopt appropriate additional technical security measures to keep personal data secure in the AI context. The right approach to security will depend on many factors, including the types of risk and the specific processing activities undertaken. In the UK, the ICO has published guidance to assist in assessing and managing the risk of privacy attacks in an AI context.
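
To make the data minimisation point concrete, here is a minimal sketch, assuming a hypothetical allow-list of fields agreed for the stated purpose; the field names are illustrative only:

```python
# Hypothetical allow-list of fields actually needed for the stated purpose
# (here, assessing clinical trial eligibility). Everything else is dropped
# before the record reaches the AI tool.
ALLOWED_FIELDS = {"age", "diagnosis_code", "current_medications"}

def minimise(record: dict) -> dict:
    """Return only the fields necessary for the documented purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Jane Doe",            # not needed for eligibility: dropped
    "email": "jane@example.com",   # not needed: dropped
    "age": 54,
    "diagnosis_code": "E11",
    "current_medications": ["metformin"],
}
print(minimise(raw))
# {'age': 54, 'diagnosis_code': 'E11', 'current_medications': ['metformin']}
```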
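
Similarly, storage limitation can be enforced and evidenced in code. The following sketch assumes a two-year retention period and a simple in-memory audit log, both hypothetical, and deletes records past their retention date while recording evidence of each deletion:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=365 * 2)  # assumed 2-year retention period
audit_log: list[str] = []            # stand-in for a durable audit trail

def apply_retention(records: list[dict], today: date) -> list[dict]:
    """Keep records within the retention period; log evidence of deletions."""
    kept = []
    for rec in records:
        if today - rec["collected_on"] > RETENTION:
            audit_log.append(
                f"{today}: deleted record {rec['id']} (collected {rec['collected_on']})"
            )
        else:
            kept.append(rec)
    return kept

records = [
    {"id": "r1", "collected_on": date(2021, 3, 1)},
    {"id": "r2", "collected_on": date(2024, 6, 1)},
]
records = apply_retention(records, today=date(2024, 9, 1))
print(records)    # only r2 remains
print(audit_log)  # evidence that r1 was deleted and when
```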

With the use of AI continually evolving, it is essential to understand the data privacy implications arising from the use of AI tools. Adopting the right approach to demonstrating compliance, and adapting that approach as the technology develops, will be key. In this regard, it is recommended to:

  1. Undertake (and keep up to date) a detailed risk assessment to understand all uses of AI and which data subjects are affected, and to consider the wider regulatory requirements;
  2. Review and update data privacy policies and procedures, considering how to demonstrate compliance with data protection laws in the AI context; for example, comprehensive data protection impact assessments (DPIAs) will almost always be required and will also be essential to demonstrate compliance to a regulator;
  3. Raise awareness and educate the wider business about data protection risks in the context of AI, how data is used in AI tools and the sources of data;
  4. Ensure that data protection compliance in the AI context is given sufficient prominence in the business, including at senior management level in order to set the tone from the top;
  5. Keep up to date with, and review, the latest guidance from regulators. The ICO, for instance, has developed several guidance documents on AI and data protection;
  6. Keep up to date with enforcement decisions from regulators, which often provide valuable guidance. For instance, the recent ICO decision on Snap’s My AI contained useful commentary on the ICO’s expectations as to the level of detail to be included in DPIAs generally, as well as observations on particular areas of concern where generative AI is used by children.

We are continuously monitoring developments in the governance of artificial intelligence and what these mean for our sector. Please do not hesitate to contact us if you require any assistance in preparing for the implementation of the AI Act or in reviewing and updating your policies and procedures to ensure compliance with data protection laws in the AI context.