ICO warning on AI data privacy risks

The ICO has warned businesses to address the privacy risks associated with generative AI before adopting the technology, stating that it will take action against businesses that fail to do so because “there can be no excuse for ignoring risks to people’s rights and freedoms before [a generative AI] rollout.”

The ICO outlined eight areas that businesses must consider before adopting generative AI, which largely reflect what businesses must already do before processing any personal data. By way of a reminder, these steps are as follows:

  1. Identifying a lawful basis for processing personal data using the generative AI technology – such as consent or legitimate interests;
  2. Mitigating security risks that generative AI may bring;
  3. Ensuring transparency with data subjects – AI systems can be difficult to understand and transparency may not be straightforward, but it is important that data subjects understand how their personal data will be processed by any new AI technology;
  4. Checking whether they are a controller or processor of personal data – a business developing generative AI using personal data will have obligations as a data controller, and potentially as a data processor, depending on the role it plays;
  5. Preparing a data protection impact assessment to address and mitigate privacy risks associated with the AI technology;
  6. Working out how to limit unnecessary processing of data by the AI system;
  7. Deciding how to comply with individual rights requests when implementing generative AI systems; and
  8. Establishing whether generative AI would make solely automated decisions using personal data.

Further detail can be found in the ICO’s guidance: Generative AI: eight questions that developers and users need to ask | ICO.

How does it relate to your business?

Businesses looking to invest in generative AI must ensure that data privacy risks are carefully considered and addressed before any investment is made. Among the eight steps outlined above, the key considerations are establishing how the relevant AI model uses personal data, what data is being fed into the system, and what type of content the AI can generate. Staying proactive rather than reactive is best practice: the alternative risks hefty ICO fines and consequent reputational damage.

The UK Government will also host a global AI safety summit on 1 and 2 November at Bletchley Park. It will be interesting to see what data protection regulation, if any, is discussed at the summit, and the subsequent impact on organisations investing in AI systems.
