Data Privacy and AI: Everything You Need to Know

Published 13 October 2023

Deploying AI within your organisation may seem simple but, if personal data is involved, its use will likely have a fundamental impact on your data privacy programme. 

AI applications don’t always process data fairly, lawfully or transparently. So, if you’re considering deploying an AI program, you may have to make some significant changes to how you process data. This isn’t just to maintain best practice, but to ensure you don’t pose a risk to the rights and freedoms of data subjects. 

This is a new area, and UK and EU legislation is still in the process of catching up. The EU’s proposed regulatory framework on artificial intelligence (the AI Act) was still under negotiation at the time of writing (October 2023). Despite this, it’s a subject worth considering now if you are to abide by current and future legal requirements.

In this blog, we’ll be covering the ways AI programs impact your data privacy programme, how you can reduce associated risks, and the accountability and governance implications.

For more on how you can make the most of AI within your business while maintaining best practice for data privacy, register for our upcoming Data Privacy and AI webinar.

Can AI Be a Threat to Data Privacy?

If you don’t carefully consider how you use AI within your organisation, it can create problems for your data privacy programme. Some major considerations are:

  1. Joint controllership arrangements
  2. Underlying bias and unfairness
  3. Necessity and proportionality

Joint Liability with AI Developers

If you deploy an AI program, you may create a joint controllership arrangement between your organisation and the program’s developer. In practical terms, this means that your organisation and the developer would jointly determine the purposes and means of processing the same personal data. This has several impacts and isn’t a decision that should be taken lightly.

Firstly, as a joint controller you must have an arrangement in place with the other controller that establishes your respective responsibilities. Controllers carry the highest level of responsibility for meeting data privacy principles – as well as legislation such as the GDPR – which can make this a very complex process.

Here are just a handful of challenging questions you will have to answer when entering into a joint controllership arrangement (a simple way of recording your answers is sketched after this list):

  • Who will collect personal data and why?
  • What data will be predicted or classified?
  • What types of personal data will be collected, and will this include special category data?
  • How long will personal data be retained for?
  • How will you respond to individuals’ rights requests?
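
One simple way to keep track of your answers is to record them in a structured form. The sketch below is purely illustrative – the field names are hypothetical and it is no substitute for the documented agreement that joint controllers must have in place:

```python
from dataclasses import dataclass

# Illustrative only: field names are hypothetical, not a legal template.
# The real arrangement must be a documented agreement between controllers.
@dataclass
class JointControllershipRecord:
    collecting_party: str              # who collects the personal data
    collection_purpose: str            # and why
    predicted_attributes: list[str]    # what will be predicted or classified
    data_categories: list[str]         # including any special category data
    retention_period_days: int         # how long personal data is retained
    rights_request_handler: str        # which party answers rights requests

arrangement = JointControllershipRecord(
    collecting_party="YourOrg Ltd",
    collection_purpose="Credit-risk scoring",
    predicted_attributes=["default_risk"],
    data_categories=["name", "address", "transaction history"],
    retention_period_days=365,
    rights_request_handler="YourOrg Ltd, with referral to the developer",
)
```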

Beyond answering these, remember that in the event of a data breach the Information Commissioner’s Office (ICO) can take action against both controllers – you and the program’s developer. Even if it is the developer who fails to meet their obligations, the ICO may take action against you as well. This can come with severe financial and reputational costs.

Underlying Bias and Unfairness

AI can only follow pre-determined rules and is unable to cope with ‘edge cases’ that don’t align with the data set it was trained on. This can give AI a narrow world-view, or bias. For example, an AI model trained on a data set from the 1970s will develop a ‘world-view’ that reflects the biases of that era – including biases around gender, sex and race.

While statistical bias may sound like a small concern, it can have significant impacts on your data privacy programme. An AI program may predict details about an individual that fall into the category of special category data, such as their sexual orientation or religious beliefs. If these predictions are made using datasets that contain significant bias, the outputs are likely to lead to discrimination and to significant, incorrect decisions being made about individuals – for example, denying credit to all residents of specified neighbourhoods, when postcode often acts as a proxy for ethnic background.
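
To make the proxy problem concrete, here is a minimal sketch of one basic check: comparing a model’s approval rates across postcode areas. The data and column names are hypothetical, and real fairness testing would go much further:

```python
import pandas as pd

# Hypothetical data and column names, for illustration only.
df = pd.DataFrame({
    "postcode_area": ["AB1", "AB1", "AB1", "CD2", "CD2", "CD2"],
    "approved":      [1,     1,     1,     0,     0,     1],
})

# Compare the model's approval rate across postcode areas.
rates = df.groupby("postcode_area")["approved"].mean()
gap = rates.max() - rates.min()
print(rates)
print(f"Approval-rate gap across areas: {gap:.0%}")

# A large gap does not prove discrimination by itself, but it signals
# that postcode may be acting as a proxy and that the training data
# and feature set need closer review.
```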

It is also worth noting that even if you don’t use these predictions in a live environment, the underlying activity may still involve the processing of special category data, which must be handled appropriately. Whether you have to treat these predictions as special category data depends on:

  • How certain the inference is, and whether drawing special category inferences is the purpose of the processing (i.e. whether you intend to treat individuals differently as a result of the prediction)
  • Whether the predictions are being made about affinity groups
  • The tangible and intangible harms to members of the affinity group and/or wider society as a whole

Necessity and Proportionality: Addressing the Risk of AI

You should always consider whether AI is the appropriate solution to the problem you’re facing. In many use cases, choosing AI over a person to make decisions is neither necessary nor the best choice. In fact, having AI support human decisions is often a better option than having it operate independently.

A simple way of determining whether AI is the ‘right’ choice for your organisation is to ask whether it is necessary and proportionate to the problem you face. You shouldn’t deploy AI solutions simply because they’re available, but because they are the best way to address your business challenge. If there is a less intrusive way to achieve your objectives, you must be able to justify why you’ve not chosen it.

When using AI for decision-making, you should decide whether to use it to support a human decision-maker, or whether it will make solely automated decisions. People have the right not to be subject to solely automated decisions with legal or similarly significant effects. If you are making such decisions, people can request a human review of the decision made about them.
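
As a rough sketch of how this can be enforced in practice, the snippet below (with hypothetical function and field names) routes any decision with a legal or similarly significant effect to a human reviewer rather than finalising it automatically:

```python
# Hypothetical function and field names, for illustration only.
def finalise_decision(model_output: dict, significant_effect: bool) -> dict:
    """Never finalise a decision with legal or similarly significant
    effects without meaningful human review."""
    if significant_effect:
        # Queue for human review before the decision takes effect.
        return {"status": "pending_human_review", "model_output": model_output}
    return {"status": "automated", "model_output": model_output}

decision = finalise_decision(
    {"credit_approved": False, "score": 0.31},
    significant_effect=True,
)
print(decision["status"])  # pending_human_review
```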

What Are the Accountability and Governance Implications of AI?

As you may expect, the accountability and governance implications of AI depend heavily on your specific use case. However, when personal data is involved in the activity, you must be able to demonstrate on an ongoing basis that you have baked ‘data protection by design and by default’ into it in order to meet your legal and regulatory requirements.

Generally speaking, you should always undertake a Data Protection Impact Assessment (DPIA) to measure the risks of deploying AI within your organisation’s personal data processing operations. This assessment will assist you in determining potential harms associated with the activity and, ultimately, the right approach to implementing AI (or else to determine if it shouldn’t be implemented at all).
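
As an illustration only, the sketch below shows one hypothetical way of logging DPIA risk findings so that residual risk can be scored and revisited; a real DPIA is a documented process that should follow ICO guidance:

```python
from dataclasses import dataclass

# Hypothetical structure and scales, for illustration only. A real DPIA
# is a documented process following ICO guidance, not a script.
@dataclass
class DPIARisk:
    description: str
    likelihood: int   # e.g. 1 (remote) to 5 (almost certain)
    severity: int     # e.g. 1 (minimal) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

risk = DPIARisk(
    description="Model infers religious belief from browsing history",
    likelihood=3,
    severity=5,
    mitigation="Remove proxy features; suppress low-confidence inferences",
)
print(risk.score)  # 15 – high enough to escalate before deployment
```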

There’s a lot more that can be said on Data Privacy and AI. In our upcoming webinar, I’ll dive deeper into the topics mentioned in this blog, as well as looking at:

  • The state of AI regulation in the UK and the EU
  • The interplay between AI and data privacy regulation – assessing impact to the core principles of the GDPR
  • How to conduct risk assessments and understand the impact of AI on individuals
  • Balancing minimisation and statistical accuracy, and other considerations for eliminating bias
  • Mitigating risks in your use of AI for personal data processing

For more on how Bridewell can help you make the most of AI while remaining compliant and secure, get in touch with one of our team!

Author

Chris Linnell

Data Privacy Principal Consultant

LinkedIn