The term “artificial intelligence” potentially covers a wide spectrum of technology but normally refers to systems that do not follow pre-programmed instructions and instead learn for themselves. This might be by learning from an existing data set, as in supervised or unsupervised learning, or by prioritising actions that lead to the best outcomes, as in reinforcement learning.
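By way of illustration, the short Python sketch below shows what “learning for itself” from an existing data set looks like in the supervised case. It uses scikit-learn, and the lending scenario, feature values and outcome labels are entirely hypothetical.

```python
# A minimal supervised-learning sketch: the decision rule is inferred from
# labelled examples rather than written out by a programmer.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [income in £k, years at current address]
X = [[20, 1], [35, 4], [50, 10], [80, 12], [15, 0], [60, 8]]
y = [0, 0, 1, 1, 0, 1]  # labelled outcomes: 0 = decline, 1 = approve

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The rule applied to a new case was learned, not programmed
print(model.predict([[45, 6]]))
```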
One of the implications of this behaviour being learned, and not programmed, is that it may not be clear how the system makes a decision. It operates in a “black box”. The system might work perfectly in a development environment but become unpredictable or unreliable in the real world. Unlike a human, the algorithm has no higher-level assessment of whether what it is doing is obviously “wrong”. There is no “common sense” or “ethical” override.
This creates a number of legal concerns. The underlying algorithm might be making decisions that are biased or discriminatory and in breach of the broad fairness requirements of the GDPR.
The use of artificial intelligence to replace human decision making expands the scope of data protection law.
This is because the GDPR does not regulate the minds of men. Human decisions cannot generally be challenged on the basis that they are unfair or unlawful under the GDPR, unless based on inaccurate or unlawfully processed data. For example, you cannot use the GDPR to ask that your exam is re-marked or your insurance coverage is not reassessed (assuming those decisions are taken by a human - see Nowak (C-434/16) and Johnson v Medical Defence Union [2007] EWCA Civ 262).
In contrast, decisions taken by machine are directly subject to the GDPR, including the core requirements of fairness and accountability. In other words, the data subject can challenge the substantive decision made by the machine on the grounds that it is not fair and lawful.
Given the accountability duties under the GDPR, defending such a claim will require you not only to ensure the machine’s decision-making process is fair but also to demonstrate that this is the case. This is likely to be challenging where the decision is taken in a “black box”, though the use of counterfactuals and other measures may help (see below).
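To give a flavour of what a counterfactual looks like in practice, the sketch below searches for the smallest change to an applicant’s details that would flip a hypothetical lending model’s decision. It is a simplified brute-force illustration; real deployments would use dedicated explainability tooling.

```python
# A minimal counterfactual-explanation sketch: find the smallest change to
# the input that would have produced a different automated decision.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical lending model: [income in £k, years at current address]
X = [[20, 1], [35, 4], [50, 10], [80, 12], [15, 0], [60, 8]]
y = [0, 0, 1, 1, 0, 1]  # 0 = decline, 1 = approve
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = [40, 2]                      # this applicant is declined
assert model.predict([applicant])[0] == 0

# Brute-force search over nearby inputs for the closest "approve" outcome
candidates = [[income, years] for income in range(20, 101, 5)
                              for years in range(0, 15)]
flips = [c for c in candidates if model.predict([c])[0] == 1]
best = min(flips, key=lambda c: abs(c[0] - applicant[0]) + abs(c[1] - applicant[1]))
print(f"Decision would flip at: income={best[0]}k, years at address={best[1]}")
```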
Finally, there is a risk the system will make decisions that are discriminatory or that reflect biases in the underlying dataset. This is not just a potential breach of data protection law; it might also breach the Equality Act 2010 and raises broader ethical concerns.
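As a simple illustration of the kind of bias monitoring this risk calls for, the sketch below compares approval rates across a protected characteristic. The data and field names are hypothetical, and a gap is a warning sign to investigate, not proof of discrimination.

```python
# A minimal bias-check sketch: compare the model's approval rates across a
# protected characteristic (here a hypothetical "sex" field).
def approval_rate(decisions, group, value):
    subset = [d for d, g in zip(decisions, group) if g == value]
    return sum(subset) / len(subset)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]          # model outputs: 1 = approve
sex       = ["F", "F", "M", "M", "F", "M", "M", "F"]

gap = approval_rate(decisions, sex, "M") - approval_rate(decisions, sex, "F")
print(f"Approval rate gap (M - F): {gap:.0%}")  # flag if materially non-zero
```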
There are further protections where automated decision making takes place – i.e. where an artificially intelligent system is solely responsible for a decision that has legal effects or otherwise significantly affects a data subject.
This reflects the common-sense expectation that important decisions, for example whether to offer someone a job or provide a mortgage, should not be entirely delegated to a machine.
Under the GDPR, this type of automated decision making can only take place in the following situations:
- the decision is necessary for entering into, or performing, a contract with the data subject;
- the decision is authorised by Union or Member State law to which the controller is subject; or
- the decision is based on the data subject’s explicit consent.
Even where automated decisions are permitted, you must put suitable safeguards in place to protect the individual’s interests. This means notifying the individual (see below) and giving them the right to a human evaluation of the decision and to contest the decision.
The GDPR also requires you to tell individuals what information you hold about them and how it is being used. This means that if you are going to use artificial intelligence to process someone’s personal data, you normally need to tell them about it.
More importantly, where automated decision making takes place, there is a “right of explanation”. You must tell affected individuals of the fact of the automated decision making, its significance, and how it operates.
The obligation is to provide “meaningful information about the logic involved”. This can be challenging if the algorithm is opaque. The logic used may not be easy to describe and might not even be understandable in the first place. These difficulties are recognised by regulators who do not expect organisations to provide a complex explanation of how the algorithm works or disclosure of the full algorithm itself (see the Guidelines on automated individual decision making and profiling, WP 251 rev 01).
However, you should provide as full a description as possible of the data used in the decision-making process, including matters such as the main factors considered when making the decision, the source of the information and its relevance.
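One possible (and much simplified) way of surfacing the “main factors” is to use a model’s learned feature importances, as the sketch below does with a scikit-learn random forest. The feature names and data are hypothetical, and this is only one technique among many.

```python
# A minimal sketch of ranking the "main factors" behind automated decisions
# using a model's learned feature importances.
from sklearn.ensemble import RandomForestClassifier

feature_names = ["income_k", "years_at_address", "existing_debt_k"]
X = [[20, 1, 5], [35, 4, 12], [50, 10, 3], [80, 12, 1], [15, 0, 9], [60, 8, 2]]
y = [0, 0, 1, 1, 0, 1]  # 0 = decline, 1 = approve

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Rank factors by how much each contributed to the learned decision rule
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, weight in ranked:
    print(f"{name}: {weight:.2f}")
```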
None of these challenges necessarily prevents the use of artificial intelligence so long as it is used in a safe and controlled manner. Deployed properly with appropriate safeguards, artificial intelligence offers a number of potential benefits when it comes to decision making, such as reducing the errors or unconscious biases that arise in human decision making.
The sorts of safeguards you might expect to see include:
- thorough testing of the system before deployment;
- ongoing monitoring of its outputs for errors, drift and bias;
- meaningful human oversight of significant decisions; and
- the ability to suspend or switch off the system quickly if it malfunctions.
Similar controls are already required under MiFID II for financial services firms carrying out algorithmic trading and high-frequency trading.
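As a rough illustration of the monitoring and suspension safeguards, the sketch below tracks the system’s approval rate and escalates to human review if it drifts outside an expected band, a simple “circuit breaker”. The thresholds, window size and model interface are all hypothetical assumptions.

```python
# A minimal "circuit breaker" sketch: monitor the automated approval rate
# and escalate to human review if it drifts outside an expected band.
# Thresholds and the model interface are illustrative assumptions.
from collections import deque

WINDOW, LOW, HIGH = 100, 0.30, 0.70   # hypothetical expected approval band
recent = deque(maxlen=WINDOW)

def decide(applicant, model):
    """Return the automated decision, or escalate if outputs look abnormal."""
    decision = int(model.predict([applicant])[0])  # 1 = approve, 0 = decline
    recent.append(decision)
    rate = sum(recent) / len(recent)
    if len(recent) == WINDOW and not (LOW <= rate <= HIGH):
        return "ESCALATE_TO_HUMAN"     # suspend automation and alert operators
    return "APPROVE" if decision else "DECLINE"
```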
The interaction between artificial intelligence and the GDPR thus engages a number of relatively complex legal and technical issues that require value judgements.
In most cases, you will need to document this evaluation. This will be through either a:
- legitimate interests assessment; or
- data protection impact assessment.
In many cases, the deployment of artificial intelligence systems will trigger the need for a full data protection impact assessment. EU guidance indicates that the use of new technology, automated decision making and similar activities will trigger the need for a data protection impact assessment (Guidelines on Data Protection Impact Assessment, WP 248 rev 01). In the UK, the Information Commissioner has issued a list of activities that prima facie will require a data protection impact assessment. It specifically refers to “Artificial intelligence, machine learning and deep learning” as a factor that may trigger the need for such an assessment.
These issues are all explored in greater depth in our AI Toolkit, along with related issues such as ownership, liability and financial services regulation.
Our AI Toolkit is available here.
By Peter Church