The scope of the new EU AI Act is largely driven by the definition of an “AI system”. That definition is opaque and, unfortunately, the new guidelines from the EU Commission muddy the waters further. We consider whether this matters in practice.
The EU AI Act defines an “AI system” in Article 3(1) as:
“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
There are some important points to note about this multifaceted definition:
This may be an “elephant” definition. In other words, you know it when you see it. (Or to put it more eloquently, it is “characterised more by recognition when encountered than by definition”, Ivey v Genting [2017] UKSC 67).
Given these difficulties, the EU Commission’s guidelines have been eagerly awaited, as they provided an opportunity for a more pragmatic approach. For example, the EU Commission could have indicated a presumption that, for the time being, only certain types of technology, such as machine learning systems, are AI.
Unfortunately, the EU Commission has stuck closely to the elements in the original definition and has attempted to define them in a way that is frequently unhelpful and unclear. To take some examples:
The guidelines refer to the helpful clarification in recital 12 that AI systems should not include: “simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations.”
However, even here confusion reigns. For example, the guidelines suggest that the definition excludes systems used for “mathematical optimisation”. This is odd, as many AI tools are inherently just solving optimisation problems (see the sketch after these examples).
Equally, the suggestion that AI systems do not include “physics-based systems”, such as weather modelling of “complex atmospheric systems”, is difficult to understand. It appears to rest on a subjective assessment of a system’s purpose, not an objective assessment of the underlying technology.
The guidelines then suggest an exemption for “basic processing…used in a consolidated manner for many years”. Perhaps this harks back to the idea that AI is “anything computers still can’t do” (see What Computers Still Can’t Do: A Critique of Artificial Reason by Hubert L Dreyfus), but it is hardly a principled basis on which to delineate the term.
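To make the oddity concrete, here is a minimal sketch in Python (the function names and pricing data are invented for illustration, not taken from the guidelines). It contrasts a rules-based system of the kind recital 12 clearly excludes with a simple machine-learning model, where the “learning” is nothing more than mathematical optimisation: gradient descent minimising a squared-error loss.

```python
# Illustrative only: a hypothetical contrast between "rules defined solely by
# natural persons" (excluded by recital 12) and a trained model whose
# learning is, at bottom, mathematical optimisation.

def rule_based_price(weight_kg):
    """Rules written entirely by a human: outside the definition."""
    if weight_kg <= 1:
        return 3.0
    return 3.0 + 2.0 * (weight_kg - 1)

def train_price_model(weights, prices, lr=0.05, steps=2000):
    """Fit price ~ w * weight + b by gradient descent on mean squared error.

    Every step below is plain mathematical optimisation, yet the result
    "infers, from the input it receives, how to generate outputs such as
    predictions" -- squarely within the Article 3(1) definition.
    """
    w, b = 0.0, 0.0
    n = len(weights)
    for _ in range(steps):
        # Gradients of the loss L(w, b) = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(weights, prices)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(weights, prices)) / n
        w -= lr * grad_w  # step downhill: the optimisation itself
        b -= lr * grad_b
    return w, b

# Train on examples generated by the human-written rule above: the learned
# model and the hand-coded rule end up behaviourally near-identical.
data = [1, 2, 3, 4]
w, b = train_price_model(data, [rule_based_price(x) for x in data])
print(f"learned: price = {w:.2f} * weight + {b:.2f}")  # approx. 2.00x + 1.00
```

The point is not that every optimiser is an AI system; it is that “mathematical optimisation” cannot, by itself, be the dividing line, since it describes exactly how the second, clearly in-scope, system was built.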
As set out above, there was always going to be a grey area containing systems that sit uncomfortably on the boundary between dumb code and smart AI. The critical question is how large that grey area is and how well it aligns with the harms the law is intended to address. Unfortunately, the EU Commission’s guidelines leave plenty of grey and raise more questions than they answer.
Does this matter in practice? Perhaps not. The EU AI Act is “inch wide; mile deep”: the key obligations are focused on the narrow categories of prohibited, high-risk and general-purpose AI systems (together with limited transparency and literacy obligations).
The sorts of sophisticated technology needed to implement those highly regulated use cases will often clearly constitute “AI systems”. Outside of those use cases, the question is largely academic.
The Commission’s Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 are here.