Earlier today, the EU adopted the EU AI Act. The only remaining step is its publication in the Official Journal (which is expected to happen in the next month). The Act will come into force 20 days after its publication, although the substantive obligations will then be phased in over a three-year period.
There has been considerable excitement about this new law, but getting to grips with it in practice is challenging. It is a formidable and complex piece of legislation running to 113 articles, 13 annexes and 180 recitals, and for many organisations its effect may be limited to only a subset of products.
With these factors in mind, we set out 10 key points on the new EU AI Act.
The obligations under the EU AI Act apply in tiers based on the purpose for which the AI system is intended to be used. The table below summarises those tiers.
| Tier | Example/Description | Position |
| --- | --- | --- |
| Prohibited | e.g. Use of an AI system that applies subliminal techniques to manipulate behaviour or cause harm | Use is prohibited |
| High-risk | e.g. AI safety components integrated in certain products | Subject to significant regulation |
| GPAI (systemic risk) | General-purpose AI models that create systemic risk because of their capabilities, e.g. those requiring more than 10^25 FLOPs for training | Subject to significant regulation |
| GPAI (other) | Other general-purpose AI models | Limited obligations, focusing on documentation and copyright |
| Human interaction | AI systems that interact with humans or create deepfakes | Transparency obligations |
| Other | Other AI systems | Limited regulation, such as AI literacy requirements |
The overall effect of this tiered approach is that – in practice – many AI systems will likely be subject to only limited regulation.
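The tiered logic above can be sketched in code. This is a purely illustrative simplification, not legal advice: the attribute names and the classification order below are our assumptions for illustration (the real analysis turns on detailed legal tests), though the 10^25 FLOPs training-compute threshold for systemic-risk GPAI comes from the Act itself.

```python
# Illustrative sketch only: a simplified mapping of the Act's tiers.
# Attribute names are assumptions for illustration, not terms of art.
from dataclasses import dataclass


@dataclass
class AISystem:
    is_prohibited_use: bool = False      # e.g. subliminal manipulation
    is_high_risk_use: bool = False       # e.g. safety component, recruitment
    is_gpai: bool = False                # general-purpose AI model
    training_flops: float = 0.0          # training compute, in FLOPs
    interacts_with_humans: bool = False  # e.g. chatbots, deepfake generators


SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold for systemic-risk GPAI


def tier(system: AISystem) -> str:
    """Return the (simplified) regulatory tier, checked from strictest down."""
    if system.is_prohibited_use:
        return "Prohibited"
    if system.is_high_risk_use:
        return "High-risk"
    if system.is_gpai:
        if system.training_flops > SYSTEMIC_RISK_FLOPS:
            return "GPAI (systemic risk)"
        return "GPAI (other)"
    if system.interacts_with_humans:
        return "Human interaction"
    return "Other"


print(tier(AISystem(is_gpai=True, training_flops=3e25)))  # GPAI (systemic risk)
print(tier(AISystem(interacts_with_humans=True)))         # Human interaction
```

Note that the sketch checks the strictest tiers first: a single system can meet several descriptions at once (a GPAI model may also interact with humans), and it is the most demanding applicable tier that drives the compliance burden.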
For example, the definition of “prohibited” AI systems is short, focusing on AI systems used to manipulate or exploit individuals, for social scoring or for remote biometric identification. These are unlikely to be relevant to most organisations. The list of “high risk” AI systems is slightly broader, capturing safety components of certain products and uses such as recruitment or credit assessments, but these are still narrowly drawn.
In practice, many organisations are likely to have only a handful of AI systems subject to the strongest tiers of regulation. However, determining with certainty which systems are caught is critical, as the costs of compliance for high-risk systems (and the sanctions for getting it wrong) are significant.
The EU AI Act is also an EU Regulation (so directly applicable in every EU Member State) and likely operates as a maximum harmonisation measure – i.e. it prevents individual Member States from creating their own AI laws within the scope of the Act.
Arguably, for many organisations the Act has more of a deregulatory effect: it not only applies very limited obligations to most AI systems but also prevents national laws from being created to impose extra obligations. Throughout the adoption process, this has been touted as one of the Act's key benefits.
There will, however, be very significant new obligations for high-risk systems under the EU AI Act.
The specific obligations vary according to your role, and the law includes the concepts of a “provider” (being the person who develops the AI system) and “deployer” (being the person using the AI system). There are also separate obligations for “distributors” and “importers”.
The most burdensome obligations unsurprisingly fall on the “provider” who, amongst other things, must:
In contrast, a “deployer” must comply with a more limited set of obligations. For example, they must:
Separate obligations apply to importers and distributors. Importantly, if a deployer, importer or distributor puts their trade mark on an AI system, substantially modifies an AI system or uses it for a high-risk purpose not foreseen by the provider, they will be deemed to be a provider themselves.
“AI systems” are defined as:
“a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”
That definition mirrors the one used by the OECD, but from a technological perspective it’s not entirely clear what it means in practice, and there remains considerable scope for argument as to whether or not a given system is “AI”. Having said that:
Ultimately, like an elephant, the concept of AI may be difficult to describe but you will generally know it when you see it.
The territorial scope of the EU AI Act is exceptionally broad and so could potentially capture many international organisations with only a tangential connection with the EU. For example, the EU AI Act applies in a wide range of circumstances including:
Where the provider of a high-risk AI system is established in a third country, they must appoint an authorised representative in the EU.
As set out above, the Act now needs to be published in the Official Journal (which is expected to happen in the next month) and will come into force 20 days later.
The key stages after that are:
The EU is not the only jurisdiction looking to enact specific new AI laws; other countries such as China and the US (particularly through state regulation such as the Illinois Biometric Information Privacy Act, the New York AI Bias Law and the Colorado Artificial Intelligence Act) are also passing laws in this area.
It is important that, to the extent possible, your compliance plan factors in these new and emerging obligations.
While the strongest tiers of the EU AI Act apply only narrowly, it is important to remember that AI systems continue to be heavily regulated under other frameworks, particularly the GDPR, consumer protection and IP law, and that the obligations under the EU AI Act are without prejudice to those other obligations. The EU AI Act will also be supplemented by the proposed AI Liability Directive and new Product Liability Directive.
This means that even if your new AI system is not prohibited, high-risk or a general purpose AI, you will not fall into a regulatory lacuna. You will still need to consider compliance with these other obligations beyond the EU AI Act, including for example identifying a legal basis for any training of the system, considering if the system processes special category personal data, ensuring transparency and completing a data protection impact assessment. The increasingly assertive posture of data protection authorities in relation to artificial intelligence means this may not be a trivial exercise.
While the obligations under the EU AI Act will not apply immediately, it’s important to start to prepare for these changes now. The scope of this work will vary from organisation to organisation, but for most there are five key steps: