Isaac Asimov condensed his framework for regulating robots into three succinct laws. First, a robot may not injure a human or, through inaction, allow a human to come to harm. Second, a robot must obey orders from humans, except where they conflict with the first law. Third, a robot must protect its own existence so long as that does not conflict with the first or second law.[1]
The proposals from the EU Commission are less digestible, running to nearly 100 pages and creating a regulatory superstructure for artificial intelligence. We consider the scope of this proposed Regulation on Artificial Intelligence and the specific new rules for high-risk use cases, “deepfakes” and public surveillance.
Please note, this article was updated on 26 April 2021 to reflect the final proposals from the EU Commission. The original version of this article considered the leaked version of the Regulation which was broadly similar but contained a number of differences.
Crafting a suitable regulatory framework for new technology, such as artificial intelligence, is challenging.
Artificial intelligence is used in many different ways. It can be embedded within a product, provided as a service, or used internally within an organisation. The technology might be a general purpose tool or a model trained for a specific task. There are also a large number of potential actors in the deployment of artificial intelligence, including the persons supplying the software, providing the data, training the model, and selling and using the final system.
Artificial intelligence is also an emerging technology and there is little agreement as to what is, and is not, genuine artificial intelligence. While this technology has cracked some hard domain-specific problems (such as facial recognition or language translation), it shows little sign of genuine intelligence or of replicating the flexibility of the human mind. Skynet and HAL 9000 remain firmly in the realm of science fiction. Regulating this type of emerging technology risks constraining innovation and might just be unnecessary.
The EU Commission appears to have had these factors in mind. While the overall scope of the proposed Regulation is broad, the strongest obligations apply to a tightly defined class of “high-risk” artificial intelligence. Similarly, the Commission has chosen to focus the obligations on the “provider” of the artificial intelligence system, being the person placing it on the market or putting it into service, though there are also direct obligations for “users” of those systems.
The key definition setting the scope of the Regulation is that of an “artificial intelligence system”. The Regulation recognises this is a “fast evolving family of technologies” and earlier leaked drafts of the Regulation acknowledged the difficulty of creating a clear and comprehensive description, given “there is no universally agreed definition of artificial intelligence”.
The solution is to define artificial intelligence by reference to three families of programming techniques, namely:
- machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
- logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and
- statistical approaches, Bayesian estimation, and search and optimisation methods.
This avoids the problems with traditional high-level functional definitions, such as ‘systems that think like humans, act like humans, think rationally, or act rationally’, which are highly contested and more a question of philosophy than law.
However, this approach is still vague and potentially very broad. In particular, defining artificial intelligence to include “logic...based approaches, including...knowledge bases…and expert systems” sweeps in a very broad class of computer programs, few of which could sensibly be called ‘intelligent’.
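To illustrate how far this could stretch, below is a deliberately trivial sketch of a rule-based ‘expert system’ in Python. The rules, thresholds and field names are all invented for illustration, but a program of this shape (a hard-coded knowledge base plus a simple inference loop) arguably falls within the ‘logic- and knowledge-based approaches’ limb of the definition.

```python
# A deliberately trivial rule-based "expert system": a hard-coded
# knowledge base of if-then rules plus a minimal inference step.
# Nothing here learns or adapts, yet a program of this shape is
# arguably a "logic- and knowledge-based approach" under the
# proposed definition. All rules and thresholds are invented.

# Knowledge base: (condition, conclusion) pairs for a loan decision.
RULES = [
    (lambda facts: facts["existing_defaults"] > 0, "decline"),
    (lambda facts: facts["income"] < 20_000, "refer to manual review"),
    (lambda facts: facts["income"] >= 20_000, "approve"),
]

def infer(facts: dict) -> str:
    """Fire the first rule whose condition matches the given facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule matched"

if __name__ == "__main__":
    applicant = {"income": 18_500, "existing_defaults": 0}
    print(infer(applicant))  # -> refer to manual review
```

If even a handful of hard-coded if-then rules can fall within scope, a great deal of ordinary business software is potentially caught, which is precisely the over-breadth concern.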
The broad definition of artificial intelligence is accompanied by a set of general prohibitions for unacceptable uses of this technology. The Regulation forbids the use of artificial intelligence systems that:
- deploy subliminal techniques beyond a person’s consciousness to materially distort their behaviour in a manner that causes, or is likely to cause, physical or psychological harm;
- exploit the vulnerabilities of a specific group due to their age or physical or mental disability to materially distort the behaviour of a member of that group in a manner that causes, or is likely to cause, physical or psychological harm;
- allow public authorities to carry out “social scoring” leading to detrimental or unfavourable treatment that is unjustified or disproportionate; or
- carry out “real-time” remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to limited exceptions.
The first two prohibitions are laudable and difficult to argue with in principle, but suffer from a lack of clarity and leave a lot of heavy lifting to the term “physical or psychological harm”. For example, what constitutes “psychological harm” and is it subject to any form of materiality qualification? Does it include doomscrolling, buying new gadgets you didn’t really need, body image issues, etc.?
However, in practice, given this is limited to subliminal techniques and exploitation of vulnerabilities, it is somewhat narrower than Asimov’s Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm”.
While the general prohibitions apply broadly, the majority of the Regulation focuses on “high-risk” artificial intelligence, which is much more tightly defined.
It includes situations that raise product safety risks. The use of artificial intelligence will be “high risk” where it is used as a safety component of a product (or is the product itself) in equipment or systems such as machinery, lifts, medical devices, radio equipment and even toys. It also includes the use of artificial intelligence in the management of critical infrastructure, such as roads or utility networks.
Use cases that endanger fundamental rights will also be “high risk”. The Regulation identifies as “high risk” the use of artificial intelligence for:
- biometric identification and categorisation of natural persons;
- education and vocational training, such as scoring exams;
- employment, including recruitment and worker management;
- access to essential private and public services, such as credit scoring;
- law enforcement;
- migration, asylum and border control; and
- the administration of justice and democratic processes.
Finally, the EU Commission is empowered to designate new forms of “high-risk” artificial intelligence, based on specified criteria. This approach of applying strict regulation only to a tightly defined class of “high-risk” uses, while allowing incremental expansion over time, appears sensible.
The Regulation applies to:
- providers placing artificial intelligence systems on the market, or putting them into service, in the EU, regardless of where the provider is established; and
- users of artificial intelligence systems located in the EU.
There are also obligations placed on the importers and distributors of these systems. The Regulation contains level playing field provisions which extend to providers or users in third countries where the output of the system is used in the EU.
Significant and extensive compliance obligations are placed on providers of “high-risk” artificial intelligence systems. They include requirements to:
- establish a risk management system covering the full lifecycle of the system;
- apply data governance measures to the training, validation and testing data;
- prepare detailed technical documentation and ensure the system automatically logs its operation;
- provide transparent information and instructions for use;
- enable effective human oversight;
- ensure appropriate levels of accuracy, robustness and cybersecurity; and
- carry out a conformity assessment and register the system in a new EU database.
These compliance obligations are detailed and burdensome. Providers of “high-risk” artificial intelligence systems will need to expend significant effort to comply with them.
Users of “high-risk” artificial intelligence systems will be subject to more limited obligations. They must use that technology in accordance with the instructions for use, monitor the operation of the system and keep logs of its use.
In addition to the obligations placed on “high-risk” artificial intelligence systems, there are also specific obligations for other use cases.
In particular, where the system is used to create “deepfakes”, to interact with humans or to recognise emotions, the individuals concerned must be informed.
The proposed Regulation overlaps with a number of other legal instruments, particularly the GDPR in relation to systems that process personal data, and EU conformity laws in relation to products. This arguably makes the proposed Regulation unnecessary. However, there are some significant differences between the Regulation and the GDPR:
- the Regulation applies to artificial intelligence systems whether or not they process personal data, whereas the GDPR only applies to the processing of personal data;
- the Regulation places its obligations primarily on the provider of the system, who will often not be the controller of any personal data the system processes; and
- the Regulation carries higher maximum fines than the GDPR.
This is all backed up by a new regulatory superstructure. Each Member State will need to appoint a national regulator and a new European Artificial Intelligence Board will be set up.
The sanctions for breach are also potentially significant. Infringement of the general prohibitions (described above) and the data governance provisions will attract fines of up to 6% of annual turnover or €30 million (whichever is higher), and breach of most other parts of the Regulation will attract fines of up to 4% of annual turnover or €20 million (whichever is higher).
Accompanying these sanctioning powers is a broad range of investigatory tools, including the right to access source code and to run test cases.
The proposed Regulation on Artificial Intelligence has a long way to go. It will need to pass through the EU’s legislative machine and will then apply two years after it is adopted. This means these new obligations are unlikely to apply until 2024.
In the meantime, the use of artificial intelligence continues to raise new and interesting legal issues across a whole range of areas, including data protection, intellectual property, competition, financial regulation and product liability. Our AI Toolkit (here) provides detailed, practical tips on how to deploy this technology safely, ethically and lawfully.
The proposed Regulation on Artificial Intelligence is available here.
By Peter Church
[1] Asimov later added the Zeroth law to cater for situations in which robots have taken responsibility for governing.