While AI has been part of our digital lives for many years, generative AI hit the mainstream in 2022 with the launch of tools such as DALL-E 2 and ChatGPT. Generative AI is a form of machine learning, trained on vast amounts of data, that allows computers to generate content such as written text, music and art from prompts.
ChatGPT quickly captured the public imagination because of its ease of use and broad potential application. The rise of generative AI is being fuelled by billions of dollars of investment and continued technology advances, and its capability is expected to grow exponentially.
Generative AI companies are scaling rapidly as investment surged in 2022 and 2023, reaching unicorn status almost twice as fast as other companies. In 2023 we expect to see continued investment in generative AI, as an exception to the more cautious approach that investors have been taking towards tech investment and M&A.
AI is already regulated under a myriad of rules, including data protection, antitrust and financial regulation, product liability and consumer protection. However, the development of AI-specific regulation has accelerated rapidly with the rise of generative AI as governments race to respond to the risks identified.
The EU is progressing the world’s first bespoke legislation for AI regulation with its AI Act expected to be passed later this year. It has also partnered with the US on a voluntary code of conduct. Diverging from the EU, the UK is aiming to take a more pro-innovation approach – and to lead on safety, hosting the first major global summit on AI safety later this year.
In the APAC region, countries are adopting a range of approaches including: (1) a prescriptive regulatory framework similar to the EU’s approach (e.g. Mainland China); (2) voluntary guidance (e.g. Singapore); or (3) combining regulation and guidance (e.g. Japan).
Data is the lifeblood of AI, and the increasing awareness and adoption of AI is driving more scrutiny than ever around the collection of data used to feed AI models. The use of personal data in generative AI models raises significant privacy concerns, leading the Italian data protection authority to temporarily ban ChatGPT in April 2023.
ChatGPT remains under review by other data protection authorities in the EU and, at a time of increasing data regulation and more assertive enforcement and litigation, we expect regulators to take further action to protect the privacy of individuals in relation to generative AI.
Organisations will also need to address the evolving threat landscape caused by the adoption of generative AI, with organisations ingesting far greater volumes of data to train their models, and cyber attackers reported to be using AI to create more sophisticated attack methods.
Generative AI platforms typically source data from the internet, often without permission, and inevitably in a manner that creates copies. Whether or not this sort of copying infringes copyright (and/or other IP rights) in the source material may depend on where the copying takes place. Moreover, the relevant law is, in many jurisdictions, unclear, in flux, or both, resulting in a challenging legal landscape to navigate.
Legislators across the globe are rushing to regulate this area, aiming to strike the right balance between enabling innovation by AI companies and protecting IP rightsholders' interests. The EU is seeking to address this in its upcoming AI Act, and there are cases before the US courts which should determine whether training AI models on publicly available works constitutes copyright infringement in the US. These developments will bring change in 2023 and beyond.
Competition authorities around the globe are also considering how the competitive market for generative AI could evolve and which steps are needed to support competition and protect consumers.
In May 2023, the UK’s Competition and Markets Authority launched an initial review of artificial intelligence models and will report its findings in September 2023. In the US, the Federal Trade Commission is also watching the development of generative AI closely and in May issued guidance for businesses on the use of generative AI, warning against unfair or deceptive practices.
Competition authorities have been tracking the development of AI for some time, as part of a wider trend of scrutiny and intervention in digital markets. While it is too early to say whether generative AI tools such as ChatGPT will raise competition concerns, authorities may well road-test a range of competition issues based on how digital markets have evolved to date.
Gaming is at the forefront of technology advances, with AI being used, for example, to understand players' behaviour to improve games or identify new revenue streams, and to create immersive, adaptive video game experiences with non-player characters that act as if they are controlled by a person.
Generative AI offers the potential for even more engaging immersive experiences in virtual worlds, as well as for improving game development processes and monetisation opportunities.
As the gaming industry continues to drive innovation and implement AI in 2023 and beyond, it will need to take account of the evolving regulation of AI and the broader regulation of the digital economy.
Read more on the key legal issues gaming companies and investors are facing as they pursue opportunities across the globe and some acute issues for the industry such as online safety, age verification and the metaverse.
There is increasing regulatory scrutiny of the use of AI – and now generative AI – tools. Companies adopting AI recklessly risk regulatory enforcement and litigation as well as significant reputational damage. Existing rules and rapidly evolving regulation create a complex matrix for organisations to navigate. However, the broad principles underpinning the various regimes share some common elements which need to be addressed.
When deploying AI models, it is important to consider the full spectrum of associated risks, including supply chain, data governance, antitrust, model risk management and business ethics. Companies must: (1) provide oversight and accountability for their AI; (2) be able to explain to customers – and regulators – how AI is being used; (3) validate initial findings and monitor products as they evolve; and (4) keep a human in the loop.
Having a policy and a governance structure enables organisations to adopt AI successfully and reduces the risk of harmful outcomes.