The explosion of generative AI models and developments in agentic AI are driving a new wave of digital transformation. Regulators across the globe are striving to keep up, particularly in the highly regulated financial services sector. In the fourth edition of this report we partner with IBM, whose research demonstrates how AI-first institutions are already outperforming peers.
We highlight both the opportunities and challenges facing businesses deploying AI in core financial markets, offering a broad overview of the complex and fast-changing regulatory landscape, along with practical guidance for managing risk.
The regional summaries below set out each jurisdiction's regulatory approach to AI in financial services.
Light-touch, industry-led: The UK government previously announced its intention to adopt a light-touch, industry-led approach, meaning there would be no cross-sector legislation equivalent to the EU AI Act. However, in July 2024 the government proposed a set of binding measures on AI and signalled its intention to establish appropriate AI legislation.
Mostly self-regulated: The general regulatory approach is to foster AI innovation through the responsible use of AI. The financial regulator has issued AI-friendly guidelines and best practice for regulated firms, while the data regulator has issued guidance on the use of personal data in AI. In May 2024, the Singapore government published a comprehensive Model AI Governance Framework for GenAI.
Currently no AI-specific laws: There are no statutory laws on the use of AI, although the HKMA and SFC have issued guidance on the use of AI in financial services. The Privacy Commissioner has published the Artificial Intelligence: Model Personal Data Protection Framework. The Government is consulting on strengthening the Copyright Ordinance in response to AI advancements.
Efforts at coordination: The OECD has updated its AI Principles, to which 49 countries, including the G20 and the EU, are committed; the update addresses issues arising from the emergence of general-purpose and generative AI. International coordination efforts include the G7 Hiroshima AI Process, the US-EU Trade and Technology Council and the global AI Safety Summits.
Growing body of AI guidance and law: The US is characterized by a complex, largely state-led regulatory landscape. There is a range of sector-specific regulator guidance and various state-level laws addressing AI-specific activities, and the US has also produced a comprehensive AI Risk Management Framework. Colorado and California passed AI legislation in May and September 2024 respectively. Acknowledging this uniquely complex regulatory landscape, the Trump administration published the AI Action Plan in July 2025, a comprehensive effort to solidify United States leadership in artificial intelligence.
Voluntary framework and technology-neutral regulations: AI is currently regulated through a voluntary scheme and various technology-neutral regulations, such as consumer protection, online safety and privacy laws. However, the federal government has announced voluntary AI safety standards as part of its measures to regulate AI and, in September 2024, released a proposals paper on introducing mandatory guardrails for AI in ‘high risk’ settings.
Upcoming overarching AI legislation: South Korea’s AI law is aimed at supporting innovation rather than imposing restrictions. The Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI was passed in January 2025 and will take effect on 22 January 2026.
Potential change to prescriptive approach: The Government has issued non-legally binding guidelines setting out compliance expectations for AI developers, providers and business users. The current Japanese government is pushing for legislation to regulate the use of GenAI and the research and development of AI-related technologies, which, if passed, would signify a shift to a “hard law” approach.
Extensive AI regulatory regime: The EU AI Act was adopted by the EU Parliament in March 2024 and entered into force in August 2024. It is the first AI-specific regulation of its kind globally and establishes an extensive regulatory and liability regime with extra-territorial reach. It will become fully applicable two years after entry into force, with various provisions taking effect at different intervals.
Prescriptive and risk-based: Rules effective as of August 2023 cover all aspects of generative AI, and several further rules regulate other AI-related services and products (e.g. Algorithm Recommendation and Deep Synthesis Services). A comprehensive official draft AI law has been listed on the legislative agenda.
Our updated AI Toolkit, crafted by our expert technology, privacy, intellectual property, litigation, employment, competition, ESG and financial regulatory teams, is intended as a quick-reference guide for in-house counsel on all things AI.
This Toolkit starts with a technical primer and provides an overview of key AI compliance, contracting and contentious topics across the EU/UK, Asia and the US.
UK regulators have prescribed how financial institutions and market infrastructure must build their resilience to business disruption.
The EU is currently pushing through a transformational digital regulation package. However, that package is made up of a number of different laws, such as the Data Governance Act, the AI Act, NISD2, the Digital Services Act and ten other related instruments.
Keeping track of all these new laws is a challenge. Our EU Digital Regulation Handbook provides a short, accessible summary of the status of each law, together with an assessment of comparable developments in the UK. The handbook is available here.
When overseeing firms’ adoption of AI, regulators will leverage the tools within their operational resilience frameworks, including the EU’s Digital Operational Resilience Act and the corresponding rules in the UK.
In the latest webinar in our series on AI in financial services, we examined:
We explore the reality of implementing AI, the shifting legal landscape, AI in the Middle East and the role of AI in the energy transition, payments and online safety.
AI has captured the media zeitgeist in recent years, especially since the release of OpenAI’s ChatGPT in 2022. Countless stories probe the technology’s future, exploring AI’s advances and the changes it might bring about. But what has been the real-world impact so far?
The initial euphoria surrounding generative AI has given way to a more nuanced, realistic understanding of its potential and challenges.
IBM and Red Hat’s new open-source project is designed to lower the cost of fine-tuning large language models by allowing people to collaboratively add new knowledge and skills to any model.
Insights from our Fintech lawyers around the world