
AI in Financial Services 4.0

The explosion of generative AI models and developments in agentic AI are driving a new wave of digital transformation. Regulators across the globe are striving to keep up, particularly in the highly regulated financial services sector. In the fourth edition of this report we partner with IBM, whose research demonstrates how AI-first institutions are already outperforming peers.

We highlight both the opportunities and challenges facing businesses deploying AI in core financial markets, offering a broad overview of the complex and fast-changing regulatory landscape, along with practical guidance for managing risk.

AI in Financial Services 4.0: Managing machines in an evolving legal landscape

Get your copy of the guide

DOWNLOAD

Key themes

1. The AI revolution in finance – opportunities and challenges

AI adoption is accelerating in financial services, a data-rich industry with a long history of machine learning. With advanced deployments of AI increasing, and significant process mapping already in place, the industry is primed to lead in generative AI, notably AI agents. But given the highly regulated nature of the sector, where consumer protection and market stability are regulatory priorities, firms must understand and manage the associated legal risks.

2. The global legal landscape for AI

The challenge for lawyers advising businesses is both to understand how existing legal and regulatory frameworks apply to AI and to keep up with new AI-specific law and regulation as it develops at an international, regional and national level.

3. AI and financial services regulation

Important as it is to keep one eye on future developments, financial services firms implementing AI systems today must do so within the constraints of the existing financial services regulatory framework.

4. Reconciling AI with global data protection laws

The interaction between AI and data protection legislation is complex and still not fully resolved. In the EU, regulators are considering fundamental issues such as when personal data taken from the internet can be used to train an AI. They also expect those using AI to apply high governance standards, including completing detailed impact assessments.
5. Regulating AI through competition law – key issues to consider

Antitrust regulators are generally focused on the impact of frontier technologies and developing digital markets on competition, and how businesses use emerging technologies, especially AI, has become a key theme for regulators across the globe. They are not only considering whether competition law is fit for purpose with respect to the impact of AI – for example, algorithmic collusion, hub-and-spoke arrangements, tacit collusion and broader harms – but are also balancing these risks against a desire to fuel financial growth and technological innovation.

6. Practical guidance on managing legal risk in AI

Financial services organisations need to take a holistic, forward-looking approach to anticipating the impact of AI technology on their business. In practical terms, firms need a clear understanding of what they want to achieve in deploying any AI technology and how it will work to achieve that goal, together with a clear plan for identifying and managing the associated technological, reputational and legal risks.

Global overview

The summaries below outline the regulatory approach to AI in financial services across key regions.

UK – Light-touch, industry-led: The UK government previously announced that it intended to adopt a light-touch, industry-led approach, meaning that there would be no specific legislation akin to the EU AI Act. However, in July 2024 the government proposed a set of binding measures on AI and signalled its intention to establish appropriate AI legislation.
Singapore – Mostly self-regulated: The general regulatory approach is to foster AI innovation through the responsible use of AI. The financial regulator has issued AI-friendly guidelines and best practice for regulated firms, while the data regulator has issued guidance on the use of personal data in AI. In May 2024, the Singapore government published a comprehensive Model AI Governance Framework for GenAI.
Hong Kong – Currently no AI-specific laws: There are no statutory laws on the use of AI, although the HKMA and SFC have issued guidance on the use of AI in financial services. The Privacy Commissioner has published the Artificial Intelligence: Model Personal Data Protection Framework. The Government is carrying out a consultation on strengthening the Copyright Ordinance in the face of AI advancements.
International – Efforts at coordination: The OECD has updated its AI principles, to which 49 countries are committed, including the G20 and the EU; the update addresses issues related to the emergence of general-purpose and generative AI. There have been a variety of international co-ordination efforts, including the G7 Hiroshima AI Process, the US-EU Trade and Technology Council and the global AI Safety Summits.
US – Growing body of AI guidance and law: The US is characterised by a complex regulatory landscape which is mainly state-led. There is a range of sector-specific regulator guidance, and various state-level laws address AI-specific activities. The US has also produced a comprehensive AI Risk Management Framework. Colorado and California passed AI legislation in May and September 2024 respectively. Acknowledging this uniquely complex regulatory landscape, the Trump administration published the AI Action Plan in July 2025, a comprehensive effort to solidify United States leadership in artificial intelligence.
Australia – Voluntary framework and technology-neutral regulations: AI is currently regulated through a voluntary scheme and various technology-neutral regulations, such as consumer protection, online safety and privacy laws. However, the federal government has announced voluntary AI safety standards as part of its measures to regulate AI and, in September 2024, released a proposals paper on introducing mandatory guardrails for AI in 'high-risk' settings.
South Korea – Upcoming overarching AI legislation: South Korea's AI law is aimed at supporting innovation rather than imposing restrictions. The Act on Promotion of the AI Industry and Framework for Establishing Trustworthy AI was passed in January 2025 and will take effect on 22 January 2026.
Japan – Potential shift to a prescriptive approach: The Government has issued non-legally-binding guidelines setting out compliance expectations for AI developers, providers and business users. The current Japanese government is pushing for legislation to regulate the use of GenAI and the R&D of AI-related technologies which, if passed, would signify a "hard law" approach.
EU – Extensive AI regulatory regime: The EU AI Act was adopted by the European Parliament in March 2024 and entered into force in August 2024. It is the first comprehensive AI-specific regulation globally and establishes an extensive regulatory and liability regime with extra-territorial reach. It will become fully applicable two years after entry into force, with various provisions taking effect at different intervals.
China – Prescriptive and risk-based: Rules effective as of August 2023 cover all aspects of generative AI. Several further rules regulate other AI-related services and products (e.g. Algorithm Recommendation and Deep Synthesis Services). A comprehensive official draft AI law has been listed in the legislative agenda.

Additional resources

Updated AI Toolkit - Ethical, safe, lawful: A toolkit for artificial intelligence projects
Operational Resilience
EU Digital Package Handbook
AI in financial services: Applying operational resilience rules to AI
Tech Legal Outlook 2025 Mid-Year Update
IBM - The impact of AI
IBM - From AI projects to profits
IBM - The technology behind InstructLab, a low-cost way to customize LLMs

Contacts

