Artificial Intelligence in Financial Services 2.0

Managing machines in an evolving legal landscape

We are seeing increased engagement from regulators across the globe with respect to artificial intelligence, including in the financial services arena, where AI is already having a profound impact. In this updated report, originally published in September 2019, we explore developments in both the existing and evolving regulation of AI, and the key legal issues that arise for businesses deploying this technology. Throughout, we focus on the financial services sector – a key component of modern economies in which AI use is now widespread.

AI – a boardroom issue

AI technologies present firms with a wide range of new opportunities, both in cost and risk reduction and in revenue generation. Firms may need to embrace these opportunities in order to remain competitive. However, the same technologies give rise to considerable new challenges, which warrant careful consideration at board level. In particular:

Heightened risk of failure:

Depending on the precise area of deployment, the consequences of a financial firm’s AI system going awry could be catastrophic: consider, for example, widespread consumer discrimination, engagement in market abuse or a failure to meet regulatory capital requirements. The risk of such failures is heightened by the novel features of the technology (as discussed above), which can often result in problems going undetected.


Evolving regulation:

Regulatory frameworks are continuing to evolve, both in response to the novel features of machine learning discussed above and as AI technologies themselves develop. Approaches to regulation differ across regions, and there are various soft-law standards to consider. We discuss this further in Chapter 2.


Compliance with existing financial regulation:

In most jurisdictions, existing financial regulation already imposes high standards in areas such as governance, risk management and controls, and outsourcing. The novel features of AI applications can test the adequacy of existing systems and processes in meeting these compliance requirements. In some cases, the use of AI may simply be incompatible with existing regulatory requirements. We discuss this further in Chapter 3.


Senior manager accountability:

In many jurisdictions, senior managers will bear individual regulatory accountability if AI is deployed within their areas of responsibility, even if they are unaware of the deployment. It is therefore incumbent on senior managers to be proactive in managing these risks. We discuss this further in Chapter 3.


Cross-sectoral regulation:

Depending on the application, various cross-sectoral laws and regulations will come into play. We consider issues relating to data protection law in Chapter 4 and competition law in Chapter 5. 


Liability under contract, common and civil law, and product liability regimes:

There are fundamental legal questions to consider as to who will be liable if and when things go wrong, as we explore in Chapter 6.


WATCH: Webinar in collaboration with techUK, featuring speakers from the Bank of England and Starling Bank

On Thursday 18th November we hosted a webinar on this topic in collaboration with techUK. We were joined by Ollie Thew, a Senior Fintech Specialist at the Bank of England, and Harriet Rees, Head of Data Science at Starling Bank, to discuss:

  • Existing and evolving financial and data protection regulation, and the compliance requirements that would apply to AI’s use in finance
  • Practical guidance to address some of the key ethical challenges relating to the use of AI in the financial services sector
  • The potential impact of the EU’s proposal for AI-specific, cross-sector regulation on the financial services sector, and how the UK might respond in terms of aligning or diverging

Watch the session recording here.

Additional resources

  • AI Toolkit
  • Building the UK financial sector’s operational resilience
  • From Principles to Practice: Use Cases for Implementing Responsible Artificial Intelligence in Financial Services
  • Regulating the Digital Economy Series

Contacts

