Managing machines: the governance of artificial intelligence in financial services

A leading UK regulator has directly addressed the governance implications of adopting artificial intelligence and machine learning technologies within the financial services sector. Drawing on the initial results of a major Bank of England and FCA survey of firms’ adoption of the technologies, the speech highlights (1) data usage, (2) the role (and responsibilities) of people and (3) transition risks as three areas of key regulatory focus and matters deserving board attention.

FCA conference on governance in banking

James Proudman, a leading UK regulator, spoke at the FCA conference on the governance of artificial intelligence in the financial services sector. The speech is the most revealing statement yet from a UK regulator on the risks arising from the deployment of artificial intelligence (AI) and machine learning (ML) technologies.

The speech reveals that the Bank of England and the FCA have conducted a survey of leading UK financial services firms to assess their current use of, and future plans to deploy, the technologies. Mr Proudman concludes by making three observations on the challenges for boards and management before suggesting three principles for governance.

Adoption of AI in financial services 

In his speech, Mr Proudman commented that AI and ML technologies could deliver various benefits, including:

  • enhanced anti-money laundering (AML) and know-your-customer (KYC) processes: according to the IMF, two-thirds of banks and insurers are using or experimenting with AI in this area; and
  • more accurate credit assessments, particularly for high-volume lending.

Survey of significant financial services firms

Mr Proudman revealed that the Bank of England and FCA are looking into regulated firms’ adoption of AI and will be publishing the full results of a survey of nearly 200 firms in Q3 2019.

Some indicative results are:

  • the mood around AI adoption is strategic but cautious;
  • many firms are currently in the process of building the infrastructure for large scale AI deployment;
  • 80% of responding firms reported using the technologies in some form;
  • the typical firm expects to deploy close to 20 applications of the technologies within the next four years; and
  • barriers to adoption were mostly seen as internal rather than stemming from regulation.

Three key challenges and three key principles

With the industry looking to rapidly scale its application of AI and ML technologies, the speech identifies three challenges for boards and management and three principles for governance in responding to them. 

1. Data and controls
  • Challenge: Data hygiene is key.  Data that is incomplete, inaccurate or mislabelled (or which embeds bias) is likely to generate problematic outputs (for example poor or biased credit decisions).  There are also ethical, legal and conduct issues associated with the use of personal data.  Similar issues arise in relation to data processing and analysis.
  • Human in the loop? The regulators expect oversight and testing both at the design and the deployment stages. A system that has been heavily tested prior to deployment should not be left to run and make judgements without continued supervision and testing. The speech also refers to a human or other override being required in certain cases but does not provide further guidance. Firms will need to exercise their judgement carefully.
  • Principle: Since AI poses challenges to the proper use of data, boards should attach real priority to the governance of data: what data should be used; how it should be modelled and tested; and whether the outcomes derived from the data are correct.
2. The role of humans

“Firms will need to consider how to allocate individual responsibilities, including under the Senior Managers Regime.”

  • Challenge: AI and ML technologies will not reduce the existing accountability burden on humans. They will, however, challenge the existing approach to allocating accountability, particularly under the Senior Managers Regime, and firms should consider the implications.
  • Responsibility shift: The regulators ask whether responsibility will shift not only towards the board but also to more junior, technical staff, which in the long run may mean less responsibility for front-office middle management.
  • Principle: Boards should continue to focus on the oversight of human incentives and accountabilities within AI- and ML-centric systems.
3. Execution risk at board level
  • Challenge: As the rate of adoption of AI in financial services accelerates, the execution risk that boards have to deal with will also increase. So far, firms have embraced either a piecemeal approach or a more general, firm-wide approach to adoption. The regulators acknowledge the costs of aligning internal processes, systems and controls, and underline the need for firms to ensure that there are senior managers with the appropriate skillset to deal with these new technological and legal challenges.
  • Principle: Boards should reflect on the skills and controls that are necessary to oversee the transition. Many of the challenges raised by this transition can only be brought together at, or near, the top of the organisation.

What’s happening next?

This is a fast-evolving space. Refer to our AI toolkit or contact one of our specialists for guidance on how to conduct an artificial intelligence or machine learning project ethically, safely and lawfully.