How AI in financial services is regulated in the UK

In its national strategy for artificial intelligence, released in September 2021, the UK government pointed out that it has not (so far) introduced blanket AI-specific regulation, preferring instead to take a sector-led approach. This position might change once the UK’s Office for AI releases its white paper on governing and regulating AI in early 2022. In the meantime, financial services firms deploying AI need to do so within the confines of existing regulation and supervisory approaches.

High-level principles

Regulators, such as the UK’s Financial Conduct Authority (FCA) and the Bank of England, often say that they take a “technology-neutral” approach. This means that they do not prescribe or proscribe any technology for the firms they oversee, or adjust regulatory treatment based solely on the underlying form of technology. So far, this has meant that there is no specific restriction on the use of AI in financial services. But this does not mean that using AI is unregulated.

Deploying AI in the financial sector requires careful consideration of the existing regulatory framework. When rolling out AI, firms will need to apply the high-level principles for businesses and fundamental rules set by the FCA and the Prudential Regulation Authority (PRA). For example, the FCA requires firms to treat customers fairly and to communicate with them in a way which is clear, fair and not misleading. This is relevant to how transparent firms are about how they apply AI in their business, especially where it could negatively affect customers (for example, when assessing creditworthiness).

Other high-level requirements oblige firms to conduct their business with due skill, care and diligence and to have adequate risk management systems. The novel features of AI pose new or amplified risks for businesses. Under some AI models, particularly those using more advanced techniques, outputs may not be explainable as a function of their inputs.

Similarly, tools that are heavily reliant on training data may require new processes to manage the quality of that data. Risk frameworks should adapt to ensure AI-related risks are managed effectively.
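By way of illustration only, the kind of automated check a firm might run over training data before a model is retrained could look like the sketch below. The field names, thresholds and checks are assumptions, not a prescribed or regulator-endorsed control set.

```python
# Illustrative sketch only: the field names, thresholds and checks are
# assumptions, not a prescribed or regulator-endorsed control set.
import math

def check_training_data(records, required_fields, max_missing_rate=0.02):
    """Run basic quality checks on training records before they are
    used to (re)train a model. Returns a list of issues found."""
    issues = []
    if not records:
        return ["dataset is empty"]

    for field in required_fields:
        missing = sum(
            1 for r in records
            if r.get(field) is None
            or (isinstance(r.get(field), float) and math.isnan(r[field]))
        )
        rate = missing / len(records)
        if rate > max_missing_rate:
            issues.append(
                f"field '{field}': {rate:.1%} missing exceeds "
                f"threshold of {max_missing_rate:.1%}"
            )

    # Example plausibility check on an assumed numeric field.
    ages = [r["age"] for r in records if isinstance(r.get("age"), (int, float))]
    if ages and (min(ages) < 18 or max(ages) > 120):
        issues.append("field 'age': values outside plausible range 18-120")

    return issues


if __name__ == "__main__":
    sample = [
        {"age": 34, "income": 42000},
        {"age": None, "income": 51000},
        {"age": 29, "income": None},
    ]
    for issue in check_training_data(sample, ["age", "income"]):
        print("DATA QUALITY:", issue)
```

In practice, checks of this kind would sit alongside wider data governance controls rather than replace them.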

Activity-specific regulation

More specific regulation may also apply. For example, firms providing investment advice must ensure that it is suitable for the customer. This remains the case when the advice is informed by AI or delivered through automated means, known as robo-advice. Another example is the detailed rules on algorithmic trading. Any use of AI systems to make high-frequency trading decisions would need to comply with these rules, which are designed to avoid the risks of rapid and significant market distortion.

Ensuring compliance with these requirements often necessitates a degree of human intervention. For robo-advice, this is likely to involve a human signing off outputs from the algorithm before they are delivered as advice to the customer, which is known as having a “human-in-the-loop”.
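As a rough sketch of what such a gate might look like, the snippet below routes every algorithmic recommendation through a human approval step before anything reaches the customer. The data structure, review step and console prompt are purely illustrative assumptions, not a description of any firm's robo-advice process.

```python
# Illustrative sketch only: the Recommendation structure and review step are
# assumptions about how a "human-in-the-loop" gate might be wired, not a
# description of any particular firm's robo-advice process.
from dataclasses import dataclass

@dataclass
class Recommendation:
    customer_id: str
    product: str
    rationale: str

def generate_recommendation(customer_id: str) -> Recommendation:
    # Stand-in for the output of an advice algorithm.
    return Recommendation(customer_id, "balanced_fund", "matches stated risk appetite")

def human_review(rec: Recommendation) -> bool:
    # In practice this would route to an adviser's review queue;
    # here we simply prompt on the console.
    answer = input(f"Approve {rec.product} for {rec.customer_id}? "
                   f"({rec.rationale}) [y/n]: ")
    return answer.strip().lower() == "y"

def deliver_advice(customer_id: str) -> None:
    rec = generate_recommendation(customer_id)
    if human_review(rec):          # nothing reaches the customer without sign-off
        print(f"Advice sent to {customer_id}: {rec.product}")
    else:
        print(f"Recommendation for {customer_id} rejected and logged for review")

if __name__ == "__main__":
    deliver_advice("C-1001")
```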

Human sign-off on every decision would not, however, work for high-frequency trading as it would slow down the process in a way that is not commercially viable. Another option might be to take a “human-on-the-loop” approach which provides for human intervention during the design phase and in the monitoring of the system’s operations.
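A simplified, purely illustrative sketch of a "human-on-the-loop" arrangement is shown below: the algorithm trades autonomously, but a pre-set guardrail halts it and escalates to a supervisor when a limit is breached. The thresholds and the kill-switch mechanism are assumptions, not anything drawn from the algorithmic trading rules themselves.

```python
# Illustrative sketch only: the thresholds and the kill-switch mechanism are
# assumptions about what "human-on-the-loop" monitoring might look like,
# not a description of the algorithmic trading rules themselves.
import threading

class TradingMonitor:
    """Lets the algorithm trade autonomously while a human (or an automated
    guardrail acting on the human's behalf) can halt it at any time."""

    def __init__(self, max_orders_per_second: float):
        self.max_orders_per_second = max_orders_per_second
        self._halted = threading.Event()

    def record_order_rate(self, orders_per_second: float) -> None:
        # Breaching a pre-set limit pauses trading pending human review.
        if orders_per_second > self.max_orders_per_second:
            self.halt(f"order rate {orders_per_second}/s exceeded limit")

    def halt(self, reason: str) -> None:
        self._halted.set()
        print(f"TRADING HALTED: {reason} - escalate to supervisor")

    def trading_allowed(self) -> bool:
        return not self._halted.is_set()


if __name__ == "__main__":
    monitor = TradingMonitor(max_orders_per_second=500)
    monitor.record_order_rate(120)    # within limits, trading continues
    print("allowed:", monitor.trading_allowed())
    monitor.record_order_rate(750)    # breach triggers the kill switch
    print("allowed:", monitor.trading_allowed())
```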

Some applications of AI may simply be incompatible with the regulatory requirements that apply. Unlike traditional rules-based algorithms, machine learning algorithms do not produce a pre-determined result but are instead considered successful if they achieve a certain degree of accuracy. This feature may be incompatible with a requirement to achieve an absolute standard of compliance (such as a requirement to maintain a minimum level of regulatory capital).

Likewise, some algorithms may deliver different outputs in response to the same inputs over time (as the model is retrained with new data). This may be incompatible with a need for consistency.
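One way a firm might surface such changes, sketched below purely for illustration, is to compare successive model versions against a fixed set of benchmark cases and flag any decisions that change after retraining. The stand-in models and cases are assumptions, not real scoring logic.

```python
# Illustrative sketch only: comparing two model versions on a fixed set of
# benchmark cases to flag decision changes after retraining. The models here
# are stand-in functions; a real comparison would use the firm's own models.
def old_model(income: float, debt: float) -> str:
    return "approve" if income - debt > 10_000 else "decline"

def new_model(income: float, debt: float) -> str:
    # Retrained model with a slightly different learned boundary.
    return "approve" if income - 1.1 * debt > 9_000 else "decline"

BENCHMARK_CASES = [
    {"income": 45_000, "debt": 20_000},   # unchanged outcome
    {"income": 14_800, "debt": 5_000},    # decline -> approve after retraining
    {"income": 30_500, "debt": 20_000},   # approve -> decline after retraining
]

def consistency_report(model_a, model_b, cases):
    """List the benchmark cases on which the two model versions disagree."""
    changed = []
    for case in cases:
        a, b = model_a(**case), model_b(**case)
        if a != b:
            changed.append((case, a, b))
    return changed

if __name__ == "__main__":
    for case, before, after in consistency_report(old_model, new_model, BENCHMARK_CASES):
        print(f"{case}: was '{before}', now '{after}' - review before release")
```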

Reliance on third parties

The rapid spread of AI in the financial sector has been fuelled in part by the emergence of a range of new AI-related tools and services offered by third parties. Many firms currently applying AI solutions rely on third parties to varying degrees. At the extreme end, some major cloud providers now offer “AI as a Service” packages, enabling organisations to upload and manage data without having to invest in developing their own infrastructure. However, even where firms develop their own tools in-house, they often rely on third parties for specific components, notably software and training data.

The use of third parties is not unique to the field of AI, but heavy reliance on (in some cases, unregulated) third parties can raise several challenges. Firms must ensure that their risk management processes are effective, that they understand the impact of any outsourcing on their resilience, and that they have contractually managed liability appropriately.

A June 2021 report from the FCA/Alan Turing Institute notes the increasing complexity of technology supply chains and the challenge this poses for responsible AI adoption. Firms will want to ensure they have effective communication channels across the supply chain so that they receive the information they need.

Firms should also be prepared for disruption to their AI systems. For example, operational resilience plans should include failover procedures to cover outages. More generally, there should be some oversight of how the AI systems are working over time and arrangements for humans to intervene when they no longer work as expected. The more dynamic the AI, the greater the risk that it exceeds its original parameters.
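By way of illustration only, a failover of this kind might degrade to a simpler, pre-approved rules-based path when the AI component is unavailable. The service call, exception handling and fallback rule below are hypothetical.

```python
# Illustrative sketch only: the service call, timeout and fallback rule are
# hypothetical; the point is that an outage of an AI component should degrade
# to a documented fallback rather than leave the process without an outcome.
def call_ai_scoring_service(application: dict) -> float:
    # Stand-in for a call to an external AI service; raise to simulate an outage.
    raise TimeoutError("AI scoring service unavailable")

def rules_based_fallback(application: dict) -> float:
    # A simpler, pre-approved scoring rule used while the AI service is down.
    return 0.7 if application["income"] > 3 * application["requested_credit"] else 0.3

def score_application(application: dict) -> tuple[float, str]:
    try:
        return call_ai_scoring_service(application), "ai_model"
    except (TimeoutError, ConnectionError):
        # The outage is logged and the documented fallback path is used instead.
        print("WARNING: AI service unavailable, using rules-based fallback")
        return rules_based_fallback(application), "fallback_rules"

if __name__ == "__main__":
    score, source = score_application({"income": 60_000, "requested_credit": 15_000})
    print(f"score={score} (source: {source})")
```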

Individual accountability

As the rate of AI adoption in financial services accelerates, how to manage the technology is increasingly a boardroom question. In this context, senior managers of financial services firms must be mindful of their individual liability. For example, the UK Senior Managers Regime requires senior managers to take reasonable steps to avoid a breach in the parts of the business for which they are responsible. Senior managers will therefore take a personal interest in AI where it is deployed and driving decision-making within the scope of their responsibility.

The opacity of the technology complicates how senior managers can evidence they have taken “reasonable steps” to ensure that the AI systems are compliant, both operationally and in terms of their output. Processes of reverse-engineering can sometimes be used to draw conclusions about the properties of so-called “black-box” algorithms, but these will not provide complete transparency.
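For illustration, one crude form of such reverse-engineering is to probe a black-box scoring function with perturbed inputs and observe how the output moves. The stand-in model and features below are assumptions; real explainability work would use more rigorous techniques (surrogate models, feature-attribution methods) and, as noted above, would still fall short of complete transparency.

```python
# Illustrative sketch only: a crude one-at-a-time sensitivity probe of a
# black-box scoring function. The black box here is a stand-in; real
# explainability work would use more rigorous techniques and still may
# not give complete transparency.
def black_box_score(features: dict) -> float:
    # Stand-in for an opaque model the firm cannot inspect directly.
    return 0.4 * features["income"] / 50_000 - 0.3 * features["debt"] / 20_000 + 0.2

def probe_sensitivity(model, baseline: dict, bump: float = 0.10) -> dict:
    """Estimate how much the score moves when each input is nudged by `bump`
    (10% by default), holding the others fixed."""
    base_score = model(baseline)
    sensitivities = {}
    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * (1 + bump)})
        sensitivities[name] = model(perturbed) - base_score
    return sensitivities

if __name__ == "__main__":
    applicant = {"income": 40_000, "debt": 12_000}
    for feature, delta in probe_sensitivity(black_box_score, applicant).items():
        print(f"{feature}: score change of {delta:+.3f} for a 10% increase")
```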

The regulatory approach on this is still emerging but, as a minimum, firms will need to allocate responsibility for AI-related risks appropriately within the organisation.

Looking ahead

The FCA and the Bank of England have convened an AI Public-Private Forum (AIPPF), one of the aims of which is to gather views on whether regulatory guidance could clarify how existing requirements apply in the context of AI. The final meeting of the AIPPF is due before the end of 2021. The next steps are uncertain, but it is likely that the regulators will take some action in 2022, for example by setting out examples of good and poor practice in deploying AI.

If the Office for AI does propose taking a blanket approach to regulating AI, the UK would not be the first jurisdiction to take this path. For example, the European Commission has proposed introducing cross-sector EU legislation on AI.

Among other things, this would designate the use of AI for assessing creditworthiness as “high risk” meaning that it would be subject to more onerous standards than other uses. The proposals have a long way to go before they become law and are not expected to start to apply before 2024.

This article (subscription only) was originally published by Thomson Reuters Regulatory Intelligence.

Read our updated report on AI in financial services for more detail.