FCA and Alan Turing Institute to explore practical application of AI transparency framework

In 2019, the FCA and Alan Turing Institute embarked on a year-long project to look at the use of AI in financial services. Now, they are planning to road-test a framework for thinking about transparency needs in the context of AI in financial markets.

Transparency: the key to ethical AI

As discussed in our thought leadership report, Artificial Intelligence in Financial Services: Managing machines in an evolving legal landscape, transparency is a key principle in delivering ethical AI – as well as an area of regulatory focus.

Transparency in the use of AI solutions is a major consideration for firms, supporting other regulatory objectives such as fairness and accountability (fairness must be demonstrated, and accountability accurately allocated – both of which depend on transparency). It matters not just as an ethical ideal, but also as a means to deliver regulatory compliance and engender customer trust (and, ultimately, deliver successful products).

Enabling “beneficial innovation” through transparency

In a recent FCA blog post, the FCA and Alan Turing Institute (ATI) emphasised the role of transparency – put simply, users having access to relevant information about an AI system – as an “enabler of beneficial innovation”. They also described transparency as a “lens for reflecting on relevant ethical and regulatory issues and thinking about strategies to address them”.

In particular, transparency is cited as key to:

  • demonstrating trustworthiness and encouraging widespread acceptance of AI systems;
  • enabling customers to understand and challenge the basis of certain outcomes (the FCA and ATI cite the example of a sub-optimal loan decision based on an algorithmic credit assessment informed by incorrect information); and
  • allowing customers to make informed choices about their behaviour in full view of the factors that determine outcomes (for example, knowing how credit scores are affected by late / missing payments, or the criteria that influence certain insurance pricing).

A framework for AI transparency

The FCA and ATI have set out a proposed high-level framework for thinking about transparency needs in respect of AI solutions. The practical application of this framework is expected to be workshopped with industry and civil society stakeholders.

First, when it comes to establishing what counts as “relevant” information about an AI system, they suggest that it may be helpful to split such information into two categories:

  • model-related information – the inner workings of the AI model, i.e. the model code and other details that provide visibility into the relationships between model inputs and outputs; and
  • process-related information – information about the process of developing and using the AI system itself (this may include information on any phase of the system’s lifecycle).

Transparency in practice: the transparency matrix

As well as looking at who should have access to what information – with distinctions made between the requirements for staff, clients, regulators and other parties (e.g. shareholders) – the FCA suggests that decision-makers develop a “transparency matrix”. 

In short, this means that firms go beyond merely asking “what information should be accessible?” (and deploying a one-size-fits-all approach across stakeholders in response). Instead, different stakeholder types are considered independently, and decisions on the information to be made accessible to them are tailored based on the following factors:

  • rationale-dependence – the reasons that drive certain stakeholders’ interest in transparency;
  • stakeholder-specificity – how decisions on the information to be provided may differ between stakeholder types; and 
  • use case-dependence – how such decisions may also hinge on the specific use case.
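Purely by way of illustration, a transparency matrix along these lines could be recorded as a simple structured dataset: one entry per stakeholder type and use case, capturing the model- and process-related information to be made accessible and the rationale for doing so. The sketch below is a hypothetical structure of our own devising (the stakeholder types, use cases and field names are assumptions, not taken from the FCA/ATI blog post):

```python
# Hypothetical sketch of a "transparency matrix": one entry per
# (stakeholder type, use case), recording which model-related and
# process-related information is made accessible, and why.
# All names and example entries are illustrative assumptions only.
from dataclasses import dataclass, field


@dataclass
class TransparencyEntry:
    stakeholder: str                    # who receives the information
    use_case: str                       # the AI use case it applies to
    model_related: list = field(default_factory=list)    # e.g. key decision factors
    process_related: list = field(default_factory=list)  # e.g. governance records
    rationale: str = ""                 # why this stakeholder needs it


def build_matrix(entries):
    """Index entries by (stakeholder, use_case) for quick lookup."""
    return {(e.stakeholder, e.use_case): e for e in entries}


entries = [
    TransparencyEntry(
        stakeholder="customer",
        use_case="credit scoring",
        model_related=["key factors behind the decision"],
        process_related=["how to challenge an outcome"],
        rationale="understand and contest outcomes",
    ),
    TransparencyEntry(
        stakeholder="regulator",
        use_case="credit scoring",
        model_related=["model logic and validation metrics"],
        process_related=["development and governance records"],
        rationale="supervisory oversight",
    ),
]

matrix = build_matrix(entries)
print(matrix[("customer", "credit scoring")].rationale)
# -> understand and contest outcomes
```

A structure like this makes the stakeholder-specificity and use case-dependence of the framework explicit: the same model can carry different transparency obligations for a customer than for a regulator, and the rationale field forces the firm to record why each disclosure is made.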

Using a systematic framework such as this will undoubtedly help firms identify – at a more granular level – myriad transparency needs and respond to them effectively. It may also help firms overcome the “explainability problem” – that explanations are not a natural by-product of complex AI algorithms – as discussed in our recent webinar, Managing machines: AI regulation in finance (available to our clients via the Linklaters Knowledge Portal).

The bigger picture

The FCA and ATI collaboration should be viewed in the context of complementary initiatives that are delivering guidance for the ethical use of AI at a national and global level. Some of the other key initiatives that will be relevant to financial services-focused work in the UK are summarised below.