International AI standards proposed for securities market intermediaries and asset managers

Numerous regulators have noted that the use of artificial intelligence in financial services has the potential to create new risks or amplify existing risks. In response to those concerns, the International Organization of Securities Commissions has drafted guidance for how AI and machine learning should be overseen. The guidance provides an insight into the approach national regulators are likely to take as AI becomes more commonplace in securities markets.

IOSCO consultation paper on AI

IOSCO, the global standard setter for the securities sector, is consulting on new draft guidance to its members on the use of artificial intelligence and machine learning by market intermediaries and asset managers. Once finalised, the guidance would be non-binding, but IOSCO would encourage its members to take it into account when overseeing the use of AI by regulated firms.

IOSCO’s membership comprises securities regulators from around the world. It aims to promote consistent standards of regulation for securities markets.

Six measures to address AI risks

The draft guidance puts forward six fairly detailed measures for regulators to impose on the firms they supervise to reflect expected standards of conduct. In short, these cover:

  1. having designated and appropriately skilled senior management responsible for the oversight of AI and a documented internal governance framework with clear lines of accountability,
  2. adequate testing and monitoring of AI algorithms throughout their lifecycles,
  3. ensuring staff have adequate skills, expertise and experience to develop and oversee AI controls,
  4. managing firms’ relationships with third-party providers, including having a clear service level agreement with clear performance indicators and sanctions for poor performance,
  5. what level of disclosure firms should provide to customers and regulators about their use of AI, and
  6. how to ensure that the data that the AI relies on is of sufficient quality to prevent biases.

These measures are intended to tackle the perceived problems of AI around resilience, ethics, accountability and transparency, which we explore in our report on Artificial Intelligence in Financial Services: Managing machines in an evolving legal landscape.

IOSCO’s guidance is subject to the principle of proportionality. Notably, it emphasises that the size of the firm is not the only relevant factor in this regard and that regulators should also consider the activity that is being undertaken, how complex and risky it is, and the impact that the technology could have on clients and markets.

AI not yet receiving special treatment?

As well as setting out the draft guidance, the report indicates some of IOSCO’s findings from industry discussions. Many of these findings suggest that, despite broadening and increasingly sophisticated use of AI, firms have not generally made special arrangements for its governance. For example, according to IOSCO, many firms:

  • do not employ specific compliance personnel with the programming background needed to properly challenge and oversee the development of machine learning algorithms
  • use the same development and testing frameworks that they use for traditional algorithms, together with standard system development management processes
  • say that they do not always have the human resources or the right expertise at all levels to fully understand AI and ML algorithms.

One reason for this could be that AI is generally not yet subject to special regulatory treatment. The IOSCO paper makes the point that many jurisdictions have overarching requirements for firms' overall systems and controls but only a few have regulatory requirements that specifically apply to AI- and ML-based algorithms.

How firms are using AI today

According to IOSCO, market intermediaries are already using AI in their advisory and support services, risk management, client identification and monitoring, selection of trading algorithms, and asset/portfolio management. By contrast, asset managers’ use of AI is in its “nascent” stages and is “mainly used to support human decision-making”. This is consistent with the findings of the Bank of England and FCA from a 2019 survey of UK financial institutions.

What happens next?

The consultation on the draft guidance closes on 26 October 2020.

In the UK, the FCA is currently working with the Alan Turing Institute to look at the implications of the financial services industry deploying AI. Meanwhile, the European Commission has released its own guidelines for trustworthy AI and is expected to propose legislation in this area later in 2020.