AI projects often call for collaboration between technology experts and an industry partner. This requires careful structuring, particularly in relation to the intellectual property rights generated by the collaboration. Industry partners should also not underestimate the value their know-how and data bring to that collaboration.
Most of the recent advances in AI have been fuelled by data. We look at how to obtain data and the constraints on its use, notably under confidentiality and data protection laws. The use of a sandbox can help in many cases.
The aim of an AI system is to be intelligent: to analyse, decide and potentially act with a degree of independence from its maker. This raises difficult issues, particularly where the algorithm at the heart of the system has no common-sense safety valve.
So what regulatory controls are needed on this decision-making, and what is your liability if you get it wrong?
AI systems can be opaque and behave unpredictably, so supervision is important. You should put systems and controls in place to ensure that live use of AI systems is safe. These might include the use of counterfactuals, sampling and circuit breakers.
AI is used in a variety of financial contexts, from the provision of robo-advice to trading decisions. Financial services firms must comply with both their broader regulatory obligations and the specific controls in areas such as algorithmic trading. The use of AI should also be factored into the firm’s overall risk management framework.
Find out about AI and Innovation at Linklaters.
The world is transforming faster than ever before. Traditional business models across many sectors have either already been disrupted by new and agile players or face imminent disruption.
Whatever industry organisations belong to, they share a common imperative: the need to plan and implement far-reaching digital transformation strategies.