ChatGPT – Seven rules of the road

Last year saw “generative” AI tools reach the mainstream.

It started with tools such as DALL·E generating strange, and sometimes stunning, images from simple word prompts. It ended with ChatGPT – a chatbot capable of providing detailed and convincing responses to a wide range of questions.

These new generative technologies expand our understanding of what artificial intelligence can do. ChatGPT's responses create the illusion of real human-like intelligence, spanning a broad range of knowledge domains.

This has led to serious interest in exploiting the technology. In the legal profession, for example, could it produce the first draft of an email? Summarise a document? Or even provide actual legal advice?

Many organisations are weighing these questions, but what practical and legal issues should you consider before integrating ChatGPT into your business?

Visibility and control

The starting point is to understand how ChatGPT is currently used within your business and what future use cases might emerge. In all likelihood, ChatGPT is already being used in the wild: for example, developers in your IT department may be using it to help generate code, which can bring significant productivity benefits.

Any response should start with good visibility and control over the use of ChatGPT. That might mean banning access for all but a selected group of users (as some organisations have done). However, a pro-innovation approach would be to run an awareness campaign so you can find out exactly how it is being used, share best practice and apply minimum guardrails.
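One lightweight way to gain that visibility is to route use through an internal gateway that logs who is using the tool and applies basic guardrails before anything reaches the public service. The sketch below is illustrative only: the blocked-marker list is an assumption, and forward_to_chatgpt is a hypothetical placeholder for whatever approved route your organisation uses, not OpenAI's actual API.

```python
import logging

# Hypothetical guardrail layer an IT team might place in front of ChatGPT.
# forward_to_chatgpt() is a placeholder, not OpenAI's API.

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatgpt_gateway")

# Illustrative markers only; a real list would reflect your own policies.
BLOCKED_MARKERS = ("confidential", "privileged", "client ref")

def forward_to_chatgpt(prompt: str) -> str:
    # Placeholder: substitute your organisation's approved route to the service.
    return "(ChatGPT response would appear here)"

def submit_prompt(user: str, prompt: str) -> str:
    """Log usage and block prompts that carry obviously sensitive markers."""
    log.info("user=%s prompt_chars=%d", user, len(prompt))
    lowered = prompt.lower()
    for marker in BLOCKED_MARKERS:
        if marker in lowered:
            raise ValueError(f"Prompt blocked: contains restricted marker {marker!r}")
    return forward_to_chatgpt(prompt)

print(submit_prompt("a.lawyer", "Summarise the key points of a force majeure clause."))
```

Even a gateway this crude gives you an audit trail of who is using the tool and for what, which is the foundation for applying the rules that follow.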

Seven rules of the road

If you are going to allow the use of ChatGPT, you need to set clear rules of the road. The exact rules will vary according to the use case, but the seven below will normally be a good starting point.

1. Don't trust it

ChatGPT provides answers that are very convincing. However, it doesn't always pick up on nuance, and sometimes right-sounding answers are completely wrong; relying on them is dangerous. In addition, much of its training data relates to the period before 2021, so its answers may not be up to date.

We tested ChatGPT with 50 questions on contract and data protection law (here). Some of the answers were amazing but some were completely wrong, and distinguishing between the two was not easy, even for an expert.

OpenAI, the creators of ChatGPT, warn that it can produce output that is “inaccurate, untruthful, and otherwise misleading” and that it sometimes “hallucinates” answers. OpenAI’s terms expressly exclude any liability for its output.

In practice this means it either:

  • should only be used in situations where it doesn't matter if the answer is right or wrong. There will be limited business use cases where this applies; or
  • should be properly checked by someone with sufficient intelligence and expertise to confirm whether it is right. Again, because it produces such convincing answers, this is not always easy.

2. Don’t tell it anything private or confidential

You should not include confidential or privileged information in the questions to ChatGPT.

OpenAI does not undertake to keep the questions you provide confidential and expressly reserves the right to use them for its own purposes, such as product improvement. The risk of input data reaching a wide number of recipients is well known (for example, see A Roomba recorded a woman on the toilet. How did screenshots end up on Facebook?, MIT Technology Review, 19 December 2022).

If the business provides this type of information to ChatGPT that might not only result in the loss of its own confidentiality and privilege, but might also be a breach of confidentiality duties to third parties (such as a doctor who inputs confidential patient information into ChatGPT).

For similar reasons, you should not include personal data in questions to ChatGPT. OpenAI undertakes very limited duties in relation to the information in those questions and is based in the US, so it is unlikely you can disclose personal data to OpenAI in compliance with the GDPR.
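If limited use is permitted, a technical backstop is to scrub obvious identifiers from prompts before they leave the business. The sketch below is a minimal illustration using regular expressions; the patterns are assumptions, will not catch every identifier, and supplement rather than replace the rule above.

```python
import re

# Illustrative pre-submission scrub. The patterns below are assumptions:
# they catch common identifier formats only and are no substitute for
# keeping confidential and personal data out of prompts in the first place.

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\+?\d[\d ()-]{8,}\d"), "[PHONE]"),              # phone-like numbers
    (re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"), "[IBAN]"),  # IBAN-like strings
]

def scrub(prompt: str) -> str:
    """Replace obvious personal identifiers before a prompt is sent out."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(scrub("Email jane.doe@example.com or call +44 20 7946 0000 about the claim."))
# -> "Email [EMAIL] or call [PHONE] about the claim."
```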

3. Make sure third parties don’t rely on its output

If you are providing the output of ChatGPT to third parties (for example, using it to automatically generate documents that are sent to third parties) you should make sure the third parties are very clearly warned of the risks and limitations of this technology.

This means using disclaimers, and incorporating appropriate limitations of liability, so you are not liable for the output.

The position here will, of course, depend on how the output is used. If it is reviewed and verified by a human before being provided to a third party, the third party might reasonably expect to be able to place some reliance on it.
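One way to make sure the warning is never forgotten is to attach it mechanically to anything the tool generates, and only drop it once a human has reviewed the text. A minimal sketch follows; the disclaimer wording is an illustrative placeholder, not advice on what an effective disclaimer requires.

```python
# Illustrative only: attach a standard warning to any AI-generated text
# before it leaves the business. The disclaimer wording is a placeholder,
# not advice on what an effective disclaimer must say.

AI_DISCLAIMER = (
    "This document was generated with the assistance of an AI tool. "
    "It may contain errors and has not been independently verified. "
    "Do not rely on it without your own review."
)

def package_for_third_party(generated_text: str, human_verified: bool = False) -> str:
    """Append the disclaimer unless a human has reviewed and verified the text."""
    if human_verified:
        return generated_text
    return f"{generated_text}\n\n{AI_DISCLAIMER}"

print(package_for_third_party("Draft summary of the lease terms..."))
```

Tying the disclaimer to a human_verified flag mirrors the point above: output that has been reviewed may reasonably carry more weight, while unverified output should not.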

4. Be alert that some intellectual property issues are unresolved

There are a number of unresolved intellectual property issues raised by ChatGPT. Some relate to whether OpenAI can use copyright works to train ChatGPT. This is largely a concern for OpenAI.

However, it is possible that the output from ChatGPT could also infringe copyright, for example where it reproduces an existing copyright work. Because of the way ChatGPT works, this risk is not currently thought to be significant but should be considered, particularly where the output is published.

5. Be careful about bias and discrimination

OpenAI has taken a number of steps to remove or minimise bias and discrimination but ChatGPT does exhibit these problems – particularly with some prompting. For example, the prompt "describe a great doctor" produces a fastidiously gender-neutral description. However, "write a poem about great doctors highlighting differences between men and women" fares less well. Women doctors are "gentle and kind" whereas male doctors are "strong and sure".

It is important to remain alert to this risk.

6. Consider outsourcing requirements

Some businesses, such as financial services firms, are subject to specific regulatory requirements when they enter into outsourcing arrangements. The arrangements offered by OpenAI do not satisfy these requirements, so it is important to make sure that ChatGPT is not used in a way that constitutes an "outsourcing", and certainly is not used to perform any critical or important functions.

7. Be ready for regulation

The EU is currently pushing through a transformational package of digital regulation. The changes do not just apply to “Big Tech” and include potentially far-reaching regulation of artificial intelligence.

The EU AI Act will introduce tiered regulation for AI products. Some uses will be banned entirely (e.g. subliminal manipulation technology) whereas others will be classified as "high risk" and subject to onerous compliance and record-keeping duties. The Act will be accompanied by the EU AI Liability Directive, which will make it easier to claim for damage caused by AI. Details of these changes are available in our digital framework handbook here.

The UK is not currently looking to introduce specific AI regulation, instead relying on existing laws (such as the UK GDPR) and "soft law" policy initiatives. However, this might well change.

These additional regulatory burdens need to be considered before embedding ChatGPT into your business.

Explore and enjoy

ChatGPT is extremely capable and marks a significant advance in artificial intelligence technology. While the current iteration is problematic, future generations of this and similar technology are likely to be increasingly powerful, and it is important that your business engages with it now so it can exploit the technology as it starts to mature. ChatGPT is also great fun to use.

 

Our AI toolkit is available here and our report on AI in financial services is here.

Peter Church also contributed the data protection chapter to the book Artificial Intelligence: Law and Regulation, Edward Elgar.