AI: Check your cites? Read the cases…the Ayinde judgment

Artificial intelligence tools are playing an increasingly important role in the delivery of legal services. However, these tools create considerable risks. We look at some hard statistics on the propensity of AI tools to make up cases and examine the Divisional Court’s judgment in Ayinde as an example of why this can be a real problem in practice.

Do AI tools make up cases?

The simple answer to this is yes. Large Language Models are stochastic parrots, trained to generate text based on probabilistic predictions. This process of regurgitating and reinventing the internet (and other training text) creates an inherent risk that case citations will not be reproduced accurately or will simply be fabricated.

The LinksAI English law benchmark gives an idea of the prevalence of this issue. The benchmark was created to test the ability of Large Language Models to answer legal questions. The first benchmark was released in October 2023 and the second benchmark in February 2025.

One of the things we measured was the accuracy of citations. Each answer was given a score out of three for citations, and an answer containing a fictional citation was automatically given a score of zero. The graph below shows how often the LLMs we tested were awarded a score of zero on this basis.

[Graph: how often each LLM tested was awarded a zero citation score for including a fictitious case]

These numbers are broadly consistent with the Ayinde judgment below where, in one of the situations considered by the Divisional Court, the claimant listed 45 cases, of which 18 did not exist.

Having said that, the most recent LLMs we tested (Gemini and o1) are improving and included fictitious cases in only 9% of the questions we asked. The accuracy of these tools should improve further over time, and newer tools offer research capabilities to help check the citations provided. However, there is clearly still a problem.
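The scoring convention described above can also be written down as a short sketch. This is purely illustrative and is not the benchmark's actual code: the function name, the data structure and the way partial marks are awarded are all assumptions; only the rule that a single fictitious citation scores zero comes from the description above.

```python
# Illustrative sketch of the citation-scoring rule described above.
# Not the LinksAI benchmark's actual implementation: the data structure and
# the way marks out of three are awarded are assumptions for illustration.

def citation_score(citations: list[dict]) -> int:
    """Return a citation score out of 3 for a single answer."""
    # Any fabricated case automatically zeroes the citation score,
    # mirroring the rule applied in the benchmark.
    if any(not c["exists"] for c in citations):
        return 0
    # How the remaining marks are awarded is assumed here:
    # up to three, based on how many citations are accurate.
    accurate = sum(1 for c in citations if c["accurate"])
    return min(3, accurate)


# Example: a single fictitious case drags the whole answer to zero.
answer = [
    {"case": "[2020] EWHC 1 (Ch)", "exists": True, "accurate": True},
    {"case": "[2024] UKSC 99", "exists": False, "accurate": False},
]
print(citation_score(answer))  # -> 0
```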

Why does this matter? The Ayinde judgment

The Divisional Court’s judgment in Ayinde v London Borough of Haringey and Al-Haroun v Qatar National Bank QPSC [2025] EWHC 1383 (Admin) is a textbook example of why this really matters in practice. The court indicated that including fictitious citations in pleadings is an extremely serious matter that might well lead to:

  • Contempt of court.
  • Referral to a regulator.
  • Strike out and wasted costs sanctions.
  • Public admonishment.

The Divisional Court suggests that, at a minimum, the inclusion of fake citations is likely to result in a referral to a regulator and a wasted costs award. While a public admonishment could be issued instead, given the risks to the administration of justice, admonishment alone is unlikely to be a sufficient response.

Perhaps more significant are the pointed remarks directed at those managing law firms or chambers. The court states: “practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services…For the future… the profession can expect the court to inquire whether those leadership responsibilities have been fulfilled”.

What happens if you do get caught out?

The court’s strong reaction is partly a product of the extremely poor handling of one of the cases before it, which related to judicial review proceedings against the London Borough of Haringey. The grounds for judicial review were settled by counsel (Ms F) and contained no fewer than five fictitious cases.

When Haringey’s solicitors identified the issue, the correct response would have been to immediately investigate and provide a full and comprehensive apology. Instead, Ms F drafted an extraordinary response variously stating that the claimants did “not see the point you are making by correlating any errors in citations to the issues addressed in the request for judicial review”. The response went on: “We hope that you are not raising these errors as technicalities to avoid undertaking really serious legal research …It appears to us improper to barter our client's legal position for cosmetic errors”.

When the matter came before Ritchie J ([2025] EWHC 1040 (Admin)), Ms F denied using artificial intelligence tools claiming that “she kept a box of copies of cases, and she kept a paper and digital list of cases with their ratios. She said that she had "dragged and dropped" the reference to [the fictitious cases] from that list into the grounds for judicial review.”

When the matter came before the Divisional Court, Ms F continued to deny using artificial intelligence tools, though claimed that she "may also have carried out searches on Google or Safari" and that she may have taken account of artificial intelligence generated summaries of the results.

The Divisional Court was wholly unimpressed, concluding that Ms F either deliberately included fake citations in her written work or used generative artificial intelligence tools and lied about it. Both possibilities met the threshold for initiating contempt proceedings.

In the end, the court decided not to initiate contempt proceedings, mainly because Ms F was extremely junior. It instead referred Ms F to the Bar Standards Board and referred the instructing solicitor to the Solicitors Regulation Authority (on top of Ritchie J’s earlier award of wasted costs).

However, the Divisional Court ends with a severe warning: its “decision not to initiate contempt proceedings in respect of Ms [F] is not a precedent. Lawyers who do not comply with their professional obligations in this respect risk severe sanction”.

What does this mean for me?

There are a number of immediate practical lessons:

  • Use as a sword. You should check not just your own citations but also those of your counterparty. The clever point they have made may well be entirely without foundation.
  • Check your cites? Or read the cases. It is also clearly critical to your continued career in the law that you carry out a diligent and systematic check of all of the cases, legislation and quotations produced by any AI tools you use (a sensible first step is to pull out every citation for manual verification; see the sketch after this list). This is easier said than done. Our experience with the LinksAI benchmark is that the responses from these tools sound polished and authoritative, so it is all too easy to be lulled into a false sense of security. Having said that, is a simple mechanical citation check enough? Before deploying a case in argument you should actually read it. The details of the case may lead to new arguments, or show that it is irrelevant or even positively harmful. The devil is in the detail and blindly citing cases is never a good idea.
  • If you do get caught out, apologise quickly and in full. The implications of including a false citation are serious and, at a minimum, are likely to include a referral to the regulator (i.e. the SRA or BSB) and a wasted costs order. However, the implications of trying to style it out are potentially much worse. Those providing fake citations will often be caught red-handed: if a cited case does not exist, no amount of bluster will magic it into reality. The Ayinde judgment illustrates how a failure to investigate promptly and apologise creates a serious risk of being found in contempt.
  • Keep using AI (safely). Finally, none of this should deter you from continuing to (safely) use AI. The Ayinde judgment has an Annex with a litany of examples of lawyers coming seriously unstuck in the UK, the US, Australia, New Zealand and Canada. However, the AI tools are only going to get better. The Ayinde judgment is a sobering read but not even the Divisional Court can hold back the irresistible tide of technological change.
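As a starting point for the systematic check suggested above, the sketch below pulls out anything that looks like a neutral citation from a draft so that each one can be verified by hand against the law reports or a legal database. It is a minimal illustration only: the regular expression covers common UK neutral citation forms and will miss other citation styles, and it obviously cannot tell you whether a case exists or says what the draft claims; that still requires a human reading the report.

```python
import re

# Minimal sketch: extract candidate neutral citations from a draft so that each
# one can be verified by hand. The pattern covers common UK neutral citation
# forms only; it is an illustration, not a complete citation parser.
NEUTRAL_CITATION = re.compile(
    r"\[(?P<year>\d{4})\]\s+"
    r"(?P<court>UKSC|UKPC|UKHL|EWCA|EWHC|UKUT|EWCOP|EWFC)\s+"
    r"(?:(?P<division>Civ|Crim)\s+)?"
    r"(?P<number>\d+)"
    r"(?:\s+\((?P<list>Admin|Ch|KB|QB|Comm|Fam|Pat|TCC|IPEC)\))?"
)

def extract_citations(draft: str) -> list[str]:
    """Return every candidate neutral citation found in the draft, in order."""
    return [m.group(0) for m in NEUTRAL_CITATION.finditer(draft)]

draft = (
    "The claimant relies on Ayinde [2025] EWHC 1383 (Admin) and on "
    "a decision reported at [2024] UKSC 99."
)
for citation in extract_citations(draft):
    print(citation)  # check each citation against the report itself
# Output:
# [2025] EWHC 1383 (Admin)
# [2024] UKSC 99
```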