A changing world – top tips for preparing for the rise of AI in the world of disputes

In 2023, there has been an explosion in the use of AI-enabled technology. This has been accompanied by swathes of media commentary, much of which has warned of the potential risks that the proliferation of AI poses to companies and governments.

From a disputes perspective, the rise of ChatGPT as an AI tool is particularly relevant. ChatGPT is a freely accessible service that anybody can use. It provides answers to user prompts within seconds, drawing on data collected from millions of documents and other source material.

A case recently reported in The Law Society Gazette highlights the impact that AI-enabled technology is likely to have on court proceedings and dispute resolution going forward.

We look at the facts of the case and offer practical tips to mitigate some of the risks facing businesses in the future.

What happened when ChatGPT was used in a recent civil case?

The civil case involved a litigant-in-person (LiP) and a represented party. Proceedings had ended for the day with the barrister for the represented party arguing that there was no precedent for the LiP's arguments. It is understood that the LiP then returned to court the next day with multiple case citations purporting to support the arguments they had raised.

The barrister then identified that one case citation had been fabricated and that, although the other citations referred to real case names, the passages quoted were completely different from those in the actual judgments. This meant that they did not support the LiP's position. After being questioned by the judge, the LiP admitted that they had used ChatGPT to find cases that could assist them in their submissions.

In this case, the judge accepted that the misleading submissions were inadvertent and did not penalise the LiP. However, this decision comes with a note of caution: this is an evolving area, and where similar situations arise in the future, the same discretion may not always be exercised.

Our thoughts

In this particular case, the judge adopted a lenient view and chose not to impose any punitive sanctions on the LiP, such as a costs order. However, the judgment was not reported, so it is unlikely to form a precedent that will be strictly followed in future cases.

Unfortunately, as the decision is unpublished, we are unable to comment on the judge's remarks or the reasoning for the decision. It therefore remains to be seen what factors a judge may consider when deciding which party should bear unnecessary costs incurred as a result of false information obtained from ChatGPT or similar AI models.

As outlined above, this case is a timely reminder that the use of AI-enabled technology is only likely to grow. Looking ahead, this has the potential to disrupt how disputes are conducted. LiPs, for example, may feel emboldened to conduct litigation against larger institutions, or to produce significant volumes of documentation generated by AI models such as ChatGPT and purporting to support their position.

Parties will have to consider this carefully when engaging in disputes with unrepresented parties, which will typically be individuals and SMEs, and may need to spend more time checking the authorities and submissions relied on by opposing parties.

It is not just the UK that is grappling with the encroachment of AI into the legal sphere. A lawyer in New York is now potentially facing sanctions for erroneously relying on ChatGPT to cite cases that transpired to be fictitious.

Although not strictly relevant in the courts of England and Wales, that decision may be indicative of the approach that judges here would take if an experienced practitioner relied on AI-enabled tools that produced false and misleading information.

It remains to be seen how the position will develop in England and Wales, and to what extent the courts will intervene to prevent costs and court time from being wasted by (erroneous) reliance on AI tools. What is clear for now, however, is that legal practitioners and parties to disputes should be mindful of the potential issues that AI can pose, so that they can proactively minimise any harmful impact of its use.

Our top tips

  • If a party purports to rely on legal authority in support of their arguments, make sure that you (a) check that the case exists; and (b) check that the other side's representation of that case's authority is accurate.
  • Keep an eye on costs – if a party seeks to submit large bundles of documents purporting to support their case, carefully consider how you approach the review of those documents, particularly where you are dealing with a LiP and have concerns that they may be using AI tools. Extensive review of erroneous documents carries a real risk of significantly increasing costs.
  • Consider whether the parties in question are vulnerable, and whether your business has any applicable internal procedures for handling the dispute sensitively or attempting to reach a solution agreeable to all parties.
  • As a matter of good practice, we advise that in-house counsel refrain from using AI-enabled tools to draft court documents or conduct research. If you do choose to use an AI tool, ensure that you verify the information it provides against a trusted source.

If you would like any further information regarding the potential impact of AI on disputes or to discuss any of the matters raised in this article, please get in touch.
