Principal Director - Chartered Trade Mark Attorney
Intellectual Property | Charities
				In an increasingly digital legal landscape, AI tools like ChatGPT are being adopted by litigants in person and professionals alike for drafting submissions. However, a recent appeal decision by the UK Intellectual Property Office (UKIPO) illustrates the risks of relying on such technology without verification.
The case involved an appeal against the UKIPO decision in Pro Health Solutions' attempt to oppose and invalidate Prohealth Inc's trade marks.
The appellant, Dr. Soufian, a litigant-in-person, admitted at the start of the hearing that he had used ChatGPT to assist in preparing both his grounds of appeal and skeleton argument.
The grounds of appeal included a list of cases, each with a case name, a citation, the court, a "quote", a URL and a comment. The cases were real but three of the "quotes" included were not found in the decisions.
The skeleton argument filed also included cases with incorrect citations, as well as case summaries that substantially misrepresented the decisions they described.
Although Dr. Soufian promptly apologised and explained his inexperience, the tribunal emphasised that all litigants, represented or not, have a duty not to mislead the court. As Lord Sumption noted in Barton v Wright Hassall LLP [2018] UKSC 12, unrepresented parties are not entitled to a lower standard of compliance.
The tribunal also referenced Ayinde, R (On the Application Of) v London Borough of Haringey [2025] EWHC 1383 (Admin), which sets out the broader implications of AI misuse in legal settings. The judgment explains that this "duty rests on lawyers who use artificial intelligence to conduct research themselves or rely on the work of others who have done so. This is no different from the responsibility of a lawyer who relies on the work of a trainee solicitor or a pupil barrister for example, or on information obtained from an internet search. We would go further however. There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused".
The Appointed Person set out that "It is important that all litigants before the registrar (whether ex parte or inter partes) and during any appeal to the Appointed Person are made aware of the risks of using AI. Many litigants in person will have little knowledge of trade mark law and think that anything generative artificial intelligence creates will be better than they can produce themselves. So a very clear warning needs to be given to make even the most nervous litigant aware of the risks they are taking."
This case serves as a critical warning to both legal professionals and self-represented parties: AI can assist in drafting, but it cannot replace due diligence, legal understanding or ethical responsibility. Relying on ChatGPT or similar tools without verifying the accuracy of their outputs is not just risky: it can compromise your case, damage your credibility and result in real legal consequences.
More broadly, the decision reflects how widespread the use of AI in litigation has become, and how unchecked reliance on it can harm both the integrity of the legal process and the credibility of those who take part in it.