Employers are facing a new challenge in managing workplace disputes, as a growing number of people use artificial intelligence tools such as ChatGPT to draft employment grievances and prepare claims for the Employment Tribunal.
Legal experts warn that while AI may offer greater access to justice, many of the submissions generated are inaccurate, irrelevant or speculative, and risk overburdening a system already struggling under the weight of unresolved cases.
The concern comes amid a 32 percent rise in the open Tribunal caseload compared with the same period last year, according to Ministry of Justice data covering the first quarter of 2024–25. With further legislative changes due under the Employment Rights Bill, experts fear delays will worsen.
Submissions that do more harm than good
Ailie Murray, employment partner at law firm Travers Smith, said her team was “increasingly seeing employees use AI to draft grievances, employment claims, and submissions against their employers.”
“While this might appear helpful, it can sometimes do more harm than good,” she told the business news site City AM. AI “can sometimes be inaccurate, which leads to creating claims and arguments that are not valid or are not relevant to the employee’s circumstances”, she was reported as saying. “This is problematic as it does little to help resolve the employee’s issue, and in some cases actively undermines or prejudices it.”
Murray added that the trend was also driving up employers’ legal bills. “It also creates additional costs for employers having to review and respond to lengthy submissions,” she said, warning that such claims cause “further delays in a system which is already overburdened”.
Tribunal system at breaking point
The UK’s Employment Tribunal system has faced ongoing criticism over delays in recent years, particularly following the pandemic. The latest official data, published in June, shows the backlog of outstanding claims has risen sharply, with tens of thousands of cases awaiting a hearing.
By March this year, the backlog had climbed to 491,000 open cases, up from 444,000 a year earlier, representing an 11 percent rise and continuing a month-on-month upward trend.
One in four HR professionals say delays are forcing them to hold open vacancies or keep underperforming staff, as unresolved disputes linger for months, according to reports.
The Ministry of Justice has promised additional funding and staffing to reduce delays, but legal observers say the situation is deteriorating as AI-generated claims enter the system in growing numbers.
‘More speculative claims, greater reputational risk’
Ella Bond, a senior solicitor in the employment team at London-based law firm Harper James, told HRreview that AI-generated submissions may be giving claimants false confidence while placing significant new burdens on employers.
“There has been a noticeable increase in employees turning to AI platforms to prepare grievances and Employment Tribunal submissions. While this may seem to improve access to justice, in practice, it creates a number of difficulties for employers,” she said.
“AI-generated claims are often lengthy and include inaccuracies or irrelevant arguments, all of which require a detailed response from employers. This drives up legal costs and consumes valuable management and HR time that could otherwise be focused on resolving workplace issues.”
Bond warned that the growing use of AI in employment disputes was being felt across the system. “The impact of these submissions is being felt beyond individual cases – the Employment Tribunal system is already under considerable pressure, and inaccurate or overly complex claims risk prolonging hearings and adding to existing backlogs.”
“For employers, the consequences extend beyond financial cost – even claims that ultimately lack legal merit can cause uncertainty, take up valuable resource time and carry reputational risks once they reach a public forum.”
She added that the rise in AI usage may lead to “more speculative claims, placing employers in the difficult position of having to weigh the commercial costs of defending a case against the merits of early settlement”.
Bond advised employers to invest in early legal advice and sound HR processes to minimise the risk of claims and to respond effectively where disputes do arise.
What AI gets wrong
While large language models such as ChatGPT can be useful in helping users understand legal language or structure documents, they are not designed to offer tailored legal advice. In practice, this means AI may generate plausible-sounding but legally irrelevant content that fails to reflect the complexity of a given case.
In some instances, AI-generated submissions have included case law that does not apply, or incorrectly stated statutory rights. Tribunal judges and employment lawyers have also raised concerns about “hallucinated” precedents, which are fabricated references to legislation or case decisions that do not exist.
The inaccuracies waste time and resources and, legal experts say, can weaken a claimant’s case.
