The UK’s justice system faces a crisis of significant case backlogs and logistical failures, largely stemming from more than a decade of chronic underfunding under austerity. While the government is promoting Artificial Intelligence (AI), particularly Large Language Models (LLMs) such as ChatGPT, as a revolutionary way to “turbocharge” public services, critics warn that its adoption risks merely “papering over the cracks” of a fundamentally dysfunctional system. Without addressing the deep, underlying resource deficits, the introduction of AI, even for routine administrative tasks, could add new risks and exacerbate unequal access to justice, particularly for vulnerable clients.
The Illusion of AI as a Silver Bullet
Powerful voices in politics and think tanks are championing AI as the answer to the justice system’s problems, such as bureaucratic overload and case backlogs. Proponents suggest AI can liberate human staff from routine workloads, allowing them to focus on essential human aspects like face-to-face client engagement and expert judgment. Tools like Technology Assisted Review (TAR), which predicts the relevance of documents, are already in use, with some reports citing successes, such as the Old Bailey saving significant costs on evidence processing.
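To make the mechanism concrete, the sketch below shows how a TAR-style tool might rank unreviewed documents by predicted relevance so that scarce human review time goes to the likeliest hits first. It is a deliberately minimal illustration built on a generic text classifier (scikit-learn) with invented example documents; real predictive-coding systems used in disclosure are considerably more sophisticated, and this is not a description of any tool actually deployed in the courts.

```python
# Illustrative sketch only: real TAR ("predictive coding") systems are proprietary
# and far more sophisticated, but the core idea is a supervised relevance classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set: documents a human reviewer has already labelled.
seed_docs = [
    "Email discussing the disputed contract terms",
    "Invoice for catering at the office party",
    "Meeting notes on the contract negotiation",
    "Newsletter about staff birthdays",
]
seed_labels = [1, 0, 1, 0]  # 1 = relevant to the case, 0 = not relevant

# Unreviewed documents to be prioritised for human review.
unreviewed = [
    "Draft amendment to the contract clauses",
    "Cafeteria menu for next week",
]

vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(seed_docs), seed_labels)

# Rank unreviewed documents by predicted probability of relevance,
# so scarce human review time is spent on the likeliest hits first.
scores = clf.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

The key point survives the simplification: the software only prioritizes documents for review, and the quality of the outcome still depends entirely on the humans who check them.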
However, the current focus on rapid LLM adoption is viewed with skepticism. Critics argue that while AI is useful, it is being implemented primarily as a cost-cutting measure rather than a means to enhance human capacity. This institutional context is critical: using digital tools to cut costs when resources are already stretched means the inherent risks of AI (such as “hallucinations” or plausible but incorrect outputs) will hit hardest precisely where human oversight is weakest.
The Hidden Dangers of Algorithmic Bias and Error
The risks associated with AI are not confined to simple administrative errors. More controversial uses, such as risk-scoring algorithms in probation and immigration cases, have drawn severe criticism for entrenching existing inequalities and affecting people’s lives without their knowledge or any effective means of challenge. Research suggests that automated systems can disproportionately affect marginalized groups, leaving them more exposed to unjust outcomes.
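The underlying mechanism is easy to illustrate. In the toy sketch below, with entirely invented weights and field names rather than any real probation or immigration tool, a score that never sees a protected characteristic still rates one person as higher risk than another with identical behaviour, purely because a proxy feature reflects historically over-policed areas.

```python
# Toy illustration of how a "neutral" risk score can encode historical bias.
# All numbers and field names are invented for the example; no real system
# or dataset is being described.

def risk_score(person):
    # The model never sees a protected attribute directly...
    score = 0.0
    score += 0.5 * person["prior_contacts"]    # prior police contacts
    score += 0.3 * person["area_arrest_rate"]  # ...but this proxy reflects
    return score                               # historically over-policed areas

# Two people with identical behaviour, differing only in where they live.
a = {"prior_contacts": 1, "area_arrest_rate": 0.2}
b = {"prior_contacts": 1, "area_arrest_rate": 0.8}

print(risk_score(a))  # 0.56
print(risk_score(b))  # 0.74 -- higher "risk" driven purely by neighbourhood history
```

Because the output looks like a neutral number, the people scored by it rarely know why they were rated as they were, which is precisely the absence of an effective means of challenge that critics highlight.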
Furthermore, the latest LLMs introduce the danger of “hallucination”: generating entirely fictitious information. Senior UK judges have already warned lawyers against using these tools after multiple cases, both in the UK and internationally, in which fake, non-existent case law was filed in court. Decisions that rest on AI-generated material, even when the tools are used only for background research, are likely to open new grounds for legal challenge, which ironically risks adding to, rather than reducing, the case backlog.
The Unjust Reality of Unequal Access
The implementation of AI systems in an under-resourced environment directly threatens the principle of equal access to justice. The benefits of AI tools will mostly be seen in parts of the system where resources and time for human oversight are highest, such as in well-funded legal practices that can afford expert human reviewers.
Conversely, the risks will be concentrated where resources are lowest: among vulnerable clients who have less money and time to challenge decisions. When human time is already scarce due to austerity, the inevitable errors produced by automated systems become more difficult to catch and correct. Ultimately, adopting AI without first fixing the deep underlying problems of underfunding and structural dysfunction only risks institutionalizing and deepening the existing inequalities in the justice system.