AI in Legal Research: Precision Without the ‘Phantom Cases’
Practical guardrails UK solicitors can use to prevent hallucinated citations and keep research robust, efficient and court-ready.
Legal research is where AI can both shine and embarrass you. Used well, it can surface lines of authority, patterns and angles that would have taken hours to assemble manually. Used badly, it can cheerfully invent cases, mis‑state holdings and leave you explaining “the model hallucinated” to a sceptical judge.
This article is about building a research workflow for UK solicitors where AI is useful but never the final arbiter of what the law says.
We’ll look at:
- where AI genuinely helps in research;
- why “phantom cases” and mis‑citations happen;
- practical guardrails you can put in place in a small or mid‑sized firm; and
- a step‑by‑step workflow you can adapt to your own practice.
Where AI really helps in legal research
Think of AI as a very fast, very literal trainee with an excellent memory but no context or professional judgment.
It’s particularly good at:
- Orientation. Turning a messy fact pattern into a list of issues, causes of action and defences to explore.
- Summarisation. Explaining long judgments, consultation papers or practice directions in plain English.
- Comparison. Highlighting differences between two versions of a clause, order or contractual regime.
- Organisation. Grouping authorities by issue, jurisdiction, date or outcome, so you can see the landscape.
In other words, AI can help you frame the problem, organise the material and draft a first pass. But it cannot be trusted to say “this is good law” or “this is the best authority”.
That is still your job.
Why hallucinations and “phantom cases” happen
Large language models generate text by predicting the next token based on patterns in their training data. They don’t:
- check a live database of reported cases unless you explicitly wire that in; or
- have an internal concept of “this citation corresponds to a real case in ICLR / BAILII / Westlaw”.
So if you ask a general‑purpose model:
“Give me five recent Court of Appeal authorities on X with neutral citations”
it may:
- stitch together plausible‑looking party names;
- generate neutral citations that resemble the real format; and
- summarise “holdings” that sound realistic but don’t exist.
That’s a hallucination: plausible, confident and wrong. In practice it is no different to a trainee inventing authorities – except that it can do so at scale, very quickly.
The risk is not only embarrassment; it is also a potential breach of:
- your duty to the court not to mislead;
- your duty of competence (if you rely uncritically on AI output); and
- client care / cost obligations, if time is wasted chasing ghosts.
Guardrails before you start
Before anyone in the firm uses AI for research, set three foundations.
1. A short written policy. It doesn’t need to be long, but it should state clearly:
   - what AI tools may and may not be used for;
   - the requirement to verify all authorities on trusted services;
   - who is responsible for supervision and sign‑off.
2. Model choice and data controls. Favour options where:
   - prompts and outputs are not used to train public models;
   - data is stored in the UK/EU or under a contract you’re comfortable with; and
   - access is controlled (ideally via SSO) rather than a free‑for‑all of logins.
3. Training and examples. Show fee‑earners:
   - what a hallucinated case looks like in practice;
   - how to check citations quickly on ICLR, BAILII or your subscription services;
   - how to record what they’ve checked.
Without those basics, workflows and checklists rarely stick.
A robust AI‑assisted research workflow
Here is a practical five‑stage flow you can adapt.
1. Frame the question properly
Start with a clear, factual issue rather than “do I have a claim?”
For example:
“Client is a tenant under an assured shorthold tenancy in England. Landlord failed to protect deposit within 30 days. What are the main statutory consequences and authorities on limitation and multiple penalties?”
Use AI to:
- turn this into a list of specific research questions;
- identify relevant statutes and key concepts to explore.
But treat that as a checklist to test, not as final advice.
2. Gather candidate authorities
Ask your tools to:
- list potential statutes, CPR provisions and leading authorities;
- group them by issue (liability, limitation, quantum, costs).
At this stage, you’re building a hypothesis set – “these might be relevant”.
Export or copy this list into your research note or into your matter in OrdoLux.
3. Verify every authority
For each proposed case or citation:
- Look it up on authoritative services you actually trust (ICLR, BAILII, Lexis, Westlaw, PLC, etc., depending on what you license).
- Confirm:
  - the case exists;
  - the neutral citation is correct;
  - the proposition you want to rely on is actually supported.
If you can’t find the case quickly on proper services, treat it as non‑existent, however convincing the AI summary sounds.
4. Build and maintain a source log
Create a simple table in your research note or in your case management system. For each proposition:
- Issue / proposition – what point are you trying to establish?
- Authority – statute, rule or case name + citation.
- Source – where you checked it (ICLR, BAILII, Lexis etc.).
- Status – confirmed / distinguished / overruled / uncertain.
- Verifier – initials and date.
This doesn’t have to be long. The aim is that:
- you can show your homework to a supervising partner; and
- if anyone revisits the question in six months, they can see what you checked and why.
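To make that concrete, a single entry based on the deposit example above might read like this (purely illustrative, not advice on the merits):
- Issue / proposition – failure to protect the tenancy deposit within 30 days gives rise to a statutory penalty claim.
- Authority – Housing Act 2004, ss 213–214.
- Source – legislation.gov.uk and your subscription service.
- Status – confirmed.
- Verifier – your initials and the date you checked.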
5. Draft, then deliberately “stress‑test” with AI
Once you’ve done the real research, AI can help you:
- improve structure and clarity;
- highlight gaps or alternative arguments;
- produce variants for different audiences (client, Counsel, internal memo).
A good pattern is:
- Draft the note yourself.
- Ask AI to criticise or stress‑test the reasoning: “What counter‑arguments or authorities might opposing counsel raise?”
- Check any new cases it mentions in the same way as above.
- Decide which points, if any, to adopt.
The output you sign off should always be the product of your judgment, not AI’s draft.
Supervising juniors using AI in research
For partners and supervisors, the risk profile changes slightly.
Consider requiring that juniors:
- paste in their source log when sending you an AI‑assisted note;
- flag clearly what was generated by AI and what is their own analysis; and
- confirm that any submission to the court relies only on authorities you have both checked.
You might also build into file reviews a couple of standard questions:
- “Where did this authority come from originally?”
- “Have you checked whether anything has overruled or criticised it?”
Over time, this normalises AI as a helpful but fallible assistant, not an oracle.
Where OrdoLux fits
OrdoLux is designed on the assumption that solicitors will use AI inside their workflow, not off to the side in a separate chatbot.
The research tools in OrdoLux are being built to:
- keep prompts and outputs attached to the matter;
- make it easy to maintain a source log alongside your notes; and
- surface checklists and guardrails (like the steps above) at the point of use.
So instead of juggling separate tools, your team can treat AI as part of the case file – with a clear audit trail.
This article is general information for practitioners — not legal advice.
Looking for legal case management software?
OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined‑up system. Learn more about OrdoLux’s legal case management software.