Prompt Engineering for Solicitors: Getting Reliable Results from AI

How solicitors can structure prompts, constraints and checks to get reliable, repeatable outputs from AI tools without turning into prompt hobbyists.

“Prompt engineering” sounds grand, but for most solicitors it simply means asking AI for help in a way that gets useful, checkable output.

You do not need to memorise obscure tricks. What you do need is a repeatable way to:

  • explain the legal task clearly;
  • constrain what the model should and should not do; and
  • make it easy to verify and improve the result.

This article offers a practical prompting toolkit for UK solicitors, with patterns you can reuse in everyday work.

A simple framework: ROLE – TASK – CONTEXT – CONSTRAINTS – OUTPUT

When you are stuck, use this mental checklist:

  • ROLE – how should the model “see itself”? (eg, “You are a trainee solicitor in a UK litigation team…”)
  • TASK – what exactly should it do? (summarise, draft, compare, explain)
  • CONTEXT – what background does it need? (jurisdiction, stage, client type)
  • CONSTRAINTS – what must it avoid? (no speculation, no new cases, word limits)
  • OUTPUT – how should the answer be structured? (headings, bullets, tables)

A decent prompt might look like:

“ROLE: You are a trainee solicitor in a UK commercial disputes team. TASK: Summarise the attached order for a lay client in England. CONTEXT: The client is the claimant; the order was made on their application for specific disclosure. CONSTRAINTS: Use plain English and no more than 400 words. Do not introduce new authorities or add advice of your own. OUTPUT: Use headings for ‘What the court has ordered’, ‘Deadlines’ and ‘What we need from you’.”

You can adapt this skeleton for almost any task.
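
If you want to standardise the skeleton inside a knowledge tool or a simple internal script, the five parts can be captured as a fill‑in‑the‑blanks template. The Python sketch below is illustrative only; the function name, field names and example values are assumptions made for this article, not features of any particular AI tool or product.

```python
# Illustrative only: a fill-in-the-blanks version of the
# ROLE - TASK - CONTEXT - CONSTRAINTS - OUTPUT skeleton.
# Field names and example values are assumptions, not tied
# to any particular AI tool.

PROMPT_SKELETON = (
    "ROLE: {role}\n"
    "TASK: {task}\n"
    "CONTEXT: {context}\n"
    "CONSTRAINTS: {constraints}\n"
    "OUTPUT: {output}"
)

def build_prompt(role: str, task: str, context: str,
                 constraints: str, output: str) -> str:
    """Assemble the five parts into a single prompt string."""
    return PROMPT_SKELETON.format(
        role=role,
        task=task,
        context=context,
        constraints=constraints,
        output=output,
    )

if __name__ == "__main__":
    print(build_prompt(
        role="You are a trainee solicitor in a UK commercial disputes team.",
        task="Summarise the attached order for a lay client in England.",
        context="The client is the claimant; the order was made on their "
                "application for specific disclosure.",
        constraints="Plain English, no more than 400 words, no new "
                    "authorities or advice of your own.",
        output="Headings for 'What the court has ordered', 'Deadlines' "
               "and 'What we need from you'.",
    ))
```

The point of a sketch like this is not the code itself but the discipline: every prompt leaving the template has all five parts filled in, so nothing important is left implicit.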

Pattern 1: Summarising documents safely

When summarising judgments, orders or correspondence, make sure the model does not add law or facts that are not present.

Helpful constraints include:

  • “Only summarise the content of the document – do not add new legal analysis.”
  • “Quote key passages verbatim where necessary and identify the paragraph numbers.”
  • “If you are unsure what something means, flag it rather than guessing.”

Always read the source yourself for important points, but a good summary prompt can save significant time.

Pattern 2: Drafting first‑pass documents

For routine documents (client updates, standard letters, chronologies), you can prompt along these lines:

  • specify the audience (“lay client”, “general counsel”, “judge in the County Court”);
  • describe the objective (“reassure and inform”, “set out a robust but measured position”, “explain next procedural steps”); and
  • provide any house‑style preferences (“short sentences”, “avoid Latin”, “British spelling”).

Example:

“Draft a first‑pass letter to a lay client following a directions hearing in the County Court. Audience: non‑lawyer, anxious about costs. Objective: explain what happened, outline next steps, and set expectations about likely timescales. Constraints: 600 words maximum, plain English, avoid jargon. Do not give any view on merits beyond what I specify. I will paste my hearing note below.”

You can then edit the draft to add nuance and firm‑specific language.

Pattern 3: Generating options, not answers

Sometimes the best use of AI is to generate possibilities for you to evaluate.

Examples:

  • “Suggest three alternative phrasings for this clause that are more client‑friendly but retain the same legal effect.”
  • “Give me a list of potential counter‑arguments opposing counsel might raise based on this skeleton argument.”
  • “Produce a list of questions I should ask the client before drafting particulars of claim.”

The model is there to help you think more widely and more quickly, not to decide anything on its own.

Pattern 4: Debugging your own prompts

If a result is poor, treat it as a chance to improve the prompt:

  • Was the task vague (“help with this case”) rather than specific?
  • Did you fail to specify the jurisdiction or procedural posture?
  • Did you give the model conflicting instructions (eg, “keep it short” and “cover everything in detail”)?

You can even ask the model:

“Explain why your previous answer may have been unsatisfactory and suggest a better way for me to phrase the prompt.”

This is often surprisingly helpful.

Pattern 5: Building internal prompt templates

As you discover prompts that work well for your practice, save them as internal templates:

  • “client update after hearing”;
  • “first‑pass NDA review”;
  • “summarise long advice for partner review”.

Store them in your knowledge system or case management tool so others can reuse and adapt them. The value is not the wording itself but the shared pattern.
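
As a rough illustration of what such a library could look like once it moves from a document into a script or knowledge system, the Python sketch below stores named templates with placeholders to fill in per matter. The template names echo the examples above, but the wording, placeholder fields and helper function are assumptions for illustration, not firm‑approved precedents.

```python
# Illustrative only: a minimal in-house "prompt library" sketch.
# Template names echo the examples above; the wording and the
# placeholder fields are assumptions, not firm-approved text.

PROMPT_LIBRARY = {
    "client update after hearing": (
        "You are a solicitor writing to a lay client in England and Wales.\n"
        "TASK: Draft a first-pass update following the {hearing_type} hearing.\n"
        "CONSTRAINTS: Plain English, no more than {word_limit} words, "
        "no view on merits beyond my note.\n"
        "I will paste my hearing note below."
    ),
    "summarise long advice for partner review": (
        "TASK: Summarise the advice pasted below for partner review.\n"
        "CONSTRAINTS: Only summarise what is in the document, flag anything "
        "unclear rather than guessing, and keep to {word_limit} words."
    ),
}

def get_prompt(name: str, **fields: str) -> str:
    """Look up a named template and fill in its placeholders."""
    return PROMPT_LIBRARY[name].format(**fields)

if __name__ == "__main__":
    print(get_prompt("client update after hearing",
                     hearing_type="directions", word_limit="600"))
```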

Keep verification and ethics in view

However good your prompts, the core duties remain:

  • verify authorities and key facts against reliable sources;
  • make sure anything sent to a client reflects your judgment; and
  • respect confidentiality, privilege and data protection rules in what you paste into tools.

Good prompt engineering is not about gaming the system; it is about getting clearer, more controllable drafts that fit inside your existing professional obligations.

Where OrdoLux fits

OrdoLux is being designed so that:

  • prompts and outputs can be saved against the matter file;
  • firms can maintain libraries of tried‑and‑tested prompt patterns; and
  • supervision is easier because partners can see how AI was used in producing drafts.

That way, prompting becomes a shared skill across the firm rather than a hobby for a few enthusiasts.

This article is general information for practitioners — not legal advice.
