AI Policies for Law Firms That People Actually Read
How to draft a law firm AI policy that supports adoption, manages risk and is short enough that people will actually read and follow it.
Many firms now have an “AI policy” sitting in someone’s inbox. Few people have read it. Fewer still use it.
The problem is rarely bad intent. It is that the policy is:
- too long and abstract;
- written in IT or marketing jargon; or
- disconnected from the realities of practice.
This article sets out a short, practical model for AI policies in law firms – one that fee-earners might actually follow.
1. Decide what your AI policy is for
Before drafting anything, agree the policy’s purpose. Common aims include:
- setting clear boundaries (what AI may and may not be used for);
- explaining expectations of staff (for example, supervision and verification duties);
- documenting your risk appetite for regulators and clients; and
- giving a home to more detailed procedures (checklists, templates, playbooks).
If a paragraph does not support those aims, consider leaving it out or moving it to a separate guidance note.
2. Keep the core policy short
Think of the core policy as a 2–4 page document that:
- partners can approve without a three-hour meeting; and
- fee-earners can read in one sitting.
You can attach annexes for:
- suggested prompts and workflows;
- technical details for IT;
- copies of data processing agreements (DPAs) and vendor summaries.
But the central text should use simple headings, for example:
- Scope and definitions
- Approved tools and use cases
- Prohibited uses
- Responsibilities and supervision
- Training and review
3. Be explicit about approved and prohibited uses
Fee-earners care most about the question: “Can I use this tool for this task?”
A helpful structure is to split uses into three tiers:
Approved uses (with supervision) – e.g.:
- summarising public judgments and consultation papers;
- drafting first-pass client updates and internal notes;
- reorganising and rephrasing content you have already written.
Higher-risk uses (extra checks required) – e.g.:
- assistance with legal research and case law;
- drafting documents to be filed at court;
- handling sensitive personal data or criminal offence information.
Prohibited uses – e.g.:
- pasting live client files into unapproved consumer chatbots;
- using AI to fabricate evidence, attendance notes or time records;
- sharing API keys or access with anyone outside the firm.
Clear lists reduce ambiguity and make supervision simpler.
4. Tie your AI policy to existing duties, not special rules
Rather than inventing new concepts, anchor your policy in duties fee-earners already recognise:
- Competence and supervision – no unsupervised AI outputs to clients or courts.
- Confidentiality and data protection – only approved tools, minimal necessary data, clear DPAs.
- Duty to the court – verification of authorities and factual assertions.
- Record-keeping – save AI-assisted work product to the matter file.
A helpful way to frame it is:
“AI is just another way of working. All your existing duties apply. This policy explains how.”
5. Make it easy to comply
Policies fail when they ask people to fight their tools. Instead:
- integrate approved AI tools into case management and document systems;
- provide template prompts that are already consistent with policy;
- bake verification steps into checklists and file review processes.
For example, a standard precedent might include a note:
“If AI was used in drafting this document, confirm in your attendance note that all authorities and key facts have been checked.”
6. Build in feedback and updates
AI tools are moving quickly. Your policy should be stable enough not to change monthly, but flexible enough to adapt.
Practical steps:
- name an AI policy owner (often someone in risk or innovation);
- set a review cadence (for example, annually, or sooner if major tools change);
- invite feedback from teams about what is working and what feels unworkable.
Keep version history and change logs so you can show regulators and clients how your governance has evolved.
7. Communicate the policy like any other change
A silent email with a 15-page PDF attached is not a roll-out.
Consider:
- short training sessions with real examples from your practice;
- Q&A sessions for sceptical partners and keen juniors;
- quick-reference guides or intranet pages with the headlines.
Make it safe for people to ask “Can I use AI for this?” without fear of looking foolish.
Where OrdoLux fits
Policies work best when systems support them.
OrdoLux is being designed so that:
- approved AI workflows run inside the matter file;
- prompts and outputs can be reviewed and audited;
- firms can switch providers or models without changing end-user behaviour; and
- time recording and supervision tools can see where AI has helped.
That way, your AI policy is backed by the tooling, rather than relying solely on memory and goodwill.
This article is general information for practitioners – not legal advice.
Looking for legal case management software?
OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined-up system. Learn more on the OrdoLux website.