Confidentiality and Privilege When Using AI in Your Firm
How to preserve confidentiality and privilege when using AI systems in UK law firms, from vendor due diligence to safe day-to-day habits.
The hardest questions partners raise about AI are often about confidentiality and privilege:
- “If we paste client information into a tool, have we waived privilege?”
- “Is this any different from using email or a document management system?”
- “How do we explain all of this to clients and regulators?”
This article provides a practical framework for thinking about confidentiality and privilege when using AI in a UK law firm. It is not a full treatise on the law, but a way to make day-to-day decisions with more confidence.
Start with the basics: what are you protecting?
Two overlapping concepts are in play:
- Confidentiality – the broad equitable and contractual duty not to misuse information entrusted to you.
- Legal professional privilege – specific rules protecting certain communications from disclosure in litigation or investigations.
AI tools can threaten both if used carelessly, for example by:
- exposing information to people outside the confidentiality “circle”;
- storing client data in jurisdictions or systems with weak protections; or
- making it hard to prove, later, that material was kept confidential.
Thinking about “third parties”
The starting point for many privilege questions is whether information has been shared with a third party in circumstances inconsistent with confidentiality.
When you use AI tools, ask:
- Who is actually receiving the data? The vendor? Its sub-processors? Human reviewers?
- Are they bound by contractual obligations of confidentiality equivalent to those you would expect from other service providers?
- Can they use the data for their own purposes (for example, training public models)?
Many firms already rely on third-party providers for email, document storage and transcription. AI tools are not fundamentally different – but they often involve new categories of processing (such as large-scale analysis or training).
Safer and riskier patterns
Broadly, you can think of AI use as falling into three patterns.
Pattern A: Internal-only, no client data
Examples:
- summarising public judgments;
- generating marketing copy or internal policies;
- experimenting with prompts on dummy data.
Confidentiality and privilege risks here are low (though you still need to consider vendor contracts and data protection).
Pattern B: Client data with controlled, contractually bound providers
Examples:
- using an AI-enabled document review platform under a DPA;
- running contract analytics within your own secure environment;
- using AI features embedded in your DMS or case management system.
Here the key is ensuring that:
- providers act as processors, not controllers using data for their own purposes;
- training flags are off for customer prompts and outputs; and
- there are robust confidentiality clauses, jurisdiction provisions and security commitments.
This is closest to how you already treat other IT providers.
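If it helps to make these checks concrete, they can be reduced to a short pre-approval checklist that your IT or risk team runs before a tool is cleared for client data. The sketch below is illustrative only: the field and function names are assumptions made for the example, not terms from any particular DPA or vendor contract.

```python
from dataclasses import dataclass

# Minimal sketch of a Pattern B pre-approval checklist. Every field name
# here is a hypothetical label; map each one to the actual clauses in
# your DPA and vendor contract before relying on it.

@dataclass
class VendorAssessment:
    acts_as_processor: bool        # processor under the DPA, not a controller
    trains_on_customer_data: bool  # does the vendor train models on your data?
    confidentiality_clauses: bool  # robust contractual confidentiality
    acceptable_jurisdiction: bool  # data held in acceptable jurisdictions
    security_commitments: bool     # e.g. encryption, access controls, audits

def approve_for_client_data(v: VendorAssessment) -> tuple[bool, list[str]]:
    """Return (approved, reasons_for_refusal) under the checks above."""
    reasons = []
    if not v.acts_as_processor:
        reasons.append("vendor acts as a controller for its own purposes")
    if v.trains_on_customer_data:
        reasons.append("training is not switched off for customer data")
    if not (v.confidentiality_clauses and v.acceptable_jurisdiction
            and v.security_commitments):
        reasons.append("contractual or security commitments are missing")
    return (not reasons, reasons)
```

A checklist like this will not capture every nuance of a contract review, but it forces the same questions to be asked, and answered in writing, for every tool.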
Pattern C: Client data in consumer-grade tools
Examples:
- pasting an entire attendance note into a public chatbot;
- uploading client bundles to a free website for “AI analysis”;
- using tools with unclear or changeable terms of use.
These are high-risk. You may struggle to show that information remained confidential, especially if terms allow the provider wide rights to use and share data.
Waiver of privilege: what’s the real concern?
Privilege is more likely to be at risk where:
- material is shared with third parties whose involvement is not reasonably necessary for the conduct of the matter;
- there are no robust confidentiality obligations; or
- systems allow broad access outside the firm or its carefully chosen providers.
A cautious approach is to treat AI vendors much like:
- e-disclosure providers;
- expert consultants; or
- specialist IT services.
If their involvement is reasonably necessary and they are properly bound to preserve confidentiality, the risk of waiver is significantly reduced. But ad-hoc use of unknown tools is much harder to defend.
Practical controls for firms
Sensible controls include:
- Approved providers list – a short list of AI tools that have passed legal, privacy and security review.
- Configuration standards – ensure training flags are off, logs are protected, and data is retained only as long as necessary.
- Clear “no-go” rules – for example:
- no uploading of entire bundles or pleadings to unapproved tools;
- no use of personal accounts for client-specific work;
- no sharing of passwords or API keys.
Document these rules in your AI and information security policies so they are easy to reference in training and audits.
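Some firms go a step further and enforce the approved-providers rule technically, for example in a proxy or data loss prevention layer that sits in front of outbound AI requests. A minimal sketch, assuming a hypothetical allow-list and guard function (none of these names come from a real product):

```python
# Minimal sketch: gate outbound AI requests against an approved-providers
# list. The hostnames and function names below are illustrative
# assumptions, not taken from any real firm's policy or tooling.

APPROVED_PROVIDERS = {
    "docreview.internal.example.com",  # AI review platform under a DPA
    "dms-ai.internal.example.com",     # AI features embedded in the DMS
}

def may_send(destination_host: str, contains_client_data: bool) -> bool:
    """Return True only if the request is allowed under the firm's AI policy."""
    if not contains_client_data:
        return True  # Pattern A: no client data, lower risk
    # Pattern B only: client data may go to vetted providers alone
    return destination_host in APPROVED_PROVIDERS

# Example: a paste into a public chatbot (Pattern C) is blocked
assert may_send("chat.example.com", contains_client_data=True) is False
```

Even where enforcement stays manual, writing the rules in this yes/no form makes them easier to train on and to audit against.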
Communicating with clients
Many clients now ask explicitly:
- whether you use AI in their matters;
- what safeguards you apply; and
- whether their data is used to train models.
It is helpful to prepare:
- a short, non-technical explanation of your approach;
- standard wording for client care letters and privacy notices; and
- a process for handling more detailed due diligence questionnaires.
Being transparent – without overwhelming clients with jargon – can become a point of differentiation rather than a problem.
How systems like OrdoLux can help
Confidentiality and privilege are easier to manage when:
- AI use happens inside your case management system, not scattered across multiple websites;
- prompts, outputs and underlying documents are stored in one controlled environment; and
- you have a clear audit trail showing who accessed what and when.
OrdoLux is being designed with these assumptions:
- firms will want to integrate approved AI providers rather than generic consumer tools;
- data will need to remain clearly associated with matters and clients; and
- logs and permissions must support supervision and, if necessary, regulatory scrutiny.
That won’t answer every privilege question, but it makes it much easier to explain and defend your use of AI.
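As a rough illustration of what such an audit trail might record, here is a hypothetical event schema. It is a sketch for this article, not OrdoLux's actual data model.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative audit record for AI activity inside a case management
# system. Field names are assumptions for the sketch only.

@dataclass(frozen=True)
class AiAuditEvent:
    timestamp: datetime            # when the action happened (UTC)
    user_id: str                   # who ran the prompt or viewed the output
    matter_id: str                 # which matter the data belongs to
    provider: str                  # which approved AI provider processed it
    action: str                    # e.g. "prompt", "output_viewed", "export"
    document_ids: tuple[str, ...]  # which documents were included

event = AiAuditEvent(
    timestamp=datetime.now(timezone.utc),
    user_id="jsmith",
    matter_id="M-2024-0193",
    provider="docreview.internal.example.com",
    action="prompt",
    document_ids=("DOC-551", "DOC-552"),
)
```

Records in this shape are what let you answer, months later, exactly which documents were shared with which provider, by whom, and on whose authority.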
This article is general information for practitioners — not legal advice.
Looking for legal case management software?
OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined-up system. Learn more on the OrdoLux website.