Ethics & Professional Duties When Using AI (for Solicitors)

Competence, confidentiality, supervision and duty to the court — translated into everyday practice when AI is in the loop.

Ethics is usually the part of an AI discussion where everyone nods gravely and then moves on. For solicitors, that is dangerous. The SRA’s existing Principles and Codes already bite on AI use, even if they never mention “large language models”.

This article translates those duties into everyday rules for using AI in practice, focusing on:

  • competence and supervision;
  • confidentiality and data protection;
  • duty to the court and to third parties; and
  • how to build AI into your firm’s governance.

It is written with UK solicitors in mind. It is not legal advice, but a practical starting point for firm policy.

Competence: you must understand the tools you use

The SRA expects you to provide a competent service. That doesn’t require you to be an AI researcher, but it does mean:

  • understanding, in broad terms, how your chosen tools work;
  • knowing where they are strong and where they fail; and
  • designing processes that compensate for those weaknesses.

In practice, competence with AI might include:

  • knowing that models can hallucinate authorities and facts;
  • recognising when outputs look suspiciously confident but thin on detail;
  • being able to construct prompts that give the model enough context to be useful; and
  • critically, knowing when not to use AI at all.

A partner who signs off on AI-assisted work without understanding these basics risks falling short on supervision as well as competence.

Supervision: AI cannot be your unsupervised junior

If a trainee or paralegal produced a piece of work based largely on an unfamiliar tool, you would:

  • ask how they generated it;
  • probe the sources they relied on; and
  • review the reasoning before it left the building.

AI deserves at least the same level of scrutiny.

Practical steps include:

  • requiring fee-earners to label AI-assisted drafts in the file;
  • asking for a short explanation of how the tool was used (“orientation only”, “first draft”, “suggested counter-arguments”); and
  • building sample review of AI-heavy matters into your normal supervision routines.

The goal is not to ban AI, but to make its use visible and accountable.

Confidentiality: don’t leak your client’s case into someone else’s model

Your duty of confidentiality is broad. Any use of AI that involves sending client information to a third party needs to be justified.

Key questions to bake into your policy:

  • Which tools are approved for client work, and under what conditions?
  • Do those tools use prompts and outputs to train public models?
  • Where is data stored, and who (within the provider and its sub-processors) can access it?

Strong defaults might include:

  • banning the use of consumer chatbots for identifiable client matters;
  • requiring matter identifiers and names to be stripped out where possible (see the sketch just after this list); and
  • preferring tools that offer tenant isolation, clear DPAs and logs of access.
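
On the stripping-out point, a minimal, purely illustrative sketch of how a firm's intake step might remove obvious identifiers before text reaches an approved tool is below. The patterns, names and function are assumptions for illustration, and no filter of this kind replaces human judgment about what should be sent at all.

```python
import re

# Purely illustrative: real matters need firm-specific patterns and human review.
MATTER_REF = re.compile(r"\b[A-Z]{2,4}\d{4,}\b")          # hypothetical refs like "LIT20481"
CLIENT_NAMES = ["Alice Example", "Example Holdings Ltd"]  # maintained per matter, not hard-coded

def strip_identifiers(text: str) -> str:
    """Replace matter references and known client names with neutral placeholders."""
    text = MATTER_REF.sub("[MATTER REF]", text)
    for name in CLIENT_NAMES:
        text = text.replace(name, "[CLIENT]")
    return text

note = "Attendance note for Alice Example, matter LIT20481: discussed settlement range."
print(strip_identifiers(note))
# Attendance note for [CLIENT], matter [MATTER REF]: discussed settlement range.
```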

Confidentiality is also about human behaviour. Remind people not to paste:

  • full attendance notes,
  • whole bundles, or
  • unredacted witness statements

into unapproved tools just because it feels convenient.

Duty to the court: verify before you rely

The duty not to mislead the court is engaged whenever AI touches:

  • legal research,
  • drafting of statements of case, or
  • written submissions.

If AI has suggested authorities or summarised the law, you must ensure that:

  • every case you cite exists and says what you claim it does;
  • quotations are faithful to the judgment; and
  • limitations or exceptions are not quietly smoothed away by an over-enthusiastic model.

A simple rule helps:

“If it goes to the court, a human must have read the underlying authority and checked the citation.”

That rule can be enforced by file notes, checklists and training. It is not anti-technology; it simply ensures you are not outsourcing your duty of candour.

Fairness to third parties

AI tools can be tempting when:

  • drafting robust letters before action;
  • generating without-prejudice communications; or
  • preparing publicity and articles about ongoing cases.

Remember that duties to third parties – including avoiding abusive correspondence and respecting undertakings – apply regardless of who drafted the words.

Encourage fee-earners to:

  • treat AI suggestions as options, not instructions; and
  • ask whether they would be comfortable justifying the tone and content to a regulator or judge if challenged.

Governance: bringing it all together in an AI policy

A workable firm-wide AI policy does not need to be long, but it should cover:

  • Scope – what counts as “AI” for the policy (models, copilots, automation features in existing tools).
  • Approved uses – for example, research orientation, summarising documents, first drafts of routine documents.
  • Prohibited uses – e.g. sending sensitive special category data to unapproved systems, using AI to generate evidence, or relying on AI for unchecked legal conclusions.
  • Roles and responsibilities – who owns the policy, who approves tools, how breaches are handled.
  • Training and review – how people are brought up to speed and how the firm checks that practice matches policy.

You can then update the policy as tools and regulatory expectations evolve, without having to revisit first principles every time.
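
If it helps to make the policy checkable as well as readable, the same headings can be captured in a small structured form. The sketch below is purely illustrative: the field names, categories and the example tool are assumptions, not a standard and not OrdoLux's format.

```python
# Illustrative only: field names, categories and the example tool are assumptions.
AI_POLICY = {
    "scope": ["standalone chatbots", "copilots", "AI features in existing tools"],
    "approved_uses": ["research orientation", "document summaries", "first drafts of routine documents"],
    "prohibited_uses": ["special category data to unapproved systems", "generating evidence",
                        "unchecked legal conclusions"],
    "approved_tools": {
        # Hypothetical provider; the conditions mirror the confidentiality questions above.
        "ExampleLegalAI": {"client_data_permitted": True, "trains_on_prompts": False},
    },
    "policy_owner": "Head of Risk and Compliance",
}

def approved_for_client_work(tool_name: str) -> bool:
    """True only if the tool is listed, cleared for client material and does not train on prompts."""
    tool = AI_POLICY["approved_tools"].get(tool_name, {})
    return tool.get("client_data_permitted", False) and not tool.get("trains_on_prompts", True)

print(approved_for_client_work("ExampleLegalAI"))   # True
print(approved_for_client_work("ConsumerChatbot"))  # False
```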

How systems like OrdoLux can support your ethical framework

Ethics becomes easier to operationalise when:

  • matter work, documents and AI-assisted notes all live in one place;
  • you can see which prompts and outputs relate to which case; and
  • supervision can happen within the same system partners already use to review work.

OrdoLux is being designed so that:

  • AI activity is logged against matters;
  • prompts and outputs can be reviewed alongside time entries and documents; and
  • firms can plug in approved providers rather than having staff scatter data across multiple websites.

That won’t replace professional judgment, but it can make living up to your ethical duties simpler and more auditable.

This article is general information for practitioners — not legal advice.

Looking for legal case management software?

OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined-up system. Learn more about OrdoLux’s legal case management software.
