How to Run an AI Risk Assessment in Your Firm

A practical method for assessing AI risk in your firm: inventory your use cases, score impact and likelihood, select proportionate controls and record sign-offs.

Most firms now have some kind of AI in use – even if it is just individuals experimenting with tools on their own initiative. The question is no longer “Are we using AI?” but “How much risk are we carrying, and is that risk under control?”

An AI risk assessment sounds heavyweight, but it does not need to be. This article sets out a practical, repeatable method for UK law firms to:

  • take stock of AI use across the practice;
  • score risks in a consistent way; and
  • decide what controls and sign‑offs are appropriate.

Step 1: Build a simple AI inventory

You cannot assess what you do not know about. Start by asking:

  • What AI‑enabled tools are we using in the firm?
  • Who owns them internally?
  • What are they used for (eg, research, drafting, document review, HR, marketing)?
  • What data goes in and out?

Capture this in a basic spreadsheet or register with columns such as:

  • name of tool / system;
  • vendor and hosting location;
  • teams using it;
  • typical use cases;
  • data categories involved (client confidential, personal, special category, staff HR data, open web, etc.).

This inventory is the foundation for everything else.
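
If it helps to keep the register machine-readable from day one, a minimal sketch of one row is shown below. The field names, tool and vendor are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """One row of the AI inventory register (illustrative fields only)."""
    name: str                                                  # name of tool / system
    vendor: str                                                # vendor
    hosting_location: str                                      # where the service is hosted
    teams: list[str] = field(default_factory=list)             # teams using it
    use_cases: list[str] = field(default_factory=list)         # typical use cases
    data_categories: list[str] = field(default_factory=list)   # eg "client confidential", "open web"

# A single illustrative entry (the tool and vendor are hypothetical)
inventory = [
    RegisterEntry(
        name="General-purpose chatbot",
        vendor="ExampleVendor Ltd",
        hosting_location="EU data centre",
        teams=["Commercial property"],
        use_cases=["Summarising public planning documents"],
        data_categories=["Open web"],
    ),
]
```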

Step 2: Define your risk criteria

Next, agree how you will score risk. A straightforward approach is to score each use case on:

  • Impact – how serious the consequences would be if something went wrong (from minor inconvenience to regulatory investigation or serious client harm).
  • Likelihood – how plausible it is that the risk will materialise, given the tool and the way you use it.

For each dimension, define in plain language what each score from 1 to 5 means. For example, for impact:

  • Impact 1: trivial, little or no client impact
  • Impact 3: moderate, could cause complaint or financial loss to a client
  • Impact 5: severe, could trigger regulatory action or significant harm

The aim is not mathematical precision but consistent judgment across matters and partners.
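
If you want to combine the two scores into a single band for reporting, a minimal sketch is below. The multiplication and the thresholds are illustrative assumptions rather than a prescribed formula:

```python
def risk_band(impact: int, likelihood: int) -> str:
    """Turn 1-5 impact and likelihood scores into a simple band.

    The thresholds are illustrative; agree your own and apply them consistently.
    """
    score = impact * likelihood   # ranges from 1 (1 x 1) to 25 (5 x 5)
    if score <= 6:
        return "low"
    if score <= 14:
        return "medium"
    return "high"

# Example: moderate impact (3) and fairly likely (4) gives 12, ie "medium"
print(risk_band(3, 4))
```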

Step 3: Identify common AI risk themes

For each tool or use case, consider themes such as:

  • Confidentiality & GDPR – is client or staff data sent to third parties? Where is it stored? Are there clear data processing agreements (DPAs) in place?
  • Accuracy & hallucinations – could incorrect outputs meaningfully affect advice, pleadings or negotiations?
  • Duty to the court – does the tool assist with research or drafting of submissions? How are authorities checked?
  • Bias & fairness – does it influence decisions about individuals (recruitment, promotion, client intake)?
  • Operational resilience – what happens if the service fails or changes pricing abruptly?

Score each use case against these themes, then roll the theme scores up into your overall impact and likelihood view.
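
One way to do that roll-up (a sketch only, assuming a 1–5 impact and likelihood score per theme) is to take the worst score seen across the themes as the headline for the use case:

```python
# Theme-level scores for one use case, each on the 1-5 scale: (impact, likelihood).
# The theme names and numbers are illustrative.
theme_scores = {
    "confidentiality_gdpr": (4, 2),
    "accuracy_hallucinations": (3, 3),
    "duty_to_court": (2, 2),
    "bias_fairness": (1, 1),
    "operational_resilience": (2, 3),
}

# A deliberately cautious roll-up: take the worst impact and worst likelihood
# seen in any theme as the headline scores for the use case.
overall_impact = max(impact for impact, _ in theme_scores.values())
overall_likelihood = max(likelihood for _, likelihood in theme_scores.values())

print(overall_impact, overall_likelihood)  # 4 3
```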

Step 4: Choose proportionate controls

Once you have a risk score, you can choose controls with a lighter or heavier touch.

For lower‑risk internal uses (eg, summarising public documents):

  • restrict tools to approved providers with sensible defaults;
  • require basic training for users; and
  • log activity in the relevant matter or internal project.

For medium‑risk uses (eg, AI‑assisted drafting of client communications):

  • require human review and sign‑off;
  • insist on verification of any authorities cited; and
  • keep prompts and outputs attached to the matter file.

For higher‑risk uses (eg, tools that influence HR decisions or client onboarding):

  • carry out a structured data protection impact assessment (DPIA);
  • involve risk/compliance in tool selection; and
  • consider additional technical safeguards such as redaction, access controls and more detailed logging.

Document these controls in your AI policy and procedures, so the assessment is linked to concrete actions.
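
If you want the link between band and controls to be explicit rather than buried in a policy document, a small lookup table can sit alongside the scoring sketch above. The control wording here is illustrative and should mirror your own AI policy:

```python
# Illustrative mapping from risk band to baseline controls.
BASELINE_CONTROLS = {
    "low": [
        "approved providers only, with sensible defaults",
        "basic user training",
        "log activity against the matter or internal project",
    ],
    "medium": [
        "human review and sign-off",
        "verify any authorities cited",
        "keep prompts and outputs on the matter file",
    ],
    "high": [
        "structured DPIA",
        "risk/compliance involved in tool selection",
        "extra safeguards: redaction, access controls, detailed logging",
    ],
}

print(BASELINE_CONTROLS["medium"])
```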

Step 5: Record decisions and ownership

An AI risk assessment is only useful if:

  • decisions are recorded; and
  • someone is accountable for acting on them.

For each tool, capture in your register:

  • the overall risk score and key concerns;
  • controls adopted (eg, “research only”, “no special category data”, “partner review required for outputs to clients”);
  • any conditions for continued use (such as contractual changes or technical improvements); and
  • the internal owner (partner or manager) responsible.

This gives you an audit trail for regulators and clients, and avoids the common problem of "shadow AI", where no-one is clearly in charge.
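
Continuing the register sketch from Step 1, the decision record might simply be a second set of fields per tool. Everything below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class RiskDecision:
    """Assessment outcome for one tool (illustrative fields only)."""
    tool_name: str
    risk_band: str        # eg "medium"
    key_concerns: str
    controls: list[str]   # eg "research only", "no special category data"
    conditions: str       # eg contractual or technical changes required for continued use
    owner: str            # accountable partner or manager

decision = RiskDecision(
    tool_name="General-purpose chatbot",
    risk_band="medium",
    key_concerns="accuracy of cited authorities; vendor data retention",
    controls=["research only", "partner review before outputs go to clients"],
    conditions="confirm in the contract that prompts are not used for training",
    owner="Risk & Compliance partner",
)
```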

Step 6: Review regularly, not constantly

Technology and regulation are moving fast, but you do not need to reassess everything every month. A reasonable pattern is:

  • annual review of the AI inventory and risk scores;
  • ad‑hoc reassessment when:
    • new high‑risk use cases are proposed; or
    • vendors make major changes to models, pricing or terms.

Keep the process administratively light so that people actually use it.

Where OrdoLux fits

OrdoLux is being designed with the assumption that firms will want:

  • clear records of which matters and workflows use AI assistance;
  • the ability to restrict certain AI features to approved contexts; and
  • logs that show who did what, when, for audit and supervision.

That makes it easier to connect your AI risk assessment to real activity in live matters, rather than just a spreadsheet on a shared drive.

This article is general information for practitioners — not legal advice.

Looking for legal case management software?

OrdoLux is legal case management software for UK solicitors, designed to make matter management, documents, time recording and AI assistance feel like one joined‑up system. Learn more on the OrdoLux website.
