5 Things You Should Know Before Adopting AI in the UK | Orrick, Herrington & Sutcliffe LLP

AI tools are now a valuable part of the day-to-day toolkit for many teams, and the rewards can be great: they can summarise long documents, generate drafts, and organise data fast. However, putting confidential information into an AI tool without proper care or controls can create legal risks that outweigh those rewards.

What’s the risk and why it matters

When you paste information into an AI tool, you may be:

  • Uploading commercially sensitive data about your own business (e.g., roadmap, pricing, fundraising plans).
  • Sharing a client’s or partner’s confidential information.
  • Disclosing information that is expressly deemed “confidential” (e.g., under an NDA or other agreement).

If you have committed to keep something confidential, inputting it into a third-party AI tool may breach those commitments, especially if the tool provider uses inputs for “model training,” “service improvement,” or “analytics.” AI note takers on calls can raise the same issues if their terms allow recording retention, analysis, or onward use. Used without appropriate consideration, these tools can also create compliance risks from a data protection perspective.

Using AI tools – what should you do?

1. Check the tool’s enterprise terms before you use it

  • Be mindful of:
    • “we may use inputs/user content to train/develop/improve our models/tools/services”;
    • “we retain inputs for product improvement”; and
    • “we share data with affiliates or vendors for model development”.
  • Note the benefits of:
    • “no training on customer data by default”;
    • “enterprise isolation” and “data deletion timelines”;
    • “EU/UK data hosting” (if relevant); and
    • a Data Processing Addendum where personal data is involved.
  • Engage AI providers that offer “sandboxes” that are locked down to your data only.
  • Free tools are more likely to use your inputs broadly. Treat them as public unless the provider offers enterprise-grade controls.
  • If in doubt, reach out to the tool’s provider to ask what they do with inputs. They may be able to accommodate certain concerns.

2. Set a clear AI use policy for personnel

  • Define what can and cannot be put into AI tools (e.g., “no client confidential information, sensitive business information, or unpublished financials”).
  • Allow only approved tools and accounts and prohibit personal accounts for work use.
  • Provide safe workflows (e.g., where feasible, use redacted data or synthetic examples).
  • Train teams regularly and keep a simple FAQ and examples.

3. Manage third parties

  • Ensure consultants and subcontractors follow your policies.
  • Consider including wording in NDAs/consultancy agreements that prohibits inputting your (or your clients’/partners’) confidential information into AI tools without written consent, and requires use only of approved tools with no model training on your data.
  • Reserve your right to audit the compliance of the third parties you engage, where proportionate.

4. Build an audit trail

  • Keep records of approved tools, policy versions, training dates, and any DPIAs (data protection impact assessments) or vendor assessments.
  • For sensitive projects, log the datasets that are shared and the safeguards used (e.g., redaction, enterprise instance).

5. Fundraising, deals and diligence

  • Investors, buyers and procurement teams increasingly ask how businesses manage AI risk. Being able to show your policy, approved tool list, and vendor terms can accelerate diligence and avoid price-chip conversations about “data leakage risk.”

Summary checklist

  1. Don’t paste confidential or client data into non‑approved AI tools.
  2. Prefer enterprise plans that do not train on your inputs.
  3. Redact or anonymise wherever commercially viable.
  4. Train your team and extend requirements to consultants.
  5. Keep a paper trail.
