AI Safety and Control

Use AI at work without creating a data incident.

We train people to use Copilot safely in real workplaces. We start with your policies, your risks, and what “safe” means for your organisation. Then we teach practical habits staff can use every day.

Non-negotiables

  • No secrets in prompts. If you would not email it externally, do not paste it into AI.
  • Human accountability stays. AI can assist, but you approve and you own the outcome.
  • Verify before use. Treat AI output as a draft, not a fact.
  • Use approved tools. Use what your organisation has approved and configured.

How we keep it safe

  • Data classification first. Decide what type of data you’re handling before you ask anything.
  • Safer prompting. Get structure, options, and checklists without pasting sensitive content.
  • Decision controls. Learn where AI must stop and a human must decide.
  • Audit-friendly habits. Save assumptions, sources, and the final decision trail.

The Traffic Light Rule (RAG)

A quick decision filter to apply before every prompt.

RED — Never

Never put it in. Ever. Personal data, patient/client details, payroll, credentials, security details, confidential contracts, and anything that would trigger an incident if leaked.

AMBER — Only with care

Only if you sanitise it and you can verify the output. Remove identifiers. Reduce detail. Keep it need-to-know.
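As an illustration only, AMBER sanitisation can be sketched as a simple redaction pass. The patterns and placeholder labels below are hypothetical examples, not an approved or complete redactor; real sanitisation belongs in your organisation's tooling.

```python
import re

def sanitise(text: str) -> str:
    """Replace obvious identifiers with placeholders before prompting.

    Illustrative sketch only: two example patterns (emails and
    phone-like numbers). A real policy would cover far more.
    """
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[email]", text)     # email addresses
    text = re.sub(r"\b\+?\d[\d\s-]{7,}\d\b", "[phone]", text)      # phone-like numbers
    return text

print(sanitise("Email jo.bloggs@example.com or call 020 7946 0958"))
# -> Email [email] or call [phone]
```

The point is the habit, not the regexes: strip identifiers, reduce detail, and only then ask the question.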

GREEN — Safe

Public information, generic templates, structure, checklists, drafting support. Still check the output before you share or send.
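The three-colour filter can be sketched as a pre-prompt check. Everything here is a hypothetical example: the pattern lists are placeholders for whatever your organisation defines as RED and AMBER, not a working classifier.

```python
import re

# Hypothetical example patterns; a real deployment would use the
# organisation's own data-classification rules, not ad-hoc regexes.
RED_PATTERNS = [
    r"(?i)\bpassword\b",          # credentials
    r"(?i)\bapi[_ ]?key\b",       # credentials
    r"(?i)\bpayroll\b",           # payroll data
]
AMBER_PATTERNS = [
    r"(?i)\bpatient\b",           # client/patient context
    r"(?i)\bcontract\b",          # contract context
]

def traffic_light(prompt: str) -> str:
    """Return 'RED', 'AMBER', or 'GREEN' for a draft prompt."""
    if any(re.search(p, prompt) for p in RED_PATTERNS):
        return "RED"      # never send
    if any(re.search(p, prompt) for p in AMBER_PATTERNS):
        return "AMBER"    # sanitise first, verify the output
    return "GREEN"        # still check the output before sharing

print(traffic_light("Draft a checklist for onboarding"))      # GREEN
print(traffic_light("Summarise this patient record"))         # AMBER
print(traffic_light("My api key is abc123, fix this error"))  # RED
```

Note the ordering: RED is checked first, so a prompt that is both sensitive and contextual still stops at the strictest gate.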

The Sat-Nav Rule

If your Sat-Nav tells you to drive into a river, and you do it, you drove into the river. Copilot is the same. It suggests. You decide.

Copilot drafts: text, summaries, ideas, steps.
You verify: facts, numbers, names, permissions, tone.
You own it: if you send it, it’s your work.

Why Microsoft Copilot

  • Most organisations already run on Microsoft 365.
  • Copilot sits inside the controls IT already uses.
  • If a user can’t access a file today, Copilot shouldn’t show it to them tomorrow.

What we do not do

  • We do not provide legal advice or replace your governance team.
  • We do not take control of your systems or security configuration.
  • We do not ask for your confidential data to “make the demo better”.
  • We do not promise unrealistic results.