Cop-pilot Crashed and Burned on Copilot’s Hallucinations, and How to Avoid It Happening to You
Last week, West Midlands Police got itself into a mess after admitting that a piece of “intelligence” used in planning for an Aston Villa match contained something that simply was not true: something the AI had made up. A report referenced a West Ham vs Maccabi Tel Aviv match that never happened; the two sides had never played each other, yet Copilot said they had. The Chief Constable, after initially denying it, later admitted the error came from the force's use of Microsoft Copilot, not from a human source, and that the claim was used without proper checking. A rookie error worthy of the Keystone Cops.
The story has had real consequences. After sustained criticism of the force's handling of the decision, West Midlands Police Chief Constable Craig Guildford has now stepped down with immediate effect. It is a reminder that AI mistakes are not “tech problems”; they are human ones. They become leadership problems the moment they influence real decisions.
That single mistake became part of a much bigger controversy over the decision to ban Maccabi Tel Aviv fans from attending the Europa League fixture at Villa Park on 6 November 2025.
So let's explore this a bit further and see what went wrong and what you can do to prevent this from happening to you.
What is an AI hallucination anyway?
A hallucination is when an AI system produces information that looks confident and plausible, but is wrong or made up.
Not “a typo”. Not “a slightly dated fact”. Fully invented details that fit the pattern of what you asked for. AI can do this at any time; with ordinary prompting, it happens roughly 15% of the time.
That is exactly what happened here: Copilot produced a credible-sounding reference (a football match, in this case) that did not exist, and it was allowed to travel into a real-world decision without being verified, with real human consequences. Imagine making a decision based on dodgy information: how confident would you be?
Why hallucinations happen
Most AI tools are not databases. They are pattern machines.
They generate the most likely next words based on training data and context, and if you ask them something where:
- The answer is unclear
- The sources are mixed
- The tool cannot access the right documents
- The prompt encourages it to “fill in the gaps”
…they will sometimes guess.
So if you wrote “Mary had a little…”, what would the next word be? You might say “lamb”, but “sleep”, “brother” or “drink” would fit the sentence just as well. That is what AI does: it guesses the most likely next word.
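The “Mary had a little…” guess can be sketched in a few lines of Python. Everything here is illustrative: the probabilities are invented, and real models rank tens of thousands of possible tokens, not four words.

```python
# Toy sketch of next-word prediction: the model picks the most
# probable continuation, even when several words would fit.
# These probabilities are invented for illustration only.
NEXT_WORD_PROBS = {
    ("mary", "had", "a", "little"): {
        "lamb": 0.72,     # the famous rhyme dominates the training data
        "sleep": 0.11,
        "brother": 0.09,
        "drink": 0.08,
    },
}

def predict_next_word(context: tuple) -> str:
    """Return the most likely next word for a known context."""
    probs = NEXT_WORD_PROBS.get(context, {})
    if not probs:
        # No data for this context: a real model still produces
        # *something* plausible, which is where hallucination risk lives.
        return "<guess>"
    return max(probs, key=probs.get)

print(predict_next_word(("mary", "had", "a", "little")))  # lamb
```

“Sleep”, “brother” and “drink” all fit grammatically, but the model outputs whichever continuation its training data makes most likely, with no concept of whether it is true.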
That is not a moral failing. It is a design trade-off, and it can be useful.
Why hallucinations can still be useful
Here is the part most people miss: hallucinations are a side effect of the same capability that makes AI valuable.
If a model could only repeat what it is 100% certain about, it would be a dull search box.
The value of modern AI is that it can:
- draft, summarise, structure and reframe fast
- generate options when you are stuck
- surface angles and questions you might not have considered
- help you think, not just look things up
The trick is simple: use it for thinking and drafting, not as your source of truth.
How to prevent hallucinations in plain English
Here are the practical steps that matter, without the tech fluff.
1) Make the AI check itself
Use prompts like:
- “List the claims you are making that would need verification.”
- “What are you uncertain about?”
- “What would change your answer?”
- “Give me three alternative explanations.”
This does not guarantee truth, but it forces the model out of autopilot and makes risk visible.
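If your team reaches the AI through an API rather than a chat window, the same habit can be baked into a reusable follow-up prompt. This is a sketch, not a product: the function and constant names are ours, and the questions simply mirror the list above.

```python
# A minimal sketch of a "check yourself" follow-up prompt.
# The questions mirror the list in the article; names are illustrative.
VERIFICATION_QUESTIONS = [
    "List the claims you are making that would need verification.",
    "What are you uncertain about?",
    "What would change your answer?",
    "Give me three alternative explanations.",
]

def build_self_check_prompt(draft_answer: str) -> str:
    """Wrap a draft AI answer in a follow-up prompt that forces the
    model to surface its claims, uncertainty and alternatives."""
    questions = "\n".join(f"- {q}" for q in VERIFICATION_QUESTIONS)
    return (
        "Here is your previous answer:\n"
        f"{draft_answer}\n\n"
        "Before I use it, answer the following:\n"
        f"{questions}"
    )

print(build_self_check_prompt("West Ham played Maccabi Tel Aviv in 2019."))
```

Sending the draft back through a prompt like this costs one extra call and makes the model's uncertainty visible before anyone acts on the answer.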
2) Ask for sources and treat missing citations as a warning
If the tool cannot provide links or named sources, assume it might be guessing.
Even when it does provide citations, spot-check them. (A link is not the same as evidence.)
This is the easiest behaviour change for busy leaders: no sources, no trust.
3) Use RAG so the model is not guessing
RAG (retrieval-augmented generation) simply means: give the AI the documents you want it to rely on.
Instead of asking “what is our policy?”, you feed it the policy. Instead of “what did the contract say?”, you provide the contract.
You are not making the model smarter. You are making it grounded.
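The shape of RAG fits in a few lines. Real systems retrieve with embeddings and a vector store; this sketch uses simple keyword overlap and made-up policy snippets, purely to show the two steps: retrieve a source, then instruct the model to answer only from it.

```python
# A minimal sketch of the RAG idea. The documents and wording are
# invented examples; production systems use embedding-based retrieval.
DOCUMENTS = {
    "leave-policy": "Staff may carry over five days of annual leave.",
    "expenses-policy": "Claims must be submitted within 30 days.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(
        DOCUMENTS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
    )

def grounded_prompt(question: str) -> str:
    """Build a prompt that tells the model to rely only on the source."""
    return (
        f"Answer using ONLY this source:\n{retrieve(question)}\n\n"
        f"Question: {question}\n"
        "If the source does not contain the answer, say so."
    )

print(grounded_prompt("How many days of annual leave can staff carry over?"))
```

The last line of the prompt matters as much as the retrieval: giving the model explicit permission to say “the source does not contain the answer” is what stops it filling the gap with a guess.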
4) Put humans back in the loop for anything high stakes
If the output could affect:
- safety
- reputation
- legal position
- finance
- hiring and people issues
…then you need a human sign-off and a simple verification step.
West Midlands Police did not get in trouble because they used AI. They got in trouble because a made-up claim was allowed to behave like intelligence.
5) Add a lightweight process (this is what most teams are missing)
A practical minimum:
- What did we ask?
- What did it say?
- What did we check?
- What did we change?
- Who approved it?
That is governance without bureaucracy.
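Those five questions can literally be your data model. A minimal sketch in Python, with an invented class name and example values drawn from the story above:

```python
# The five-question audit record as a dataclass. Names and example
# values are illustrative; store the records wherever suits your team.
from dataclasses import dataclass, asdict

@dataclass
class AIUsageRecord:
    asked: str        # What did we ask?
    answer: str       # What did it say?
    checked: str      # What did we check?
    changed: str      # What did we change?
    approved_by: str  # Who approved it?

record = AIUsageRecord(
    asked="Summarise the fixture intelligence report",
    answer="Referenced a West Ham vs Maccabi Tel Aviv match",
    checked="Fixture history: no such match exists",
    changed="Removed the invented fixture from the briefing",
    approved_by="Duty officer",
)
print(asdict(record))
```

A shared spreadsheet with these five columns does the same job; the point is that every AI-assisted decision leaves a trail someone signed off.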
The takeaway for SMEs
Do not panic about hallucinations. Panic is a waste of time.
Do two things instead:
- Decide what AI is allowed to do in your business (draft, summarise, brainstorm, analyse)
- Decide what AI is never allowed to do without checking (facts, claims, policy, compliance, anything high impact)
If you do that, hallucinations become manageable. They stop being scary and start being just another risk to control, like any tool.
If you want to use AI safely without slowing down
This is exactly the gap we see in SMEs.
Leaders buy Copilot or ChatGPT, people start using it, and then everyone quietly hopes it will be fine. No training, no rules, no verification habits, no RAG, no process, and the errors follow.
If you want to fix that properly, our training focuses on:
- How to prompt for accuracy (and spot uncertainty)
- How to build “check itself” habits into everyday use
- When and how to use RAG so answers are grounded in your documents
- What lightweight governance looks like for a small team
Because the goal is not “more AI”.
The goal is better decisions, with fewer avoidable mistakes.
If this story made you think “we need something comprehensive to stop this happening here”, our ConkerAI training courses are designed for exactly that.
Leave your comments below