Why most prompt training doesn't stick
Walk into a Copilot training session in 2026 and there's a fair chance you'll be shown a prompt template with eight fields — persona, tone, audience, format, length, examples, constraints, output — and told this is the magic incantation. People nod. People take photos of the slide. Two weeks later nobody is using it, because nobody on a Tuesday morning is going to fill in eight fields to write an email.
Microsoft's own guidance, sitting on its public Support pages and in the FY26 "Building blocks of a good Copilot prompt" PDF, is much shorter, and far more durable. It's four parts: Goal, Context, Source, Expectations.
The four parts
Goal. What you actually want. Draft a follow-up email. Summarise this report. Write a status update for my skip-level. The only mandatory part. Microsoft's own framing is blunt: "all that's required is a clear goal."
Context. Why you want it. I'm trying to chase the supplier without sounding annoyed. Audience is the senior leadership team and the tone needs to be confident but not over-promising. Context is what stops Copilot defaulting to the bland average of every email ever written.
Source. What it should reason over. The thread you just had. The five-page brief on your screen. The SharePoint site for that programme. Microsoft is explicit that order matters — later parts of a prompt are weighted more heavily, so sources should usually go last.
Expectations. What done looks like. Three short paragraphs, British English, ending with two clear questions. A bullet list, no more than seven items, no fluff. Expectations are how you stop Copilot writing the same eight-bullet structure for everything.
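The four parts can be sketched as a tiny prompt assembler. This is a minimal illustration, not any Copilot API: the function name and keyword arguments are invented here, but the ordering follows the guidance above — Goal is the only mandatory part, and the source goes last because later parts of a prompt carry more weight.

```python
def build_prompt(goal, context=None, expectations=None, source=None):
    """Assemble a four-part prompt (illustrative helper, not a real API).

    Goal is mandatory; Context, Expectations and Source are additive.
    Source is appended last, since later parts of a prompt are weighted
    more heavily and that's where the material to reason over belongs.
    """
    parts = [f"Goal: {goal}"]
    if context:
        parts.append(f"Context: {context}")
    if expectations:
        parts.append(f"Expectations: {expectations}")
    if source:
        parts.append(f"Source: {source}")
    return "\n".join(parts)


# Goal alone is a valid prompt; the other parts tighten it.
print(build_prompt(
    "Draft a follow-up email to the supplier.",
    context="I'm chasing them without wanting to sound annoyed.",
    expectations="Three short paragraphs, British English.",
    source="Use the thread below.",
))
```

Because every argument after the first is optional, the helper mirrors the additive use described later: start with Goal, add parts only when the output needs tightening.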
A worked example
Here's a prompt with only Goal:
Write a status update for my skip-level.
You'll get something. It'll be generic.
Now the four-part version:
Goal: Write a status update for my skip-level for my Q2 work on the SharePoint migration.
Context: Audience is a director who's busy, doesn't have the detail, and tends to react well to confidence and clear next steps. We are slightly behind on the schedule but not on the outcomes.
Expectations: Three short paragraphs. British English. End with the one decision I need from her.
Source: Use this week's project status doc and my last two updates to her.
Same product. Wildly different output. Nothing in the four-part version requires creativity — it requires you to slow down for fifteen seconds.
Why this version stays usable
There are two practical reasons the four-part frame survives contact with real users.
First, it's additive. You don't have to write all four. You can write Goal alone, then add Context if the result isn't quite right, then add Expectations to tighten the output. Most working prompts I see in trained orgs are two parts on the first pass and four on the second — that's exactly how Microsoft says to use it.
Second, it maps onto how people already talk. "Here's what I want, here's why, here's what I'm working from, here's what good looks like" is a fairly natural request. Eight-field templates aren't.
Two small habits worth teaching
Two extra patterns, both Microsoft-recommended, raise the hit rate further.
Order matters. Put your sources at the end of the prompt. Copilot weights later content more heavily, so a brief at the bottom of the prompt influences the output more than the same brief at the top.
Iterate, don't restart. If the first response is 70% there, don't rewrite the prompt; reply to it. Make it shorter and more direct. Cut the third paragraph and add a bullet list of risks. Most users keep starting fresh, a habit imported from search engines that does them no favours here.
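The two habits produce differently shaped conversations, which is worth making concrete in training. A small sketch — the message format below is illustrative, not Copilot's actual API — of restarting versus iterating:

```python
# Restarting: each attempt is a fresh prompt; nothing carries over,
# so every retry has to restate everything.
restart = [
    {"role": "user", "content": "Write a status update for my skip-level."},
    {"role": "user", "content": "Write a short status update for my "
                                "skip-level, three paragraphs, with risks."},
]

# Iterating: reply to the draft, so the earlier goal, context and
# draft all stay in play and the follow-up can be short.
iterate = [
    {"role": "user", "content": "Write a status update for my skip-level."},
    {"role": "assistant", "content": "(first draft, roughly 70% there)"},
    {"role": "user", "content": "Make it shorter and more direct. Cut the "
                                "third paragraph and add a bullet list of risks."},
]
```

The point of the sketch is the shape: in the iterating conversation the second user turn is a brief correction, because the assistant's draft is part of the context it reacts to.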
The training implication
If you've got a 60-minute prompt training to run, you can spend 50 minutes on Goal-Context-Source-Expectations and the last 10 on iterate-don't-restart, and you'll have outperformed 90% of the prompt training out there. The trick is restraint. The four parts are enough.