How Prompt Engineering Actually Works: The 5 Variables That Separate Good Prompts from Great Ones

PromptLab Guide

Most prompt engineering advice focuses on tricks: use 'act as', add 'step by step', try chain-of-thought. These help. But they're tactics built on top of something more fundamental — and if you understand the fundamentals, you can write good prompts for any situation, including ones no tutorial has covered.

Here are the five variables that separate prompts that work from prompts that waste tokens.

Variable 1: Role and Register

Every language model has been trained on a massive range of writing styles. When you don't specify a role, the model defaults to something generic — usually a helpful assistant summarizing things in a neutral tone. That's rarely what you want.

Role doesn't just mean 'act as a lawyer.' It means: who is writing this, for whom, and in what context? 'Write a Slack message from an engineering manager to a junior developer who missed a deadline' is a role prompt. So is 'you're a CFO explaining budget variance to a board of directors.'

Register — the formality level — matters as much as role. Technical writing, executive communication, casual social copy, and legal memos all live in different registers. Name the register and the output quality jumps immediately.
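Role and register can be made explicit in a reusable prompt template. A minimal sketch in Python (the function and field names are illustrative, not from any particular SDK):

```python
def role_prompt(writer, audience, situation, register, task):
    """Assemble a prompt that names who is writing, to whom,
    in what situation, and at what formality level."""
    return (
        f"You are {writer} writing to {audience}. "
        f"Situation: {situation} "
        f"Register: {register}. "
        f"Task: {task}"
    )

prompt = role_prompt(
    writer="an engineering manager",
    audience="a junior developer who missed a deadline",
    situation="the slip pushed this week's release by two days.",
    register="direct but supportive, Slack-casual",
    task="write a short Slack message addressing the missed deadline",
)
```

Filling these slots forces you to answer the who/for-whom/in-what-context questions before the model ever sees the prompt.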

Variable 2: Constraints

Constraints are the most underused tool in prompting. Word limits, format restrictions, things to exclude, structural requirements — all of these make outputs dramatically better.

The mechanism is simple: constraints eliminate bad options. 'Write a product description' has an effectively unlimited space of valid outputs. 'Write a product description under 60 words, starting with a pain point, for a B2B buyer, using no adjectives like powerful or seamless' has far fewer — and the ones that remain are better.

A useful heuristic: if your first prompt output is generic, add one constraint. If the second output is still generic, add another. You're narrowing the solution space each time.
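That heuristic can be sketched as a simple constraint-stacking function (the example constraints mirror the product-description prompt above; everything here is illustrative):

```python
BASE = "Write a product description."
CONSTRAINTS = [
    "Keep it under 60 words.",
    "Open with the buyer's pain point.",
    "Write for a B2B buyer.",
    "Do not use filler adjectives like 'powerful' or 'seamless'.",
]

def tighten(prompt, constraints, rounds):
    """Append one constraint per generic round, shrinking the solution space."""
    return " ".join([prompt] + list(constraints[:rounds]))

# Round one felt generic, so add a constraint; round two still generic, add another.
v2 = tighten(BASE, CONSTRAINTS, rounds=1)
v3 = tighten(BASE, CONSTRAINTS, rounds=2)
```

Each round keeps the original request intact and narrows only what counts as an acceptable answer.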

Variable 3: Context and Examples

AI models don't know your product, your tone of voice, your competitive situation, or your audience. You do. The gap between a generic output and a useful one is almost always a context gap.

Few-shot examples are the most reliable form of context. Instead of describing what you want, show it: 'Here's an example of a subject line we like: [X]. Write 10 more in this style.' The model calibrates to the example rather than trying to interpret your description.

When you can't provide examples, provide detailed situation context: the reader's role, what they already know, what decision they're trying to make, what would make them dismiss the output. The more of their mental model you give the AI, the better it can write for them.
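A few-shot prompt is mechanical to build once you have liked examples on hand. A sketch (the subject lines are made up for illustration):

```python
def few_shot_prompt(liked_examples, n):
    """Show the model examples of the target style instead of describing it."""
    shots = "\n".join(f"- {ex}" for ex in liked_examples)
    return (
        "Here are subject lines we like:\n"
        f"{shots}\n"
        f"Write {n} more subject lines in the same style."
    )

prompt = few_shot_prompt(
    [
        "Your Q3 numbers are hiding a churn problem",
        "The onboarding email nobody opens (and how to fix it)",
    ],
    n=10,
)
```

Two or three strong examples usually calibrate tone and structure better than a paragraph of adjectives describing them.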

Variable 4: Output Format

This is the most straightforward variable and the most commonly skipped. If you don't specify format, you get prose paragraphs. If your output needs to go into a spreadsheet, a slide, a table, a Jira ticket, or an email — say so.

Format also includes length, structure, and section headings. 'Use a three-column table with rows for each competitor' produces something categorically different from 'compare these competitors.' Same underlying request — radically different output.
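An explicit format instruction is easy to generate and append to any prompt. A sketch of a small helper (names and fields are illustrative):

```python
def format_spec(layout, columns=None, row_per=None, max_length=None):
    """Build an explicit output-format instruction to append to a prompt."""
    parts = [f"Format the output as {layout}."]
    if columns:
        parts.append("Columns: " + ", ".join(columns) + ".")
    if row_per:
        parts.append(f"One row per {row_per}.")
    if max_length:
        parts.append(f"Maximum length: {max_length}.")
    return " ".join(parts)

spec = format_spec(
    layout="a three-column table",
    columns=["Competitor", "Pricing model", "Key differentiator"],
    row_per="competitor",
)
```

Appending `spec` to a bare "compare these competitors" request is exactly the difference the paragraph above describes.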

Variable 5: Iteration Strategy

The best prompt engineers treat AI like a junior collaborator, not a vending machine. The first output is a draft. They critique it, ask for alternatives, request specific changes, and iterate. The final output is rarely version one.

Useful iteration patterns: 'Give me three alternatives to paragraph two.' 'Rewrite this with more confidence — it sounds tentative.' 'The second option is closest — adjust it by [specific change].' These are faster than re-prompting from scratch because you're converging on what you want rather than starting over.
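In a chat-style API, these iteration patterns are just follow-up turns in the same conversation rather than fresh prompts. A sketch using the common role/content message shape (vendor-neutral; the drafts are placeholders):

```python
history = [
    {"role": "user",
     "content": "Draft a 100-word launch announcement for our reporting feature."},
    {"role": "assistant", "content": "(draft v1 from the model)"},
    # Critique the draft instead of re-prompting from scratch:
    {"role": "user",
     "content": "Give me three alternatives to the opening sentence."},
    {"role": "assistant", "content": "(three alternative openings)"},
    # Converge on the best candidate with a specific change:
    {"role": "user",
     "content": "The second option is closest. Rewrite the whole draft around it "
                "with more confidence; v1 sounds tentative."},
]
```

Because each turn carries the prior drafts as context, the model converges on what you want instead of starting over.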

Putting It Together

A strong prompt addresses all five: who is speaking and to whom (role/register), what's excluded or required (constraints), what context the model needs (situation + examples), how the output should be formatted, and what you'd want in round two if round one falls short (iteration plan).
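That checklist can be applied mechanically before you hit send. A toy audit sketch (the variable names and example values are illustrative):

```python
FIVE_VARIABLES = (
    "role/register",
    "constraints",
    "context/examples",
    "format",
    "iteration plan",
)

def missing_variables(prompt_parts):
    """Return the variables a prompt draft has not addressed."""
    return [v for v in FIVE_VARIABLES if not prompt_parts.get(v)]

gaps = missing_variables({
    "role/register": "CFO explaining budget variance to the board",
    "constraints": "under 200 words, no jargon",
    "format": "three bullets plus a one-line summary",
})
```

Here `gaps` flags the missing context/examples and iteration plan, which matches the most common failure mode described below.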

Most weak prompts are missing two or three of these. That's not a model failure — it's an input failure. The model gave you exactly what you asked for; you just didn't ask for enough.

Want to score your current prompts? The grade tool at http://143.198.136.81:8802 runs your prompt through an A–F rubric across these five variables and tells you exactly what's missing.
