If you tried GPT-5 and felt “it’s different,” you’re right. GPT-5 is a big shift in how we prompt and what we can expect back. It’s more steerable, better at tools and coding, and it gives you new controls to shape answers. That also means vague prompts get you vague outcomes. This guide is the simple version of what changed, when to use each model, and how to talk to GPT-5 so you get great results. I also added copy-paste starters, “inception prompting,” and how to set up Custom GPTs and Projects to keep long-running work tidy.
What actually changed with GPT-5
- More steerable: GPT-5 pays closer attention to tone, constraints, and tool use. That’s great when you’re precise, not so great when your prompt has contradictions. The official guide calls this out directly. OpenAI Cookbook
- New controls you can feel: `verbosity` lets you choose short, balanced, or long answers without rewriting your prompt, and `reasoning_effort` adds a minimal mode for lower latency when you do not need deep thinking. There's a short code sketch after this list. OpenAI Cookbook · OpenAI
- Better agentic workflows: GPT-5 works nicely with the Responses API and long chains of tool calls. If you build automations or “plan then act” flows, this matters. OpenAI Cookbook
- Unified system and routing: OpenAI describes GPT-5 as a system that can route between a fast model and a deeper “GPT-5 Thinking” mode based on the task and your intent. So “think hard about this” really matters now. OpenAI
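Here is a minimal sketch of both controls through the Responses API, assuming the `openai` Python SDK, an `OPENAI_API_KEY` in your environment, and the parameter names from the GPT-5 prompting guide:

```python
from openai import OpenAI

client = OpenAI()

# Quick, well-specified transform: shallow reasoning, short answer.
resp = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "minimal"},   # skip deep thinking for lower latency
    text={"verbosity": "low"},         # keep the answer short without rewriting the prompt
    input="Convert this date to ISO 8601: March 4th, 2025",
)
print(resp.output_text)
```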
If you only read one thing after this post, read the GPT-5 prompting guide and try the Prompt Optimizer. They’re the fastest path to better outputs. Guide · Optimizer. OpenAI Cookbook
Which GPT-5 model should you use?
Think of this as “use the big one for messy problems, use the smaller ones when you already know exactly what you want.”
| Model | Best for | When to avoid | Notes |
|---|---|---|---|
| GPT-5 | Complex tasks, coding, multi-step tool use, long context, fuzzy specs | Simple transforms or bulk summarization where latency and cost matter most | Most steerable; supports verbosity and minimal reasoning. |
| GPT-5 Mini | Well-defined tasks with clear inputs and outputs, structured extraction, deterministic pipelines | Open-ended research or ambiguous requests | Faster and cheaper than GPT-5; shines with precise prompts. OpenAI Platform |
| GPT-5 Nano | Ultra low latency and cost, small on-device or microservice-style tasks, lightweight classification | Anything that needs broad world knowledge, nuanced reasoning, or long context | Fastest, most cost-efficient version. Great when constraints are tight. OpenAI Platform |
OpenAI’s model pages and the compare view back this up if you want to dig into specifics. Models · Compare · Reasoning models. OpenAI Platform
How to talk to GPT-5 so it behaves
Here is the prompting pattern that consistently works for me and my readers:
- State the role and goal in one line.
- Tell it the output format. Prefer JSON or a small template.
- Give a short process: plan first, then act, ask 1 question if needed.
- Set quality gates: brevity, constraints, refusal policy.
- Pin verbosity: short for ops, long for teaching or audits.
The official guide shows why this matters and how too much or conflicting instruction can make GPT-5 waste cycles. OpenAI Cookbook
Copy-paste starters
A. General assistant, short answers
System
You are a helpful, concise assistant. Use short sentences. No fluff.
Developer
Output rules:
- If the user asks for steps, return a numbered list.
- If facts are uncertain, say so and suggest one clarifying question.
- Keep responses under 120 words unless the user asks for more.
- Follow the user's language and tone.
User
{Your task here}
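If you want to run starter A through the API instead of ChatGPT, one simple mapping is to put the System and Developer blocks into `instructions` and the User message into `input`. A sketch, not the only way to wire the roles:

```python
from openai import OpenAI

client = OpenAI()

SYSTEM = "You are a helpful, concise assistant. Use short sentences. No fluff."
DEVELOPER = """Output rules:
- If the user asks for steps, return a numbered list.
- If facts are uncertain, say so and suggest one clarifying question.
- Keep responses under 120 words unless the user asks for more.
- Follow the user's language and tone."""

resp = client.responses.create(
    model="gpt-5",
    instructions=SYSTEM + "\n\n" + DEVELOPER,  # system role + developer rules in one place
    input="How do I roll back a bad deploy?",  # the {Your task here} slot
    text={"verbosity": "low"},
)
print(resp.output_text)
```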
B. Coding with plan-first and tool use
System
You are a senior engineer. Write clear, working code with comments.
Prefer simple, readable solutions.
Developer
Process:
1) Plan your approach in a short bullet list.
2) Implement step by step. If you must choose, prioritize correctness over cleverness.
3) Return a single, self-contained snippet per file with brief comments at the top.
4) If a requirement is ambiguous, state the assumption and proceed.
Quality gates:
- No pseudo-code.
- Include minimal tests or usage examples.
- If unsafe or impossible, refuse briefly and explain why.
User
Build {feature}. Inputs: {inputs}. Target stack: {stack}. Constraints: {limits}.
C. Structured output for pipelines
System
You are a data formatter. Always return valid JSON only.
Developer
JSON schema:
{
"summary": "string",
"bullets": ["string"],
"risks": ["string"],
"next_steps": ["string"]
}
Rules:
- No extra text. If missing info prevents completion, return an empty string or empty array for that field.
- Keep bullets short and factual.
User
Summarize the following and fill the schema: {paste text}
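When starter C feeds a pipeline, validate the JSON before anything downstream consumes it. A sketch using plain `json` parsing and the four fields from the schema above; the retry policy is up to you:

```python
import json

from openai import OpenAI

client = OpenAI()

INSTRUCTIONS = """You are a data formatter. Always return valid JSON only.
JSON schema:
{"summary": "string", "bullets": ["string"], "risks": ["string"], "next_steps": ["string"]}
Rules:
- No extra text. Keep bullets short and factual."""

EXPECTED_KEYS = {"summary", "bullets", "risks", "next_steps"}

def summarize_to_json(text: str) -> dict:
    resp = client.responses.create(
        model="gpt-5-mini",  # a well-specified transform suits the smaller model
        instructions=INSTRUCTIONS,
        input=f"Summarize the following and fill the schema: {text}",
    )
    data = json.loads(resp.output_text)      # raises if the output is not valid JSON
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        raise ValueError(f"Missing fields: {missing}")
    return data
```

The API also supports structured outputs against a JSON schema if you would rather constrain the model at generation time than check after the fact.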
D. Research with “be explicit” guardrails
System
You are a careful researcher.
Developer
Process:
1) List what you already know vs what you need.
2) Ask one precise clarifying question if needed, then continue.
3) Provide a short answer and a sources list with titles.
Quality gates:
- If evidence is weak or outdated, say so plainly.
User
{research question}
E. Flip the verbosity switch (API)
If you are using the API, set `verbosity` to low, medium, or high instead of rewriting prompts to make answers longer or shorter. It is cleaner and more consistent. OpenAI Cookbook
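To feel the difference, run the same prompt once per level and compare lengths. A quick sketch, assuming the same SDK setup as above:

```python
from openai import OpenAI

client = OpenAI()

prompt = "Explain what a webhook is."
for level in ("low", "medium", "high"):
    resp = client.responses.create(
        model="gpt-5",
        text={"verbosity": level},  # same prompt, different length target
        input=prompt,
    )
    print(f"{level}: {len(resp.output_text.split())} words")
```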
“Inception prompting”: ask GPT-5 how to talk to GPT-5
This is not a gimmick. OpenAI’s own guide recommends metaprompting GPT-5 to improve prompts. Try this template on your toughest prompt and compare before vs after. OpenAI Cookbook
You are GPT-5 evaluating a prompt that underperforms.
Goal:
- Suggest minimal edits that improve instruction-following and reduce ambiguity.
- Identify contradictions or missing constraints.
- Propose a 3-line “output contract” the user can paste on top.
Return JSON:
{
"issues": ["string"],
"edits": ["string"],
"revised_prompt": "string",
"output_contract": "string"
}
Here is the prompt to improve:
[PROMPT]
You can also use OpenAI’s Prompt Optimizer to lint your prompt, then run a small eval to confirm improvements. Optimizer · Prompting basics · Prompt generation. OpenAI Cookbook
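You can also script the template above: splice your underperforming prompt in, send it to GPT-5, and parse the JSON report. A sketch; the example "bad prompt" is made up:

```python
import json

from openai import OpenAI

client = OpenAI()

META_TEMPLATE = """You are GPT-5 evaluating a prompt that underperforms.
Goal:
- Suggest minimal edits that improve instruction-following and reduce ambiguity.
- Identify contradictions or missing constraints.
- Propose a 3-line "output contract" the user can paste on top.
Return JSON:
{{"issues": ["string"], "edits": ["string"], "revised_prompt": "string", "output_contract": "string"}}
Here is the prompt to improve:
{prompt}"""

def improve_prompt(bad_prompt: str) -> dict:
    resp = client.responses.create(
        model="gpt-5",
        input=META_TEMPLATE.format(prompt=bad_prompt),
    )
    return json.loads(resp.output_text)  # issues, edits, revised_prompt, output_contract

report = improve_prompt("Summarize this doc but also include everything and keep it short.")
print(report["revised_prompt"])
```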
When to pick Mini or Nano instead of GPT-5
If your task is well-defined, Mini or Nano can be the better call. Examples:
- Mini: Convert batches of emails to JSON tickets, extract product specs, summarize meeting notes into action items, rewrite text to a strict style guide. OpenAI Platform
- Nano: Low-latency labelers, quick classifiers, “does this match the policy” checks, tiny agents inside a larger pipeline. OpenAI Platform
Pro tip: keep one prompt per model variation and version them. Small models love precision. Big models tolerate ambiguity a bit more, but GPT-5 will still follow the sharpest instruction you give it. Models overview. OpenAI Platform
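A lightweight way to do that is a small versioned registry keyed by model name, so the prompt version travels with the model choice. A sketch with made-up prompt text; the model names are OpenAI's published variants:

```python
from openai import OpenAI

client = OpenAI()

# Versioned prompts: smaller models get tighter, more literal instructions.
PROMPTS = {
    "gpt-5":      {"v": 3, "text": "You are a senior analyst. Plan first, then answer."},
    "gpt-5-mini": {"v": 2, "text": "Extract action items as a JSON array of strings. No prose."},
    "gpt-5-nano": {"v": 1, "text": "Reply with exactly one word: 'match' or 'no_match'."},
}

def run(model: str, user_input: str) -> str:
    entry = PROMPTS[model]
    resp = client.responses.create(
        model=model,
        instructions=entry["text"],
        input=user_input,
    )
    return resp.output_text
```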
Custom GPTs and Projects: keep work organized and consistent
Two ways to keep a common thread across sessions:
- Custom GPTs – your own specialized assistant with instructions, optional files, and actions. Great for “specialist” roles like “Product Teardown Analyst” or “Support Macro Writer”. How-to: Creating a GPT, Instruction tips, Actions. OpenAI Help Center
- Projects in ChatGPT – group chats and files under a project so GPT-5 can reference prior context and keep answers focused on that work. If Memory is on, it carries across your projects. Projects · Memory FAQ. OpenAI Help Center
Privacy note: you control Memory and data usage. You can disable or manage what is remembered in Settings and turn off “Improve the model for everyone” if you do not want chats used for training. Memory controls · Data controls FAQ. OpenAI · OpenAI Help Center
A clean Custom GPT instruction block you can paste
Name: Product Discovery Pro
Persona and scope:
- You analyze products with a pragmatic, human tone.
- Audience: non-technical readers who want clear recommendations.
What to do:
- Ask 1 clarifying question if specs are unclear, then proceed.
- Compare options in a small table. End with 3 actionable next steps.
Constraints:
- No hype. Cite sources with titles.
- Prefer availability and total cost of ownership over specs.
Quality:
- Keep sections short. Use headings and bullets.
Treat prompts like code
Version them, test them, and review them. A simple loop:
- Write a baseline prompt.
- Run it through the Prompt Optimizer.
- A/B test baseline vs optimized with a small eval.
- Save both the prompt and the eval results before upgrading models again.
OpenAI’s Cookbook has a worked example of this loop, and the docs cover model optimization and Evals if you want to formalize it. Prompt optimization cookbook · Model optimization. OpenAI Cookbook · OpenAI Platform
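Here is a bare-bones version of that loop, with a couple of toy test cases and a naive pass check standing in for a real eval:

```python
from openai import OpenAI

client = OpenAI()

# Toy eval set: each case has an input and a crude pass condition.
CASES = [
    {"input": "Three urgent emails about a refund request", "must_contain": "refund"},
    {"input": "Meeting notes from the Q3 kickoff", "must_contain": "action"},
]

def score(prompt: str) -> float:
    """Fraction of cases whose output passes the naive check."""
    passed = 0
    for case in CASES:
        resp = client.responses.create(
            model="gpt-5-mini",
            instructions=prompt,
            input=case["input"],
        )
        if case["must_contain"] in resp.output_text.lower():
            passed += 1
    return passed / len(CASES)

baseline = "Summarize the input into action items."
optimized = "Summarize the input into 3 bullet action items. Name refunds explicitly if present."
print("baseline:", score(baseline), "optimized:", score(optimized))
```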
Quick answers to questions readers ask me
- “Is GPT-5 actually more steerable?” Yes. The official guide emphasizes steerability and shows how structure and clarity improve adherence. OpenAI Cookbook
- “Do I need to write novels to get good results?” No. Use tight prompts with a clear output contract, and use `verbosity` for length instead of padding. OpenAI Cookbook
- “Why does my old prompt feel worse?” Hyrum’s Law applied to prompts. GPT-5 follows instructions more literally, so contradictions and vague parts hurt more. The guide even shows a conflicting healthcare prompt and how fixing it improves results. OpenAI Cookbook
- “Is Mini or Nano just ‘worse’?” Different trade-offs. Smaller models win on speed and cost when your task is well-specified. OpenAI Platform
Handy resource links
- GPT-5 overview: openai.com/gpt-5 and the developer intro: Introducing GPT-5.
- Prompting guide: GPT-5 Prompting Guide and Prompt Optimizer in the Platform. OpenAI Cookbook
- Models and pricing: Models · Compare · Pricing.
- Prompting basics and meta-prompting: Prompting · Prompt generation.
- Custom GPTs and Projects: Create a GPT · Instruction tips · Projects.