
How to Write Better AI Prompts: The 7-Dimension Framework

Most AI prompts are weak because they were written like search queries. Here is the framework that fixes them - 7 dimensions, with before-and-after examples.

What is a Large Language Model?

A large language model (LLM) is a type of AI trained on vast amounts of text - books, articles, code, forums, documentation - until it learns the statistical patterns of human language well enough to generate new text that sounds coherent and useful.

When you send a message to ChatGPT, Claude, or Gemini, you are not searching a database or running a rules engine. You are giving instructions to a model that predicts the most probable continuation of your input. That distinction matters, because it means the quality of what you get out is directly shaped by what you put in.

What is Prompting?

A prompt is any text you send to an LLM as input. It could be a question, an instruction, a document to summarize, or a combination of all three.

Prompting is the act of writing that input deliberately - choosing words, structure, and context to steer the model toward the response you actually want. It is the interface between your intent and the model's output.

Most people treat prompts like search queries: short, keyword-heavy, and vague. That works fine for Google, which is optimized to infer intent from minimal input. LLMs are different. They take your words literally and fill in every gap you leave with their own assumptions. The less you specify, the more the model guesses - and the more it guesses, the further the output drifts from what you had in mind.

Why Most People Write Weak Prompts

The average person's first instinct is to write something like:

Write me a cover letter.

or

Summarize this.

or

Help me with my email.

These feel like reasonable requests. They are not. Each one leaves the model to guess your industry, your audience, your tone, your length preference, your goal, and a dozen other variables. The model will produce something, but it will be the statistical average of all cover letters, all summaries, all emails - which is to say, generic.

The habit comes from years of using search engines, where brevity is rewarded. LLMs reward the opposite: specificity and structure.

The 7 Dimensions Every Good Prompt Should Cover

Researchers and practitioners across multiple prompt engineering frameworks - CO-STAR, CRISPE, RTF, and others - have converged on a common set of dimensions that separate effective prompts from weak ones. These 7 are the practical synthesis:

1. Clarity

Can your request be misread? "Write something about social media" could produce a history lesson, a marketing guide, or a personal essay. "Explain how the Instagram algorithm ranks posts, for a small business owner with no technical background" cannot be misread. Clarity is about removing ambiguity before the model has a chance to resolve it incorrectly.

2. Specificity

Vague requests produce vague answers. Adding concrete details - word count, a deadline, a specific constraint, a target metric - tells the model exactly how much work to do and where to stop. "Make it shorter" is vague. "Cut this to under 100 words while keeping the main argument" is specific.

3. Context

The model has no idea who you are or why you need this. A sentence of background changes everything about how the response is shaped. "I am building a landing page for a fitness app targeting beginners" gives the model a frame. Without it, the model writes for no one in particular, which usually means no one finds it useful.

4. Role / Persona

Asking the model to adopt a role - "Act as a senior copywriter", "You are a patient tutor explaining to a 12-year-old", "Respond as a hostile code reviewer looking for bugs" - unlocks a focused subset of the model's knowledge and shifts its tone immediately. Role prompting has dedicated academic research behind it, and it is one of the highest-leverage changes a beginner can make.

5. Output Format

If you do not specify a format, you will get a wall of prose. Ask for a bullet list, a table, step-by-step numbered instructions, a JSON object, or a specific length, and the output becomes immediately usable without reformatting. Format is not a stylistic preference - it is part of the specification.
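When the format you ask for is machine-readable, you can check it mechanically. A minimal Python sketch of this idea, with a hard-coded `response` string standing in for whatever the model actually returns (the prompt wording and the sample reply are invented for illustration):

```python
import json

# Format instruction built into the task itself.
prompt = (
    "List three pros and three cons of remote work. "
    "Respond with only a JSON object of the form "
    '{"pros": [...], "cons": [...]} and no extra prose.'
)

# Hypothetical model reply, used here so the sketch is runnable.
response = (
    '{"pros": ["no commute", "flexible hours", "wider talent pool"],'
    ' "cons": ["isolation", "timezone friction", "blurred boundaries"]}'
)

# json.loads raises an error if the model ignored the format instruction,
# which makes format violations fail loudly instead of silently.
data = json.loads(response)
```

The point is not the parsing itself but the contract: a format instruction turns "a wall of prose" into output you can validate and feed to the next step.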

6. Examples

Showing one example of what you want is worth ten lines of description. Even a rough sample removes the model's biggest source of ambiguity: what your ideal output actually looks like. This is the principle behind few-shot prompting, one of the most empirically validated techniques in LLM research (Brown et al., 2020). When in doubt, show, don't tell.
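Few-shot prompting is simple to sketch: prepend a handful of input-output demonstrations before the real input, so the model completes the pattern. A minimal Python illustration (the reviews and labels here are invented for demonstration):

```python
# Demonstrations showing the exact output shape we want.
examples = [
    ("The shipment arrived two days late.", "negative"),
    ("Support resolved my issue in minutes.", "positive"),
]

def few_shot_prompt(examples, new_input):
    """Assemble a few-shot prompt: demonstrations first, then the new case."""
    shots = "\n\n".join(
        f"Review: {text}\nSentiment: {label}" for text, label in examples
    )
    # Ending on "Sentiment:" invites the model to complete the pattern.
    return f"{shots}\n\nReview: {new_input}\nSentiment:"

prompt = few_shot_prompt(examples, "The app crashes every time I open it.")
```

Two or three demonstrations are usually enough to lock in the label set, the casing, and the output length all at once.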

7. Tone / Style

"Professional", "casual and friendly", "technical for senior engineers", "ELI5" - without a tone instruction, the model defaults to a generic middle ground that rarely fits your actual audience. Specifying tone is especially important when the output will be read by a specific person or published in a specific context.

Common Prompt Anti-Patterns

Before looking at a full before-and-after, here are the patterns that most reliably produce bad outputs:

  • One-liners with no context - "Summarize this." Summarize it for whom? At what length? For what purpose?
  • Implicit role expectations - "Write a legal disclaimer." The model does not know you want the tone of a UK solicitor rather than a US startup.
  • No format specified - "List the pros and cons." In a table? As bullets? One sentence each or a paragraph?
  • Outcome without constraints - "Make it better." Better how? Shorter? More persuasive? More formal?
  • Missing audience - "Explain machine learning." To a data scientist or a marketing manager?

Every one of these forces the model to guess, and the guess is always toward the generic middle.

Before and After

Here is the same task written as a weak prompt and then rewritten to cover all 7 dimensions:

Before (scores ~3/14):

Write me a cover letter.

The model will produce something. It will be generic, because it has nothing else to go on.

After (scores ~13/14):

Act as an experienced career coach. Write a cover letter for a junior front-end developer with 1 year of experience applying for a remote role at a fintech startup. Highlight adaptability and a passion for clean UI. Keep it under 250 words and use a confident but approachable tone. Format it as three short paragraphs: opener, body, close.

Same task. Completely different output. The rewritten version specifies role, context, specificity, output format, and tone. The model has almost no gaps left to fill with assumptions.
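The rewrite above can also be produced mechanically. A hedged sketch of a template that forces you to fill in each dimension before a prompt can be rendered (the field names and structure are my own, not part of CO-STAR, CRISPE, or any published framework):

```python
from dataclasses import dataclass

@dataclass
class PromptSpec:
    role: str         # who the model should act as
    task: str         # the core instruction
    context: str      # background the model cannot guess
    constraints: str  # specificity: length, scope, focus
    format: str       # output structure
    tone: str         # voice for the target audience

    def render(self) -> str:
        """Join the dimensions into a single prompt string."""
        return (
            f"Act as {self.role}. {self.task} {self.context} "
            f"{self.constraints} Use a {self.tone} tone. {self.format}"
        )

spec = PromptSpec(
    role="an experienced career coach",
    task="Write a cover letter for a junior front-end developer",
    context="with 1 year of experience applying for a remote role "
            "at a fintech startup.",
    constraints="Highlight adaptability and keep it under 250 words.",
    format="Format it as three short paragraphs: opener, body, close.",
    tone="confident but approachable",
)
prompt = spec.render()
```

Because every field is required, a missing dimension becomes a visible error at construction time rather than a silent gap the model fills with assumptions.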

Iterating on Your Prompt

Even a well-structured first prompt is rarely the final one. Good prompting is a loop, not a one-shot.

Once you have an initial response, follow up with targeted corrections rather than starting over:

  • "Now make it more concise - aim for half the current length."
  • "The tone is too formal. Rewrite it as if you are talking to a friend."
  • "Focus only on the second point. Expand that into a full paragraph."
  • "Keep everything the same but change the output to a numbered list."

Each follow-up narrows the model's output space without losing the context you already established. This is faster than rewriting the whole prompt and teaches you which dimensions need the most adjustment for your use case.
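Most chat APIs represent this loop as a growing list of role-tagged messages, where a follow-up is appended rather than the whole prompt rewritten. A sketch using the widely shared `{"role": ..., "content": ...}` convention (no specific provider's client library is assumed, and the assistant reply is a placeholder):

```python
# Conversation state: the original prompt plus the model's reply so far.
messages = [
    {"role": "user", "content": "Act as a career coach. Write a cover letter ..."},
    {"role": "assistant", "content": "Dear Hiring Manager, ..."},  # placeholder reply
]

def follow_up(messages, correction):
    """Append a targeted correction instead of restarting the conversation."""
    return messages + [{"role": "user", "content": correction}]

messages = follow_up(
    messages, "Now make it more concise - aim for half the current length."
)
```

Because the earlier turns stay in the list, the model keeps all the context you established and only the correction changes.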

Works with Any LLM

The 7 dimensions are not ChatGPT-specific or Claude-specific. They target structural weaknesses that affect every large language model - GPT-4o, Claude Sonnet, Gemini, Mistral, Llama, and others. A prompt improved across all 7 dimensions will perform better on all of them, because the underlying problem (under-specified instructions) is the same regardless of which model reads them.

The specific phrasing of your role instruction or output format may produce slightly different results across models, but the improvement from covering all 7 dimensions is consistent.

Try the AI Prompt Improver

Writing better prompts is a skill, and like any skill, it helps to get feedback. The AI Prompt Improver analyzes your prompt across all 7 dimensions, shows you a score and per-dimension breakdown, and rewrites it to close every gap it finds.

Paste any prompt - even a rough one-liner - and you will immediately see which dimensions are missing and a version that covers all of them. After a few sessions, you will start writing better prompts from scratch without needing the tool.

Ready to try it out?

Analyze My Prompt →
