Introducing the CRIT Framework

Here’s why your results with AI feel generic — and how a better prompt structure fixes it.

You fire up ChatGPT. You need a blog post for your SaaS. You type: “Write a blog post about productivity tools for remote teams.”

Hit enter. Hope.

What comes back is… fine. Technically correct. Maybe even SEO-friendly. And completely soulless.

It reads like every other “Top 10 Productivity Hacks” post on the internet. Your competitors could have written it. A productivity bot from 2019 could have written it.

You think: “I gave it a clear prompt, didn’t I?”

You tweak, retry, copy someone’s clever formula. You hope it’ll click. But the output still feels… wrong. You’re not sure if the problem is the tool — or you.

It’s not you. It’s not the model. It’s your structure.

The Real Problem Isn’t AI — It’s Your Prompt Architecture

Most people treat AI like a mind-reader. They assume ChatGPT or Claude will somehow divine their brand voice, understand their audience, and nail their goals with minimal direction.

Think of AI instead as a capable but context-starved assistant: a smart intern with real skills but no knowledge of your business, your audience, or your goals. It can’t intuit any of that unless you supply it.

But would you tell a smart intern “Write me a blog post” and walk away? Of course not. You’d give them context. You’d explain the goal. You’d clarify the audience and tone.

It’s not that the tools don’t work. It’s that they can’t read your mind, and no one ever taught you how to feed them: most prompts are simply under-specified.

Result? Generic output that sounds like it came from the Content Marketing Bot Factory.

To fix that, you need a structure that gives AI what it lacks — clarity, specificity, and direction.

Meet CRIT: The Framework That Fixes Fuzzy Prompts

🎯 CRIT = Context, Role, Interview, Task

CRIT turns vague requests into creative briefs your AI can actually use. It’s not magic words or prompt hacks — it’s structure.

Each piece of CRIT solves a key failure point in most prompts:

  • Context = prevents irrelevance
  • Role = prevents generic tone
  • Interview = prevents confusion
  • Task = prevents meandering.

Let me break down each piece:

🧭 Context: What is this for? Who’s it for?

AI needs situational awareness. A blog post for first-time visitors needs different tactics than an email for existing customers. Context tells the model:

  • What stage of your funnel this fits
  • What your reader already knows
  • What outcome you’re working toward.

🎭 Role: Who should the AI act like?

Language changes depending on who’s speaking. Give the model a clear persona:

  • A veteran SaaS founder sharing hard-won lessons
  • A conversion-focused copywriter with a knack for clarity
  • A technical writer explaining complex concepts simply.

This tightens tone, filters vocabulary, and guides the entire approach.

You’ve set the scene and cast the lead character. Now the collaboration begins.

🤝 Interview: Ask me one question at a time to clarify the task

Most prompts fail because you give fuzzy instructions. The Interview component turns AI into a thoughtful collaborator. It asks for missing info, surfaces contradictions, and helps you think more clearly before writing begins.

Finally, you need to define exactly what success looks like.

🎯 Task: What specific outcome do you want?

This focuses the generation. Are you looking for an outline? A full draft? A catchy headline? Three different angles? The Task statement sets clear boundaries and success criteria.

✔ Defines what the model should create
✔ Sets word count or format
✔ Prevents vague, rambling output.
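If you build prompts in code, the four components map cleanly onto a small template. Here’s a minimal Python sketch (the class name, field layout, and render method are my own illustration, not part of the framework itself):

```python
from dataclasses import dataclass

@dataclass
class CRITPrompt:
    """One field per CRIT component, so each piece stays swappable."""
    context: str   # what this is for, who it's for, desired outcome
    role: str      # the persona the model should write as
    task: str      # the specific deliverable, format, and boundaries
    interview: str = "Ask me one question at a time to clarify the task."

    def render(self) -> str:
        """Assemble the four components into a single prompt string."""
        return (
            f"Context: {self.context}\n\n"
            f"Role: {self.role}\n\n"
            f"Interview: {self.interview}\n\n"
            f"Task: {self.task}"
        )
```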

That’s a lot of moving parts — but here’s what it looks like when they come together.

Before and After: See CRIT in Action

🛑 Generic Prompt: “Write a blog post about productivity tools for remote teams.”

✅ CRIT-Structured Prompt:

Context: This blog post is for the top of our funnel, targeting remote team leaders who are overwhelmed by tool sprawl and looking for simpler solutions.

Role: You’re a seasoned remote work consultant who’s helped 50+ companies streamline their workflows.

Interview: Ask me one question at a time to clarify the task.

Task: Write a 1,200-word blog post that challenges the “more tools = more productive” mindset and positions our unified platform as the alternative.

See the difference? The second prompt adds context (funnel stage), a voice (consultant role), a chance to clarify (interview), and a clear deliverable (task). That’s everything the model needs to write something that speaks to your audience and serves your business goals.

Reading the second one feels like… finally being understood. That’s not just better output. It’s the difference between a cold auto-reply and a collaborator who gets you. Between “meh” and “hell yes.”
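If you script your prompts, the structured version above drops straight into the CRITPrompt sketch from earlier. The model call below is illustrative only; it assumes the OpenAI Python SDK (v1+) and uses a placeholder model name, so substitute whichever client and model you actually use:

```python
from openai import OpenAI  # assumption: OpenAI Python SDK v1+

# Reuses the CRITPrompt class defined in the earlier sketch.
prompt = CRITPrompt(
    context=(
        "This blog post is for the top of our funnel, targeting remote team "
        "leaders who are overwhelmed by tool sprawl and looking for simpler "
        "solutions."
    ),
    role=(
        "You're a seasoned remote work consultant who's helped 50+ companies "
        "streamline their workflows."
    ),
    task=(
        "Write a 1,200-word blog post that challenges the 'more tools = more "
        "productive' mindset and positions our unified platform as the "
        "alternative."
    ),
)

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; use whatever you prefer
    messages=[{"role": "user", "content": prompt.render()}],
)
print(response.choices[0].message.content)
```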

Why CRIT Changes Everything

CRIT helps you:

  1. Write better copy – clarifies intent before generation begins
  2. Iterate faster – each element is modular and adjustable (see the sketch after this list)
  3. Debug easily – you know exactly what to fix when output misses
  4. Scale your system – structure is reusable across all your copy needs.
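That modularity is easiest to see in code. Continuing the sketch above, iterating or debugging means changing a single field and regenerating (dataclasses.replace is standard-library Python; the variant wordings are just examples):

```python
from dataclasses import replace

# Swap only the Role to test a different voice; Context, Interview,
# and Task stay exactly as they were.
copywriter_take = replace(
    prompt,
    role="A conversion-focused copywriter with a knack for clarity.",
)

# Tighten only the Task to debug rambling output.
outline_first = replace(
    prompt,
    task="Give me a bulleted outline of the post before drafting anything.",
)
```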

It’s not about becoming a prompt engineer. It’s about treating AI like the smart collaborator it can be — instead of a magic content fairy.

Your voice isn’t missing. It’s just been muffled by vague prompts and guesswork. CRIT lets you hear it again — clear, confident, unmistakably yours.

Structure doesn’t box you in — it sets you free.


Acknowledgements: These components have been floating around the Internet for a while in various forms, but I first encountered CRIT as a coherent framework from Geoff Woods, by way of Jack Spirko.


Structure is just the beginning. If you want to see how CRIT helps you think differently before you prompt, read this next:

Why AI Copy Fails

It goes deeper into how CRIT works and how to fix your next prompt before you even hit send.