CRIT: The Framework That Made ChatGPT Stop Wasting My Time
I was stuck. Again.
The cursor blinked, patient and unhelpful. I’d asked ChatGPT to help with a tricky story scene—just a bit of brainstorming. My mind raced through possibilities, dead ends, character motivations that wouldn’t quite click into place. Instead, it gave me something lifeless and generic. Nothing usable. I quit in disgust.
That wasn’t the moment I realized the prompt was the problem. That was the moment I almost gave up on AI entirely. I closed the tab, wondering if I’d just been wasting time on a tool that would never understand me.
That moment didn’t just shake my trust in ChatGPT—it revealed a deeper pattern I hadn’t wanted to admit.
The Pattern: AI Works for Simple Tasks, Fails for Complex Ones
Frustration sets in
This kept happening. ChatGPT worked fine for simple tasks—brainstorming, summarizing articles, basic research. But the moment I needed something more complex, more nuanced, more me? It fell apart.
I wanted sharp edges, personal fingerprints. Instead, I got copy you could slap on any startup in any industry. The output felt disconnected from everything that made my work mine. Ideas got diluted. Momentum stalled. Launches delayed.
Realization dawns
That’s when I realized I didn’t understand prompting at all. I wasn’t failing because AI was dumb. I was failing because I didn’t understand that a good prompt isn’t a sentence—it’s a system. I’d been asking it to do what only I could do—think in my context.
A good prompt isn’t a sentence — it’s a system.
So I started digging. If magic wasn’t coming, maybe method would.
Step 1: Finding the Four Pieces That Fixed My Prompts
It didn’t happen in a single epiphany—it was more like finding puzzle pieces under the couch, one by one, until the picture finally made sense.
First, I learned about Task. It’s the most obvious one, but I discovered that being more specific about what I wanted the AI to actually do produced better results.
“Brainstorm story concepts” is useless.
“Generate a list of 20 story concepts to use as the basis for building the premises for adventure stories” is better.
Then Context. I started explaining where this content would live, who would read it, what they already knew. The AI stopped assuming I was writing for a generic audience of nobody.
Just “story concepts”? Blah.
How about: “Each story concept should be adventuresome, with lots of opportunity for action, peril, and derring-do. Each should be immediately attractive to young men aged 20–35 but should be appropriate for anyone over the age of 10. Example: No stories of adultery or infidelity. The time period for each concept should tend towards the modern world or near future. Each concept might involve sports, military, adventure, rodeo, nautical, racing, or near-future science fiction. The setting for each concept could be urban or rural, air or sea or space. The setting for each concept should be fairly realistic, but with room for touches of the fantastic or supernatural.”
But the real surprise was Role. When I told ChatGPT to “act as a hard-bitten conceptual editor for a small publishing house who knows that time is money and communicates in concise, even clipped, sentences,” something shifted. The suggestions became specific, opinionated, useful. The AI had a perspective to write from instead of trying to please everyone.
And Interview—that changed everything. Instead of accepting my vague brief, the AI started asking clarifying questions. “Do you want these 20 concepts to stand alone as one-shot adventures, or to have series potential baked in?” “Do you want these concepts to lean more toward gritty realism, or toward pulpy high adventure with a few larger-than-life elements?”
I’d never have thought to ask myself those questions. But once the AI did, the answers were obvious. And the output was ten times better.
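In case you’re wondering how to trigger that behavior: the phrasing below is my own, not a canonical formula, but an instruction along these lines is what got ChatGPT asking questions instead of guessing.

```
Before you generate anything, interview me. Ask me 3 to 5 clarifying
questions about audience, tone, and constraints, one at a time, and
wait for my answers before producing the concepts.
```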
I’d been collecting these fragments—Task, Context, Role, Interview—without quite knowing how they fit. Then one day, something snapped into place.
Step 2: Naming the System — CRIT
CRIT is a four-part framework that turns vague prompts into structured conversations.
CRIT = Context • Role • Interview • Task
Ground it • Give it a voice • Sharpen the brief • Define success
I was listening to a podcast when Jack Spirko broke down something called the CRIT framework: Context, Role, Interview, Task.
He was talking about using CRIT to keep from getting fleeced by your mechanic. But I was only half-listening. The ideas that had been floating around my head half-formed suddenly crystallized.
Now I had a frame. I wasn’t just improvising anymore—I was designing the conversation.
CRIT wasn’t just four handy labels—it was a system. Each piece served a specific function:
- Context oriented the model to the real world
- Role gave it a voice and perspective
- Interview sharpened my own thinking
- Task defined success
For the first time, prompting felt like engineering instead of wishful thinking.
The magic isn’t in the acronym. It’s in the sequence.
From Stuck Scene to Full Story
Software engineering is my day job, but you’ll often find me up late at night, working on one story idea or another.
Late one evening after that podcast, I went back to ChatGPT and drafted a 700+ word prompt for turning a story premise into an outline.
This time, I used CRIT.
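I won’t reproduce all 700 words here, but the shape of it looked something like this. Treat it as a condensed sketch with reconstructed details, not the original text:

```
CONTEXT: I am developing a short adventure story from the premise
below. Target reader: young men aged 20-35, but appropriate for
anyone over 10. Tone: pulpy high adventure, modern or near-future.

ROLE: Act as a hard-bitten conceptual editor for a small publishing
house. Time is money; communicate in concise, even clipped, sentences.

INTERVIEW: Before outlining, ask me up to five clarifying questions
about stakes, pacing, and character motivation. One at a time.

TASK: Once the interview is done, produce a scene-by-scene outline
of 8 to 12 scenes, each with a goal, a conflict, and an outcome.

PREMISE: [paste premise here]
```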
The results were spectacular. In short order I had a pipeline of CRIT-based prompts to iterate a concept from rough premise to full 6,000-word short story.
I was in control the whole time, acting as the managing editor.
It felt like switching from shouting into a void to having a real team of collaborators around the table. Instead of generic output, I got specific, contextualized results that reflected a deep understanding of both the craft and the content.
An hour later, I was grinning at the screen. Same tool. Totally different results.
Nothing about the model changed—only the way I spoke to it.
Why CRIT Works
Let’s step back and look at why this structure actually works—and why each piece matters in the order it does.
Once I had names for the pieces, here’s how I started using them:
- C — Context: Orient the model. Tells the AI where this content lives in the real world.
- R — Role: Guide its tone and perspective. Gives the AI a specific voice to write from.
- I — Interview: Sharpen the prompt before generation. Forces collaboration and reflection; surfaces missing info and shapes better inputs before any words are generated.
- T — Task: Define success clearly. Sets specific, measurable goals for the output.
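Strung together, those four parts make a reusable skeleton. Here’s a bare-bones version in my own phrasing; adapt the wording freely:

```
CONTEXT: [Where will this live? Who reads it? What do they already know?]

ROLE: Act as [a specific person with a specific perspective and voice].

INTERVIEW: Before you write anything, ask me [N] clarifying questions,
one at a time, and wait for my answers.

TASK: Then produce [exactly what], in [format/length], judged by
[what success looks like].
```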
I’ll be honest—at first, it felt like overkill. Four steps just to ask for a headline? But every time I skipped one, the results reminded me why CRIT worked. You’ll find your own shortcuts, but this is where I started.
The magic isn’t in the acronym. It’s in the whole. Context grounds everything in reality. Role gives the AI a coherent voice. Interview forces you to think more clearly about what you actually want. And Task ensures you get something specific and useful.
Without it, you’re left hoping the AI will magically ‘get you.’ It won’t. With it, you get to keep your voice and your momentum.
Once I understood what each part did, I realized CRIT wasn’t just for fiction (or auto mechanics). It worked everywhere I needed better AI output.
How CRIT Transforms Marketing Copy
One of the most useful places I’ve put CRIT to work is the marketing copy for my side projects.
- Landing pages: Context = early-stage SaaS, skeptical technical audience • Role = experienced founder • Interview = what’s the main objection? • Task = write a hero section that shows it matters to them (sketched in full after this list)
- Email sequences: Context = campaign goal and subscriber awareness • Role = helpful guide or fellow founder • Interview = uncover the emotional journey • Task = define format and length
- Code generation: Context = the broader application • Role = senior developer • Interview = establish the boundaries before we start • Task = specify exactly what to build
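To make the first of those concrete, here’s how the landing-page version might read in full. The product details are invented for illustration:

```
CONTEXT: Landing page for an early-stage SaaS developer tool. The
audience is skeptical senior engineers who have seen a hundred pages
like this one and assume it is hype.

ROLE: Act as an experienced technical founder who has sold to
engineers before and refuses to use buzzwords.

INTERVIEW: Before writing, ask me for the single biggest objection a
visitor will have, and what proof I can offer against it.

TASK: Write a hero section: a headline under 10 words, a subheadline
under 25 words, and one primary call to action, each showing why the
product matters to this reader.
```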
CRIT isn’t about “doing it right.” It’s about making the system work for you. The framework forces clarity on both sides of the conversation—yours and the AI’s.
Because here’s what I learned: leverage isn’t in better tools. It’s in better structure.
The same ChatGPT that gave me garbage with a lazy prompt gave me genuinely useful output when I learned to direct it properly. The difference wasn’t the model’s intelligence. It was the quality of the conversation.
If CRIT changes how you prompt, it also changes what kind of copy you can reliably generate. That’s where this gets interesting for founders who need marketing content that actually converts.
If this clicked for you, I wrote up a more complete breakdown of the CRIT framework and how to apply it here.