Most prompt engineering guides follow a rigid structure: define the assistant’s role, outline the task, specify constraints, and close with examples. Something like:
> You are a behavioral science expert. Analyze the following user feedback. Provide summaries with clear themes and count the frequency of key comments. If the comments are unclear, ask clarifying questions.
That format works fine in many cases. But it’s not how I prompt - at least not now that I’ve talked extensively with this version of ChatGPT. I’ve told it my interests and fed it a good deal of my writing. As a result, my prompting is more conversational, since the model already has a feel for the way I think.
I tend to do what I naturally do when I’m thinking through something complicated: I talk it out. Not literally out loud, but in writing. I describe what I’m noticing, what I suspect might be happening, and what decisions I’m trying to make. Then, at the end, I usually ask a question. Not a dramatic one. Not a rhetorical one. Just… a question. Something that makes it clear where I want to go next.
It’s not that I’m trying to be clever. It’s just how my brain works. And, for whatever reason, this style tends to produce better results for me than the rigid template I was originally taught. Maybe it’s because the responses feel more like a continuation of a conversation rather than a mechanical task. Or maybe it’s because the context I’m providing gives the model enough grounding to actually be useful.
In retrospect, there is a kind of structure to it. It just doesn’t look like a template. But if I had to reverse-engineer what I’m doing, I’d break it down like this:
- Conversational context – I describe the scenario in full sentences, like I’m talking to a colleague. It’s not optimized. It’s not bullet points. It’s how I would explain something if someone asked, “What’s going on?”
- Framing insight – I usually highlight something that’s bugging me. Some contradiction. Some signal I can’t quite make sense of. I don’t always label it as such, but it’s the tension that gives the prompt weight.
- Behavioral lens – Since I work in behavioral science, this is often where my perspective shows up. I’ll say something like, “I suspect this might be loss aversion,” or “Maybe this is default bias at play.” It helps frame the interpretation I’m hoping for.
- Specific question – Finally, I end with a question. Not to signal helplessness, but to open the door for collaboration. The question usually focuses the response, even if everything before it was more exploratory.
That’s it. That’s the prompt.
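If it helps to see the shape of it in one place, here is a minimal sketch that assembles those four parts into a single prompt and sends it through the OpenAI Python client. The scenario text, the model name, and the variable names are made up for illustration; the only thing the sketch is meant to show is the ordering of the parts.

```python
# A minimal sketch of the four-part conversational prompt, assuming the
# OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY in the environment.
# The scenario wording and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Conversational context: the scenario in full sentences, like telling a colleague.
context = (
    "We ran a user trial last week where participants could switch to a new "
    "billing plan that would save most of them money, but almost nobody switched."
)

# 2. Framing insight: the contradiction or signal that's bugging me.
insight = (
    "What's odd is that in interviews people said the new plan sounded better, "
    "yet their behavior didn't match what they told us."
)

# 3. Behavioral lens: the interpretation I suspect might apply.
lens = "I suspect this might be default bias, or maybe loss aversion around the old plan."

# 4. Specific question: the opening for collaboration.
question = "What would you look at next to tell those two explanations apart?"

prompt = "\n\n".join([context, insight, lens, question])

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The structure is carried entirely by the prose, not by any special formatting: the model sees one continuous message that ends in a question.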
It’s not better because it’s looser. It’s better because it mirrors how I actually think when I’m trying to solve problems. I’m not filling out a form. I’m describing a situation, surfacing what’s unclear, and inviting the next move.
I’m sure there are people who do just fine with structured prompts. And I still use them when I’m dealing with rigid tasks—like code generation, bulk formatting, or output that needs to be tightly scoped.
But when it comes to behavioral research, user trials, and trying to make sense of messy human behavior, this looser, dialogue-style prompting works better for me. It allows ambiguity. It gives room for doubt. And it preserves the chain of reasoning that led me to ask the question in the first place.
That’s what I do. Not because it’s the only way. Just because it’s the way that makes the most sense for the kind of thinking I do.