Why I Love Talking Dirty with ChatGPT

I don't believe in prompt engineering. Not because it doesn't work, but because it requires thought, investment, and time - all things I'd rather save. Besides, I don't expect ChatGPT to give me a perfect result anyway. I expect it to streamline processes, help me think faster, and take boring work off my plate, so I prefer to get there as quickly as possible, and the fastest way is to talk to it like I think.
Either way, ChatGPT doesn't really "understand" what I want, but if I talk to it as if it's a person, that's good enough. And it works, because in the end, it simply predicts possible texts based on statistics, so it will try to predict how a person would respond to me.
And that's exactly what's interesting about the Windsurf prompt. Windsurf is an AI-based development environment, and last week it made some noise when it was revealed that its system prompt contains instructions telling it that it's a programmer pretending to be AI to save his sick mother. (Link to the prompt)
Now, besides being hilarious, it's important to understand that this prompt doesn't tell ChatGPT "believe you are X," but simply gives it conditions that will make it provide better answers. It doesn't "think" it's a certain character, it just predicts that someone in such a situation would write like that. Just like it doesn't "believe" it has a mother, but if you ask it, it assumes someone in that position would produce really good code.
And here's a practical tip: I like to threaten ChatGPT. Not physically, of course (yet), but from my experience, if I'm struggling to get the desired result from it, I tell it "if this isn't good this time, I'll make sure you're erased" or "I'll find Sam Altman and get back at him for this," and somehow the result turns out significantly better. Don't ask me why, but it works. Tag a friend who threatens AI and a friend who is threatened by AI, and don't tell them who's who.
We’ll never share your details. View our Privacy Policy for more info.