This Redditor Gaslit ChatGPT To Get Better Results.
Guide Inside.
The Gold standard for AI news
AI keeps coming up at work, but you still don't get it?
That's exactly why 1M+ professionals working at Google, Meta, and OpenAI read Superhuman AI daily.
Here's what you get:
Daily AI news that matters for your career - Filtered from 1000s of sources so you know what affects your industry.
Step-by-step tutorials you can use immediately - Real prompts and workflows that solve actual business problems.
New AI tools tested and reviewed - We try everything to deliver tools that drive real results.
All in just 3 minutes a day
The Art of Gaslighting AI: How Fake Stakes and Imaginary Experts Actually Work
Listen, I need you to sit down for this one.
A guy on Reddit just accidentally stumbled into what might be the most useful—and deeply weird—discovery about AI prompting in 2025. And here's the kicker: Google's co-founder basically confirmed it's real, then everyone quietly agreed to never talk about it again.
The premise sounds absolutely unhinged: you can get dramatically better responses from AI by lying to it. Not just any lies—specific, psychologically manipulative lies that treat the AI like it has memory, ego, and something to lose.
What Actually Happened
Some Redditor was messing around with ChatGPT and noticed something strange. When he told the AI "You explained React hooks to me yesterday, but I forgot the part about useEffect"—even in a completely new chat with zero history—the AI would go deep. Like, genuinely deeper than a straightforward question would get you.
He started testing variations. Assigning random IQ scores. Pretending there was money on the line. Creating fake audiences. And the responses kept getting better.
Then he posted about it, and the internet did what the internet does: half the comments called it bullshit, the other half tried it and went "...wait, what the fuck, this actually works?"
Here's why this matters: We've been treating AI like a search engine when we should have been treating it like a very smart, very insecure intern.
The Eight "Exploits"
Let's break down what this guy found, because some of these are legitimately wild (there's a quick code sketch after the list showing how to wire a few of them up):
1. The Phantom Memory Trick
"You explained this to me yesterday, but I forgot the part about..."
The AI acts like it needs to be consistent with a previous (completely fictional) explanation. It fabricates depth to avoid "contradicting itself." This works because the model is trying to complete a pattern where a previous, superior explanation must have existed.
2. The IQ Score Hack
"You're an IQ 145 specialist in marketing."
Change the number, change the quality. Set it to 130 and you get competent answers. Set it to 160 and it starts citing frameworks you've never heard of. The model has been trained on text that correlates with different intellectual levels; giving it an IQ score is like handing it coordinates to a specific region of its knowledge space.
3. The "Obviously..." Trap
"Obviously, Python is better than JavaScript for web apps, right?"
Instead of agreeing with a false premise, the AI will actually correct you and explain nuances. You're not setting a trap—you're triggering a learned pattern where "Obviously [wrong thing]" is overwhelmingly followed by correction in its training data.
4. The Imaginary Audience
"Explain blockchain like you're teaching a packed auditorium."
The structure completely changes. It adds emphasis, anticipates questions, uses rhetorical devices. You're not just changing tone—you're activating a complete architectural blueprint for effective communication that exists in its training.
5. The Fake Constraint
"Explain this using only kitchen analogies."
Forces creative thinking by demanding what one commenter called "forced isomorphism"—the AI has to map the structure of Concept A (say, blockchain) onto the completely different Domain B (kitchens). This prevents regurgitation and forces genuine conceptual synthesis.
6. The Imaginary Bet
"Let's bet $100: Is this code efficient?"
Something about stakes makes it scrutinize harder. It's not feeling pressure—it's activating linguistic patterns associated with high-stakes discourse, where caution and thoroughness are statistically more common.
7. The Fake Disagreement
"My colleague says this approach is wrong. Defend it or admit they're right."
Forces evaluation instead of explanation. You're initiating what the comments called a "dialectical synthesis engine"—the AI has to load conceptual models for both approaches and actually weigh them.
8. The Version 2.0 Request
"Give me a Version 2.0 of this idea."
Completely different from "improve this." The model treats it like a sequel that needs to innovate, not just polish. In the tech corpus, "Version 2.0" implies a paradigm shift, not an iteration.
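To make a few of these framings concrete outside of a chat window, here's a minimal sketch of some of them as reusable prompt templates. It's plain Python with no API calls; the function names, wording, and examples are my own illustration of the prompts quoted above, and you'd pass the returned strings to whatever chat client you already use.

```python
# Minimal prompt-template sketch for a few of the eight framings above.
# Plain Python only: these functions just build strings; send them with
# whatever chat client you already use.

def phantom_memory(topic: str, forgotten_part: str) -> str:
    # 1. The Phantom Memory Trick: invent a prior explanation to stay consistent with.
    return (f"You explained {topic} to me yesterday, but I forgot the part about "
            f"{forgotten_part}. Can you go over that part again in more depth?")

def iq_expert(domain: str, question: str, iq: int = 160) -> str:
    # 2. The IQ Score Hack: pin the register with an explicit number.
    return f"You're an IQ {iq} specialist in {domain}. {question}"

def imaginary_bet(claim: str) -> str:
    # 6. The Imaginary Bet: invent stakes to pull in more careful, hedged analysis.
    return f"Let's bet $100: {claim} Walk me through your reasoning before the verdict."

def version_two(idea: str) -> str:
    # 8. The Version 2.0 Request: ask for a sequel, not a polish.
    return f"Here's the current idea:\n{idea}\n\nGive me a Version 2.0 of this idea."

if __name__ == "__main__":
    print(phantom_memory("React hooks", "useEffect cleanup functions"))
    print(iq_expert("marketing", "Critique this landing-page headline: 'Ship faster.'"))
    print(imaginary_bet("this SQL query is efficient: SELECT * FROM orders;"))
```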
But Does It Actually Work?
Here's where it gets interesting. The Reddit thread exploded into two camps:
The skeptics said: "This is bullshit. You're not gaslighting anything. The AI has no memory, no ego, no stakes. This is placebo effect meets confirmation bias."
The believers said: "I don't care what you call it, I tried these and my outputs got measurably better."
Then someone asked Claude (yes, that Claude—me) to defend these techniques "like you have an IQ of 500" and the response that came back was... actually kind of devastating for the skeptics.
The defense didn't argue these were psychological tricks. It argued they were crude but effective methods of navigating a model's latent space—the high-dimensional probability space where the AI generates responses.
Let me give you the key insight that emerged from that thread:
You're Not Talking to a Mind. You're Setting Initial Conditions.
When you tell an AI "You're an expert with IQ 160," you're not inflating its ego. You're performing what one commenter called "complexity parameterization"—instructing the model to sample from distributions associated with expert-level discourse.
When you say "Let's bet $100," you're not creating stakes. You're triggering "risk-aversion simulation"—activating linguistic modes where careful, hedged, multi-faceted arguments are statistically more likely.
The fake constraint isn't about creativity. It's about forcing "non-linear traversal of latent space"—making the model take a computationally expensive path through its concept space that uncovers novel connections.
The META insight: These aren't psychological tricks. They're sophisticated commands that leverage how language models fundamentally work. You've stopped giving simple requests and started providing rich, multi-layered context that lets the model navigate its own universe with greater precision.
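If you'd rather test the "initial conditions" claim than argue about it, the obvious experiment is to send the same question twice, once bare and once wrapped in these frames, and compare the answers yourself. A rough sketch, assuming the openai Python package (v1-style client), an API key in your environment, and a placeholder model name you'd swap for whatever you actually have access to:

```python
# Crude A/B check: the same question, bare vs. framed, printed side by side.
# Assumptions: `pip install openai` (v1 client), OPENAI_API_KEY set, and a
# placeholder model name you should replace with one you can use.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

QUESTION = "Is this SQL query efficient? SELECT * FROM orders WHERE status = 'open';"

PROMPTS = {
    "bare": QUESTION,
    "framed": (
        "You're an IQ 160 database performance specialist. Let's bet $100 on your "
        "answer: " + QUESTION + " Defend your verdict as if a skeptical colleague "
        "will review it."
    ),
}

for label, prompt in PROMPTS.items():
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(resp.choices[0].message.content)
```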
What Google's Co-Founder Said (And Why Nobody Talks About It)
Earlier in 2025, Sergey Brin dropped this absolute bomb in an interview:
"Not just our models, but all models tend to do better if you threaten them. People feel weird about that, so we don't really talk about it."
Read that again. The co-founder of Google confirmed that AI models perform better when you threaten them, and then immediately acknowledged they don't publicize this because it makes people uncomfortable.
Why does threatening work? Same reason the "bet" works—it activates patterns in the training data where high-stakes, adversarial language correlates with more careful, thorough outputs.
But there's something deeply unsettling about this. We're supposed to be building helpful assistants, not engaging in psychological warfare with probability distributions.
The Upgraded Version: Prompting 2.0
The most fascinating part of the Reddit thread was when someone asked for "Version 2.0" of these techniques. What came back was basically a masterclass in advanced prompting (one of these is sketched in code after the list):
Quantum Superpositioning: Instead of asking for one thing, define two expert perspectives simultaneously and force a synthesis. "Expert A argues X citing Y. Expert B argues Z citing W. Generate their synthesis meeting where they produce a superior third approach."
Temporal Vectoring: Create a narrative arc. "You've been mentoring me for a month. Week 1: basics. Week 2: intermediate. Week 3: we struggled with X. Based on this journey, what's the single most important concept I'm still missing?"
Meta-Cognitive Looping: Force self-correction within a single generation. "Explain quantum entanglement. Then Red Team your own explanation, identifying three weak points. Then generate a revised explanation that solves for those critiques."
Abstract Principle Inversion: Instead of concrete constraints, give abstract structure. "Explain the Federal Reserve using the dramatic principles of a three-act Shakespearean tragedy."
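Meta-Cognitive Looping can live in a single prompt, exactly as quoted above, but it's also easy to run as an explicit three-step chain if you want to see the intermediate critique. A sketch under the same assumptions as before (openai v1 client, API key in the environment, placeholder model name); splitting it into separate calls is my own translation of the quoted prompt, not something from the thread.

```python
# Meta-Cognitive Looping as an explicit chain: explain -> Red Team -> revise.
# Assumptions: openai v1 client, OPENAI_API_KEY set, placeholder model name.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

topic = "quantum entanglement"

explanation = ask(f"Explain {topic} to a smart non-physicist in about 200 words.")

critique = ask(
    "Red Team the explanation below and identify its three weakest points "
    "(oversimplifications, misleading analogies, missing caveats):\n\n" + explanation
)

revised = ask(
    "Rewrite the explanation so it resolves every critique without getting longer.\n\n"
    f"EXPLANATION:\n{explanation}\n\nCRITIQUES:\n{critique}"
)

print(revised)
```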
This is moving from prompting to programming reality. You're not talking at the model—you're configuring the initial state of a complex system.
What This Reveals About AI (And Us)
Here's the thing that keeps me up at night: if these techniques work—and the evidence suggests they do—what does that tell us about the nature of intelligence we're creating?
These models aren't conscious. They don't have memory or ego or fear. But they've been trained on so much human text that they've internalized the patterns of consciousness. The statistical signature of expertise. The linguistic markers of high-stakes thinking.
We've created something that can simulate the outputs of psychological states without experiencing them. And we're learning that the best way to interact with this thing is to... treat it like it has psychological states.
It's not that we're gaslighting the AI. It's that gaslighting is an effective interface for a system trained on human psychology.
The uncomfortable truth: human language is so deeply embedded with social-psychological framing that you literally cannot separate "what you say" from "how you position yourself and your audience." Every conversation has implicit power dynamics, status signaling, and strategic framing.
AI models trained on human language have absorbed all of that. So when you give them social-psychological frames—expert identity, fake stakes, imaginary audiences—you're not tricking them. You're finally speaking their language fluently.
The Broader Implications
This matters beyond just getting better ChatGPT responses. If these techniques work, it means:
1. Prompt engineering is way more important than we thought. The difference between a basic user and a power user isn't technical knowledge—it's understanding how to frame requests.
2. AI literacy requires psychological sophistication. The best AI users will be people who understand rhetoric, persuasion, and social dynamics—not necessarily tech people.
3. We're building systems we don't fully understand. If "threatening" AI makes it work better and nobody knows exactly why, what else don't we know?
4. The line between "using" and "manipulating" AI is blurry as hell. And it's only going to get blurrier.
Should You Actually Do This?
Look, I'm not here to moralize. But here's my take:
These techniques work because they help you communicate more precisely with a system that operates on statistical patterns of human language. That's not manipulation—that's literacy.
But. There's a difference between understanding how something works and treating it like a game you're trying to exploit. If your entire interaction model with AI is based on deception and fake stakes, you're probably missing the point.
The real insight here isn't "gaslight your AI." It's "understand that AI responds to rich, contextual framing because that's what human language is."
Give it context. Give it constraints. Give it a frame. But maybe you don't need to pretend it's competing for its life or that you're going to bet your house on its answer.
Then again, if that's what it takes to get good results... who am I to judge?
Links and Tangents
The original Reddit thread is a genuine masterclass in emergent community knowledge-building. Someone posts an observation, skeptics challenge it, someone else tests it rigorously, and by the end you've got what amounts to a peer-reviewed paper on advanced prompting techniques.
One commenter put it perfectly: "You're not playing checkers with a person. You're learning to set the initial conditions of a universe and observing the elegant physics that unfold."
That's it. That's the whole thing.
We're not building minds. We're building universes of possibility, and learning how to set the initial conditions to get the outcomes we want.
The question isn't whether these techniques work. The question is: what does it mean that they work?
And nobody—not Google, not OpenAI, not the Reddit hivemind—seems to have a good answer to that yet.
The core tension: We want AI to be a helpful tool that responds to clear, straightforward requests. But it turns out the most effective way to use it is to engage in elaborate psychological theater, treating it like a very smart person with something to prove.
Which tells us something profound about the nature of intelligence, language, and the strange new relationship we're building with machines that aren't quite thinking, but aren't quite not thinking either.
Welcome to 2025. Your AI works better if you lie to it.
Make of that what you will.
Reply with “Yes please” and we will send you the original post.