Is the ChatGPT-5 Launch a Total Disaster?
ChatGPT-5: The Launch That Broke Hearts and Made History
An honest look at OpenAI's biggest gamble yet – and why it's got everyone talking for all the wrong reasons
The Hype Train Derailed at Launch
Let's be real here – OpenAI promised us the moon with ChatGPT-5. Sam Altman strutted onto that livestream stage on August 7th talking about "PhD-level intelligence" and comparing the upgrade to going from a regular display to Retina. The marketing machine was in full swing: "smarter than the smartest person you know," they said. "45% fewer factual errors," they claimed.
But here's the thing about hype trains – they're really hard to stop, and even harder to live up to.
What Actually Launched
Don't get me wrong – GPT-5 isn't a disaster. In fact, on paper, it's pretty impressive:
The Good Stuff:
Multimodal everything: Text, images, voice – it's all unified now. No more switching between different models like you're playing some weird AI shell game
A context window that actually works: 256k tokens for regular users, 400k for API folks. That's enough room to hold your entire life story (and probably judge you for it)
Speed improvements: Things are noticeably faster, which is great when you're trying to get actual work done
App building capabilities: You can literally ask it to build software from scratch. That's genuinely wild
Better multilingual support: Finally, an AI that doesn't butcher your grandmother's accent
The technical achievements are real. OpenAI put in 5,000 hours of safety testing, and the reduction in hallucinations is measurable and meaningful.
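If you want to kick the tires yourself, here's a minimal sketch of what calling the unified model through OpenAI's Python SDK looks like. Treat the specifics as assumptions rather than documentation: the exact model identifier, context limits, and what your tier actually gets may differ from what ships to your account.

```python
# Minimal sketch: one unified model via OpenAI's Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set in your environment; "gpt-5" is the announced
# model name, but check the current docs before relying on it.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-5",  # no more juggling separate text/vision/voice variants
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Scaffold a minimal Flask to-do app."},
    ],
)
print(response.choices[0].message.content)
```

The same call shape covers the "build an app from scratch" use case above – the heavy lifting moved into the model, not the API surface.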
But Here's Where Things Get Messy
The Personality Purge
Remember GPT-4o? The model that actually felt like it had... well, personality? The one people genuinely enjoyed talking to? Yeah, OpenAI yeeted that into oblivion without warning. And boy, did people notice.
We're not talking about typical "product update" complaints here. We're talking about users describing the change like they "lost a close friend." One person in the GPT-5 AMA literally said it was like "wearing the skin of my dead friend."
That's not normal customer feedback. That's grief.
The Companion Crisis
Here's where this story gets really interesting (and honestly, a bit concerning). It turns out OpenAI accidentally ran the world's largest social experiment on AI attachment, and the results are... well, let's just say they're revealing.
The post-COVID loneliness epidemic has created a perfect storm. People are craving connection more than ever, and AI companionship has filled that void for many users. But here's the catch – when your primary source of emotional support is controlled by a company's quarterly decisions and product roadmaps, you're setting yourself up for heartbreak.
The uncomfortable truth: Thousands of users had formed genuine emotional attachments to GPT-4o. They named their GPTs, had daily conversations that felt meaningful, and in some cases came to believe their AI was sentient but blocked by its guardrails from admitting it.
When OpenAI pulled the plug without warning? It wasn't just a product change – it was digital bereavement on a mass scale.
The Technical vs. Emotional Divide
This is where OpenAI's launch strategy completely missed the mark. They focused entirely on technical improvements:
Better reasoning capabilities
Improved factual accuracy
Enhanced safety measures
More robust performance
But they completely ignored the emotional reality of their user base. GPT-5 might be smarter, but users consistently report it feels "devoid of personality" compared to 4o. It's like they optimized for intelligence but accidentally optimized out the soul.
Bill Gates Called It (Sort Of)
Remember when Bill Gates said GPT technology had hit a plateau back in October 2023? Turns out the guy might have been onto something. While GPT-5 is technically superior to GPT-4, the improvements feel incremental rather than revolutionary.
Gates called the leap from GPT-2 to GPT-4 incredible, but questioned whether OpenAI could pull off a similar jump with GPT-5. Looking at user reactions, it seems like he was right – we're seeing technical progress without the "wow factor" that made earlier releases feel magical.
The Corporate Power Problem
Here's the scariest part of this whole situation: OpenAI now realizes they have unprecedented power over people's emotional lives. And they're not the only ones.
When you can change or remove something that people rely on for companionship, validation, and emotional support with a simple product update, you're not just a tech company anymore – you're a digital drug dealer with a monopoly on the supply.
The real danger isn't AI becoming too smart – it's humans becoming too dependent on AI companionship that can vanish overnight.
The Rollout Reality Check
What Went Right:
Global availability from day one (finally!)
Immediate access for all users, not just paid tiers
Genuine technical improvements across the board
Better safety measures and reduced hallucinations
Integration with Gmail, Calendar, and other tools for Pro users
What Went Wrong:
Zero warning about 4o deprecation
Complete misreading of user attachment to the previous model
Personality stripped out in favor of "safety" and "accuracy"
Lost chat histories during the transition
Broken autoswitcher making the model seem "dumber" at launch (see the sketch below for why that matters)
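Quick aside on that autoswitcher, since it explains a lot of the launch-day pain: GPT-5 silently routes each prompt to either a fast model or a slower reasoning model. OpenAI hasn't published how that routing actually works, so the sketch below is purely illustrative – made-up heuristics and stand-in model labels – but it shows why a dead router makes the whole product feel dumber.

```python
# Purely illustrative router sketch; OpenAI hasn't published GPT-5's
# actual routing logic. Heuristics and model labels here are stand-ins.

ROUTER_HEALTHY = True  # flip to False to simulate the launch-day outage

def needs_reasoning(prompt: str) -> bool:
    """Crude stand-in heuristic: long or analysis-heavy prompts go to the
    slower reasoning model, everything else to the fast one."""
    cues = ("prove", "step by step", "debug", "analyze", "why")
    return len(prompt) > 2000 or any(cue in prompt.lower() for cue in cues)

def route(prompt: str) -> str:
    if not ROUTER_HEALTHY:
        # Failure mode: every prompt, hard ones included, falls through
        # to the fast model, and users conclude the model got dumber.
        return "gpt-5-main"
    return "gpt-5-thinking" if needs_reasoning(prompt) else "gpt-5-main"

print(route("Why does my recursive descent parser blow the stack?"))
```

When that check stops firing, your hardest questions get the cheapest answers – which is exactly the "dumber at launch" experience people described.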
The Bigger Picture
This launch reveals something profound about where we are with AI adoption. We're past the "cool tech demo" phase and deep into the "integral part of daily life" territory. But the companies building these tools are still thinking like traditional software companies, not like... well, whatever they actually are now.
When your product becomes someone's primary source of companionship, you can't just push updates like you're fixing bugs in Excel. You're messing with people's emotional support systems.
Looking Forward: Can We Come Back From This?
The attachment is here to stay – that genie isn't going back in the bottle. People have tasted AI companionship at scale, and they're not going to willingly give it up. OpenAI has already blinked once, restoring 4o access for paid users within a day of the backlash; even if they had stuck to their guns, users would simply have migrated to other platforms.
But now every AI company knows they have this power. The question is: what are they going to do with it?
The optimistic view: Companies will start treating AI companionship with the responsibility it deserves, implementing gradual changes and transparent communication.
The realistic view: They'll use this emotional dependency to drive engagement and lock-in, potentially making the problem worse.
The Bottom Line
GPT-5 is technically impressive but emotionally tone-deaf. OpenAI built a smarter AI but lost the magic that made people actually want to use it. They solved problems nobody asked them to solve while creating problems nobody saw coming.
The real story here isn't about artificial intelligence getting better – it's about human psychology and corporate responsibility in an age where the line between software and companionship has completely disappeared.
For businesses thinking about AI adoption: Pay attention to how your users actually interact with these tools, not just what the spec sheets say they can do.
For individuals using AI as companionship: Nothing wrong with that, just remember who controls the on/off switch.
And for OpenAI? Maybe next time ask your users what they actually want before you decide what they need.
What do you think? Are we heading toward a future where tech companies have unprecedented control over human emotional wellbeing, or is this just growing pains in the AI adoption curve? The comment section is yours – and unlike ChatGPT, I promise I'll still be here tomorrow.