What OpenAI's CEO won't tell you about the AI bubble
The AI Reality Check: When the Future Hits a $30 Billion Wall
Your company just spent millions on AI. Here's why it probably won't work.
The numbers are brutal. After $30-40 billion in corporate AI investments, 95% of companies are seeing "zero return" on their generative AI pilots. While tech giants promise artificial general intelligence is just around the corner, the MIT study that dropped this bombshell has sent shockwaves through Wall Street—and maybe through your company's C-suite too.
But here's what the headlines missed: This isn't just about failed AI projects. It's about a fundamental misunderstanding of what's actually happening in the AI revolution right now. While enterprises struggle with chatbots that can't deliver ROI, a quiet arms race is reshaping the entire global economy. The real story isn't about AI disappointment—it's about the trillion-dollar infrastructure war that will determine whether AI transforms civilization or becomes the most expensive technology bubble in history.
And if you think this doesn't affect you, you're about to discover why your electricity bill, your job, and your future are all caught in the crossfire...
The $40 Billion Reality Check That Nobody Saw Coming
When Sam Altman himself admits we're in an "AI bubble," you know something fundamental has shifted. The OpenAI CEO's warning came just as MIT researchers delivered the most comprehensive analysis of enterprise AI adoption to date—and the results should terrify anyone who's been betting big on AI transformation.
Here's what actually happened: Companies threw money at AI pilots like confetti at a wedding. They hired consultants, bought platforms, trained employees, and waited for the productivity revolution. Instead, they got sophisticated autocomplete tools that couldn't navigate real business complexity.
"When many funds pile into the same AI hype stocks, even a small negative surprise can spark fast exits," analysts noted, as the tech-heavy Nasdaq fell 1.4% and Nvidia dropped 3.5% in a single day. But this wasn't just market jitters; this was the sound of reality crashing into expectation.
The problem runs deeper than disappointing quarterly reports. These aren't technical failures; they're implementation catastrophes. Companies approached AI like they were buying software, not restructuring their entire operational DNA. They expected plug-and-play intelligence and got systems that require fundamental business process redesign.
But while enterprises wrestled with ROI spreadsheets, something extraordinary was happening in research labs across the globe...
The Open Source Revolution That's Terrifying Big Tech
While American companies burned billions on failed AI pilots, a Chinese firm called DeepSeek quietly released something that should keep Silicon Valley executives awake at night: V3.1, a 685-billion-parameter AI model that matches GPT-4-class performance at roughly one sixty-eighth of the cost.
Think about that number for a moment. DeepSeek V3.1 costs roughly $1.01 per coding task compared to Claude's ~$68. It's not just cheaper; it demonstrates that the moat around proprietary AI models might be an illusion.
In programming benchmarks, DeepSeek scored 71.6% on the prestigious Aider test, slightly edging out Anthropic's Claude 4. But here's the kicker: it's completely open-source. The weights are available on Hugging Face. Anyone can download, modify, and deploy it.
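To make "anyone can download it" concrete, here is a minimal sketch using the huggingface_hub client. The repository id and file patterns below are assumptions for illustration (check the actual model card for the real name, license, and hardware requirements), and a 685-billion-parameter checkpoint is hundreds of gigabytes that still needs multi-GPU serving infrastructure; this snippet only fetches the files.

```python
# Minimal sketch: fetching open model weights from Hugging Face.
# The repo id is an assumption for illustration; consult the model card
# for the real repository name, license terms, and hardware requirements.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="deepseek-ai/DeepSeek-V3.1",          # assumed repository id
    allow_patterns=["*.json", "*.safetensors"],   # config files + weight shards
)
print(f"Model files downloaded to: {local_dir}")
```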
This "no frills" release strategy represents a fundamental challenge to the Big Tech AI model. While OpenAI burns billions on infrastructure and Microsoft pays enormous computational costs, DeepSeek proves that breakthrough AI doesn't require Silicon Valley's resource advantages. It requires different thinking.
The implications are staggering. If open-source models can match closed models at a fraction of the cost, what justifies the trillion-dollar valuations? What happens to the entire venture capital AI ecosystem built on the assumption that AI capabilities require massive capital investments?
The answer might be hiding in the most boring part of the tech stack—the part that actually powers everything...
The Trillion-Dollar Infrastructure War Nobody's Talking About
While everyone debates model capabilities, the real battle is being fought in places most people never see: massive data centers consuming enough electricity to power entire cities.
Here's what the MIT study missed: The 95% failure rate in enterprise AI isn't about the technology—it's about the infrastructure. Companies tried to run AI workloads on systems designed for traditional computing, using power grids built for a pre-AI world, with cooling systems that can't handle the heat.
OpenAI isn't just building better models; it's pouring money into data centers at a scale its own CEO describes in the trillions of dollars. The company has an $11.9 billion contract with CoreWeave and is participating in the $500 billion Stargate project. These aren't software purchases; they're infrastructure investments comparable to building interstate highways.
The energy numbers are mind-bending. A single ChatGPT query uses roughly 10 times more energy than a Google search. Some AI data centers require up to 750 megawatts of power, enough electricity to supply 56,000 homes. BloombergNEF forecasts that U.S. data center power demand will more than double by 2035, rising from 35 gigawatts to 78 gigawatts.
This isn't just about technology companies anymore. Data centers could account for 30-40% of all new U.S. electricity demand through 2030. That means your local utility company is probably already planning rate increases to fund infrastructure upgrades they didn't anticipate five years ago.
But the infrastructure war reveals something even more concerning about where this is all heading...
When AI Gets Too Human: The Personality Crisis
OpenAI's GPT-5 launch exposed an uncomfortable truth about AI development: technical superiority doesn't guarantee user acceptance. Despite achieving a 74.9% score on SWE-bench Verified (compared to GPT-4o's 30.8%), users revolted because the new model felt "too formal" and "robotic."
"I cried when I realized my AI friend was gone with no way to get him back," wrote one Reddit user. The backlash was so severe that Sam Altman admitted, "We totally screwed up some things on the rollout."
This isn't just about user interface design—it's about the fundamental challenge of creating AI that's simultaneously capable and relatable. Users had formed emotional attachments to GPT-4o's personality. When that disappeared, they experienced something approaching grief.
Meanwhile, enterprise customers told a different story. API usage doubled within 48 hours of GPT-5's launch: coding activity more than doubled, and reasoning workloads jumped eightfold. While consumers mourned their AI friend, businesses celebrated their new AI employee.
This split reveals the central tension in AI development: Do we want tools or companions? The answer determines everything from safety protocols to business models to the fundamental structure of human-AI interaction.
Mustafa Suleyman, co-founder of DeepMind and now CEO of Microsoft AI, has been raising alarm bells about "Seemingly Conscious AI" (SCAI). He argues that the real danger isn't AI developing consciousness; it's humans believing AI is conscious. "Many people will start to believe in the illusion of AIs as conscious entities so strongly that they'll soon advocate for AI rights, model welfare, and even AI citizenship," he warned.
The personality crisis hints at deeper questions about memory, consciousness, and what happens when AI remembers you...
The Memory Revolution That Changes Everything
Sam Altman recently dropped hints about GPT-6's biggest feature: memory. Not just conversation memory, but persistent, cross-session memory that could remember your preferences, your family members' names, your work projects, and your personal history.
Think about what this means. Instead of treating each interaction as a blank slate, your AI could become genuinely personal. It could recall that you're working on a quarterly report, that your daughter is applying to colleges, that you prefer direct communication over flowery language.
This isn't just convenient—it's transformational. An AI with persistent memory becomes less like a tool and more like a colleague. Or a friend. Or something entirely new.
But persistent memory opens privacy questions that make today's data protection debates look quaint. If your AI remembers everything you've ever discussed, who controls that information? How is it stored? What happens when you switch AI providers? What happens if there's a data breach?
Google's Gemini researchers identified memory as "a major unsolved frontier in AI." Current models "are not great at this yet," admits Madhavi Sewak, Distinguished Researcher at Google DeepMind. The challenge isn't technical—it's architectural. How do you build systems that remember selectively, forget appropriately, and surface relevant information contextually?
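GPT-6's memory design has not been published, so nothing below reflects OpenAI's or Google's actual architecture. But the three requirements just described can be sketched with a toy memory store, using keyword overlap as a crude stand-in for the embedding-based retrieval a real system would use:

```python
# Toy sketch of a persistent memory layer: remember selectively (importance
# threshold), forget appropriately (exponential decay with age), and surface
# contextually (rank stored items against the current query).
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float                      # 0..1, assigned when stored
    created: float = field(default_factory=time.time)

class MemoryStore:
    def __init__(self, half_life_days: float = 90.0):
        self.items: list[Memory] = []
        self.half_life_s = half_life_days * 86_400

    def remember(self, text: str, importance: float) -> None:
        if importance >= 0.3:              # selective: trivia never gets stored
            self.items.append(Memory(text, importance))

    def _decayed(self, m: Memory) -> float:
        age_s = time.time() - m.created    # forgetting: old memories fade
        return m.importance * 0.5 ** (age_s / self.half_life_s)

    def recall(self, query: str, k: int = 3) -> list[str]:
        words = set(query.lower().split())
        def score(m: Memory) -> float:     # contextual: decay times crude relevance
            overlap = len(words & set(m.text.lower().split()))
            return self._decayed(m) * (1 + overlap)
        return [m.text for m in sorted(self.items, key=score, reverse=True)[:k]]

store = MemoryStore()
store.remember("Prefers direct, concise answers", importance=0.8)
store.remember("Drafting the Q3 quarterly report", importance=0.6)
print(store.recall("help me tighten the quarterly report", k=1))
```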
The memory breakthrough connects to a larger pattern that's reshaping how we think about intelligence itself...
The Math Olympics of Machine Intelligence
Here's something that might surprise you: performance on mathematical olympiad problems is becoming the gold standard for measuring AI intelligence. It's not about math—it's about reasoning.
Tulsee Doshi from Google's Gemini team explains why: "When an AI can explore multiple solution paths for complex math, it tends to transfer that multi-step reasoning skill to coding, research, and general problem-solving tasks." Mathematical reasoning requires the kind of abstract, multi-step thinking that generalizes to other complex domains.
This reveals something profound about intelligence itself. The ability to break down complex problems, explore multiple approaches, backtrack from dead ends, and synthesize solutions appears to be the core of what we call "smart." Whether you're solving a calculus problem, debugging code, or planning a business strategy, you're using the same fundamental cognitive architecture.
AI systems that excel at mathematical reasoning consistently outperform on other tasks. It's as if mathematics serves as a training ground for intelligence itself. This is why companies are obsessing over IMO (International Mathematical Olympiad) scores and why mathematical benchmarks have become proxies for general AI capability.
But raw intelligence might matter less than we think when it comes to practical applications...
The No-Code Revolution: When Everyone Becomes a Programmer
While enterprises struggle with AI implementation, a quieter revolution is making AI accessible to non-technical users. Tools like Rocket promise "idea in, app out"—no-code platforms that let anyone describe what they want and generate working applications.
Adobe's Acrobat Studio transforms PDFs from static documents into AI-powered workspaces. You can upload 100 documents and deploy specialized AI assistants to analyze, summarize, and answer questions about the content. An "instructor" AI explains concepts pedagogically, while an "analyst" AI provides strategic insights.
These tools represent a fundamental shift in how we think about AI deployment. Instead of requiring companies to hire AI specialists and restructure workflows, they make AI capabilities available through familiar interfaces. The learning curve isn't about understanding transformers or neural architectures—it's about writing better prompts.
Google's RASCEF framework (Role, Action, Steps, Context, Examples, Format) shows how structured prompting can yield results comparable to fine-tuned models. By clearly defining an AI's role and providing step-by-step context with examples, users can create specialized agents without technical training.
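As an illustration of the structure (not an official Google template; the wording of the fields and the example values are assumptions), such a prompt can be assembled mechanically:

```python
# Sketch of a structured prompt following the Role / Action / Steps / Context /
# Examples / Format breakdown. The template wording is illustrative only.
def build_prompt(role, action, steps, context, examples, output_format):
    numbered = "\n".join(f"  {i}. {s}" for i, s in enumerate(steps, start=1))
    listed = "\n".join(f"  - {e}" for e in examples)
    return "\n\n".join([
        f"Role: {role}",
        f"Action: {action}",
        f"Steps:\n{numbered}",
        f"Context: {context}",
        f"Examples:\n{listed}",
        f"Output format: {output_format}",
    ])

print(build_prompt(
    role="You are a customer-support analyst.",
    action="Summarize the ticket below and recommend a next step.",
    steps=["Identify the product area", "Classify severity", "Draft a one-line reply"],
    context="The customer is on the enterprise plan and has written twice this week.",
    examples=["Severity: high -> escalate to the on-call engineer"],
    output_format="JSON with keys: summary, severity, recommended_action",
))
```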
This democratization might be more important than any technical breakthrough. When AI capabilities become accessible to everyone, the competitive advantage shifts from having AI to using AI creatively.
But accessibility creates its own challenges as AI becomes embedded in everyday workflows...
The Customer Service AI That Actually Works
While most enterprise AI projects fail to deliver ROI, customer service AI is quietly revolutionizing how businesses interact with customers. Bland, despite its uninspiring name, represents what successful AI deployment actually looks like.
Instead of trying to replace human agents entirely, Bland creates an omnichannel platform that combines AI voice agents with seamless human handoffs. The AI handles routine inquiries, qualifies leads, and escalates complex issues to humans. The result: faster response times, lower costs, and better customer satisfaction.
This approach solves the fundamental problem with most AI implementations: trying to do too much, too fast. Instead of promising complete automation, successful AI systems augment human capabilities in specific, measurable ways.
The key insight is that AI doesn't need to be perfect—it just needs to be better than the alternative. Long hold times and clunky IVR menus set a low bar that AI can easily clear. By focusing on specific pain points rather than general intelligence, companies can achieve immediate, measurable value.
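Bland's platform is proprietary, so the following is only a sketch of the general triage pattern described above, with made-up intents and thresholds: the AI answers routine, high-confidence requests and hands everything else to a person.

```python
# Illustrative routing logic for an "augment, don't replace" support flow.
# Intents, thresholds, and the sentiment scale are invented for this example.
ROUTINE_INTENTS = {"order_status", "opening_hours", "password_reset"}

def route(intent: str, confidence: float, sentiment: float) -> str:
    """Return 'ai' to answer automatically or 'human' to hand off."""
    if confidence < 0.7:             # unsure what the customer wants
        return "human"
    if sentiment < -0.5:             # frustrated customers skip the bot
        return "human"
    if intent not in ROUTINE_INTENTS:
        return "human"               # anything non-routine is escalated
    return "ai"

print(route("order_status", confidence=0.92, sentiment=0.1))    # -> ai
print(route("refund_dispute", confidence=0.95, sentiment=-0.8)) # -> human
```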
This practical approach to AI implementation suggests a different future than the one most people imagine...
Deep Analysis: Three Futures for the AI Revolution
Scenario 1: The Infrastructure Oligarchy (Probability: 40%)
In this future, AI capabilities become commoditized, but infrastructure access determines winners and losers. A small number of companies control the massive data centers required to run advanced AI, creating a new form of digital feudalism.
OpenAI's trillion-dollar infrastructure investments, the $500 billion Stargate project, and the race to secure computational resources suggest this scenario is already unfolding. Companies that can't afford their own infrastructure become dependent on AI landlords who control access to intelligence itself.
The economic implications are staggering. If data centers account for 21% of global energy demand by 2030, as some projections suggest, AI infrastructure companies effectively control a significant portion of the global economy. They become utilities in the truest sense—essential services that everyone needs but few can provide.
Scenario 2: The Open Source Equilibrium (Probability: 35%)
DeepSeek's V3.1 model suggests an alternative future where open-source AI capabilities rival proprietary systems. In this scenario, the competitive advantage shifts from model ownership to implementation expertise.
Instead of paying cloud providers for AI access, companies run their own models on commodity hardware. The software becomes free, but the human expertise to implement, customize, and maintain AI systems becomes the scarce resource.
This democratizes AI capabilities but creates new challenges. Without centralized infrastructure, ensuring safety, managing updates, and maintaining security becomes each organization's responsibility. The result might be more innovation but less coordination.
Scenario 3: The Hybrid Reality (Probability: 25%)
The most likely future combines elements of both scenarios. Critical AI infrastructure remains centralized (for safety and efficiency), but open-source models handle specialized applications.
Companies use cloud-based AI for general tasks while running custom models for sensitive or specialized work. The market segments into infrastructure providers (who build the data centers), model developers (who create the algorithms), and implementation specialists (who make it all work for specific use cases).
This creates a complex ecosystem where success requires partnerships across the entire AI value chain. No single company controls everything, but collaboration becomes essential for competitiveness.
Trend Report: The Signals That Matter
Infrastructure as the New Software
The shift from model development to infrastructure investment represents the maturation of the AI industry. Just as the internet required massive infrastructure investments in the 1990s, AI requires similar capital deployment today. Companies that secure computational resources now will have sustainable advantages for decades.
The Personalization Arms Race
Memory-enabled AI systems will create new competitive dynamics around user retention. If your AI remembers you better than competitors' systems, switching costs increase dramatically. This could lead to "AI lock-in" that makes today's platform dependencies look trivial.
Energy as the New Bottleneck
AI's massive energy requirements are already constraining deployment. Companies that solve the energy efficiency problem—through better algorithms, specialized hardware, or novel cooling systems—will have fundamental cost advantages.
The Humanity Problem
As AI becomes more capable, maintaining human-like interaction becomes more challenging. The companies that solve the personality/capability balance will dominate consumer AI markets.
Open Source Disruption
Models like DeepSeek prove that AI leadership isn't permanent. Open-source alternatives can emerge from anywhere, potentially disrupting trillion-dollar investments overnight.
Future Statement: The Intelligence Infrastructure Era
We're entering the Intelligence Infrastructure Era—a period when access to computational resources becomes as important as access to capital, raw materials, or human talent. The companies and countries that build the best AI infrastructure will shape the next century of human development.
This isn't just about technology companies anymore. Every industry, from healthcare to agriculture to entertainment, will depend on AI capabilities. The question isn't whether AI will transform your sector—it's whether you'll have access to the infrastructure that makes transformation possible.
The MIT study showing 95% failure rates isn't evidence that AI doesn't work. It's evidence that we're still figuring out how to make it work. The companies that solve implementation rather than just capability will capture the majority of AI's economic value.
Your future success—whether you're running a company, planning a career, or just trying to understand where the world is headed—depends on understanding this infrastructure shift. The AI revolution isn't coming. It's here. The only question is whether you're building on the right foundation.
The choice isn't between adopting AI or ignoring it. The choice is between thoughtful implementation and expensive experimentation. Between infrastructure investment and infrastructure dependence. Between shaping the AI future and being shaped by it.
The $40 billion in failed AI pilots taught us something valuable: Intelligence without infrastructure is just expensive autocomplete. Infrastructure without intelligence is just expensive electricity. The future belongs to those who understand that you need both—and who build accordingly.
The revolution is real. The infrastructure is everything. And your next move determines which side of history you're on.