In partnership with

Find your customers on Roku this Black Friday

As with any digital ad campaign, the important thing is to reach streaming audiences who will convert. To that end, Roku’s self-service Ads Manager stands ready with powerful segmentation and targeting options. After all, you know your customers, and we know our streaming audience.

Worried it’s too late to spin up new Black Friday creative? With Roku Ads Manager, you can easily import and augment existing creative assets from your social channels. We also have AI-assisted upscaling, so every ad is primed for CTV.

From there, you can easily set up A/B tests to flight different creative variants and Black Friday offers. If you’re a Shopify brand, you can even run shoppable ads directly on-screen so viewers can purchase with just a click of their Roku remote.

Bonus: we’re gifting you $5K in ad credits when you spend your first $5K on Roku Ads Manager. Just sign up and use code GET5K. Terms apply.

Hey, Josh here. Check out these stories.

The Trillion-Dollar AI Party: Who's Bringing the Champagne and Who's Left Holding the Bill?

Listen, I need to tell you about the most absurd week in tech since... actually, I'm not sure there's a precedent for this level of financial vertigo. We're talking about a single company—OpenAI—casually signing cloud deals worth more than most countries' GDP, while simultaneously the entire AI sector teeters on the edge of what everyone's thinking but nobody wants to say out loud: is this a fucking bubble?

Here's what happened in one seven-day span: OpenAI restructured at a $500 billion valuation, inked $38 billion with AWS, reportedly dropped $300 billion on Oracle, the EU started backing away from its own AI regulations like a kid who touched a hot stove, and tech stocks had their worst day in a month because suddenly everyone remembered that spending money and making money are, in fact, different things.

The kicker? We're simultaneously watching the birth of something genuinely transformative and a financial house of cards that makes the dot-com bubble look quaint.

Let me break down what's really going on.

The OpenAI Spending Spree: When "Big Bets" Become "Holy Shit"

TLDR: OpenAI just committed to spending more on cloud infrastructure in the next seven years than the entire U.S. government spent on the Apollo program (adjusted for inflation). And that's just one slice of their plans.

On November 3rd, OpenAI completed a corporate restructuring that valued the company at roughly $500 billion. For context, that's more than Walmart, more than Exxon, more than Visa. A company that was worth basically nothing seven years ago is now worth half a trillion dollars, and it's never turned an annual profit.

But here's where it gets wild. Microsoft, which had OpenAI in what amounted to an exclusive relationship (read: OpenAI was basically a very expensive Microsoft mistress), agreed to step back. Microsoft keeps its 27% stake—worth about $135 billion, which is insane in its own right—but gives up exclusive cloud rights. Why? Because OpenAI needs to raise capital from other sources, and being monogamous with Microsoft was cramping their style.

Within days, OpenAI signed a seven-year, $38 billion deal with Amazon Web Services. Not million. Billion. That's $38,000,000,000 to rent computing power from AWS, giving OpenAI access to hundreds of thousands of NVIDIA chips to train models that may or may not achieve artificial general intelligence.

Oh, and that's not all. Reports suggest OpenAI also tapped Oracle for $300 billion in cloud capacity and enlisted Google Cloud for support. When you add it up, OpenAI has committed to spending over $1 trillion on infrastructure.

One. Trillion. Dollars.

To put that in perspective: Amazon's stock hit record highs on the AWS news. Wall Street went bananas. But Sam Altman—OpenAI's boy-wonder CEO who has the vibe of someone who either knows exactly what he's doing or is the best con artist since Bernie Madoff—has stated openly that reaching AGI might require spending up to $1.4 trillion on 30 gigawatts of infrastructure.

What Is Actually Happening Here?

The thing is, this isn't just a story about one company making aggressive bets. This is about a fundamental rewiring of how we think about technology investment, competitive moats, and what constitutes rational business strategy in 2025.

Let's peel back the layers.

Layer One: The Surface Story
OpenAI needs more computing power than any single cloud provider can offer, so it's diversifying. Makes sense, right? Don't put all your eggs in one basket, especially when the basket costs hundreds of billions of dollars.

Layer Two: The Power Play
By breaking free from Microsoft's exclusive grip, OpenAI positioned itself as the belle of the ball. Every cloud provider—AWS, Oracle, Google—wants a piece of the AI gold rush. OpenAI essentially triggered a bidding war for its business, and these companies are falling over themselves to lock in multi-year, multi-billion-dollar contracts. Why? Because whoever powers the AI that achieves AGI (if that happens) gets to be the kingmaker. This is about staking a claim in what could be the most important technological transition since the internet.

Layer Three: The Deeper Mechanism
Here's where it gets interesting. OpenAI's spending commitments aren't just about buying computing power. They're about signaling. Every time OpenAI announces another massive deal, it reinforces the narrative: "We are the company that will get to AGI first, and we're willing to spend whatever it takes." That narrative drives their $500 billion valuation, attracts top-tier talent, and keeps competitors on their heels.

But it's also a trap. OpenAI is now on a treadmill it can't get off. If they slow down spending, the narrative collapses. If they don't achieve AGI—or something close enough to justify these expenditures—they become the poster child for irrational exuberance. They're in a "go big or go home" scenario, except home isn't an option anymore because they've already mortgaged it.

Layer Four: The System at Play
What we're witnessing is regulatory capture meets network effects meets good old-fashioned FOMO. The cloud providers are locking in OpenAI not just for the revenue, but because being associated with "the AGI company" elevates their status. Investors are pouring money into AI stocks not because of current profits, but because nobody wants to miss the next Google. And governments—spooked by the prospect of falling behind in the AI race—are loosening regulations (more on that in a second) to ensure their domestic players can compete.

This creates a self-reinforcing cycle: big bets lead to big valuations, which attract more capital, which enables bigger bets. Until it doesn't.

The Geopolitics of Silicon: NVIDIA, China, and the New Cold War

While OpenAI was signing trillion-dollar checks, the U.S. government was busy drawing lines in the sand—or rather, in silicon.

On November 4th, the White House confirmed that NVIDIA's most advanced AI chip, the Blackwell GPU, cannot be sold to China. Period. No scaled-down versions, no exceptions. A White House spokeswoman said the chips would be "reserved for U.S. companies" and kept out of China "at this time."

This is huge. NVIDIA's chips are the backbone of AI development. If you want to train state-of-the-art models, you need these GPUs. By banning Blackwell exports to China, the U.S. is essentially saying: "We're going to win the AI race by controlling the supply of the most critical input."

President Trump had previously floated the idea of allowing limited sales to China, even suggesting he might discuss it with Xi Jinping. But that conversation "did not come up" at their recent summit, which is diplomatic speak for "we've decided to freeze them out."

China, predictably, is not happy. This move will accelerate Beijing's efforts to develop domestic semiconductor capabilities, which could lead to a bifurcated AI ecosystem: one built on Western chips, another on Chinese silicon. The implications are staggering. We're not just talking about trade policy; we're talking about two competing visions of how AI should be developed, who should control it, and what it should be used for.

Here's why this matters: AI is no longer just a technology story. It's a national security story, a trade story, an industrial policy story. The countries that control AI infrastructure—the chips, the cloud, the data—will have strategic advantages that make oil reserves look quaint. We're witnessing the early stages of a new Cold War, except instead of nuclear arsenals, it's about who has the better GPUs.

Europe's Regulatory Retreat: When the Watchdog Gets Cold Feet

Speaking of geopolitics, let's talk about the EU's spectacular about-face on AI regulation.

The European Union passed the world's first comprehensive AI law—the AI Act—in 2024. It was supposed to be a landmark moment: Europe positioning itself as the global leader in responsible AI governance. The Act includes strict rules for "high-risk" AI systems, transparency requirements, and hefty fines for violations.

Except now, before the law has even fully kicked in, the EU is getting cold feet.

On November 7th, the European Commission confirmed it's "reflecting" on postponing parts of the AI Act after intense lobbying from tech companies and pressure from the Trump administration. Draft proposals include a one-year grace period for companies already deploying generative AI, pushing back fines until 2027, and offering more flexible compliance options.

Why the retreat? Three reasons:

  1. Industry pressure: Dozens of European companies—Airbus, Mercedes-Benz, and others—petitioned for a two-year pause, arguing that the regulations would stifle innovation and put European firms at a competitive disadvantage.

  2. U.S. threats: Washington has warned of tariffs on foreign tech regulations deemed discriminatory against American companies. Translation: if you make it too hard for OpenAI and Google to operate in Europe, we'll make it expensive for Airbus and BMW to sell in America.

  3. Fear of falling behind: European politicians are watching the AI arms race heat up and realizing that their companies aren't really in it. There's no European equivalent of OpenAI or Anthropic. If the regulations crush what little AI industry Europe has, they'll be left buying American or Chinese AI forever.

The Commission insists it "fully stands behind" the AI Act's objectives, but acknowledges the need for "realistic timelines," which is bureaucrat-speak for "we didn't think this through."

The broader lesson here is that regulation in a globalized, hyper-competitive market is really hard. If Europe goes too strict, capital and talent flow to America or Asia. If America goes too loose, we risk racing toward AGI with no guardrails. And if China does its own thing—which it will—we end up with three different AI governance models that may be fundamentally incompatible.

The Bubble Question: Are We in Dot-Com 2.0?

Here's the uncomfortable truth everyone's dancing around: a lot of very smart people are starting to wonder if we're in an AI bubble.

On November 6th, U.S. markets saw a sharp sell-off led by tech giants. The S&P 500 and Nasdaq suffered their steepest one-day drops in a month. NVIDIA and Palantir—two of the year's biggest winners—led the decline. Even after a modest rebound, tech stocks were down over 3% for the week.

Why the jitters? Because the math is starting to look scary. Tech now accounts for 36% of the S&P 500 by value—a higher weighting than during the dot-com bubble peak. Let that sink in: we're more concentrated in tech stocks now than we were in 2000, right before everything imploded.

Analysts are pointing out that AI spending is outpacing near-term returns by a country mile. OpenAI's $1+ trillion in cloud commitments, for example, are happening while the company reportedly lost $5 billion last year. Sure, they're investing for the future, but at some point investors are going to want to see a path to profitability that doesn't require inventing AGI.

The worry isn't just about OpenAI. It's about the entire ecosystem. Companies are throwing billions at AI projects because they're terrified of being left behind. But how many of these projects will actually generate returns? How many are just "AI" in name only—adding a chatbot to an existing product and calling it innovation?

There's a concept in economics called "adverse selection": when capital floods into a sector faster than investors can vet deals, information asymmetry lets lower-quality ventures crowd in, and the average quality of funded investments declines. In the dot-com era, we saw it with companies adding ".com" to their names and watching their stock prices double. Now we're seeing it with companies adding "AI" to their pitch decks.
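The selection dynamic is easy to see in a toy model. This sketch is purely illustrative — the uniform quality distribution and all the numbers are assumptions for demonstration, not data about the AI market:

```python
import random

def average_funded_quality(n_projects, n_funded, seed=0):
    """Toy model: each project has a 'quality' drawn uniformly from [0, 1],
    and investors fund the best n_funded of them. As capital floods in
    (n_funded grows), the marginal funded project is worse, so the average
    quality of the funded pool declines."""
    rng = random.Random(seed)
    qualities = sorted((rng.random() for _ in range(n_projects)), reverse=True)
    funded = qualities[:n_funded]
    return sum(funded) / len(funded)

# Scarce capital: only the top 10 of 1,000 pitches get funded.
selective = average_funded_quality(1000, 10)

# Gold rush: 500 of the same 1,000 pitches get funded.
frothy = average_funded_quality(1000, 500)

# More money chasing deals -> lower average quality of what gets funded.
assert selective > frothy
```

The point isn't the specific numbers — it's that the decline in average quality is mechanical, not a moral failing of any individual investor.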

C3.ai's struggles are instructive here. The company—one of the earliest enterprise AI firms—went public in 2020 amid huge hype (it even trades under the ticker "AI"). But its stock has plummeted over 50% this year, and the company is now exploring a sale after founder-CEO Thomas Siebel stepped down due to health reasons. C3.ai's board is in talks with potential acquirers, but the company's high valuation expectations have come crashing down to earth.

The thing about bubbles is that they're only obvious in retrospect. In the moment, there's always a narrative to justify the valuations. In 1999, it was "the internet will change everything" (which was true, but that didn't mean every internet company was worth billions). In 2025, it's "AI will change everything" (which is probably true, but...).

The difference this time is the scale. When the dot-com bubble burst, it wiped out about $5 trillion in market value. Given how much larger the tech sector is now, and how concentrated wealth is in a handful of AI-related stocks, a correction could be far more devastating.

Meanwhile, in the Lab: AI That Actually Works

Amid all the financial drama and geopolitical posturing, there's actual science happening. And some of it is genuinely incredible.

On November 5th, researchers from Nobel laureate David Baker's lab at the University of Washington published a breakthrough in Nature: they used AI to design novel antibodies from scratch. This is a big fucking deal.

Antibodies are crucial for medicine—cancer treatments, vaccines, immunotherapy. It's a $200 billion market. Traditionally, discovering a new antibody involves immunizing animals and doing months or years of trial-and-error to find something useful. The AI system (based on Baker's RFdiffusion tool) can invent antibody structures computationally in a fraction of the time.

Lab tests confirmed that the AI-designed antibodies attached to their targets—including a flu virus protein and a bacterial toxin—exactly as predicted. The researchers made the software openly available, and a UW spin-off company, Xaira Therapeutics, is moving the tech toward drug development.

This is the kind of AI application that justifies the hype. Not chatbots that hallucinate facts or image generators that produce "AI slop," but tools that accelerate scientific discovery in ways that could save millions of lives. On-demand antibodies for any disease, designed in days instead of years? That's transformative.

It's also a reminder that while everyone's focused on large language models and the race to AGI, some of the most valuable AI applications might be in specialized domains like drug discovery, materials science, and climate modeling.

The Creator Economy, AI Edition: When Algorithms Make the Content

Let's talk about something that's either the future of social media or a dystopian nightmare, depending on your perspective.

On November 6th, Meta announced it's launching "Vibes" in Europe—an AI-powered short-video feed where every video is entirely AI-generated. It's like TikTok, except instead of teenagers dancing, you have... AI-generated content that users create from text prompts and remix.

Meta is positioning this as "an inherently social and collaborative creation experience," but early reactions have been brutal. When CEO Mark Zuckerberg first unveiled Vibes in the U.S., the comments were savage: "nobody wants this," "this is AI slop," and so on.

Meanwhile, OpenAI's Sora app—which lets users turn text prompts into short videos—expanded to Android on November 4th after reportedly hitting 1 million downloads on iOS in its first week. Unlike Vibes, Sora seems to have found an audience, probably because the quality is better and users feel like they're creating something rather than consuming algorithm-generated content.

Here's the deeper question: what happens when AI can generate unlimited content at near-zero marginal cost? YouTube is already worried about a flood of low-quality AI videos. TikTok's algorithm is good, but what happens when there are more AI-generated videos than human-created ones?

We're entering a world where the line between "creator" and "consumer" is blurring, but not in the empowering way tech companies promised. Instead, we might end up with feeds of AI-generated content that's optimized for engagement but devoid of genuine human creativity or connection. It's the logical endpoint of algorithmic content distribution: why bother with messy human creators when you can just have the algorithm generate the videos directly?

The copyright implications alone are mind-boggling. If an AI generates a video that looks like a celebrity, who owns it? If I remix an AI-generated video, am I violating someone's rights, or is it fair game because no human created it in the first place?

Healthcare AI: Dr. ChatGPT Will See You Now

On November 10th, reports emerged that OpenAI is exploring a move into consumer healthcare products, including an AI-powered personal health advisor. The idea: leverage ChatGPT to provide personalized medical advice or coaching—essentially "Dr. ChatGPT" for everyday users.

OpenAI has been hiring for this, bringing on Nate Gross (co-founder of Doximity, the physician network) as head of healthcare strategy, and a former Instagram executive as VP of health products. At a healthcare conference in October, Dr. Gross noted that ChatGPT draws around 800 million weekly interactions, many from people seeking medical information.

This is a space where tech giants have repeatedly failed. Google Health? Shut down. Microsoft HealthVault? Gone. Why? Because healthcare is hard. It's regulated, it's sensitive, and getting it wrong can literally kill people.

But here's the thing: people are already using ChatGPT for medical questions. They're just doing it without any oversight, safety mechanisms, or regulatory framework. So in some ways, OpenAI formalizing a healthcare product could be safer than the status quo.

Still, the concerns are real. Medical advice from an AI that occasionally hallucinates? That's terrifying. And doctors are right to worry: not because AI will replace them (it won't, not anytime soon), but because patients armed with AI-generated medical information might become harder to treat, more anxious, or more mistrustful of professional guidance.

If OpenAI succeeds, it could revolutionize access to healthcare information, especially for people in underserved areas. If it fails, it could set back AI in medicine by years and lead to regulatory backlash that stifles legitimate innovation.

So What Does All This Actually Mean?

Let's zoom out. What we're witnessing is the collision of several massive forces:

1. Unprecedented capital flows into AI: Companies are spending trillions on infrastructure, acquisitions, and R&D, justified by the belief that AI will transform every industry. This is creating both genuine innovation and speculative excess.

2. Geopolitical competition: The U.S.-China rivalry is playing out through chip bans and cloud deals. Whoever controls the AI stack controls the future, and governments know it.

3. Regulatory uncertainty: Europe is backing down from its ambitious AI Act under pressure from industry and the U.S. We're in a race to the bottom on regulation, which could end badly.

4. Market concentration: Tech stocks—particularly AI-related ones—dominate the market to a degree we haven't seen since the dot-com peak. This makes the entire economy vulnerable to an AI correction.

5. Real breakthroughs: Amid the hype, there's legitimate scientific progress. AI-designed antibodies, advanced language models, and new applications in healthcare and science are happening.

The question is whether we can separate the signal from the noise. Which AI investments are building the future, and which are just burning capital? Which companies will be the Amazons and Googles of the AI era, and which will be the Pets.coms and Webvans?

Right now, we don't know. The honest answer is that we're in the middle of a transition so profound that we won't understand its full implications for years. The AI revolution might be the most important technological shift since electricity, or it might be a classic case of technology arriving later than expected and then being less transformative than promised.

What I do know is this: when a single company commits to spending over $1 trillion on computing infrastructure, when governments are weaponizing semiconductor exports, and when stock market valuations hinge on whether a company can invent artificial general intelligence, we're in uncharted territory.

The Verdict

Are we in a bubble? Probably some areas, yes. Is AI transformative? Also yes. Can both things be true? Absolutely.

The dot-com era gave us Amazon, Google, and eBay—companies that changed how we live. It also gave us thousands of failed startups and billions in losses. The AI era will likely be the same: enormous value creation alongside enormous value destruction.

The winners will be the companies that solve real problems with AI, rather than just slapping "AI-powered" on their marketing materials. The losers will be the ones chasing hype without substance.

For investors, the playbook is clear: be very, very careful about paying sky-high multiples for companies with no profits and vague promises about AGI. For policymakers, the challenge is regulating without stifling innovation. For the rest of us, it's about staying informed and skeptical—enjoying the benefits of AI without getting swept up in the madness.

Because here's the thing about technological revolutions: they're messy, chaotic, and rarely play out the way anyone predicts. The companies spending the most might not be the ones that win. The technologies generating the most buzz might not be the most valuable. And the future we're building with AI might look nothing like what any of us expect.

Buckle up. It's going to be a wild ride.

Worth reading: The Getty Images vs. Stability AI case is a microcosm of the broader copyright debate around AI training data. Getty dropped its main copyright claim due to lack of evidence about what data Stability used—highlighting how current IP laws are woefully unprepared for generative AI.

Stat that broke my brain: OpenAI's cloud deals total more than $1 trillion. For context, that's more than the GDP of Indonesia, the world's 16th largest economy.

Hot take: Meta's "Vibes" app is going to fail spectacularly, not because AI-generated video is bad, but because social media is about connection and status signaling. Nobody's going to brag about their AI-generated video the way they do about their TikTok follower count.

Prediction: Within two years, we'll see the first major AI company collapse, and it will send shockwaves through the sector. My money's on it being an enterprise AI firm that raised at a massive valuation but couldn't find product-market fit. (C3.ai is already circling the drain.)

The quiet story: While everyone's focused on ChatGPT and Sora, Google is reportedly in talks to boost its stake in Anthropic (maker of Claude AI) to a valuation over $350 billion. Google's strategy of hedging its bets by funding OpenAI's competitors might be smarter than Microsoft's all-in approach.

Until next time, stay skeptical and keep asking: "But how do they actually make money?"
