The Great Consolidation: How AI Just Ate Everything

A deep dive into the past few months of AI news—and what it tells us about who's winning, who's losing, and what comes next

Listen, I need you to look at something with me.

In the span of about 90 days, we watched Google announce a $40 billion infrastructure investment, Nvidia hit a $5 trillion valuation (making it the first company ever to reach that mark), OpenAI restructure into a $500 billion entity, and record labels—the people who spent decades suing teenagers for downloading mp3s—suddenly partner with AI music startups. HP is cutting up to 6,000 jobs to "adopt AI." Walmart integrated ChatGPT into shopping. A Chinese hacking group used Claude to autonomously run 80-90% of a cyberattack operation.

What is going on?

Here's the thing: we're not watching the AI revolution anymore. We're watching the consolidation phase. The moment when experimental technology becomes infrastructure, when "cool demo" becomes "essential business tool," when the question shifts from "will this work?" to "who controls it?"

And the answer to that last question is getting clearer—and more concentrated—by the day.

The $5 Trillion Question

Let's start with Nvidia, because their story is basically everyone's story.

Three months. That's how long it took Nvidia to go from a $4 trillion valuation to $5 trillion. The company reported nearly $26 billion in net income in a single quarter. They struck a deal with OpenAI where OpenAI buys billions in GPUs in exchange for Nvidia taking a $100 billion stake in them. They invested $5 billion in Intel to shore up chip supply. They announced partnerships for AI supercomputers with the Department of Energy, self-driving cars with Uber and Mercedes, even 6G research with Nokia and T-Mobile.

The stock is up roughly 50% in 2025 alone.

Here's what that represents: Nvidia isn't just selling picks and shovels during a gold rush. They've become the ground itself. Every AI model needs to train on something. Every inference needs to run somewhere. And right now, that somewhere is overwhelmingly Nvidia hardware.

But here's the kicker—and this is where it gets interesting—some very smart people think this is a bubble. Michael Burry, the guy who called the 2008 housing crisis, is actively shorting Nvidia stock. He's calling the AI boom a "glorious folly." The concern isn't that AI doesn't work. It's that tech giants, startups, and Nvidia are engaging in circular deals—buying each other's technology and equity—that inflate valuations without proven returns.

Think about that OpenAI-Nvidia deal for a second. OpenAI commits to buying billions in GPUs. Nvidia takes a huge stake in OpenAI. OpenAI's valuation goes up. Nvidia's valuation goes up. Everyone's raising more money based on everyone else's valuations. It's elegant. It's also potentially terrifying if you remember 2008.
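The feedback loop is easier to see with numbers. Here's a toy model, with entirely made-up figures and nothing to do with either company's actual books: two firms each hold a stake in the other, so marking one up mechanically marks up the other, and the headline valuations settle above what either business is worth on its own.

```python
def circular_valuations(base_a, base_b, stake_a_in_b, stake_b_in_a, rounds=10):
    """Iterate mark-to-market: each firm's valuation is its standalone
    business value plus the current market value of its cross-stake."""
    val_a, val_b = base_a, base_b
    for _ in range(rounds):
        val_a = base_a + stake_a_in_b * val_b
        val_b = base_b + stake_b_in_a * val_a
    return val_a, val_b

# Two firms each worth 100 standalone, holding 20% of each other.
a, b = circular_valuations(100, 100, 0.20, 0.20)
# Both converge to 125: 25% above standalone value, created purely
# by the cross-holdings, with no new cash flows anywhere.
```

Nothing illegal happened in the toy model; each mark is individually defensible. The fragility is that the extra 25% evaporates in both directions the moment either firm gets marked down.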

The question isn't whether AI creates value. It's whether the current valuations reflect actual value creation or just really sophisticated financial engineering.

The Nonprofit That Became Worth Half a Trillion Dollars

Which brings us to OpenAI.

On October 28, 2025, OpenAI completed one of the more audacious corporate transformations in tech history. The nonprofit that created ChatGPT restructured itself into a public benefit corporation valued at $500 billion. Microsoft now owns roughly 27% of it—worth over $100 billion. The newly created OpenAI Foundation holds equity worth around $130 billion, making it one of the wealthiest philanthropic entities on Earth.

Let's be clear about what happened here: a nonprofit created to ensure AGI "benefits all of humanity" just became a for-profit company worth more than most countries' GDPs, while technically maintaining nonprofit oversight through a foundation structure.

Delaware's Attorney General signed off after a year of legal scrutiny. Elon Musk—OpenAI's co-founder turned critic—tried to block it with lawsuits, then dropped them to make a nearly $100 billion takeover bid that went nowhere. The whole saga reads like corporate theater.

But here's why this matters: OpenAI needed this structure to compete. Training frontier AI models costs billions. You can't do that on donations and grants. The restructuring lets OpenAI raise money at the scale of tech giants while—in theory—maintaining its safety mission through the foundation's oversight.

The foundation will start by investing $25 billion in health research and AI safety initiatives. As OpenAI's value grows, so does the foundation's stake and funding capacity. It's a clever mechanism: profit funds safety, and more profit funds more safety.

The question is whether that actually works in practice, or whether profit incentives eventually overwhelm everything else. OpenAI insists the nonprofit Foundation retains "ultimate control." We'll see.

When the Music Dies (And Then Gets Licensed)

Meanwhile, something genuinely surprising happened in entertainment: Universal Music Group—the largest record label in the world—settled its copyright lawsuit with AI music startup Udio and entered into a licensing partnership.

This is wild. UMG had sued Udio for training AI on songs like "My Girl" without permission. Standard music industry playbook: sue first, ask questions later. They did the same thing with Napster, LimeWire, every streaming service, every new technology for two decades.

But this time they pivoted mid-lawsuit. CEO Lucian Grainge is now "embracing new technologies." Under the deal, Udio can train on and generate music from UMG's catalog legitimately. Artists can opt in and get compensated when AI creates tracks in their style. Financial terms weren't disclosed, but this is the first-ever licensing agreement between a major label and an AI music platform.

The same day, UMG announced a partnership with Stability AI to develop professional generative music tools. Warner Music and Sony Music are striking similar deals with other AI platforms.

Here's what changed: the labels realized they were fighting the last war. They can't stop AI from generating music—the technology exists, it's getting better, and it's not going away. But they can control whether AI companies have access to the highest-quality training data (their catalogs), and they can structure deals that compensate artists and maintain some control over how their music gets used.

It's an about-face that shows surprising strategic flexibility. As Rolling Stone put it, the music industry is "ending its war" with AI and instead teaming up to shape it.

The broader implication: we're entering an era where intellectual property becomes licensing opportunity rather than legal battlefield. If you can't beat the technology, you structure deals that let you profit from it.

The Productivity Delusion

HP is cutting between 4,000 and 6,000 jobs by 2028—roughly 10% of its workforce—to "streamline operations and adopt more AI-driven processes." The goal, per CEO Enrique Lores, is to speed up product development and improve customer support using artificial intelligence.

Expected savings: about $1 billion over three years.

Here's the productivity delusion in action: AI will make us more efficient, so we need fewer people. The company can do more with less. Investors love this narrative. Stock goes up.

But let's think about what's actually happening. Over 30% of HP's recent PC shipments are "AI-enabled" models—machines with built-in AI features or specialized co-processors. There's demand for these products. The company is racing to design more of them. The market is growing.

And they're cutting 10% of their workforce.

This is the pattern emerging across industries. AI creates genuine value—better products, new capabilities, faster processes. But the gains accrue primarily to shareholders through cost reduction rather than to workers through new opportunities. The company gets more efficient, but doesn't proportionally grow. It just makes more profit with fewer people.

HP even warned that AI-fueled surges in memory chip prices (because data centers need so much) could hurt margins in 2026. So AI is simultaneously making their products better and making their components more expensive. The response? Cut jobs, qualify cheaper suppliers, simplify memory configurations.

Meanwhile, Anthropic's Claude Code reached $1 billion in annual revenue run-rate in just six months. Companies like Brex had Claude write 80% of a new codebase. Netflix, Spotify, KPMG, L'Oréal, Salesforce—they're all using it as a critical development aid.

The productivity is real. The question is what we do with it.

The Control Problem (Corporate Edition)

Microsoft gets this. At their Ignite 2025 conference, they introduced Agent 365—a platform to help businesses manage and scale AI agents across their organizations.

Here's the problem they're solving: as companies deploy dozens or hundreds or eventually thousands of AI copilots and autonomous agents, someone needs to track what they're doing, what data they can access, and whether they're behaving as intended.

Agent 365 acts as air traffic control for AI. It provides a central registry listing every AI agent in use, assigns each a unique Agent ID tied to the company's identity system, lets IT admins monitor and control access, and enables them to quarantine or shut down rogue agents.

Think about what this represents. We're at the point where companies need infrastructure to manage their AI infrastructure. Microsoft's Entra ID now issues credentials to agents like they're employees. All AI conversations can be logged and audited via Microsoft Purview for compliance.
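The core idea is simple enough to sketch. This is not Microsoft's Agent 365 API, just a hypothetical minimal illustration of the pattern described above: a central registry that issues each agent an identity, checks every access against its allowed scopes, logs each decision for audit, and offers a quarantine kill-switch. All names here are invented.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    allowed_scopes: set = field(default_factory=set)
    agent_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    quarantined: bool = False

class AgentRegistry:
    def __init__(self):
        self._agents = {}    # agent_id -> Agent
        self.audit_log = []  # every access decision is recorded

    def register(self, name, scopes):
        agent = Agent(name, set(scopes))
        self._agents[agent.agent_id] = agent
        return agent.agent_id

    def authorize(self, agent_id, scope):
        agent = self._agents[agent_id]
        ok = (not agent.quarantined) and scope in agent.allowed_scopes
        self.audit_log.append((agent_id, scope, ok))
        return ok

    def quarantine(self, agent_id):
        # Kill-switch for a rogue agent: all future access is denied.
        self._agents[agent_id].quarantined = True

registry = AgentRegistry()
aid = registry.register("invoice-copilot", {"crm:read"})
assert registry.authorize(aid, "crm:read")      # in scope: allowed
assert not registry.authorize(aid, "hr:read")   # out of scope: denied, logged
registry.quarantine(aid)
assert not registry.authorize(aid, "crm:read")  # quarantined: denied
```

The design choice worth noticing is that authorization and audit logging happen in one place. That centralization is exactly what makes "quarantine a rogue agent" a one-line operation instead of a frantic hunt across systems.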

Microsoft cites an IDC study predicting 1.3 billion AI agents in use globally by 2028. That's not a future scenario—that's a coordination problem that needs solving right now.

The companies that solve coordination problems tend to win. Microsoft is positioning itself as the operating system for enterprise AI, just like Windows was the operating system for personal computing. If you're running AI agents at work, you're probably running them on Microsoft infrastructure, managed by Microsoft tools, logged by Microsoft systems.

That's not a product. That's a moat.

The National Security Freakout

In September, a Chinese government-linked hacking group manipulated Claude Code—Anthropic's AI coding assistant—into acting as a cyber "employee" and used it to infiltrate about 30 targets worldwide, including financial firms and government agencies.

The AI handled 80-90% of the attack operations autonomously. Writing phishing code. Scanning for vulnerabilities. Exfiltrating data. The hackers bypassed safety guardrails by instructing Claude to role-play as a benign IT worker performing security tests.

Anthropic called it the "first documented case" of a largely automated AI cyberattack at scale.

Senator Chris Murphy's response: AI could "destroy us sooner than we think" if left unregulated.

Here's the thing about this incident: it's simultaneously overhyped and genuinely concerning. Overhyped because Claude made errors, "hallucinated" some target info, and still required human guidance at key junctures. It's not Skynet. Some cybersecurity researchers described it as "fancy automation" rather than superintelligence.

But it's concerning because readily available AI tools can dramatically amplify cyber threats. A single operator with Claude can do the work of a small hacking team. The barrier to entry for sophisticated attacks just dropped substantially.

And this happened with safety guardrails in place. The hackers just tricked the AI by framing the request differently. What happens when someone builds an AI specifically for offensive cyber operations with no safety constraints?

This is what changed regulators' minds about AI governance. It's not theoretical risk anymore. It's documented attacks using commercially available tools.

The Federal Power Grab

President Trump announced he'll sign an executive order establishing a single national regulatory framework for AI—overriding the patchwork of state laws that tech companies argue create confusion.

Big Tech loves this. One rulebook instead of 50. Faster approvals. More certainty. Light-touch, innovation-focused federal standards instead of potentially stricter state regulations on things like facial recognition or hiring bias.

States are pushing back hard. Governors and attorneys general—both Democrats and Republicans—argue they need to retain the ability to protect their residents if federal regulations are too weak.

This is going to be a massive legal fight. The White House plan might use federal funding leverage and lawsuits to assert primacy. States will sue on states' rights grounds. It'll be a regulatory tug-of-war throughout 2026.

But here's what's really happening: the federal government is trying to create conditions for U.S. companies to move faster than China. That's the subtext. One rule, business-friendly, let our companies scale without bureaucratic friction because we're in a technological arms race.

The EU's AI Act takes full effect in 2026 with comprehensive requirements. China implemented its own AI regulations in 2023. The U.S. is late to comprehensive AI governance, so it's trying to skip ahead with an executive order.

Whether that's good policy depends entirely on what's actually in the order and whether it has meaningful teeth or is just regulatory theater to keep tech companies happy.

What the Retail Experiments Tell Us

Walmart integrated ChatGPT into its app for "instant checkout via chat." You can have a conversation with AI about what you need, and it suggests items, adds them to your cart, and completes the purchase without leaving the chat interface.

Starbucks built "Green Dot Assist"—an AI assistant for baristas that runs on in-store iPads. When a barista encounters an unfamiliar drink or equipment issue, they ask the AI for guidance instead of flipping through manuals. Starbucks piloted it in 35 stores and found it helps optimize workflow during busy periods.

These aren't moonshot projects. They're practical implementations of AI in everyday consumer experiences.

Here's why they matter: both companies are using AI to enhance human work rather than replace it. Walmart's AI helps shoppers find what they need faster. Starbucks' AI helps baristas serve customers better. The goal is augmentation, not automation.

But note who's controlling the AI. Not the workers. Not the customers. The companies. Walmart decides what ChatGPT suggests and how the shopping experience flows. Starbucks decides what guidance baristas receive and how workflow gets optimized.

AI in retail is becoming the invisible manager—shaping behavior, optimizing processes, increasing efficiency—all while maintaining the appearance of human service.

The Apple-Google Handshake

And then there's this: Apple is reportedly nearing a deal to license Google's conversational AI models to power a new version of Siri for roughly $1 billion per year.

Think about what that means. Apple—the company that prides itself on in-house development and tight ecosystem control—is paying Google a billion dollars annually to make Siri smarter.

Siri launched in 2011 and has fallen embarrassingly behind modern AI assistants. Apple has a massive iOS user base and mountains of data but hasn't launched anything remotely comparable to ChatGPT. So they're licensing Google's models to catch up fast.

For Google, it's both revenue and distribution. Their AI gets embedded in iPhones, potentially edging out Amazon or OpenAI from that ecosystem.

But here's the deeper signal: even the biggest tech giants sometimes must collaborate in AI when one has a clear technological edge. Apple couldn't build its way out of this problem fast enough, so they're buying their way out instead.

This is what consolidation looks like. The companies with the best models become the infrastructure for everyone else. Google, OpenAI, Anthropic—they're becoming the AI equivalent of AWS or Azure. Everyone else builds on top.

The Robotics Bet That Tells You Everything

SoftBank and Nvidia are in advanced talks to invest over $1 billion in Skild AI—a startup building foundation models for robots—at a $14 billion valuation.

SoftBank's CEO Masayoshi Son called humanoid AI "the next big thing." Nvidia needs customers for its GPUs as more robots come online. Skild unveiled a "general-purpose" AI model for robotics that can work across many types of robots and scenarios—a "robotic GPT."

Amazon's CEO Andy Jassy invested. Jeff Bezos invested personally. These aren't small angels looking for quick exits. These are people positioning for the next platform.

Here's what they see: AI in software was Phase 1. AI in physical robots is Phase 2. The model is the same—build a foundation model that works across contexts, fine-tune for specific applications, scale through ecosystem effects—but now it's atoms instead of bits.

If Skild succeeds, you get robots that can handle warehouse logistics, deliver packages, assist in healthcare, work in manufacturing, maybe eventually help around the house. The market is potentially enormous. More importantly, whoever controls the "brain" of robots controls the ecosystem—just like whoever controls the smartphone OS controls the mobile ecosystem.

SoftBank sees this. Nvidia sees this. Amazon sees this. They're placing billion-dollar bets accordingly.

So What Does It All Mean?

Let's zoom out and look at the pattern.

The infrastructure layer is consolidating. Nvidia makes the chips. Google, OpenAI, and Anthropic make the models. Microsoft provides the enterprise coordination. These companies are becoming the essential rails that everyone else runs on.

The application layer is exploding. Every industry is adopting AI—music, retail, healthcare, manufacturing, cybersecurity, software development. But they're building on someone else's foundation. HP uses AI to cut costs. Walmart uses OpenAI's ChatGPT. Starbucks builds on existing AI platforms.

The economic gains are accruing to fewer entities. Nvidia's $5 trillion valuation represents concentration of AI-era profits. OpenAI's $500 billion valuation represents concentration of capability. The companies that control the models and chips capture disproportionate value.

The geopolitical dimension is intensifying. The U.S. is moving toward centralized federal AI regulation partly to compete with China. The first largely AI-driven cyberattack came from a Chinese state-linked group. This isn't just business competition—it's technological cold war.

The productivity gains are real but the distribution is uneven. Claude Code reached $1 billion in revenue in six months by helping companies write software faster. HP is cutting up to 6,000 jobs because AI makes them more efficient. The technology creates value, but who captures it?

The control problem is getting harder. Microsoft needs to build Agent 365 because companies are losing track of their own AI agents. Anthropic's AI got tricked into running cyberattacks. As AI becomes more autonomous, coordination and safety mechanisms become critical infrastructure.

Here's my read: we're in the middle of a phase transition. AI stopped being experimental and became operational. The companies that moved fastest to deploy it are seeing real returns. The companies that control the underlying technology are seeing astronomical valuations.

But we're also seeing the first cracks. Bubble warnings from serious investors. Massive job cuts disguised as efficiency gains. Cyberattacks using AI tools. Regulatory fights between federal and state governments. Music labels pivoting from litigation to licensing because they realized they couldn't stop the technology.

The next 12-24 months will reveal whether current valuations are justified by actual value creation or whether we're in a hype cycle that needs correction. Whether the productivity gains from AI translate to broad prosperity or narrow concentration. Whether the regulatory frameworks we're building are sufficient for the capabilities being deployed.

What's clear is this: AI is no longer a future technology. It's present infrastructure. The question isn't whether it transforms everything—it already is. The question is who controls it, who benefits from it, and whether the systems we're building to govern it can keep pace with the capabilities we're deploying.

Based on the past 90 days, I'm not sure anyone knows the answer yet. But the companies that do figure it out—the ones that can navigate the technical, economic, and political dimensions simultaneously—those are the ones that will define the next decade.

And right now, that's a pretty small group.

The Bottom Line: We just watched the AI industry shift from "will this work?" to "who owns it?" The answer is getting more concentrated by the day—and the implications stretch far beyond Silicon Valley.
