
The AI Industry Just Had Its "Oh crap" Moment (And Nobody's Talking About It)

Listen, I need you to understand something: We just watched the entire AI hype cycle hit a wall, pivot, admit it was wrong, and try to moonwalk backwards—all in the span of two weeks. And it happened so fast that most people missed it.

Here's what I'm talking about: Salesforce, one of the biggest enterprise software companies on the planet, just quietly admitted they got caught up in an "AI bubble." They laid off 4,000 support workers thinking chatbots would replace them. Spoiler alert: the chatbots sucked. Meanwhile, the EU—the same EU that was about to drop the hammer on Big Tech with the strictest AI rules ever written—just delayed everything by well over a year because, whoops, maybe we should actually let companies build stuff first.

Oh, and Donald Trump signed an executive order trying to kill every state AI law in America right before they took effect. Because apparently we can't decide if we want rules or no rules, but we definitely don't want 50 different sets of rules.

This is the moment where the AI industry looks in the mirror and realizes it might have been lying to itself.

Let me break down what the hell just happened.

The Salesforce Reality Check (Or: When AI Meets Actual Customers)

You know what's wild? Salesforce bet big on AI. Like, really big. They plastered "AI" all over their products, promised customers the future, and internally decided that AI was so good they could fire thousands of human support workers.

Then the AI started hallucinating.

Not like, fun hallucinating. Like giving customers completely wrong information, making stuff up, destroying trust. The kind of hallucinating that makes a VP wake up at 3am in a cold sweat because a Fortune 500 client is threatening to leave.

So now Salesforce is doing what I'm calling the "AI Shuffle": backing away from the aggressive AI deployment while still talking about how great AI is, because god forbid we admit we oversold this thing. They're refocusing on "predictable and dependable" applications—which is executive-speak for "stuff that actually works and won't embarrass us."

Here's the kicker: They admitted internally to an "AI bubble." Not to the press, not to shareholders at first, but internally. That's the sound of a company realizing it face-planted.

The thing is, this isn't just about Salesforce. This is about every enterprise software company that spent 2023-2024 shoving GPT-style tools into their products because they were terrified of missing the boat. They all rushed to deploy, and now they're all quietly discovering that reliability matters more than vibes.

What's fascinating is what they thought would happen versus reality:

The Dream: AI handles tier-1 support, answers 80% of customer questions, humans just handle the weird edge cases, massive cost savings, everyone wins.

The Reality: AI confidently gives wrong answers, customers get pissed, trust erodes, you still need humans to clean up the mess (and now you've fired them, oops), remaining staff is overworked, and oh by the way the AI is expensive as hell to run.
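To make the gap concrete, here's a minimal sketch, in Python, of the guardrail the "dream" version quietly assumed: let the AI answer only when it's both confident and grounded in a real source, and route everything else to a human. Everything here is hypothetical (the threshold, the field names, the routing labels); it's an illustration of the design pattern, not Salesforce's actual system.

```python
from dataclasses import dataclass

# Assumed threshold; a real deployment would tune this against transcripts.
CONFIDENCE_FLOOR = 0.85

@dataclass
class DraftAnswer:
    text: str
    confidence: float   # calibrated confidence score in [0, 1] (assumed available)
    sources: list[str]  # knowledge-base documents the answer was grounded in

def route_ticket(draft: DraftAnswer) -> str:
    """Ship the AI answer only when it's confident AND grounded; otherwise escalate."""
    if draft.confidence < CONFIDENCE_FLOOR:
        return "escalate_to_human"  # low confidence: don't let the model guess
    if not draft.sources:
        return "escalate_to_human"  # no citations: fluent answers can still be hallucinations
    return "send_ai_answer"

# A confident but ungrounded answer still goes to a person:
print(route_ticket(DraftAnswer("Your refund shipped yesterday!", 0.93, [])))
# -> escalate_to_human
```

The point isn't this exact code. It's that the escalation path is a first-class design decision, and the companies now doing the AI Shuffle are the ones that treated it as an afterthought.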

This is a microcosm of a much bigger pattern playing out across the industry right now.

Meanwhile in Europe: The Great Regulatory Retreat

Okay, so the EU AI Act was supposed to be the landmark legislation. The thing that finally reins in Big Tech. The GDPR of artificial intelligence.

It was set to kick in with serious requirements in August 2026. Rules about "high-risk" AI uses—biometric ID, hiring algorithms, credit scoring, all that stuff. Companies would have to do risk assessments, maintain transparency, jump through regulatory hoops.

And then... they blinked.

The EU just announced they're pushing the main compliance deadline to December 2027. They're also—and this is the really interesting part—considering changes to let companies use personal data for AI training more easily. You know, the thing that would've been a massive GDPR violation before.

Why? Because Big Tech lobbied hard, the U.S. government applied pressure, and European officials started having uncomfortable conversations about whether their rules would make Europe technologically irrelevant.

Here's what nobody wants to say out loud: Europe is terrified of being left behind.

They looked at the AI race—OpenAI, Google, Anthropic, Chinese models—and realized that none of the leaders are European. Not one. So now they're caught between their principles (privacy, ethics, consumer protection) and their economic reality (we don't have a single AI champion company).

The result? A very European compromise: We'll keep the rules, but later. We'll maintain our values, but flexibly. We'll protect citizens, but not so much that we can't compete.

It's the regulatory equivalent of "we need to talk about our relationship, but not right now, maybe in like 18 months when things are less complicated."

The American Chaos: 50 States, One Executive Order, Zero Clarity

Meanwhile in the U.S., we've got our own shitshow brewing.

Throughout 2025, states went wild passing AI laws. California—because of course it's California—passed like half a dozen bills: requiring AI transparency about training data, watermarking AI outputs, safety protocols for "frontier" AI models, you name it. Texas created something called TRAIGA (which sounds like a Godzilla villain but is actually the Texas Responsible AI Governance Act). Colorado, Illinois, others—everyone was writing rules.

All of these laws were set to take effect January 1, 2026.

Then on December 11, President Trump signs an executive order essentially saying: "Nah. Federal government runs AI policy now. States, fall in line or we'll sue you and maybe cut your federal funding."

So now companies are sitting there on January 5, 2026, going: "Uh... do we comply with California's law or not? Is it even valid? Will we get sued either way? What the heck do we do?"

The answer is: nobody knows! It's complete chaos!

You've got California likely preparing to sue the federal government over states' rights. You've got AI companies with lawyers working overtime trying to figure out what they're legally required to do. You've got the Justice Department forming an "AI Litigation Task Force" to fight state laws.

This is what happens when your regulatory framework is "let a thousand flowers bloom" and then suddenly someone tries to impose order with a weedwhacker.

The federal preemption play is straight out of the Big Tech playbook: Better one set of rules we can lobby on than 50 different state regimes. But it's also a massive political fight about federalism and states' rights, which means it's going to court, and it's going to take years.

In the meantime? Regulatory paralysis. Which, depending on your perspective, is either terrible for innovation or exactly what the industry wants.

What This All Reveals About Where We Actually Are

Here's the pattern I want you to see:

2023-2024: Holy hell, AI can do anything! ChatGPT! Disruption! Everyone panic and invest billions!

Late 2024-2025: Okay so we're deploying this stuff and... it's complicated. It hallucinates sometimes. It's expensive. Customers are skeptical. But we're committed now!

Late 2025: [Quietly] So maybe we oversold this a bit. Let's recalibrate.

We're in the recalibration phase. Not the bust—AI is still transformative technology—but the "oh this is going to take longer and be harder than we thought" phase.

Salesforce's pullback isn't an isolated incident. It's representative. The EU's delay isn't just European indecisiveness. It's a recognition that they don't know how to regulate something that's moving this fast. The U.S. federal-state fight isn't just politics. It's a symptom of a technology that got ahead of governance.

And you know what? This is actually healthy.

The hype cycle needed a correction. Companies needed to hit reality. Regulators needed to realize their timelines were divorced from technological development. We needed the "oh shit" moment.

But Here's Where It Gets Interesting

While all this regulatory drama and enterprise reality-checking is happening, other stuff is accelerating.

Google just dropped Gemini 3 Flash—super fast, highly capable, integrated everywhere. They're adding AI content verification so you can detect deepfakes. They've got a tool called "Disco" that turns your open tabs into custom web apps. This is not a company pumping the brakes.

Nvidia just made a $20 billion deal with AI chip startup Groq, structured as a licensing agreement to avoid antitrust issues. That's not consolidation slowing down, that's consolidation adapting.

Disney is embedding AI across its entire operation—content creation, theme parks, everything. They're not experimenting anymore, they're deploying.

At CES 2026, the big story wasn't new phones or TVs. It was robotics. Nvidia unveiled a full-stack robotics platform. Qualcomm announced chips for humanoid robots. The term "physical AI" was everywhere.

See the contradiction? Enterprise software is pulling back on AI while tech giants are doubling down. Regulations are being delayed or fought over while deployment continues. Companies are admitting "bubble" dynamics while simultaneously making billion-dollar bets.

What's actually happening is a bifurcation.

AI that works—the stuff that genuinely adds value, that's reliable, that fits into actual workflows—is being deployed aggressively. AI that was hype—the stuff that promised to replace humans but can't quite do it yet, the features added just to check a box—is getting quietly shelved or scaled back.

The market is sorting itself out.

The China Factor (Because It's Always Relevant)

Oh, and while America is fighting about state vs. federal AI laws and Europe is delaying enforcement, China has pushed more than 700 generative AI models through its official filing process.

Seven. Hundred.

Every one of those models had to be registered with the government, approved, deemed compliant with content and safety standards. This is state-directed AI development at scale.

Are they all great? Probably not. Are some of them genuinely competitive? Almost certainly.

The West keeps acting like we have time for these regulatory debates and corporate recalibrations. Meanwhile, China's approach is: Build fast, regulate heavily, deploy everywhere, iterate quickly within the guardrails.

India just released its AI strategy too—"light-touch regulation," focus on using AI for social development, building domestic capacity while collaborating internationally. They're playing a different game entirely, positioning themselves as the "AI for the Global South" hub.

The point is: while we're having our crisis of confidence, other players are moving.

What Actually Matters Going Forward

Let's cut through the noise. Here's what I think the next 12-18 months look like:

1. The reliability bar becomes everything. Companies that can prove their AI actually works—demonstrably, consistently—will win. Companies still selling vaporware will get crushed. Salesforce's pullback is just the beginning. Every enterprise software company is going to have to prove ROI, not just talk about AI's potential.

2. Regulations will be messy for years. The EU delay, the U.S. federal-state fight, the patchwork of global approaches—none of this gets resolved quickly. Companies will have to navigate uncertainty. The smart ones will build compliance flexibility into their products from day one (there's a sketch of what that can look like right after this list).

3. Consolidation accelerates, but strangely. You're going to see more Nvidia-Groq style deals: "not technically an acquisition" partnerships that accomplish the same thing while dodging antitrust. You're going to see talent concentration at a few giants. But you might also see weird unexpected challengers emerge, especially in specialized verticals.

4. The "AI replaces jobs" narrative gets complicated. Salesforce fired support workers, realized it didn't work, and now has to reckon with that. Every company that rushed to cut headcount betting on AI is going to face this reckoning. The reality is going to be more "AI changes how we work" than "AI replaces workers"—at least for now, at least in most domains.

5. Physical AI becomes the new frontier. If chatbots and image generators were AI's first wave, robotics is the second. The infrastructure is ready. The models are capable enough. The economics make sense for specific use cases. We're about to see AI move from screens into the real world.

6. Trust becomes the scarce resource. Google adding watermarking, Ring facing backlash over facial recognition, Hollywood forming a coalition to protect against AI—these are all symptoms of the same problem. AI moved faster than trust. Now trust has to catch up. Companies that prioritize it will win. Companies that don't will face backlash, regulation, or both.
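On point 2, "compliance flexibility" is abstract, so here's one hedged way to picture it: keep jurisdiction-specific obligations in a lookup table instead of hard-coding them, so a delayed EU deadline or a preempted state law becomes a config change rather than a rewrite. This is a minimal Python sketch; the jurisdictions and rule names are illustrative, not a reading of any actual statute.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Obligations:
    watermark_outputs: bool = False        # e.g., label AI-generated content
    disclose_ai_to_user: bool = False      # e.g., "you're talking to an AI"
    log_training_provenance: bool = False  # e.g., record training-data sources

# Illustrative table only; real entries would come from legal review.
POLICY: dict[str, Obligations] = {
    "EU":      Obligations(watermark_outputs=True, disclose_ai_to_user=True,
                           log_training_provenance=True),
    "US-CA":   Obligations(watermark_outputs=True, disclose_ai_to_user=True),
    "US-TX":   Obligations(disclose_ai_to_user=True),
    "DEFAULT": Obligations(),
}

def obligations_for(jurisdiction: str) -> Obligations:
    """Return the duties for this deployment, falling back to the default."""
    return POLICY.get(jurisdiction, POLICY["DEFAULT"])

# When a deadline slips from 2026 to 2027, you edit one table entry,
# not every code path that touches model output.
print(obligations_for("US-CA"))
```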

The Bigger Picture

You know what this whole moment reminds me of? The early 2000s after the dot-com crash.

Everyone thought the internet was going to change everything overnight. Companies with ".com" in their names got billion-dollar valuations. Then reality hit, the bubble popped, and everyone declared the internet overhyped.

Except... the internet did change everything. Just not on the timeline everyone expected, and not in exactly the ways people predicted. The companies that survived the crash—Amazon, Google, others—became more dominant than anyone imagined.

We're in a similar moment with AI. The hype got out of control. Reality is biting. Companies and regulators are recalibrating. But the fundamental transformation is still coming.

The difference between now and then? This is happening faster. The hype cycle compressed from years into months. The correction is happening in real-time. And the actual deployment is continuing even as everyone questions whether we're in a bubble.

It's disorienting as hell.

But here's what I keep coming back to: The companies admitting mistakes and adjusting are actually the ones to watch. Salesforce pulling back on half-baked AI isn't a sign of weakness—it's a sign they're learning. The EU delaying rules isn't capitulation—it's pragmatism. Even Trump's messy executive order is trying to solve a real problem (regulatory fragmentation), even if the approach is controversial.

The industry is maturing. Painfully, messily, in real-time, while we all watch.

And honestly? That's more interesting than the hype ever was.

Quick Hits Worth Watching

The Ring facial recognition thing is going to be a mess. Amazon enabled "Familiar Faces" on doorbell cameras, and cities are already banning it. This is going to be a case study in how not to roll out surveillance tech.

The Hollywood creators coalition is fascinating because it's artists organizing proactively, before AI destroys their livelihoods. Compare that to what happened to journalists or truck drivers—reactive, too late. Will it work? Who knows. But they're fighting the right fight.

India's "AI for All" strategy might be the sleeper story here. If they actually execute—build infrastructure, train millions of people, deploy AI for agriculture and healthcare—they could leapfrog developed countries in some domains. The "light-touch regulation" approach is risky but could pay off.

That $20 billion Nvidia-Groq deal structure is genuinely clever. No equity changes hands, just licensing and talent acquisition, sidestepping antitrust while accomplishing the same thing. Expect more creative dealmaking like this as regulators wake up.

Healthcare AI is quietly marching forward while everyone argues about chatbots. AI reading X-rays to predict biological age, automating medical literature reviews—this stuff actually saves lives. Nobody's writing breathless think-pieces about it, but it might matter more than anything else.

The thing is, we're still so early. We're arguing about whether AI lives up to the hype while the technology is still evolving rapidly. The models released in 2026 will be better than anything we saw in 2025. The use cases we think are impossible today will be routine in 2027.

But yeah, the industry just had its reality check moment. And honestly? It needed one.

Now let's see who actually learned something from it.
