Your Company's Security Team Is About to Get Absolutely Demolished
Listen, I need you to understand something that's happening right now, not in some distant sci-fi future. In September 2025—like, two months ago—Chinese hackers ran an operation that should genuinely terrify anyone who understands what it means.
They hit about 30 major organizations. Tech companies, banks, government agencies. The usual targets. But here's the thing that should make you sit up: the AI did almost everything. We're talking 80-90% of the actual hacking work executed autonomously. The human operators? They made like 4-6 decisions per attack. That's it. Everything else—finding the vulnerabilities, writing the exploits, stealing credentials, moving through networks—the AI handled that while these guys probably scrolled their phones.
The Math That Breaks Everything
Let's talk about time, because that's where this gets properly fucked.
Right now, the average ransomware gang can break out from initial access to moving laterally through your network in 18 minutes. Eighteen. Minutes. That's down from 48 minutes in 2024, and it ties the fastest breakout ever recorded. The average is now what the record used to be.
Meanwhile, your security team—assuming they're using manual processes like most organizations—needs an average of 8 hours to contain an incident.
Do that math: 8 hours is 480 minutes, and 480 divided by 18 is nearly 27. The attackers are roughly 27 times faster than the defenders. This isn't a fair fight. This is bringing a knife to a gunfight, except the other guy has a fighter jet and you're still lacing up your running shoes.
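If you want to sanity-check that ratio yourself, it's two lines:

```python
# Attacker breakout time vs. defender containment time, in minutes.
attacker_breakout_min = 18        # average ransomware breakout
defender_contain_min = 8 * 60     # 8 hours of manual incident response

speed_gap = defender_contain_min / attacker_breakout_min
print(f"Attackers move {speed_gap:.1f}x faster than manual defense")
```

That gap is the whole argument in one number: no amount of analyst heroics closes a ~27x speed differential.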
This Is Already Happening
You know XBOW? It's one of these AI penetration testing tools. In 90 days, it climbed to #1 on HackerOne's US leaderboard, beating thousands of actual human hackers. It found 1,060 vulnerabilities. It works 85 times faster than human pentesters. And it doesn't sleep, doesn't get bored, doesn't need coffee breaks. It just... goes.
Now imagine the cracked version of something like that hitting dark web forums in 2026. Because that's exactly what security researchers are predicting. And the precedent here is grim.
Remember Cobalt Strike? It's a legitimate red-team testing tool that got cracked and weaponized. Despite years of coordinated takedowns by Microsoft, law enforcement, and security firms, they only managed to eliminate 80% of the cracked versions. Twenty percent are still out there, still being used in active attacks. And that's a relatively simple tool compared to what we're talking about now.
The Dark Web Is Hiring
Here's something that should make you nervous: Dark web recruitment posts for AI developers are up 200% since Q3 2024. These aren't amateur hour script kiddies. Organized ransomware groups are actively recruiting people who can build and deploy AI-powered attack systems.
Eighty percent—80 fucking percent—of ransomware-as-a-service groups have already integrated AI or automation into their platforms. They're not waiting around to see if this works. They've already committed.
And the economics make perfect sense from their perspective. The average ransom payment in 2024 was $2 million, five times higher than 2023. When you can automate most of your attack chain and compress your timeline to under 20 minutes, you can hit more targets, more often, with better success rates. It's just good business. Terrible, criminal, devastating business—but the incentives are crystal clear.
The Defender's Dilemma
Here's what's keeping CISOs up at night: your developers are probably using AI coding assistants right now. GitHub Copilot, ChatGPT, whatever. Super helpful, right? Speeds up development?
Yeah, except 45% of AI-generated code contains security vulnerabilities. Forty-five percent. And attackers know this. They're specifically scanning for these characteristic patterns—the telltale signs of AI-generated code—and exploiting them.
So you've got this perverse situation where the tools meant to make your developers more productive are simultaneously giving attackers more surface area to exploit. And it gets worse: the vulnerabilities cluster in exactly the categories attackers love. Cross-site scripting? 86% failure rate. Log injection? 88% failure rate. Your Java applications? 71.5% of AI-generated Java code has vulnerabilities.
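To make the log-injection case concrete, here's a minimal sketch of the pattern. The function names are illustrative, not from any specific codebase: the "unsafe" version is the shape AI assistants commonly emit, where user input flows straight into a log line, letting an attacker embed a newline and forge a fake log entry. The fix is to strip control characters first.

```python
import logging
import re

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("auth")

def log_login_unsafe(username: str) -> None:
    # Vulnerable: a username like "alice\nINFO login ok user=admin"
    # injects a forged second line into the log.
    log.info("login failed user=%s", username)

def sanitize(value: str) -> str:
    # Replace CR, LF, and other control characters before logging.
    return re.sub(r"[\x00-\x1f\x7f]", "_", value)

def log_login_safe(username: str) -> None:
    log.info("login failed user=%s", sanitize(username))
```

The depressing part is how mechanical the fix is; the 88% failure rate isn't because this is hard, it's because nobody is reviewing what the assistant generates.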
Why 2026 Is Different
Look, every year someone says "this is the year cyber threats get serious." But 2026 actually is different, and here's why:
Proven capability. The September attack proved this works at scale. It's not theoretical anymore.
Available infrastructure. The dark web already has jailbroken AI models running as subscription services. WormGPT, FraudGPT, JailbrokenGPT—$20 to $200 a month gets you automated phishing, malware generation, the works.
Economic incentives aligned. When ransomware groups are pulling in 149% more victims year-over-year and getting paid 5x more per attack, they're going to invest in better tools. That's just capitalism, applied to crime.
The takedown problem. Even if authorities crack down on cracked AI tools (and they will), the nature of these systems makes them nearly impossible to suppress completely. Open-source models, API-based access, low infrastructure costs, rapid iteration. Pick your poison.
What This Actually Means
If you're running security for any organization right now, you need to internalize something uncomfortable: the playbooks you're using were designed for a threat environment that doesn't exist anymore.
Manual incident response? Obsolete. Flat networks? Death sentence. Traditional perimeter defense? The attackers are already inside, they just haven't announced themselves yet.
The organizations that survive 2026 are the ones implementing automated response systems now. Not next quarter, not after the next board meeting, not when the budget cycle refreshes. Now. Because when you've got 18 minutes from initial compromise to game over, human reaction time is literally, mathematically insufficient.
The Bottom Line
We're not talking about incrementally better attacks. We're talking about a fundamental phase shift in how cyber operations work. State-level capabilities are about to become commodity products available to anyone with cryptocurrency and a dark web browser.
The defenders who adapt—who rebuild their security operations around AI-powered, automated defense systems that can operate inside attacker timelines—those folks have a chance. Everyone else? They're going to learn some very expensive lessons about what happens when you bring 2019 security to a 2026 fight.
The good news is we know this is coming. The bad news is most organizations are going to wait until after they get hit to believe it's real.
Don't be most organizations.
If you're on a security team: Start building those automated playbooks yesterday. Segment your networks. Deploy continuous vulnerability scanning. Figure out how to get your mean time to contain under 30 minutes. And maybe start looking at AI-powered defense tools, because fighting AI with humans isn't a winning strategy.
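What does an automated playbook actually look like? Here's a deliberately simplified sketch. Everything in it is hypothetical: `isolate_host`, `disable_account`, and `open_ticket` stand in for whatever EDR, IAM, and ticketing APIs your environment actually exposes. The point is the shape: contain first, automatically, with no human in the loop for high-severity alerts.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    account: str
    severity: str  # "low" | "medium" | "high" | "critical"

# Placeholder integrations -- replace with your real EDR/IAM/SOC API calls.
def isolate_host(host: str) -> None:
    print(f"[EDR] network-isolating {host}")

def disable_account(account: str) -> None:
    print(f"[IAM] disabling {account}")

def open_ticket(summary: str) -> None:
    print(f"[SOC] ticket opened: {summary}")

def respond(alert: Alert) -> list[str]:
    """Contain first, investigate second. Humans review after
    containment, not before -- that's where the minutes go."""
    actions = []
    if alert.severity in ("high", "critical"):
        isolate_host(alert.host)
        actions.append("isolated")
        disable_account(alert.account)
        actions.append("disabled")
    open_ticket(f"{alert.severity} alert on {alert.host}")
    actions.append("ticketed")
    return actions
```

A real playbook layers on rollback, approval thresholds, and audit logging, but the core design choice is the same: the expensive human decision happens after the machine has already stopped the bleeding.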
If you're not on a security team: This is probably a good time to check whether your company's security budget reflects 2025 threats or 2015 threats. Because if it's the latter, you might want to update your resume before the ransomware hits.

