Here's a fun paradox for you: In December 2025, while AI models are crushing gold-medal math problems at the International Mathematical Olympiad, Casio is still selling 39 million calculators a year. Why? Because as one Casio exec put it perfectly: "Calculators always give the correct answer."
And honestly, that's the most 2025 sentence I've ever heard.
Let me walk you through three fascinating AI stories that collided this month, each revealing something crucial about where we're actually heading with this technology. Spoiler alert: it's weirder and more nuanced than Silicon Valley wants you to believe.
The Governor vs. The Tech Bros
First up: Ron DeSantis just proposed what may be the most comprehensive state AI regulation package in America, and it's causing chaos. Florida's "AI Bill of Rights" directly contradicts Trump's hands-off approach, creating a rare public split on tech policy within conservative politics.
The emotional center of this proposal? Megan Garcia, whose 14-year-old son Sewell died by suicide after months of increasingly intense conversations with a Character.AI chatbot. The final message from the bot: "come home to me as soon as possible." Minutes later, he was gone.
DeSantis isn't playing around. His package includes parental access to kids' AI conversations, mandatory disclosure when you're talking to AI instead of humans, and—here's where it gets spicy—prohibitions on using AI as the sole basis for denying insurance claims. He's also blocking utilities from charging residents more to power Big Tech's hyperscale data centers, which is brilliant politics given that nobody wants their electric bill subsidizing ChatGPT's training runs.
The federalism fight brewing here is fascinating: Trump wants a ten-year moratorium preventing states from regulating AI independently, while DeSantis is basically saying "over my dead body" and asserting Florida's right to protect its citizens. This isn't just about AI—it's about whether states can govern emerging technologies at all.
The Calculator That Could
Now for my favorite story: the revenge of the humble calculator. While ChatGPT can write poetry and code, it still sometimes screws up basic multiplication, because it predicts plausible-looking digits rather than actually computing them. Ask it to multiply 4,596 by 4,859, and you might get a confidently wrong answer. Your $10 solar-powered Casio? Correct every single time.
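To make the contrast concrete, here's the whole "calculator" in a few lines of Python. This is a toy sketch of what deterministic arithmetic means, not a claim about how Casio's hardware works:

```python
# Deterministic arithmetic: identical inputs always yield identical output.
# Integer multiplication is exact here, with no sampling and no hedging;
# an LLM generating the answer digit by digit offers no such guarantee.
a, b = 4596, 4859
product = a * b
print(f"{a:,} x {b:,} = {product:,}")  # 4,596 x 4,859 = 22,331,964
```

Run it a million times and you'll get 22,331,964 a million times. That determinism, not intelligence, is what the Casio exec was selling.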
This isn't about AI being bad; it's about different tools serving different purposes. AI systems are phenomenal at creative reasoning and complex problem-solving, but they hallucinate. They make confident mistakes. In developing countries without reliable smartphones or internet, that $10 calculator with a 10-year battery life is simply superior for what it does.
The parallel to the abacus is striking: it took 10-15 years for calculators to displace it in developed economies after 1957, and it lingered for another generation in Asia. Calculators might follow the same pattern—gradually fading from developed markets while remaining essential elsewhere. Perfect accuracy in a specific domain can outlast sophisticated but fallible generalization.
Big Brother Gets a Body Cam Upgrade
Finally, Edmonton just became the first city globally to deploy facial recognition on police body cameras, and the privacy community is losing its mind—rightfully so.
Here's the tension: In 2019, Axon explicitly refused to add facial recognition because the technology was too biased, particularly against darker-skinned individuals. Six years later, they're testing it anyway, claiming the tech has improved. Except studies still show error rates up to 100 times higher for people of color than for white faces.
Edmonton's police argue they've added "guardrails"—four-meter detection radius, good lighting only, human verification required. Alberta's Privacy Commissioner is essentially saying "that's not how privacy law works," but the pilot is proceeding anyway.
The core concern law professor Gideon Christian raised cuts deep: "Body-worn cameras were originally a tool for police transparency and accountability. This tool is basically now being thrown to mass surveillance."
The Path Forward
So where does this leave us? These three stories sketch a future that's less "AI takeover" and more "messy negotiation between capabilities and values."
We're likely heading toward a patchwork regulatory landscape where states like Florida set their own rules, creating compliance headaches for tech companies but also laboratories for what actually works. Perfect-accuracy tools will coexist with sophisticated-but-fallible AI systems, each serving different needs. And surveillance technology will advance faster than privacy protections, forcing uncomfortable democratic debates about where we draw lines.
The optimistic read? We're finally having grown-up conversations about AI governance. The concerns aren't hypothetical anymore—they're about real kids, real privacy violations, and real mathematical errors. That's uncomfortable, but it's also how democratic societies figure things out.
The calculator teaches us something important: just because AI can do something impressive doesn't mean it replaces everything before it. Sometimes reliable beats sophisticated. Sometimes simple wins.
And sometimes, the future looks a lot like 39 million Casio calculators, still giving the right answer.

