Is Superintelligence Here?

Sam Altman Says We Already Crossed the AI Point of No Return - But Should We Believe Him?


Hola, Joshua here. Let's dive into it.


What's Really Going On Here?

Sam Altman runs OpenAI, the company that makes ChatGPT. He just made a huge claim that's got everyone talking. He says we've already crossed something called the "event horizon" of super smart AI. Think of it like crossing a line where you can't go back.

But here's the thing. If we really crossed this magical line, why doesn't the world feel totally different? Why are you still reading articles instead of having robot butlers serve you breakfast?

Let's dig into what Altman is really saying and figure out if he's right or just trying to get attention.

The "Gentle" Robot Takeover

Altman calls this the "gentle singularity." "Singularity" is a fancy word for the moment when AI becomes smarter than humans and changes everything forever. But instead of robots taking over like in the movies, Altman says it's happening slowly and quietly.

Think about it this way. Ten years ago, the idea of talking to a computer and getting smart answers seemed like science fiction. Now you probably do it every day without thinking twice. That's what Altman means by "gentle." The change feels normal even though it's actually huge.

But here's where we need to be careful. Just because something feels normal doesn't mean it's safe or good for us.

His Big Predictions (And Why They Matter)

Altman made some pretty wild predictions:

By 2026: AI will come up with brand new scientific ideas all by itself. Not just organizing old information, but actually creating new knowledge.

By 2027: Robots will start doing real jobs in the real world, not just in fancy labs.

By the 2030s: Everything will be super cheap and easy to make. We'll be 10 times more productive than we were in 2020.

These sound amazing, right? Who wouldn't want everything to be easier and cheaper? But let's think about what this really means.

The Problems Nobody Wants to Talk About

Problem 1: We Don't Actually Control These Systems

Here's something scary. The people building AI systems admit they don't fully understand how they work or how to control them. It's like building a race car without brakes and hoping it stays on the road.

Altman himself says the "alignment problem" isn't solved yet. That's a fancy way of saying we don't know how to make sure AI does what we actually want it to do, not just what we tell it to do.

Think about social media. The AI that decides what you see was supposed to show you interesting stuff. Instead, it often shows you things that make you angry or sad because that keeps you scrolling longer. That's what happens when AI isn't properly aligned with what's actually good for people.

Problem 2: Jobs Are Going to Disappear (Fast)

Altman admits that "entire classes of jobs may disappear." But he talks about this like it's no big deal because everything will be so cheap and abundant.

Here's the reality check. Even if stuff gets cheaper, people still need to feel useful and earn money to buy things. When factories got automated, factory workers didn't all become engineers overnight. They struggled. Many never found good jobs again.

Now imagine this happening to teachers, accountants, writers, and even doctors all at the same time. That's not gentle. That's a disaster waiting to happen.

Problem 3: A Few Companies Will Control Everything

Right now, only a handful of companies can afford to build the most powerful AI systems. OpenAI, Google, and a few others are basically deciding the future for everyone else.

Altman talks about making AI benefits available to everyone, but his company charges money for its best AI tools. If AI really becomes as powerful as he claims, these companies will have more control over information and decision-making than any government in history.

That's not gentle. That's dangerous.

Why the "Gentle" Story Might Be Wrong

The Boiling Frog Problem

You know the story about the frog in slowly heating water? The frog doesn't jump out because the change feels gradual, even though it's actually deadly. That might be what's happening to us with AI.

Just because we're adapting to AI quickly doesn't mean we're adapting well. We might be getting used to things that are actually harmful without realizing it.

Current AI Still Makes Stuff Up

Here's something Altman doesn't emphasize enough. Current AI systems like ChatGPT still "hallucinate." That means they confidently make up information and present it as fact. This has already caused problems in courtrooms and businesses.

If we can't trust AI to get basic facts right, how can we trust it to revolutionize science and society? The foundation isn't as solid as Altman makes it sound.

The Normalization Trap

When amazing things become normal, we stop paying attention to them. But we also stop questioning them.

Right now, millions of people use AI to help with work, school, and personal decisions. But most people don't understand how these systems work or what biases they might have. We're letting AI influence our lives without really understanding the consequences.

What This Means for You

Don't Panic, But Don't Sleepwalk Either

Altman might be right that big changes are coming. But his "gentle" framing might make us too relaxed about serious risks.

You don't need to fear AI, but you should understand it. Learn how the AI tools you use actually work. Ask questions about who made them and what they're designed to do.

Think About What You Value

If AI really does make everything easier and cheaper, what kind of world do you want to live in? What jobs and activities give your life meaning? What human connections matter most to you?

These aren't technical questions. They're human questions. And they're the most important ones.

Demand Better From the People in Charge

The people building these systems have a lot of power over all of our futures. They should have to answer tough questions about safety, fairness, and control.

Don't just accept their promises that everything will work out fine. History shows that new technologies often benefit some people while hurting others, unless we work hard to make sure the benefits are shared fairly.

The Bottom Line

Altman might be right that we've crossed some kind of point of no return with AI. The technology is definitely getting more powerful, and quickly.

But his "gentle singularity" story might be hiding some hard truths. Big changes are rarely as smooth as the people causing them like to claim.

The real question isn't whether AI will change everything. It probably will. The question is whether those changes will actually be good for regular people like you and me.

That depends on whether we stay awake and engaged, or whether we sleepwalk into a future that someone else designed for us.

What do you think? Are we adapting to amazing new technology, or are we the frog in slowly heating water?

Quick Takes: 

The "gentle" framing is misleading - Just because change feels gradual doesn't mean it's safe or beneficial. The "boiling frog" analogy helps readers understand this concept.

Power concentration is dangerous - A few companies controlling superintelligent AI is a bigger risk than Altman acknowledges.

Current AI isn't as reliable as claimed - The hallucination problem shows we're not as far along as the hype suggests.

Economic disruption will be harsh - The "abundance" promise ignores the real human cost of mass job displacement.

We're losing agency - People are adapting to AI without understanding it or questioning its impact on their lives.
