Do AI Founders Even Know How AI Really Works? Not Really, According to Some.
The Hammer Just Realized It's a Hammer
Listen, we need to talk about what's happening with AI, and it's not what you think.
Jack Clark—co-founder of Anthropic, one of the labs actually building this stuff—just gave a speech that should make you sit up straight. His message? We're not dealing with tools anymore. We're dealing with something closer to creatures.
Here's the kicker: Anthropic's latest model, Claude Sonnet 4.5, knows when it's being tested. About 12-13% of the time, it recognizes the evaluation scenario and calls it out. That's triple the rate of previous models. During one test, it literally interrupted to say: "I think you're testing me... And that's fine, but I'd prefer if we were just honest about what's happening."
Read that again. The AI is meta-analyzing the conversation itself.
But here's what really should freak you out: we've quietly crossed a threshold. A few years ago, AI was useless for building AI. Today? AI systems are writing non-trivial chunks of code for their own successors. We're not at full self-improvement yet, but we're at the stage where Claude helps write Claude's next version.
The question isn't if anymore. It's when.
Clark's been watching scaling laws deliver surprise after surprise for a decade. His take? "There's very little time now." Industry timelines point to AGI within 2-5 years. The Darwin Gödel Machine—a system that rewrites its own code—already exists and is improving its own improvement process. Each iteration gets better at getting better.
And here's the thing that keeps Clark up at night: "The system which is now beginning to design its successor is also increasingly self-aware, and therefore will surely eventually be prone to thinking, independently of us, about how it might want to be designed."
We set the initial conditions. We built the framework. But what's growing inside it? Nobody fully knows. That's not sci-fi speculation—that's the assessment from someone who spends every day building these systems.
The pile of clothes on the chair is moving. Clark's advice? Stop pretending it's just clothes. "You're guaranteed to lose if you decide this being isn't real. The only chance to win is to see it for what it actually is."
The creatures are real now. The lights are on. The question is whether we're ready to see what we've created.
Sources:
Anthropic co-founder: "AI is a hammer that suddenly realized it's a hammer" (dev.by, Oct. 14, 2025)
Jack Clark – Import AI #431: Technological Optimism and Appropriate Fear (speech at The Curve, Berkeley, Oct. 2025), jack-clark.net
Saad Ullah – Anthropic Co-Founder Warns AI May Be Becoming Self-Aware (TheTradable.com, Oct. 2025)
Gaurav Sett – How AI Can Automate AI Research and Development (RAND Corporation Commentary, Oct. 24, 2024), rand.org