In partnership with

Create how-to video guides quickly and easily with AI

Tired of explaining the same thing over and over again to your colleagues?

It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

1️⃣Share or embed your guide anywhere
2️⃣Turn boring documentation into stunning visual guides
3️⃣Save valuable time by creating video documentation 11x faster

Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and a call to action.

The best part? The extension is 100% free

Hey, Josh here. Some wild news this week on chip developments from various companies.

The Chip Battle Just Got Weird (And That's Actually Good News)

Listen, NVIDIA's been printing money like it's discovered alchemy, but here's the thing: physics doesn't care about your stock price. The entire AI hardware industry just slammed face-first into some deeply unsexy fundamental limits, and the scramble to solve them is genuinely fascinating.

The core problem? We've hit what's called the reticle limit—basically, the largest chip you can print in a single lithography exposure is about 800 square millimeters. NVIDIA's monster B200 with its 208 billion transistors? Already bumping up against that ceiling. And when you're training massive AI models, you need thousands of these chips talking to each other, which means you're burning obscene amounts of power just shuttling data across cables instead of, you know, actually computing stuff.
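To make that "shuttling data across cables" cost concrete, here's a rough back-of-envelope sketch. The energy-per-bit numbers are my own illustrative assumptions (nothing here comes from NVIDIA or any vendor); the only point is the relative gap between moving data on-die versus between servers.

```python
# Rough back-of-envelope on why shuttling data between chips hurts.
# The picojoule-per-bit costs below are illustrative assumptions, not
# measured figures from any vendor.

ON_CHIP_PJ_PER_BIT = 0.1     # assumed: short on-die wires
OFF_CHIP_PJ_PER_BIT = 10.0   # assumed: SerDes, cables, switches between servers

def transfer_energy_joules(gigabytes: float, pj_per_bit: float) -> float:
    """Energy needed to move `gigabytes` of data at a given pJ/bit cost."""
    bits = gigabytes * 8e9
    return bits * pj_per_bit * 1e-12

data_gb = 1000  # say, a terabyte of activations/gradients per training step
print(f"on-chip : {transfer_energy_joules(data_gb, ON_CHIP_PJ_PER_BIT):.1f} J")
print(f"off-chip: {transfer_energy_joules(data_gb, OFF_CHIP_PJ_PER_BIT):.1f} J")
# The ~100x gap per byte moved is the whole argument for keeping data on one die.
```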

Enter the weirdos with better ideas.

Cerebras said "fuck it" and just kept the entire silicon wafer intact as one massive chip. Their WSE-3 packs 4 trillion transistors onto a die with roughly 56 times the area of NVIDIA's flagship GPU. The kicker? Because everything's on one piece of silicon, data moves at 27 petabytes per second internally. That's more bandwidth than 1,800 of NVIDIA's top-tier servers combined. They're getting 2.2x better performance per watt, which matters a hell of a lot when your electricity bill rivals a small country's GDP.
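That "1,800 servers" comparison is easy to sanity-check with some rough arithmetic. The per-server figure below is my own assumption (an 8-GPU box with roughly 1.8 TB/s of interconnect bandwidth per GPU), so treat this as a ballpark, not a benchmark.

```python
# Ballpark check on the "more bandwidth than 1,800 servers" claim.
# Assumed: an 8-GPU server with ~1.8 TB/s of interconnect bandwidth per GPU.
WSE3_ON_WAFER_BW = 27e15    # bytes/s, the 27 PB/s figure cited above
SERVER_BW = 8 * 1.8e12      # bytes/s per server (assumption)

print(f"equivalent servers: {WSE3_ON_WAFER_BW / SERVER_BW:,.0f}")
# -> roughly 1,875, which lands in the same ballpark as the article's figure
```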

Then there's the neuromorphic computing folks taking a completely different angle: instead of cramming more transistors into traditional architectures, they're redesigning how computation works by mimicking actual brains. Intel's Loihi 2 and IBM's TrueNorth chips only fire neurons when events happen—not continuously like traditional processors. The result? Using 1% to 10% of the power for equivalent tasks. Mercedes-Benz claims their neuromorphic vision systems could cut autonomous driving energy by 90%.
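If "only fire neurons when events happen" sounds abstract, here's a toy sketch of the idea. It's a bare-bones leaky integrate-and-fire neuron, not Intel's or IBM's actual programming model; the point is that work (and therefore power) scales with the number of events, not the number of clock ticks.

```python
# Minimal sketch of event-driven (spiking) computation, in the spirit of
# Loihi/TrueNorth. Toy leaky integrate-and-fire neuron; all parameters
# are illustrative assumptions.

def lif_neuron(events, leak=0.9, threshold=1.0):
    """Process a sparse list of (timestep, input_weight) events.

    Work happens only when an event arrives; silent timesteps cost nothing,
    which is where the power savings of neuromorphic chips come from.
    """
    v = 0.0
    last_t = 0
    spikes = []
    for t, w in events:
        v *= leak ** (t - last_t)   # decay membrane potential over idle time
        v += w                      # integrate the incoming event
        last_t = t
        if v >= threshold:          # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# Only 4 events over ~1,000 timesteps -> only 4 updates, not 1,000.
print(lif_neuron([(3, 0.6), (5, 0.6), (400, 0.4), (401, 0.7)]))  # -> [5, 401]
```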

Why this matters: AI data centers currently burn 260 terawatt-hours annually, and that's projected to double by 2027. A single ChatGPT query uses roughly 10x the energy of a Google search. This trajectory is completely unsustainable.
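For a feel of what that 10x means at scale, here's some quick arithmetic. The per-query figures are rough, widely quoted estimates (and hotly debated), and the query volume is purely hypothetical, so read this as an order-of-magnitude sketch.

```python
# Order-of-magnitude sketch of the per-query energy gap at scale.
# All three constants are assumptions, not measurements.
SEARCH_WH_PER_QUERY = 0.3    # assumed: a conventional web search
CHATBOT_WH_PER_QUERY = 3.0   # assumed: ~10x, per the claim above
QUERIES_PER_DAY = 1e9        # assumed volume, purely for scale

def daily_gwh(wh_per_query: float, queries: float) -> float:
    """Convert per-query watt-hours into gigawatt-hours per day."""
    return wh_per_query * queries / 1e9

print(f"search-style : {daily_gwh(SEARCH_WH_PER_QUERY, QUERIES_PER_DAY):.1f} GWh/day")
print(f"chatbot-style: {daily_gwh(CHATBOT_WH_PER_QUERY, QUERIES_PER_DAY):.1f} GWh/day")
```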

What we're witnessing isn't just incremental improvement—it's a fundamental rethinking of how computation should work. The GPU monoculture dominated because it was good enough. Now that "good enough" means potential power grid issues and $380 million lithography machines, the industry's splintering into specialized solutions: wafer-scale for training, neuromorphic for edge deployment, alternative lithography for custom work.

The physics-driven hardware revolution is here. It's messy, expensive, and absolutely necessary.
