
AI's Whirlwind Week: From Pentagon Deals to Teen Millionaires

The most jaw-dropping AI stories from June 15-23, 2025


Hey, Joshua here with another feature article. Some great developments this past week. Let’s dive in.

Well, well, well. If you thought AI was moving fast before, this past week just cranked everything up to ludicrous speed. We're talking about a 16-year-old building a $12 million AI empire, the Pentagon writing nine-figure checks to ChatGPT's creators, and tech giants literally arguing about what "artificial general intelligence" even means. Oh, and robots are now playing badminton because apparently we needed that in our lives.

Buckle up, because the AI world just served us a reality check that feels like science fiction crashed into a boardroom and had a baby with a startup pitch deck.

The Stories That Made Our Jaws Drop

OpenAI Becomes a Military Contractor (And Everyone's Talking About It)

Let's start with the biggie: OpenAI just pocketed a cool $200 million from the U.S. Department of Defense. Yeah, you read that right—the company behind ChatGPT is now officially in the war business. The Pentagon handed them this contract on June 16th to develop "frontier AI capabilities" for both battlefield operations and boring government paperwork.

What's wild here isn't just the money (though $200 million is nothing to sneeze at). It's the complete 180 from OpenAI's previous stance of "we don't do military stuff." Apparently, when you're pulling in $10 billion annually and eyeing a $300 billion valuation, principles become... flexible. This move puts them directly in competition with defense darlings like Palantir, and honestly, it's a signal that the AI arms race isn't just about who has the best chatbot anymore—it's about who controls the future of warfare itself.

Tech Titans Can't Even Agree on What AGI Actually Is

Here's where things get philosophical and a bit ridiculous. The Financial Times dropped a bombshell on June 19th revealing that Big Tech literally cannot agree on what Artificial General Intelligence means. OpenAI thinks it's when AI outperforms humans at economically valuable tasks. Others want it to match "adult-level cognitive capacities." Meta's over here suggesting we scrap the term altogether and use "ASI" (Artificial Superintelligence) instead.

It's like watching the world's smartest people argue about whether a hot dog is a sandwich while building nuclear reactors. Sam Altman's out here saying AGI might arrive "sooner than most people think," while critics are calling the whole thing a "capital-raising bubble" and "vibes and snake oil." Margaret Mitchell from Hugging Face isn't pulling punches either. The problem? This definitional chaos is making it impossible to regulate, invest in, or even understand what we're racing toward. We're essentially playing pin the tail on the superintelligence.

European Scientists Just Made Computers Think at Light Speed

While everyone's arguing about definitions, some brilliant Europeans quietly revolutionized computing on June 20th. Two research teams figured out how to do AI computations using laser pulses through ultra-thin glass fibers—and get this—they're performing image recognition tasks in under a trillionth of a second. That's thousands of times faster than your current laptop's silicon brain.

This isn't just a cool lab trick. We're talking about a potential energy revolution that could make AI both sustainable and blazingly fast. Imagine AI systems that don't require massive data centers sucking up an entire city's worth of electricity. This breakthrough could be the difference between AI being a climate disaster and a climate solution. The researchers are already working on putting this tech on chips, which means your future phone might literally think at the speed of light.

Anthropic Drops the Claude 4 Bomb

Not to be outdone, Anthropic released their Claude 4 family—Opus 4 and Sonnet 4—and these models are making some serious noise. Opus 4 is crushing coding benchmarks with a 72.5% score on SWE-bench and 43.2% on Terminal-bench. For context, that's like going from a decent amateur to a professional-level programmer overnight.

But here's what's really interesting: these models aren't just better at answering questions, they're designed to be actual AI agents. They can plan, use tools, and work on tasks for hours without human intervention. Anthropic is betting big on this "agentic" approach—AI that doesn't just chat but actually does stuff. With pricing at $15 input/$75 output per million tokens for Opus 4, they're clearly targeting the enterprise market that's willing to pay premium prices for premium AI labor.
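To put that pricing in perspective, here's a quick back-of-envelope cost calculator. The per-million-token rates are the ones quoted above; the token counts in the example are made up purely for illustration:

```python
# Rates quoted for Claude Opus 4: $15 per million input tokens,
# $75 per million output tokens.
INPUT_PRICE_PER_M = 15.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_M = 75.00  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the quoted rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical long agentic session: 200k tokens in, 50k tokens out
print(f"${request_cost(200_000, 50_000):.2f}")  # $6.75
```

At those rates, a single multi-hour agentic run can easily cost several dollars, which is exactly why this pricing tier reads as an enterprise play rather than a consumer one.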

A 16-Year-Old Just Built a $12 Million AI Empire

Meet Pranjali Awasthi, who's probably making your career achievements feel a bit underwhelming. This teenager founded Delv.AI, a company focused on making academic research more accessible using AI, and somehow convinced investors to value it at $12 million. She's got backing from On Deck and Pioneer Fund, and she's inspiring a whole generation of young founders to think bigger.

What's fascinating isn't just her age—it's that she's tackling a real problem (academic research is notoriously hard to navigate) with AI tools that didn't exist when she was in middle school. This story represents something bigger: the democratization of AI development. When teenagers can build million-dollar AI companies from their bedrooms, we're not just seeing technological progress—we're witnessing a fundamental shift in who gets to shape our AI future.

India's Homegrown AI Assistant Goes Full Agent Mode

While Silicon Valley argues about definitions, India's Krutrim launched Kruti on June 12th—an AI assistant that actually does things instead of just talking about them. This isn't another chatbot; it's booking your cabs, ordering your food, paying your bills, and doing it all in 13 Indian languages.

The strategic implications here are huge. India is building domestic AI capabilities that serve local needs in local languages, reducing dependence on Western tech giants. Kruti represents a new model: AI that's culturally aware, regionally relevant, and task-oriented. It's also a preview of where AI assistants are heading globally—from answering questions to actually managing your life.

The Wild Cards: Malicious AI, Robot Athletes, and Chinese Avatar Millionaires

The week wasn't all corporate deals and research breakthroughs. WormGPT—the evil twin of ChatGPT—evolved into new, more dangerous variants built on Grok and Mixtral models, automating cyberattacks with scary precision. Meanwhile, Chinese researchers built a four-legged robot that plays badminton (because apparently we needed robot athletes), and AI avatars in China earned $7 million in seven hours through influencer partnerships.

These stories might seem like sideshows, but they're actually revealing the full spectrum of AI's impact. From cybercrime to sports to entertainment, AI isn't just changing how we work—it's reshaping every aspect of human experience, including the weird and wonderful parts we never saw coming.

Where We're Heading: A Critical Look at AI's Trajectory

After digesting this week's AI feast, one thing becomes crystal clear: we're not just witnessing technological progress anymore—we're watching the emergence of an entirely new economic and social order. But here's the uncomfortable truth nobody wants to say out loud: we're building this future while arguing about the blueprints.

The fact that tech leaders can't agree on what AGI means isn't just an academic problem—it's a governance crisis. How do you regulate something that doesn't have a definition? How do you prepare society for changes you can't even describe? We're essentially flying blind into the most consequential technological transition in human history.

The Real Revolution Isn't Technical—It's Social

What struck me most about this week's stories isn't the technological achievements (though light-speed computing is pretty cool). It's the social dynamics. A teenager building a multi-million dollar AI company. India developing domestic AI capabilities. Chinese AI avatars out-earning human influencers. The Pentagon writing checks to Silicon Valley.

We're witnessing the democratization and weaponization of AI happening simultaneously. While that's exciting for innovation, it's terrifying for stability. When AI development can happen anywhere, by anyone, with increasingly powerful tools, traditional gatekeepers lose their grip. That's liberating and destabilizing in equal measure.

The Three Futures We're Racing Toward

Based on this week's developments, I see three possible trajectories converging:

  1. The Agentic Future: AI systems that don't just respond but act, plan, and execute tasks autonomously. Anthropic's Claude 4 and India's Kruti are early previews.

  2. The Geopolitical Future: AI as a national security asset, with countries and companies competing for AI supremacy. OpenAI's Pentagon deal is just the beginning.

  3. The Distributed Future: AI development happening everywhere, by everyone, breaking down traditional barriers between developers and users. The 16-year-old founder story exemplifies this.

The question isn't which future we'll get—it's how these three forces will interact and whether we can manage the collision.

The Uncomfortable Truth

Here's what nobody wants to admit: we're building AI systems faster than we can understand their implications. This week's stories reveal an industry that's simultaneously incredibly sophisticated and surprisingly naive about its own impact. We can make computers think at light speed, but we can't agree on what thinking means.

The real challenge isn't technical anymore—it's human. Can we govern systems we don't fully understand? Can we distribute power without losing control? Can we innovate responsibly while competing globally?

Based on this week's evidence, we're about to find out whether humanity is as good at wisdom as we are at intelligence. The jury's still out, but the clock is definitely ticking.
