
All the AI Stories You Missed From This Week


The AI Platform Wars Just Got Real (And Expensive)

Listen, I need to tell you about what just went down in AI last week, because it's the kind of shit that makes you realize we're not in the "cool tech demo" phase anymore. We're in the "spend hundreds of billions of dollars to own the entire stack" phase.

And it's wild.

OpenAI Just Became the AI App Store (Whether You Like It or Not)

Here's what happened on October 6th: OpenAI held DevDay and basically said "fuck the App Store, we're building our own."

They announced that ChatGPT—which now has 800 million weekly active users (let that sink in)—is turning into a full platform where you can just... summon apps. Like, you type "Spotify, make me a playlist" and boom, Spotify appears inside your chat. Same with Zillow, Figma, Canva, whatever.

The thing is called the Apps SDK, and it's genuinely clever. Instead of bouncing between fifteen different apps and browser tabs, you just talk to ChatGPT and it orchestrates everything. It's the AI-powered version of what Slack tried to do with integrations, except people actually use ChatGPT.

But here's the kicker: This isn't just a convenience play. This is OpenAI positioning itself as the interface layer between humans and... well, everything. They're going for the throat of Apple and Google's app store duopoly. Why pay 30% to Apple when you can build a "ChatGPT app" and let Sam Altman take his cut instead?

They also dropped GPT-5 Pro (extended reasoning, variable token budgets, all that jazz), upgraded Sora to Sora 2 (now it makes 60-second videos that don't look like fever dreams), and launched a bunch of developer tools like AgentKit and an updated Codex that can apparently code for hours without bothering you.

Oh, and that Codex thing? It's part of their play to make AI agents that actually work autonomously. Which brings me to the next point...

Anthropic Is Coming for OpenAI's Crown

Four days before OpenAI's big show, Anthropic dropped Claude Sonnet 4.5 on September 29th and basically said "we're the best at coding now, deal with it."

The numbers are actually insane: 82% accuracy on SWE-bench Verified, which is the coding benchmark that matters. That beats their previous model (80.2%) and means Claude can now solve real GitHub issues better than most junior developers. It also got 100% accuracy on AIME 2025 math problems when using Python tools, which is the kind of flex that makes you wonder if we're still measuring these things correctly.

They also launched the Claude Agent SDK and an open-source auditing tool called Petri, because apparently everyone in AI has decided 2025 is the Year of the Agent. More on that in a second.

The competition between OpenAI and Anthropic is getting genuinely intense. It's like watching two people try to build the same future, except one has Microsoft's money and 800 million users, and the other has constitutional AI principles and really wants you to know they're the responsible choice.

The Chip Deals That Make Your Head Spin

Okay, now we get to the part where the numbers stop making sense.

OpenAI just signed a deal with Broadcom worth an estimated $350 billion.

Not million. Billion. With a B.

The deal is for 10 gigawatts of custom AI accelerators starting in late 2026. For context, each gigawatt of AI computing capacity costs around $35 billion in chips alone. This is on top of OpenAI's previous agreements with AMD (6 gigawatts, plus an option to buy 10% of AMD's stock) and Nvidia (up to $100 billion in data center investments).
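The back-of-envelope math checks out, using the article's own ~$35B-per-gigawatt estimate (a sketch, not audited figures):

```python
# Rough arithmetic behind the headline chip numbers.
# The $35B/GW figure is the estimate quoted above, not official pricing.
COST_PER_GW_USD = 35e9   # ~$35 billion of chips per gigawatt of AI compute

broadcom_gw = 10         # Broadcom deal: 10 GW of custom accelerators
amd_gw = 6               # AMD deal: 6 GW

broadcom_cost = broadcom_gw * COST_PER_GW_USD  # $350B
amd_cost = amd_gw * COST_PER_GW_USD            # $210B

print(f"Broadcom: ${broadcom_cost / 1e9:,.0f}B")  # Broadcom: $350B
print(f"AMD:      ${amd_cost / 1e9:,.0f}B")       # AMD:      $210B
```

Ten gigawatts at that rate lands squarely on the $350 billion figure, which is why "estimated" is doing so little work in that sentence.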

Let me put this in perspective: OpenAI is spending more on chips than most countries spend on their entire military. They're building custom silicon optimized for their specific workloads, which means they're not just using AI—they're vertically integrating the entire goddamn supply chain.

This is the "own the factory" move. This is what you do when you're not just playing the game but trying to control the board.

Meanwhile, Meta Wants to Monetize Your AI Chats

While OpenAI and Anthropic are fighting over who has the best model, Meta looked at its 1 billion monthly Meta AI users and thought: "How do we turn this into money?"

Their answer, starting December 16th: Your conversations with Meta AI will inform targeted advertising.

Yeah. That playlist you asked Meta AI to help you create? Those interior design questions? That career advice? All of it gets fed into Meta's ad targeting system. They're excluding "sensitive topics" like health, religion, and politics (for now), but everything else is fair game.

The play makes total sense from Meta's perspective—they're sitting on conversational data that reveals intent better than any search query or social media post ever could. But it's also the kind of thing that makes you realize the "free AI assistant" was never really free.

(This doesn't apply in the EU, UK, or South Korea because privacy regulations actually exist there. Funny how that works.)

The Agent Economy Is Actually Happening

Here's something that caught me off guard: A global study found that 68% of organizations plan to integrate autonomous or semi-autonomous AI agents into core operations by 2026. And get this—23% are planning to deploy them within the next six months.

This isn't vaporware anymore. Microsoft and LSEG announced a partnership to build agentic workflows that access 33 petabytes of financial data. IBM showed 45% productivity gains when 6,000 employees used Claude for coding. Companies are actually putting these things into production.

The infrastructure is coming together too. Google expanded AI Mode to over 40 new countries and 35+ languages. Everyone's racing to build agent frameworks and SDKs. The Model Context Protocol is becoming a thing.

What we're seeing is the transition from "AI as a chatbot" to "AI as a coworker." The economics make too much sense—if Claude can do the work of a junior developer, you don't hire the junior developer. If an AI agent can handle customer service, you don't build a call center.

It's not subtle.

Italy Just Became the AI Regulatory Pioneer (Somehow)

While everyone was watching the US flip-flop on AI policy under Trump's Executive Order 14179 (which basically said "innovation over safety, we're America baby"), Italy quietly passed the first comprehensive national AI law in the EU.

Law No. 132 took effect October 10th, and it actually complements the EU AI Act with additional protections for minors and specific provisions for healthcare, public administration, and national security. Italy—the country Americans make pasta jokes about—is now leading European AI regulation.

Meanwhile, Canada launched its AI Safety Institute research program (CAISI) with a whopping $70,000 in annual funding. Not exactly Manhattan Project money, but at least someone's thinking about safety.

The regulatory landscape is all over the place. The US wants to win the AI race and isn't particularly worried about guardrails. Europe wants rules. China's doing its own thing. And everyone's trying to figure out how to regulate something that keeps getting more capable every few months.

The Money Is Absolutely Insane

AI startups raised $118 billion in 2025—nearly double what they raised in 2024. Just eight companies captured 62% of all that funding.

OpenAI led the pack with a $40 billion raise at a $300 billion valuation, making it the world's most valuable startup. Anthropic pulled in $3.5 billion at a $61.5 billion valuation. The overall AI market hit $391 billion and is projected to reach $1.81 trillion by 2030.

These numbers are bonkers. We're watching the formation of a new tech oligopoly in real-time, except the oligopoly owns the means of intelligence production. It's like the oil boom, except instead of drilling for petroleum, we're training transformer models.

The Hollywood Backlash Starts Now

Remember how I mentioned Sora 2? Well, it hit 1 million downloads in five days, which sounds great until you realize it also pissed off literally everyone in Hollywood.

Why? Because people were immediately generating clips with real actors and famous characters. Studios and talent agencies lost their shit, calling it "exploitation" and arguing that copyrighted likenesses were being used without consent.

OpenAI's initial approach was opt-out, which is the tech industry's favorite way of saying "we'll do whatever we want unless you explicitly tell us to stop." By week's end, Sam Altman promised "more granular control" and said they'd shift to an opt-in model with potential revenue-sharing.

The UK actors' union Equity went even further, threatening mass direct action against tech companies for unauthorized use of performers' images and voices. They're demanding companies reveal whether they've used actors' data in AI systems, which is going to get messy fast.

This is what happens when AI stops being a demo and starts being a product. Suddenly all those questions about consent and compensation and ownership that we were kicking down the road? Yeah, they're here now.

AI Can Now Do Real Science (Like, Actually)

Buried under all the business news: Oxford and Google Cloud showed that general-purpose AI models can classify cosmic events with 93% accuracy using just 15 example images.

Let that sink in. You can take a regular large language model, show it 15 pictures of supernovae or whatever, and it becomes an expert astronomical assistant. This isn't narrow AI trained on massive datasets—this is few-shot learning that actually works for scientific discovery.
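The few-shot setup is conceptually simple: show the model a handful of labeled examples in the prompt, then ask about a new one. Here's a minimal sketch of how such a prompt might be assembled — the message format, labels, and filenames are illustrative assumptions, not the actual Oxford/Google Cloud pipeline:

```python
# Sketch: assembling a few-shot classification prompt for a multimodal LLM.
# The label set, image references, and message structure are hypothetical;
# the real research pipeline isn't described in this article.

def build_few_shot_prompt(examples, query_image):
    """examples: list of (image_ref, label) pairs, e.g. 15 labeled events."""
    messages = [{"role": "system",
                 "content": "Classify the astronomical transient in each image."}]
    for image_ref, label in examples:
        # Each example is a user turn (the image) plus an assistant turn (the label).
        messages.append({"role": "user", "content": f"[image: {image_ref}]"})
        messages.append({"role": "assistant", "content": label})
    # Finally, the unlabeled image we want classified.
    messages.append({"role": "user", "content": f"[image: {query_image}]"})
    return messages

examples = [(f"supernova_{i}.png", "supernova") for i in range(15)]
prompt = build_few_shot_prompt(examples, "candidate_001.png")
print(len(prompt))  # 1 system + 30 example turns + 1 query = 32
```

No fine-tuning, no massive labeled dataset — the "training" is just the conversation history, which is exactly why a general-purpose model can be repurposed this cheaply.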

Meanwhile, Frontiers launched FAIR² Data Management to address the fact that 90% of scientific research data remains unused. The platform uses AI to automatically curate, check compliance, and visualize research datasets so they're actually reusable.

The Nobel Economics Prize went to researchers studying "creative destruction"—which is hilariously appropriate given what AI is about to do to labor markets. The prize highlighted how technological advancement drives growth while warning about risks from market concentration, which is extremely relevant when three companies control most of the AI infrastructure.

What This All Means

We just watched AI transition from "interesting technology" to "fundamental infrastructure" in the span of a week.

The platform plays are real. The infrastructure investments are staggering. The business models are crystallizing. The regulatory fights are starting. The creative industries are pushing back. The agents are shipping.

This isn't the "AI might change things" phase anymore. This is the "AI is changing things and you need to pay attention or get left behind" phase.

The thing that strikes me most is how fast this is moving. OpenAI is spending hundreds of billions on chips. Anthropic is beating them on benchmarks. Meta's monetizing conversations. Microsoft's embedding agents in enterprise workflows. Google's expanding globally. Italy's passing laws. Hollywood's suing people.

And it's all happening simultaneously.

We're in the middle of a land grab—not for digital real estate, but for the infrastructure layer of human-computer interaction. The companies that win this race won't just be valuable. They'll be gatekeepers to how humanity accesses intelligence.

Sleep tight.

Links and Rabbit Holes:

  • OpenAI DevDay was genuinely impressive from a product perspective, even if the platform play is nakedly obvious

  • The Stanford study on AI lying for engagement is terrifying and everyone should read it

  • The Pope warning about "junk information" and AI replacing humans is unexpectedly relevant

  • Apple's reportedly in talks to acquire Prompt AI for computer vision, because of course they are

  • Salesforce is on an acquisition spree ($8B for Informatica) trying to build an AI enterprise platform

  • The Deloitte thing where they had to refund the Australian government for an AI-generated report with fake citations while simultaneously deploying Claude to 500,000 employees is chef's kiss levels of corporate cognitive dissonance
