All the AI Stories You Missed This Week
How can AI power your income?
Ready to transform artificial intelligence from a buzzword into your personal revenue generator?
HubSpot’s groundbreaking guide "200+ AI-Powered Income Ideas" is your gateway to financial innovation in the digital age.
Inside you'll discover:
A curated collection of 200+ profitable opportunities spanning content creation, e-commerce, gaming, and emerging digital markets—each vetted for real-world potential
Step-by-step implementation guides designed for beginners, making AI accessible regardless of your technical background
Cutting-edge strategies aligned with current market trends, ensuring your ventures stay ahead of the curve
Download your guide today and unlock a future where artificial intelligence powers your success. Your next income stream is waiting.
The AI Industry This Week: Trillion-Dollar Bets and the Great Job Shuffle
October 12–19, 2025
Listen, I need to tell you about the absolute madness happening in AI right now. And I'm not talking about some chatbot writing mediocre poetry or whatever—I'm talking about nearly a trillion dollars in chip deals, Walmart turning ChatGPT into a shopping mall, and Wall Street finally waking up to the fact that AI might actually destroy their entire business model.
Let's break it down.
OpenAI Just Bet the Farm on Custom Chips (And It's Either Genius or Completely Insane)
Here's the headline: OpenAI is designing its own AI chips with Broadcom. Cool, right? Tech company makes tech. Except when you dig into the numbers, your brain kind of melts.
OpenAI has reportedly lined up nearly $1 trillion worth of chip purchases over the next decade. Read that again. A trillion. With a T. They're committing to secure 6-10 gigawatts of GPU capacity—that's enough power to run a small country—and they're doing this while pulling in only about $15 billion in current revenue.
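To put those numbers side by side, here's the back-of-envelope math as a quick Python sketch, using the rough figures reported above (estimates, not audited financials):

```python
# Back-of-envelope scale check using the approximate figures cited above
# (reported estimates, not audited financials).
total_commitment = 1e12   # roughly $1 trillion in chip purchases
years = 10                # spread over about a decade
annual_revenue = 15e9     # roughly $15 billion in current revenue

annual_commitment = total_commitment / years              # about $100B per year
multiple_of_revenue = annual_commitment / annual_revenue

print(f"Annualized commitment: ${annual_commitment / 1e9:.0f}B per year")
print(f"That is about {multiple_of_revenue:.1f}x current annual revenue")
```

Roughly $100 billion a year in commitments against $15 billion in revenue, which is close to seven times what the company currently brings in annually.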
The thing is, OpenAI knows they're in an arms race. Right now, they're basically renting computing power from NVIDIA, who charges whatever the hell they want because they control the market. So OpenAI's play is vertical integration: design your own chips, control your destiny, stop getting squeezed by suppliers. As Broadcom CEO Hock Tan put it: "If you do your own chips, you control your destiny."
But here's the kicker: This whole setup has some serious "we've seen this movie before" energy. OpenAI is making massive chip commitments to companies that are also investors in OpenAI. It's a circular money carousel. Chipmakers invest in OpenAI. OpenAI buys chips from those same companies. Everyone's valuation goes up. Stock prices surge (Broadcom jumped 9% on the announcement). And some experts are getting real nervous that this looks less like the future of AI and more like the dot-com bubble with better PR.
What this means: If OpenAI pulls this off—if their custom silicon starts rolling out in late 2026 and actually works—they'll have a monster competitive advantage. Google and Amazon are trying the same thing, which tells you this isn't just Sam Altman being ambitious. This is the new playbook: own your hardware or get left behind.
But if this is a bubble? If demand for AI compute doesn't justify these astronomical investments? Well, we might be watching Pets.com 2.0, except this time with way more zeros on the end.
Your Next Shopping Trip Might Happen Inside ChatGPT
Remember when e-commerce meant going to a website and clicking "Add to Cart"? Yeah, that's already starting to feel quaint.
Walmart just announced you can now shop via ChatGPT. Not "ask ChatGPT what to buy and then go to Walmart.com"—I mean actually complete purchases through conversational AI. Browse products, get recommendations, check out with Stripe integration, all inside the chatbot.
Walmart's stock jumped 5% the day they announced this, which tells you Wall Street thinks this is a real shift. And it is. Because here's what's happening: Conversational commerce is becoming the new storefront.
Think about it—shopping used to be about browsing shelves, then browsing websites, then scrolling through apps. Now it's just... talking. "Hey ChatGPT, I need a birthday gift for my sister who likes yoga and true crime podcasts." Boom. Personalized suggestions. One-click checkout. Done.
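For the curious, here's a minimal sketch of the kind of tool-calling flow that implies. Everything in it (the catalog, the search_products and checkout functions, the hard-coded tool call) is a hypothetical stand-in, not Walmart's or OpenAI's actual integration; a real version would hand checkout off to a payment provider like Stripe.

```python
# Hypothetical sketch of chat-driven shopping: the assistant is given retailer
# "tools" (product search, checkout) and decides when to call them. The catalog,
# tool names, and checkout step are made-up stand-ins, not the real integration.
import json

CATALOG = [
    {"sku": "yoga-mat-01", "name": "Non-slip yoga mat", "price": 24.99, "tags": ["yoga", "fitness"]},
    {"sku": "tc-book-07", "name": "True-crime anthology", "price": 14.50, "tags": ["true crime", "books"]},
    {"sku": "mug-404", "name": "Novelty coffee mug", "price": 9.99, "tags": ["kitchen"]},
]

def search_products(query_tags):
    """Return catalog items matching any of the requested tags."""
    return [p for p in CATALOG if set(query_tags) & set(p["tags"])]

def checkout(skus):
    """Pretend to create a payment session; a real integration would call a payment API here."""
    total = sum(p["price"] for p in CATALOG if p["sku"] in skus)
    return {"status": "paid", "total": round(total, 2), "items": skus}

# In a live chat integration, the model would emit a tool call like this one in
# response to the user's request; it's hard-coded here to keep the sketch runnable.
tool_call = {"name": "search_products", "arguments": json.dumps({"query_tags": ["yoga", "true crime"]})}

args = json.loads(tool_call["arguments"])
recommendations = search_products(**args)
order = checkout([p["sku"] for p in recommendations])

print("Recommended:", [p["name"] for p in recommendations])
print("Order:", order)
```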
OpenAI is rolling this out with Walmart, Etsy, and Shopify, which means they're building an entire shopping ecosystem inside ChatGPT. For retailers, this is both exciting and terrifying. Exciting because it's a new channel to reach customers. Terrifying because now OpenAI controls the storefront, which means they decide what gets recommended and how prominent your products are.
And you better believe this opens up a whole new frontier for advertising: paid placement in chatbot results. Who gets recommended first when someone asks for "affordable running shoes"? The brand that paid for it.
The thing is: This could genuinely improve shopping. Personalized recommendations, natural language search, instant checkout—it's more intuitive than clicking through category pages. But it also means we're handing even more power to AI platforms to mediate commerce. Amazon's been doing this for years with their algorithm. Now ChatGPT wants a piece.
Wall Street Is Waking Up to the Real AI Risk (And It's Not the Bubble)
While everyone's been arguing about whether AI stocks are overvalued, Jonathan Gray—the president of Blackstone, one of the world's biggest investment firms—stood up and said: "You're all looking at the wrong problem."
Here's his point: Sure, maybe some AI startups are overhyped. Maybe there's capital misallocation (he literally compared it to Pets.com in 2000). But the real risk isn't that AI companies will fail—it's that legacy businesses will get absolutely demolished.
"People say, 'This smells like a bubble,' but they're not asking: 'What about legacy businesses that could be massively disrupted?'" Gray told a crowd at a private capital summit. He specifically called out rules-based sectors: legal, accounting, transaction processing, insurance. Industries where people get paid to follow procedures and apply frameworks. The kind of work AI is really good at.
And Blackstone isn't just talking about this—they've made it policy. Every investment memo now has to address AI risks on the first page. That's how seriously they're taking it.
What does this mean? It means that if you're a law firm still billing $500/hour for document review, or an accounting firm charging premium rates for routine tax prep, you need to wake up. Because AI can do that work faster, cheaper, and without complaining about billable hours.
Gray invoked Jeff Bezos's idea of an "industrial bubble"—where you get rapid buildout of real infrastructure (think railroads in the 1800s) even if valuations overshoot. Some companies will fail. But the technology? The technology is real, and it's going to reshape entire industries.
The winners won't just be the companies building AI. They'll be the companies that use AI to reinvent their business models. The losers will be the ones that keep pretending this is all hype.
Meanwhile, AI Is Getting... Spicier
In a move that shocked exactly no one who's been paying attention to user complaints, OpenAI announced that ChatGPT will now allow adult content for verified users starting in December.
Yeah, you read that right. Erotica, mature themes, the whole deal. Sam Altman framed it as "treating adult users like adults," which is a nice way of saying "we made this thing too sanitized and people got annoyed."
Up until now, ChatGPT has been aggressively family-friendly—sometimes to the point of being weirdly prudish. Ask it to write a romance scene and it would clutch its digital pearls and refuse. That was by design: OpenAI wanted to be the "safe" AI, especially for enterprise and education customers.
But here's the tension: When you neuter an AI to be workplace-appropriate 24/7, you also make it less useful for personal use. And OpenAI's competitors started eating their lunch by offering fewer restrictions. So now they're pivoting.
The catch: OpenAI says they have better safety tools now—age verification, content filters—so they can safely loosen the rules. They're also rolling out customization options so you can adjust ChatGPT's personality (more casual, more formal, more emoji-heavy, whatever you want).
At the same time, Meta announced parental controls for teen AI interactions, complete with content filters "inspired by the PG-13 movie rating system." So we're seeing the industry split into different tiers: family mode and adult mode, like Netflix profiles but for AI.
What to watch: Whether this actually works. Because the risk isn't just reputational—it's practical. Enterprise customers need to know their employees won't accidentally generate NSFW content in a work context. Parents need to know their kids can't bypass age gates. And regulators are definitely paying attention.
If OpenAI pulls this off, expect other AI platforms to follow. If it turns into a mess... well, that's going to be a fun news cycle.
The Ad Industry Is Going All-In on AI (And the Results Are Wild)
Two stories here, and they're basically the same lesson told from different angles.
First: WPP—one of the world's biggest ad agencies—just committed $400 million to Google's AI technologies over five years. They're embedding generative AI throughout their entire operation: creative, media planning, production, the works. Google gives them early access to tools like Imagen (image generation) and Veo (video generation), and WPP gets to build custom AI solutions for clients.
The goal? Cut production time from months to days. Boost efficiency by 70%. Enable "hyper-personalized marketing at scale." All the buzzwords, except this time they're backed by a massive financial commitment.
This isn't just WPP trying to sound innovative—they need this. Their profits dropped 71% in the first half of the year. Clients are getting more demanding and less patient. And if AI can generate campaign assets faster and cheaper, why pay an agency to do it the old way?
Second: Publicis—WPP's French rival—raised its full-year growth forecast (for the second time this year) and explicitly credited AI. CEO Arthur Sadoun said: "It is artificial intelligence that allows us to accelerate our clients' growth and to accelerate our own."
Get this: 73% of Publicis' operations are now AI-powered. They've spent €12 billion since 2015 on data and technology, building platforms like Marcel (their internal AI assistant). And it's working. Clients are spending more. Efficiency is up. Margins are improving.
Here's why this matters: Publicis is proving that AI isn't just hype—it's delivering real ROI. They're automating routine tasks, speeding up production, and using data to improve targeting. And they're using those efficiency gains to invest in human creativity, not replace it.
The lesson: AI + human talent = competitive advantage. AI without strategy or data = expensive toys that don't move the needle.
What happens next: Every agency is going to double down on AI. The ones that figure out how to integrate it effectively will thrive. The ones that treat it as a gimmick will get left behind. And clients? They're going to demand more personalization, more speed, and better results—because they know AI makes it possible.
Regulators Are Freaking Out About AI in Finance (And They Should Be)
While everyone's been celebrating AI's potential to make banking more efficient, a bunch of very serious people in Basel and Washington have been quietly losing their minds.
The Financial Stability Board (G20's financial watchdog) and the Bank for International Settlements (the central bank for central banks) both issued warnings in October: AI could pose systemic risks to the global financial system.
Here's the problem they're worried about: If every bank starts using the same AI models—say, for credit scoring or trading algorithms—you get what they call "herd-like behavior." Everyone's AI tells them to do the same thing at the same time. And when something goes wrong, it goes wrong everywhere at once.
"If too many institutions end up using the same AI models and specialized hardware… this heavy reliance can create vulnerabilities if there are few alternatives available," the FSB warned.
Remember 2008? When every bank was using similar risk models that all failed to see the housing bubble? This is that, but faster and more interconnected.
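To see why convergence itself is the scare, here's a toy simulation, with made-up probabilities, of twenty banks sharing one credit model versus twenty banks running their own. The average loss rate is identical in both worlds; what changes is how often everyone gets hit at once.

```python
# Toy illustration of "herd-like behavior": one shared model means one blind
# spot can hit every bank in the same period, while diverse models fail
# independently. All probabilities are invented for illustration only.
import random

random.seed(0)
N_BANKS, N_PERIODS, P_BLINDSPOT = 20, 100_000, 0.05

def systemic_event_rate(shared_model: bool) -> float:
    """Fraction of periods in which more than half the banks take a big loss."""
    systemic = 0
    for _ in range(N_PERIODS):
        if shared_model:
            # One shared model: its blind spot hits everyone or no one.
            hits = N_BANKS if random.random() < P_BLINDSPOT else 0
        else:
            # Diverse models: each bank's model fails on its own 5% of the time.
            hits = sum(random.random() < P_BLINDSPOT for _ in range(N_BANKS))
        if hits > N_BANKS / 2:
            systemic += 1
    return systemic / N_PERIODS

print("System-wide loss rate, shared model: ", systemic_event_rate(True))    # around 0.05
print("System-wide loss rate, diverse models:", systemic_event_rate(False))  # essentially 0
```

Same expected losses, very different tail risk. That's the regulators' point in one number.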
The BIS was even more direct: There's an "urgent need" for regulators to "raise their game" in understanding and supervising AI. Which is bureaucrat-speak for "we have no idea what we're doing and this scares us."
What this means in practice: Expect regulators to start requiring banks to diversify their AI providers. Expect stress tests for AI failures. Expect a lot more scrutiny of algorithms and a lot more questions about bias, transparency, and model convergence.
And here's the irony: Regulators are going to start using AI tools themselves to monitor markets in real time. So we'll have AI systems watching AI systems, which... what could go wrong?
The bigger point: AI in finance is moving faster than oversight can keep up. Banks are bullish because AI promises efficiency gains. Regulators are nervous because they've seen this movie before (tech adoption outpaces risk management, things blow up). The question is whether we can build guardrails fast enough to prevent a systemic crisis.
The "AI Job-pocalypse" Is Here (But It's Complicated)
Okay, let's talk about the elephant in the room: Is AI going to take your job?
New data from October says: Maybe. Probably. It depends.
A global survey of 850 executives found that 41% are already using AI to reduce headcount. Nearly one-third said they now explore an AI solution before considering hiring a new person. Over 40% reported cutting or reducing junior positions—research roles, admin work, entry-level support—because AI can handle those tasks.
In the U.S., a Resume.org survey found 3 in 10 companies have already replaced some jobs with AI, and 37% expect to by 2026. Half of companies have frozen hiring in 2025 due to automation and economic pressures.
Here's the thing: This is really happening. Junior roles are getting trimmed. The "ladder gap" is real—if entry-level jobs vanish, how do young professionals gain experience? How do they climb the ladder if the bottom rungs are gone?
But here's the other side: Teachers unions, Microsoft, OpenAI, and Anthropic just launched a massive initiative to train 400,000 teachers on AI over five years. The message? "AI is part of our world now. Either learn to work with it or get left behind."
Companies are targeting highly paid roles and employees who lack AI skills for reductions. The skill divide is becoming a job security divide.
So what's the play?
Upskill. Fast. Learn to work with AI tools. Prompt engineering, AI oversight, data ethics—these are the new job categories.
Bet on uniquely human skills. Creativity, empathy, strategic thinking, emotional intelligence—things AI struggles with.
Demand responsible automation. Companies need to invest in retraining, not just cost-cutting. A coalition of philanthropies just committed $500 million for programs that prioritize human interests in AI deployment.
As Susan Taylor Martin, CEO of BSI, put it: "AI represents an enormous opportunity... but as they chase greater productivity and efficiency, we must not lose sight that it is ultimately people who power progress. The tension between making the most of AI and enabling a flourishing workforce is the defining challenge of our time."
That's not corporate speak. That's the truth.
What This All Means
Look, we're living through one of those rare moments when everything changes at once. The business models, the tools, the skills required, the power dynamics—all of it is in flux.
OpenAI is betting a trillion dollars that custom chips will give them the edge. Walmart is betting that conversational commerce is the future. Publicis is proving that AI + strategy = growth. And everyone—from Blackstone to the BIS—is warning that the companies and institutions that don't adapt will be left behind.
But here's what's not changing: People still matter. Creativity still matters. Human judgment, emotional intelligence, strategic thinking—these aren't getting automated away.
The question isn't whether AI will reshape work and business. It will. The question is whether we'll use it to empower people or just cut costs. Whether we'll invest in retraining or leave workers stranded. Whether we'll build guardrails or just let the market sort it out.
The next couple of years will answer those questions. And the choices we make now—individually, as companies, as societies—will determine whether AI becomes a tool for progress or just another force of disruption that leaves most people worse off.
No pressure.
Links and Context
OpenAI's chip strategy is either visionary or the biggest overcommitment since WeWork. Time will tell.
Conversational commerce is going to reshape retail faster than anyone expects. Get ready for "search ads" to become "recommendation ads" in chatbots.
The advertising industry is splitting into AI-native agencies and dinosaurs. No middle ground.
If you work in a "rules-based" profession (legal, accounting, compliance), you have maybe 18-24 months to figure out your AI strategy. After that, you're competing with algorithms.
The job displacement stuff is real, but so are the opportunities. The skill gap is becoming the new class divide.
Stay sharp. It's going to be a wild ride.
That's all for this week. If you found this helpful, share it with someone who needs to understand what's actually happening in AI—beyond the hype and the fear-mongering.