
OpenAI Just Launched a Social Network Where Nothing Is Real (And Everyone's Addicted)

and it debuted at #1 on the App Store



On September 30th, 2025, OpenAI did something that should probably concern us more than it does: they launched a social media app where literally every piece of content is fake. Not "staged for Instagram" fake or "heavily filtered" fake. Actually, entirely artificially generated fake. No cameras involved. No real moments captured. Just you, a text prompt, and an AI that can put you anywhere doing anything.

Within five days, Sora hit one million downloads. By October 3rd, it was the #1 app on the US App Store, dethroning ChatGPT—OpenAI's own previous blockbuster.

Here's the kicker: it's invite-only and only available in North America. When access is this restricted and growth is this explosive, you know something genuinely unprecedented is happening.

Welcome to the future of social media, where reality is optional and nobody seems to mind.

The Numbers That Should Make You Nervous

Let's start with what Sora actually accomplished, because it's kind of insane.

Bill Peebles, OpenAI's head of Sora, casually mentioned that the app hit one million downloads faster than ChatGPT did. Think about that. ChatGPT was the fastest-growing consumer application in history when it launched. It became a cultural phenomenon. Sora beat it.

The first-day numbers: 56,000 iOS downloads. Within 48 hours: 164,000 installations. By day three, it was #1 on the App Store, sitting above Google's Gemini and OpenAI's own ChatGPT.

Remember, this is with massive barriers to entry. You need an invite code (which people are sharing like concert tickets on social media). You need to be in the US or Canada. And yet the demand is so intense that the app still exploded.

What's driving this? Well, OpenAI figured out something that should have been obvious but somehow wasn't: people don't just want to see AI-generated videos. They want to star in them.

Sora 2: When AI Videos Got Scary Good

The tech underneath all this is Sora 2, which OpenAI is positioning as "the GPT-3.5 moment for video generation." If you remember, GPT-3.5 was when ChatGPT went from "neat tech demo" to "holy shit this is actually useful."

Sora 2 is that leap for video.

Here's what makes it different from the janky AI video tools you've seen before:

Synchronized audio-visual generation. Sora 2 doesn't just generate video—it creates the audio at the same time. Dialogue with lip-sync. Environmental sound effects. Background music that actually matches the mood. Previous AI video tools made you bolt on audio afterward, which was clunky as hell and always felt off. Sora 2 does it natively.

Actual physics simulation. This is where it gets impressive and slightly terrifying. Earlier AI video models would let basketballs teleport through backboards or water behave like sentient jelly. Sora 2 understands how the physical world works. Basketballs bounce correctly. Water flows with proper fluid dynamics. You can generate an Olympic gymnastics routine or a paddleboard backflip, and the physics will be... right.

One demo shows a figure skater with a cat balanced on their head. The cat moves and responds to the skater's motion realistically because Sora 2 actually understands momentum and balance. It's not just generating pretty pictures—it's simulating reality.

Enhanced creative control. You can direct complex multi-shot sequences while maintaining consistent world states and character appearances. Previous tools would struggle with this—your character's shirt might change colors between shots, or the background would morph into something completely different. Sora 2 can maintain temporal consistency, which is crucial for actual storytelling.

The maximum video length is 10 seconds for regular users, 20 seconds for Pro subscribers ($200/month—yes, really). Resolution tops out at 1080p. It's iOS-only for now, though you can access it via web browser on Android.

Is it perfect? No. But it's good enough to be genuinely addictive, which turns out to be the only threshold that matters.

The Cameos Feature: Viral Growth on Steroids

Here's where OpenAI got genuinely clever from a product perspective.

The "Cameos" feature lets you insert yourself—your actual face and voice—into AI-generated videos. You record a short video-and-audio capture once, and boom, you've created a reusable "character" that can be dropped into any AI-generated scenario.

Want to see yourself scoring the winning goal in the World Cup? Done. Performing at Coachella? Easy. Fighting dragons in a medieval fantasy? Why not.

But the real genius is what happens next: you can use your friends' cameos too (with their permission). This creates what product analysts are calling "cascading content creation." Your friend makes a funny video starring themselves. You remix it, swapping in your face. Someone else sees that and remixes yours. A single video spawns dozens of derivative works across social networks.

It's the viral growth mechanic TikTok dreams about, except instead of dancing trends, it's "let me put my face in increasingly absurd AI-generated scenarios."

The consent mechanisms are actually pretty thoughtful—you can specify who has permission to use your likeness, review all videos featuring your cameo, delete them, revoke access anytime, even set behavioral preferences like "always wearing sunglasses." OpenAI clearly thought about the deepfake implications.

But here's the thing: even with consent mechanisms, we're still talking about technology that lets you put anyone into any scenario. The potential for misuse is... substantial.
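To make that consent model concrete, here's a minimal sketch of how such a permission record might look. This is purely illustrative: the names (`CameoConsent`, `can_use`, `Visibility`) are hypothetical guesses at the shape of such a system, not OpenAI's actual implementation.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch of a cameo-consent record, modeling the controls
# described above (who can use your likeness, revocation, preferences).
# Not OpenAI's actual implementation.

class Visibility(Enum):
    ONLY_ME = "only_me"
    APPROVED_FRIENDS = "approved_friends"
    EVERYONE = "everyone"

@dataclass
class CameoConsent:
    owner: str
    visibility: Visibility = Visibility.ONLY_ME
    approved_users: set = field(default_factory=set)
    revoked: bool = False
    preferences: list = field(default_factory=list)  # e.g. "always wearing sunglasses"

    def can_use(self, requester: str) -> bool:
        """Check whether `requester` may generate videos with this cameo."""
        if self.revoked:
            return False
        if requester == self.owner or self.visibility is Visibility.EVERYONE:
            return True
        return (self.visibility is Visibility.APPROVED_FRIENDS
                and requester in self.approved_users)

# Usage: grant a friend access, then revoke everything at once.
consent = CameoConsent(owner="alice", visibility=Visibility.APPROVED_FRIENDS)
consent.approved_users.add("bob")
print(consent.can_use("bob"))   # True
consent.revoked = True
print(consent.can_use("bob"))   # False
```

The key design point the real feature seems to share: revocation is a single switch that invalidates everything downstream, rather than requiring per-video takedowns.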

Let's Talk About "AI Slop"

The term that keeps appearing in coverage of Sora is "AI slop"—the flood of artificial content that critics worry will crowd out authentic human creativity.

And listen, they're not wrong to worry.

Sora's interface deliberately mimics TikTok and Instagram Reels. Vertical video streams, swipe navigation, algorithmic recommendations. But here's the critical difference: every single piece of content is artificially generated. Nothing on Sora's feed is real. Nothing was actually filmed. Every video is synthetic.

This isn't social media anymore—it's synthetic media.

Early users report finding the app "genuinely addictive." OpenAI's internal testing showed employees using it so frequently that managers jokingly suggested it might impact productivity. The ability to star in personalized AI videos, to become the protagonist in increasingly elaborate fantasies, is proving irresistibly engaging.

The remix functionality keeps people hooked. You can easily modify existing videos by changing prompts, swapping cameos, adjusting visual styles. It's TikTok's "duet" feature on steroids, except you're not just responding to content—you're literally inserting yourself into it and transforming it completely.

But what happens when our feeds are entirely populated by synthetic content? When the distinction between "things that happened" and "things an AI generated" disappears completely?

One commentator put it bluntly: "OpenAI has officially broken the notion of 'what's real'."

We're not talking about a distant future concern here. This is happening now, in an app that millions of people are already using.

The Copyright Nightmare

Predictably, Sora immediately ran face-first into copyright law.

Within days of launch, users were generating videos featuring SpongeBob SquarePants, Mario, Rick & Morty characters, Pokémon—basically every major IP you can think of. The app's ability to generate copyrighted characters with startling accuracy created what can only be described as an intellectual property nightmare.

The Motion Picture Association (MPA) lost its mind. On October 7th, they issued a statement urging OpenAI to "take immediate and decisive action" to prevent copyright infringement. MPA CEO Charles Rivkin emphasized that established copyright law should protect creators' rights and that the burden lies with OpenAI—not rightsholders—to prevent violations.

Here's where it gets interesting from a legal strategy perspective: OpenAI initially adopted an opt-out approach. Basically, "we'll use copyrighted content unless you specifically tell us not to."

This is... not how copyright law works.

As intellectual property attorney Jason Bloom noted: "You can't merely post a notice to the public stating that you will use everyone's works unless they tell you not to. That is not how copyright operates."

Copyright is opt-in by default. You need permission to use someone else's work. You can't just take it and wait for them to complain.

Facing mounting pressure, CEO Sam Altman announced plans to provide rightsholders with "more granular control" over character generation. OpenAI implemented a Copyright Dispute form for takedown requests. But critics argue this still places an unfair burden on content owners to police the platform—a whack-a-mole game where rightsholders have to constantly monitor and request takedowns.

The fundamental question remains unanswered: when an AI model is trained on copyrighted material and can then generate content featuring those copyrighted elements, who's responsible? The AI company? The user who prompted it? Both?

We're watching these legal battles play out in real-time, with billions of dollars in IP value hanging in the balance.

The Business Model: Freemium on Steroids

OpenAI's pricing structure for Sora reveals their ambitions pretty clearly:

Free tier: 5-10 video generations per month with watermarks. Enough to hook you but not enough to really satisfy.

ChatGPT Plus ($20/month): Limited Sora 2 access. The "taste" tier.

ChatGPT Pro ($200/month): Full unrestricted access to Sora 2.

That $200/month price point is wild. It signals that OpenAI views this as professional-grade content creation software, not just a social toy. They're betting that content creators, marketers, and media professionals will pay Netflix-premium prices for access to cutting-edge AI video generation.

And early signs suggest they might be right. Despite the insane price, ChatGPT Pro subscriptions have reportedly been selling well, driven largely by Sora access.

For context: Netflix's most expensive tier is $25/month. Spotify Premium is $11/month. OpenAI is charging 8 to 18 times more than established entertainment subscriptions because they're selling something fundamentally different—not access to content, but the ability to generate content.

It's the difference between renting movies and owning a movie studio. Except the studio is powered by AI and fits in your pocket.
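Those multiples are simple division; a quick sanity check using the figures quoted above:

```python
# Sanity-check the price multiples using the subscription prices quoted above.
PRO_MONTHLY = 200      # ChatGPT Pro ($/month)
NETFLIX_TOP = 25       # Netflix's most expensive tier ($/month)
SPOTIFY_PREMIUM = 11   # Spotify Premium ($/month)

for name, price in [("Netflix", NETFLIX_TOP), ("Spotify", SPOTIFY_PREMIUM)]:
    multiple = PRO_MONTHLY / price
    print(f"ChatGPT Pro costs {multiple:.1f}x {name}")
# ChatGPT Pro costs 8.0x Netflix
# ChatGPT Pro costs 18.2x Spotify
```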

Meta's Response: The AI Arms Race Heats Up

Meta was watching all this very carefully. Within weeks of Sora's launch, they released their own competitor: Vibes, an AI-generated video feed integrated into the Meta AI app.

Vibes uses the same TikTok-style interface—vertical scrolling, algorithmic recommendations, entirely AI-generated content. It's Meta's attempt to compete in the synthetic media space before OpenAI establishes dominance.

But here's the thing: replicating Sora's capabilities is harder than it looks. The integration of advanced physics simulation, synchronized audio generation, and the innovative cameos feature represents years of development work. Meta's throwing their considerable resources at the problem, but they're playing catch-up.

Industry analysts are divided on whether Sora poses a genuine threat to established social platforms. Mizuho analyst Lloyd Walmsley put the probability of Sora becoming a "TikTok-killer" at around 5%—relatively low. But he's talking about near-term disruption. The longer-term implications are harder to dismiss.

What happens when the next generation of users grows up with Sora? When the distinction between "authentic" social media and "synthetic" social media stops mattering? When the question becomes not "did this really happen?" but "is this entertaining?"

We might be watching the early stages of a fundamental platform shift, the kind that comes along once a decade. Instagram did it to Facebook. TikTok did it to Instagram. Maybe AI-native platforms like Sora are about to do it to everyone.

What This Actually Means For Society

Okay, let's zoom out for a second because the implications here go way beyond "neat new app."

The end of visual authenticity. For basically all of human history, seeing something was evidence that it happened. Photography and video strengthened that connection—"I saw it on camera" meant "it's real." That's... over now. Sora and tools like it mean that video is no longer reliable evidence of anything. Every video you see could be entirely synthetic, and you'd have no way to know just by looking.

Democratized content creation. On the flip side, Sora enables anyone to produce professional-quality video content without technical expertise or expensive equipment. This could fundamentally change how we communicate. Why send a text when you can generate a personalized video message? Why write an email when you can create a video presentation?

New forms of creative expression. Artists and creators are already using Sora to generate content that would be impossible or prohibitively expensive to produce traditionally. Fantasy sequences, sci-fi scenarios, experimental visual styles—all suddenly accessible to anyone with an internet connection.

Regulatory and ethical nightmares. Deepfake regulation was already complicated. Now multiply that by a million. How do you regulate when everyone has deepfake technology in their pocket? How do you protect people from having their likeness used maliciously? How do you prevent election misinformation when anyone can generate a realistic video of a politician saying anything?

OpenAI has implemented some safeguards—C2PA metadata for content provenance, visible watermarking, consent mechanisms for cameos. But critics argue these are insufficient. Watermarks can be removed. Metadata can be stripped. And even with consent for cameos, the potential for misuse is enormous.

The Addictive Quality Nobody's Talking About Enough

Here's what bothers me most about Sora, and it's something that's getting lost in the copyright debates and technical discussions: the app is deliberately designed to be addictive in a way that exploits fundamental human psychology.

Social media platforms have spent decades optimizing for engagement. They've gotten very good at it. Infinite scroll, algorithmic recommendations, variable reward schedules, social validation through likes and shares—these are all proven mechanisms for capturing attention and creating compulsive usage patterns.

Sora takes all of that and adds a new dimension: narcissistic gratification. You're not just consuming content or creating content about your real life. You're starring in fantasies. You're the protagonist in an unlimited number of scenarios constrained only by your imagination.

The psychological pull of that is intense. Humans are wired for storytelling, for imagination, for seeing themselves as heroes in narratives. Sora taps directly into that wiring and makes it effortless to indulge.

Early user reports consistently mention the addictive quality. "I couldn't stop using it." "I found myself opening the app constantly." "I spent three hours just generating different versions of myself."

OpenAI's internal testing showed similar patterns—employees using the app so much it became a productivity concern. And remember, these are tech workers who build these tools, who understand the psychological mechanisms at play. If they're getting hooked, what happens with general consumers?

We're only a few weeks into this experiment. We have no idea what the long-term psychological effects will be of spending hours daily inhabiting AI-generated fantasies of ourselves.

Here's Why This Matters More Than You Think

Sora's success represents a fundamental shift in how we think about social media and reality itself.

For the past two decades, social media has been about capturing and sharing authentic moments from our lives. Even when those moments were staged, filtered, and curated, they were still rooted in reality. Instagram photos were photoshopped, sure, but there was still a real person in a real place when the shutter clicked.

Sora breaks that connection completely. There's no underlying reality. It's simulation all the way down.

And people love it.

The app's explosive growth suggests that consumers aren't just accepting this shift—they're actively seeking it out. The ability to generate personalized video content is proving more compelling than the authenticity that earlier social platforms promised.

This has implications for everything:

Politics and misinformation: When everyone can generate realistic video of anyone saying anything, how do we establish truth? How do we prevent the weaponization of synthetic media in elections?

Mental health: What happens to our sense of self when we spend significant time inhabiting AI-generated fantasies? When the person we see on screen—doing incredible things, living exciting lives—is simultaneously us and not us?

Creative industries: If anyone can generate professional-quality video content instantly, what happens to traditional content creators? To production companies? To the entire infrastructure of media production?

Legal frameworks: Our laws around libel, defamation, fraud, and identity theft weren't written for a world where anyone can generate realistic video of anyone else. The legal system is going to have to catch up fast.

The Road From Here

So where does this go?

Short term, Sora continues its explosive growth. OpenAI expands availability beyond North America. Android gets native support. More features get added. The technology gets better, the videos get more realistic, the physics simulation gets more accurate.

Competitors scramble to catch up. Meta's Vibes is just the first. Google, TikTok, Snap—everyone with a social platform is now racing to integrate AI video generation. The ones who move too slowly risk obsolescence.

Medium term, we see the first major scandals. Deepfakes used for fraud. Political misinformation. Celebrity likeness abuse. These will drive regulatory responses—probably a patchwork of state and federal laws in the US, more comprehensive frameworks in Europe. The regulations will be messy and inadequate because technology is moving faster than legislation ever can.

Long term? We're entering what some are calling the "post-truth visual era." The assumption that video evidence proves something actually happened is dying. We'll need new frameworks for establishing truth, new norms for distinguishing synthetic from authentic, new literacies for navigating a world where reality is optional.

Or maybe—and this is the darker possibility—we stop caring about the distinction. Maybe authenticity becomes quaint, a relic of the pre-AI era. Maybe we embrace synthetic media fully, preferring beautiful AI-generated fantasies to messy reality.

Sora's meteoric rise suggests we might be heading in that direction faster than anyone expected.

The memes about Sora have been predictably excellent. My favorite: someone generated a video of themselves winning an Oscar, then captioned it "Finally getting the recognition I deserve."

The comment section was full of people doing the same thing. Hundreds of AI-generated acceptance speeches. Everyone's a winner in the Sora cinematic universe.

But beneath the jokes is something genuinely unsettling. We're watching the democratization of reality manipulation happen in real-time. The technology that intelligence agencies and Hollywood studios had exclusive access to just a few years ago is now available to anyone with a smartphone and an invite code.

OpenAI moved fast and broke things, as Silicon Valley likes to say. They launched a product that raises profound questions about truth, identity, and reality itself, and they're figuring out the answers on the fly while millions of users generate increasingly elaborate fantasies.

The copyright disputes will be settled eventually, probably with rightsholders getting more control and OpenAI paying substantial licensing fees. The regulatory framework will emerge, messy but functional. The technology will improve, the competitors will proliferate, the market will mature.

But we can't un-break the notion of "what's real." That genie's out of the bottle. Every video you see from now on could be entirely synthetic, and you'd have no way to know just by watching.

The question isn't whether this technology exists—it does, it's here, and millions of people are already using it. The question is what we do with it. How we adapt. What norms we establish. What protections we build.

And whether we can resist the seductive pull of inhabiting AI-generated fantasies where we're always the hero, always winning, always perfect.

Because here's the thing about Sora that nobody's quite saying out loud: it's not just a social media app. It's a reality engine. And we've given it to everyone, all at once, without really thinking through what that means.

Five days. One million downloads. Number one on the App Store.

We're all about to find out what happens next. Ready or not.
