The RAM Crisis of 2026: How OpenAI Just Broke the Entire Memory Market
Listen, I need you to understand something: RAM prices have increased by as much as 600% in under a year. Not 60%. Six hundred percent. The stick of DDR5 memory that cost you $95 last summer? It's $560 now. And it's going higher.
This isn't a chip shortage. This isn't crypto miners buying up GPUs. This is something we've genuinely never seen before—a single company securing 40% of the world's memory production capacity, and the entire consumer technology market getting rationed on what's left.
Welcome to the RAM crisis of 2026. Let me explain what the hell is happening.
The OpenAI Deal That Ate the World
In October 2025, Samsung and SK Hynix—two of the three companies that basically make all the world's memory—signed supply agreements with OpenAI for something called the Stargate project. You might have heard about Stargate: it's this $500 billion AI infrastructure initiative backed by OpenAI, Oracle, and SoftBank to build massive data centers globally.
Here's the kicker: the deal allocates 900,000 DRAM wafers per month to OpenAI.
If your eyes just glazed over at "wafers per month," let me put this in perspective. Global DRAM production capacity is about 2.25 million wafer starts per month total. OpenAI just secured 40% of all memory production on Earth. For one project.
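The arithmetic behind that 40% figure is worth seeing spelled out, using the numbers above (900,000 wafers/month allocated against roughly 2.25 million global wafer starts/month):

```python
openai_wafers = 900_000        # DRAM wafers/month reportedly allocated to OpenAI
global_capacity = 2_250_000    # approximate global DRAM wafer starts/month

share = openai_wafers / global_capacity
print(f"{share:.0%}")  # 40%
```

Two-fifths of the planet's DRAM output, spoken for by a single customer.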
This isn't like Amazon buying a lot of servers. This is OpenAI buying the factories that make the components that go into servers before anyone else can get them. They're not buying finished RAM sticks—they're buying undiced wafers, the raw semiconductor sheets before they're sliced into individual chips. This is upstream intervention at a level that removes production capacity from the broader market entirely.
And the market? The market lost its mind.
The Price Explosion: A Timeline
Let's walk through what happened, because the precision of this is remarkable:
Mid-2025: Things are fine. A 32GB DDR5-6000 kit costs under $95. Enterprise 64GB modules are running about $255. Life is good.
June 2025: Manufacturers announce they're phasing out DDR4 production to focus on DDR5 and HBM (high-bandwidth memory for AI chips). Prices start climbing slightly.
September 2025: The breaking point. Prices accelerate visibly across all markets.
October 2025: The OpenAI-Samsung-SK Hynix deals go public. Simultaneously—and I mean on the same week—prices inflect sharply upward in the US, UK, Europe, and Australia. Different currencies, different tax systems, same pattern. This is when you know it's real.
Q4 2025: Memory prices surge 40-50% in a single quarter. Samsung announces contract price increases exceeding 100%. That $95 RAM kit? Now $184. The $255 enterprise module? Now $450.
Q1 2026 (where we are now): Additional 40-50% increases projected. Some high-capacity modules have climbed over 600% from early 2025 levels. Enterprise memory approaching $700 per module, with projections hitting $1,000 by year-end.
The synchronized global nature of the October inflection point tells you everything. This isn't speculation. This isn't regional supply chain disruption. This is a genuine structural shock to global semiconductor supply.
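A quick back-of-envelope check, using only the article's figures: two consecutive quarters of 40-50% increases compound to roughly 2x-2.25x, which is exactly the territory the $95 kit has landed in:

```python
base = 95.0  # 32GB DDR5-6000 kit, mid-2025 (USD)

# Low and high ends of the reported 40-50% quarterly increases
for q4, q1 in [(0.40, 0.40), (0.50, 0.50)]:
    price = base * (1 + q4) * (1 + q1)  # increases compound, not add
    print(f"Q4 +{q4:.0%}, Q1 +{q1:.0%} -> ${price:.0f}")
# Q4 +40%, Q1 +40% -> $186
# Q4 +50%, Q1 +50% -> $214
```

Note the compounding: "40-50% twice" isn't 80-100%, it's 96-125%.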
But Wait—It's Not Just OpenAI
Here's where it gets more complex. OpenAI's deal is the biggest and most visible, but they're not acting in isolation. They're the loudest player in an orchestra of hyperscalers all screaming for the same resource.
Microsoft, Google, and Amazon are also demanding unprecedented memory volumes for AI infrastructure. Meta's building massive training clusters. Every tech company with a large language model is in an arms race for compute. And compute needs memory—lots of it.
The manufacturers saw this coming and made a bet. They're deliberately reallocating production capacity away from commodity DRAM (the stuff in your laptop) toward high-bandwidth memory for AI accelerators. HBM uses significantly larger dies, commands premium pricing, and sells to customers with effectively unlimited budgets and multi-year contracts.
SK Hynix reported in October that its HBM, DRAM, and NAND capacity is "essentially sold out" for all of 2026. Micron—one of the big three memory manufacturers—has exited the consumer memory market entirely to focus exclusively on enterprise and AI customers.
Think about what that means: a major memory manufacturer looked at the market and said, "We're not even going to bother competing for consumer business anymore. The margins in AI are so much better that it's not worth our time."
The Cascade Effect: Who Gets Screwed
When 40% of supply disappears into one project and manufacturers deprioritize consumer products, you get a cascade. Let's break down who's feeling the pain:
Your Next Computer
PC manufacturers are staring down 15-20% price increases across product lines in H2 2026. Framework Laptop already announced 50% increases on RAM upgrade options. Dell's hiking commercial PC prices 10-30% starting December.
Building a gaming PC? That 32GB of DDR5 you need for a modern system now costs $400-500 instead of $150. And this hits at exactly the wrong time—Windows 10 just reached end of support, pushing everyone toward Windows 11 and its higher baseline specs. Copilot+ PCs require a minimum of 16GB, and often 32GB for good performance.
The thing is, people will pay it. They'll grumble, but they'll pay. You need a computer.
Your Next Phone
Memory represents 10-20% of smartphone build costs now. IDC projects potential smartphone market contraction of 5-8% in 2026 under moderate shortage scenarios.
Budget Android manufacturers—Xiaomi, Oppo, Vivo—are getting crushed because they operate on thin margins. They'll pass costs directly to consumers. Premium manufacturers like Apple and Samsung have more cushion, but even they're feeling pressure.
Apple can probably absorb this better than most. They've got vertical integration, massive purchasing power, and premium pricing. But make no mistake: your next iPhone is going to be more expensive, and memory capacity won't increase as much as it would have in a normal year.
Enterprise and Cloud
This is where it gets really expensive. Those 64GB DDR5 RDIMM modules—the workhorses of modern data centers—are projected to hit $1,000 by end of 2026. That's approaching $1.95 per gigabit, nearly double the previous 2018 peak of $1.00 per gigabit.
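The per-gigabit conversion is straightforward once you remember a 64GB module holds 512 gigabits (64 bytes x 8 bits):

```python
module_price = 1000.0     # projected 64GB DDR5 RDIMM price, end of 2026 (USD)
capacity_gigabytes = 64
gigabits = capacity_gigabytes * 8  # 512 Gb per module

price_per_gigabit = module_price / gigabits
print(f"${price_per_gigabit:.2f} per gigabit")  # $1.95 per gigabit
```

Against the 2018 peak of $1.00/Gb, that's nearly double the worst pricing the industry had ever seen.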
Cloud service providers are responding by aggressively stockpiling inventory. AWS, Azure, Google Cloud—they're all panic-buying memory before prices go higher. Which, of course, makes prices go higher.
What does this mean for you? Cloud service prices are going to increase. Maybe not immediately—these companies have existing inventory and will try to maintain competitive pricing—but it's coming. Those AWS bills you're paying? Plan for them to grow.
The "Hyper-Bull" Market: Worse Than 2018
Industry analysts have a specific term for what's happening: a "Hyper-Bull" phase. According to Counterpoint Research: "The market has entered a 'Hyper-Bull' phase, with current conditions surpassing the historic peak of 2018."
If you were building PCs in 2018, you remember. That was brutal. RAM prices were insane. Everyone complained.
This is worse.
The 2018 shortage was driven by cryptocurrency mining, smartphone demand growth, and some supply constraints. But fundamentally, it was a cyclical imbalance that corrected within 18 months.
This is structural. This is manufacturers deliberately realigning their entire production strategy around AI infrastructure. This is demand that isn't going away when crypto crashes or smartphone sales plateau.
Phison's CEO warned that the NAND flash shortage alone could persist "for the next ten years." A decade-long memory supercycle. Let that sink in.
Why Can't They Just Make More?
You might be thinking: okay, prices are high, demand is high, so manufacturers will just build more fabs and make more memory, right? Supply and demand?
Here's the problem: building semiconductor fabrication capacity takes years and costs tens of billions of dollars.
A new leading-edge fab costs $15-20 billion and requires 3-4 years to construct and ramp to full production. You need ultra-pure water systems, cleanrooms with air quality thousands of times better than hospital operating rooms, specialized equipment from a handful of suppliers (often with year-long lead times), and a workforce with highly specialized skills.
TSMC, Samsung, and Intel have all announced capacity expansions. They're breaking ground on new fabs. But the earliest meaningful capacity comes online is 2027-2028. And even then, much of that new capacity is already pre-allocated to the same hyperscalers driving current demand.
The supply response is coming. It's just coming slowly, and by the time it arrives, AI infrastructure demand may have grown even more.
What This Really Means: The AI Tax
Let's zoom out. What we're watching is the emergence of what I'd call the AI Tax—the cost of AI development being distributed across the entire technology supply chain.
OpenAI, Microsoft, Google, Meta—these companies are making massive bets that AI will generate sufficient returns to justify this level of infrastructure investment. They're not just buying memory; they're buying the option to build transformative AI systems that could be worth trillions.
But here's the thing: we don't actually know if those bets will pay off. We don't know if AI capabilities will advance fast enough or create enough economic value to justify $500 billion infrastructure projects. We don't know if consumers will pay enough for AI features to recoup these investments.
What we do know is that everyone else—consumers, businesses, smaller tech companies—is paying the price right now through higher costs for basic computing resources.
This is a form of resource allocation by market power. OpenAI and the hyperscalers have effectively said: "AI infrastructure is more important than consumer electronics, and we have the capital to make that reality."
And they're right, in a sense. If you believe AI is going to be the dominant technology of the next decade, then yes, it makes sense to prioritize AI infrastructure over consumer RAM. The problem is that decision is being made by a handful of companies with the deepest pockets, not by any democratic or deliberative process.
The Broader Economic Implications
This has ripple effects beyond just expensive RAM:
Innovation Tax: Smaller companies and startups building AI products now face massively increased infrastructure costs. Y Combinator companies trying to compete with OpenAI aren't just competing on model quality—they're competing for access to scarce compute resources. This entrenches existing players.
Digital Divide Amplification: If computers and phones become significantly more expensive, that creates real barriers to access. The people most affected by 20% price increases on budget Android phones aren't tech workers—they're people for whom that price difference determines whether they can afford a smartphone at all.
Cloud Cost Pressure: As cloud providers face higher memory costs, they'll pass those costs along. This affects every company running infrastructure in the cloud, which is most companies. That cost pressure feeds into consumer prices across the economy.
Geopolitical Leverage: Memory production is concentrated in South Korea and Taiwan. The fact that two countries control global memory supply becomes a significant geopolitical issue when that memory is essential for AI development. China's already investing heavily in domestic semiconductor production partly because of this vulnerability.
What Happens Next: Three Scenarios
Scenario 1: The Soft Landing (Unlikely)
New fab capacity comes online in 2027-2028 faster than expected. AI infrastructure demand moderates as companies realize returns aren't materializing fast enough. Prices stabilize in late 2026 and decline gradually through 2027. Consumer market recovers by 2028.
Probability: 20%
Scenario 2: The Extended Squeeze (Most Likely)
Current dynamics persist through 2026 and into 2027. Prices remain elevated but stabilize rather than continuing to climb exponentially. New capacity arrives in 2027-2028 but is largely pre-allocated to existing hyperscaler contracts. Consumer market experiences persistent supply constraints with prices 2-3x pre-crisis levels becoming the new normal. Genuine relief doesn't arrive until 2028-2029.
Probability: 60%
Scenario 3: The Supercycle (Plausible)
AI demand continues accelerating faster than supply can respond. New applications (AI agents, real-time multimodal models, etc.) require even more memory-intensive infrastructure. The "decade-long supercycle" predictions prove accurate. Memory remains supply-constrained through the late 2020s. This fundamentally reshapes the consumer electronics market, with device upgrade cycles extending and lower-memory devices becoming much more common.
Probability: 20%
What You Should Actually Do
If you're a consumer:
Buy now if you're planning a PC build in the next year. Prices are going up, not down, through at least mid-2026.
Don't expect meaningful Black Friday/Cyber Monday deals on memory. There's no inventory glut to discount.
Consider buying higher capacity than you think you need. Upgrading later will be even more expensive.
If you're a business:
Review your cloud infrastructure costs and optimize aggressively. Those costs are going up.
Consider multi-year reserved instance pricing if you're on AWS/Azure/GCP. Lock in current pricing before increases hit.
If you run on-premises infrastructure, stockpile critical components now.
If you're building products:
Design for lower memory footprints. The era of cheap, abundant RAM is over for a while.
Consider whether your product really needs AI features if they require cloud infrastructure. The unit economics may not work.
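To make the first point concrete, here's a minimal stdlib-Python sketch of what a single data-type choice buys you; the array size is illustrative, but the 2x ratio is not:

```python
from array import array

n = 1_000_000  # illustrative element count

# 'd' = C double (8 bytes/element), 'f' = C float (4 bytes/element)
doubles = array('d', bytes(8 * n))
floats = array('f', bytes(4 * n))

print(doubles.itemsize * len(doubles))  # 8000000 bytes
print(floats.itemsize * len(floats))    # 4000000 bytes
```

Halving precision where full precision was never needed—telemetry, counters, ML features—halves the RAM bill. Cheap RAM used to make that optimization not worth the effort; at current prices, it is.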
The Uncomfortable Truth
Here's what really keeps me up at night about this: we're watching the technology industry make a massive, coordinated bet on AI—with other people's money and resources.
OpenAI gets 40% of global memory production. Your laptop costs 50% more. Are those two things connected? Absolutely. Did you get a vote on that trade-off? Not really.
The standard response is: "That's just markets working." And sure, it is. Companies with capital are bidding up prices for scarce resources. Economics 101.
But there's something qualitatively different about a handful of companies controlling access to fundamental computing resources. Memory isn't like luxury goods where high prices just mean some people don't get the fancy version. Memory is infrastructure. It's in everything from servers to smartphones to medical devices.
When infrastructure becomes scarce, who gets access matters. Right now, the answer is: whoever can pay the most. And the people who can pay the most are making one specific bet about the future—that AI is valuable enough to justify any cost.
Maybe they're right. Maybe in five years we'll look back and say, "Of course it made sense to prioritize AI infrastructure. Look at all the value it created."
Or maybe we'll look back and realize we rationed basic computing resources to pursue a technology whose returns didn't justify its costs, while making everyday technology significantly more expensive for everyone else.
The thing is, we're going to find out. We don't have a choice. The bet's already been made.
Links and Adjacent Thoughts
The parallel to the 1970s oil shocks is striking: a critical industrial input suddenly becomes scarce due to geopolitical and market dynamics, price increases cascade throughout the economy, and talk of "energy independence" gets replaced by talk of "semiconductor independence"
There's a cynical read here where this is actually great for Samsung and SK Hynix—they get to sell to OpenAI at volume and watch consumer prices triple, maximizing margins across the board
The Framework Laptop price increases are particularly notable because Framework is specifically designed to be modular and upgradeable. If even they're saying "we can't absorb these costs," that tells you how severe this is
Curious how this intersects with China's semiconductor ambitions. They're locked out of cutting-edge process nodes but are investing heavily in memory production. Does this crisis accelerate their domestic capacity buildout?
Wild that Micron just... left the consumer market. Imagine if Ford said "we're not making sedans anymore, only trucks for commercial fleets." That's the equivalent.
The "undiced wafers" detail matters more than it seems—OpenAI isn't even waiting for finished products, they're taking raw fab output. That's serious.
Remember when everyone was worried about crypto miners making GPUs expensive? Turns out the real threat was AI companies buying up the entire memory supply chain.
If you're doing system architecture work right now, you're probably rethinking everything about memory utilization. The era of "just throw more RAM at it" is dead.
There's a scenario where this drives genuine innovation in memory-efficient computing. Necessity is the mother of invention, etc. But that's a long-term benefit that doesn't help you afford a laptop in 2026.
The crypto-to-AI pipeline is real: both create artificial scarcity (crypto = compute for mining, AI = compute for training), both benefit early movers with capital, both externalize costs to regular users. Different mechanisms, similar dynamics.
The bottom line: We're living through a fundamental restructuring of semiconductor supply chains in real time. The AI boom isn't just changing software—it's changing the physical infrastructure of computing in ways that will affect every device you buy for years to come.
The prices you're seeing aren't a bubble. They're the new equilibrium, at least until 2027.
Plan accordingly.

