OpenAI's Ladder Pull
I had one of those shower realizations today where several things that didn’t make sense on their own all clicked into place, somewhere between the lather and rinse steps, into what in hindsight feels like an obvious conclusion. And while I’m sure others have made these same connections, I haven’t heard anyone in the ongoing commentary about the RAMpocalypse actually say any of this out loud.
The fundamental realization is this: OpenAI and others contractually tying up the entire production capacity of RAM and storage chips for the next two years isn’t about future growth, or about actually needing the thousands of servers and GPUs those chips represent. It’s really about restraining competition now. It’s about keeping anyone else from getting enough hardware to start competing with the existing frontier model companies.
Cheap RAM, storage, and especially GPUs allowed OpenAI, Anthropic, and the other top-tier players to scale up quickly and produce the impressive models they have used to capture market share and, most importantly for OpenAI, fundraise on. The last thing they need is a new player entering the space and acquiring enough compute to start competing with them. I’m sure Sam wishes that Anthropic hadn’t been able to get itself into the market so quickly once that group left OpenAI. By buying up all the available chips, he ensures that no one else will be able to do the same at any kind of reasonable startup cost.
The idea that they genuinely needed this many data centers, this much power, and this many servers and GPUs never made sense. I’ve heard so many people point out the problems with every step of this plan, from construction, to providing power, to actually getting the GPUs and servers racked and online. None of it worked in any real-world economic setting. These companies already have more GPUs on hand than they can plug in and use.
But if you look at it in the context of locking out competition, it all becomes clear. Why sit on piles of existing GPUs that will be 2+ years old by the time you can finally use them? So that no one else buys them and competes with you right now. Why sign contracts promising to buy every DIMM that every manufacturer will produce for the foreseeable future? So that those chips don’t end up in servers running in someone else’s data center, training the next big model that might dethrone ChatGPT.
They are using their market position, and contracts they have no intention of actually honoring, to tie up resources, raise costs, and freeze out anyone with a good idea but no hardware on hand to execute it. They don’t care that this also raises costs for every other business, or prices consumers out of computing and especially gaming. That’s all just necessary collateral damage.
Eventually these chips will be produced, and OpenAI won’t actually need ALL of them, or have the money on hand to pay for them, so they will come onto the market. But by then OpenAI will have banked another 12 to 24 months of progress without having to worry about any new competitors entering the market.