OpenAI has announced a long-term partnership with Amazon Web Services (AWS): a seven-year, $38 billion deal for cloud and AI infrastructure. The agreement gives OpenAI access to hundreds of thousands of Nvidia GPUs across AWS regions and availability zones, with deployments targeted to reach full scale by late 2026. Official post: AWS and OpenAI partnership.
The new capacity will power everything from ChatGPT’s live interactions and enterprise workloads to training and evaluation of upcoming foundation and reasoning models. The contract includes flexible scaling, letting OpenAI burst capacity during peak demand while tying spend to utilization and performance milestones.
Why it matters: With demand for AI compute exploding, OpenAI is locking in multi-vendor capacity to reduce supply risk and improve bargaining power. The $38 billion, seven-year AWS commitment signals a bet that diversified, hyperscale infrastructure will be essential to shipping safer, faster, and more capable models without single-cloud dependence. It also underscores that cloud providers now compete not only on GPUs but on end-to-end AI platforms able to keep pace with OpenAI’s aggressive product roadmap.