OpenAI will design its own AI accelerators and chips in a deal with Broadcom that will start in the second half of 2026 and run through 2029.
According to the companies, the deal covers 10 gigawatts of custom AI accelerators. OpenAI has also forged partnerships with AMD and Nvidia as it builds out its infrastructure.
OpenAI and Broadcom will co-develop systems that pair custom accelerators with Ethernet networking, deployed in racks built from Broadcom components. The rationale behind the partnership is that by designing its own accelerators, OpenAI can better optimize its entire stack. In doing so, OpenAI is following the hyperscale cloud playbook for AI: big doses of Nvidia alongside custom silicon such as Google’s TPUs and AWS’ Trainium.
The news isn’t a big surprise: Broadcom noted the strength of its pipeline on its most recent earnings call, and OpenAI was widely assumed to be the key customer behind it.
OpenAI CEO Sam Altman said “developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI.”
Broadcom said the OpenAI racks will incorporate its portfolio of Ethernet, PCIe and optical connectivity as well as the custom accelerators.