Tech giants form industry group to help develop next-generation AI chip components

Intel, Google, Microsoft, Meta and other tech heavyweights are establishing a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of the components that link AI accelerator chips in data centers.

Announced on Thursday, the UALink Promoter Group, which also counts AMD (but not Arm), Hewlett Packard Enterprise, Broadcom and Cisco among its members, is proposing a new industry standard for connecting the AI accelerator chips found in an ever-increasing number of servers. Broadly speaking, AI accelerators are chips, ranging from GPUs to custom-designed solutions, that speed up the training, fine-tuning and running of AI models.

“The industry needs an open standard that can advance very quickly, in an open [format] that allows multiple companies to add value to the overall ecosystem,” Forrest Norrod, general manager of data center solutions at AMD, told reporters in a briefing Wednesday. “The industry needs a standard that allows innovation to advance at a rapid pace, unfettered by any single company.”

The first version of the proposed standard, UALink 1.0, will connect up to 1,024 AI accelerators (GPUs only) into a single computing “pod.” (The group defines a pod as one or several racks in a server.) UALink 1.0, based on “open standards” including AMD’s Infinity Fabric, will allow direct loads and stores between the memory attached to AI accelerators, and will generally boost speed and reduce data transfer latency compared with existing interconnect specifications, according to the UALink Promoter Group.

Image credits: UALink Promoter Group

The group says it will create a consortium, the UALink Consortium, in the third quarter to oversee the development of the UALink specification going forward. UALink 1.0 will be available around the same time for companies that join the consortium, with an updated higher bandwidth specification, UALink 1.1, arriving in Q4 2024.

The first UALink products will be launched “in the next few years,” Norrod said.

Conspicuously absent from the list of group members is Nvidia, which is by far the largest producer of AI accelerators, with an estimated 80% to 95% of the market. Nvidia declined to comment for this story. But it’s not hard to guess why the chipmaker isn’t enthusiastically throwing its support behind UALink.

For one, Nvidia offers its own proprietary interconnect technology, NVLink, for linking GPUs within a data center server. The company is likely none too keen to support a specification based on rival technologies.

Then there’s the fact that Nvidia operates from a position of enormous strength and influence.

In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales, which include sales of its artificial intelligence chips, increased more than 400% from the prior-year quarter. If Nvidia continues on its current trajectory, it will overtake Apple as the world’s second most valuable company sometime this year.

So, simply put, Nvidia doesn’t have to play if it doesn’t want to.

As for Amazon Web Services (AWS), the only public cloud giant not contributing to UALink, it could be in “wait and see” mode as it scales back (no pun intended) its various in-house accelerator hardware efforts. It could also be that AWS, with its stranglehold on the cloud services market, doesn’t see much strategic sense in opposing Nvidia, which supplies much of the GPUs it serves to its customers.

AWS did not respond to TechCrunch’s request for comment.

In fact, the biggest beneficiaries of UALink, besides AMD and Intel, appear to be Microsoft, Meta and Google, which together have spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing AI models. All are looking to wean themselves off a vendor they see as worryingly dominant in the AI hardware ecosystem.

Google has custom chips for training and running AI models, its TPUs, as well as Axion, a custom CPU. Amazon has several families of AI chips under its belt. Microsoft jumped into the fray last year with Maia and Cobalt. And Meta is refining its own line of accelerators.

Meanwhile, Microsoft and its close collaborator, OpenAI, reportedly plan to spend at least $100 billion on a supercomputer for training AI models that will be outfitted with future versions of Cobalt and Maia chips. Those chips will need something to link them, and perhaps it will be UALink.
