Nvidia has unveiled the next generation of its artificial intelligence chips, marking a bet that the arrival of “reasoning” AI systems such as DeepSeek will spur even greater demand for computing power.
Jensen Huang, Nvidia’s chief executive, reassured investors at the chipmaker’s annual GTC conference on Tuesday that the spending spree on AI infrastructure over the past two years, and appetite for ever-faster AI chips and software, would continue to grow.
Nvidia said its new Vera Rubin AI chips, named after the American astronomer whose observations provided key evidence for dark matter, can be combined into clusters of millions of units, meaning they can be used to train much larger AI models and serve up more sophisticated responses to greater numbers of users.
AI models launched this year by Chinese start-up DeepSeek spurred fears among investors that AI could advance without the need for multibillion-dollar investments in data centres that had turned Nvidia into one of the world’s most valuable companies.
“Almost the entire world got it wrong,” Huang said in his keynote address at the event in San Jose, California. “The computation requirement, the scaling law of AI, is more resilient and in fact hyper-accelerated.”
Huang said purchases of graphics processing units by the four largest US cloud computing providers had surged this year, underlining tech companies’ thirst for more computing power. Nvidia’s most recent AI chip, Blackwell, was first released late last year.
Huang said Rubin would be available in the second half of 2026, followed by an “ultra” version the following year. One Rubin configuration would link 576 individual GPUs, which in effect would act as one chip, Huang said. The current Blackwell chip clusters 72 GPUs in the company’s NVL72 supercomputer.
Rubin will also have advanced memory and networking capacity alongside a new custom-designed central processing unit.
“Basically everything is brand new, except for the chassis,” said Huang, who later told reporters that Nvidia was working at “the limits of physics”.
The Silicon Valley company also unveiled a range of new products, including PC-style workstations that would put its high-end GPUs, normally found in cloud computing facilities, on to the desks of AI researchers and scientists.
Nvidia also revealed a new optical networking system that Huang said would remove a bottleneck to building ever-larger AI data centres.
AI companies such as OpenAI, Anthropic and Elon Musk’s xAI are looking to build facilities housing hundreds of thousands of GPUs under one roof, which they hope will allow them to build more sophisticated AI models.
The optical network “sets us up to be able to scale up to these multi-hundred thousand GPUs and multimillion GPUs”, Huang said.
Nvidia’s current Blackwell chip will be upgraded in the second half of this year. The company also showed off its new Dynamo “operating system” for AI data centres, which Huang said would significantly boost the performance of Nvidia’s chips.
Nvidia shares were down about 3 per cent on Tuesday following Huang’s keynote speech, which saw him joined on stage by Blue, a small Star Wars-inspired robot. The cameo was a nod to a partnership with Google DeepMind and Disney Research to develop an open-source physics engine for robotics simulation, dubbed “Newton”.
“The time has come for robots,” Huang said. “Everybody pay attention to this space: this could very well likely be the largest industry of all.”
Nvidia shed almost $600bn of value in a single day when panic over DeepSeek’s breakthrough gripped investors in January, and its shares are down 15 per cent since the start of the year. Concerns over the potential impact of US tariffs on the global economy have dragged down stocks across the technology sector.
While Nvidia’s staggering growth has slowed, revenue was still up almost 80 per cent year on year in its fourth quarter, which ended in January.