At the end of Nvidia CEO Jensen Huang’s unscripted two-hour keynote on Tuesday, his message was clear: Get the fastest chips that the company makes.
Speaking at Nvidia’s GTC conference, Huang said that clients’ questions about the cost and return on investment of the company’s graphics processors, or GPUs, will go away with faster chips that can be digitally sliced and used to serve artificial intelligence to millions of people at the same time.
“Over the next 10 years, because we could see improving performance so dramatically, speed is the best cost-reduction system,” Huang said in a meeting with journalists shortly after his GTC keynote.
The company dedicated 10 minutes during Huang’s speech to explain the economics of faster chips for cloud providers, complete with Huang doing envelope math out loud on each chip’s cost-per-token, a measure of how much it costs to create one unit of AI output.
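The envelope math Huang performed can be sketched roughly as follows. Note that all figures here are illustrative assumptions for the sake of the arithmetic, not numbers Nvidia disclosed: the basic idea is to amortize a system’s purchase price over its useful life and divide by the tokens it can serve in that time.

```python
def cost_per_token(system_cost_usd: float, lifetime_years: float,
                   tokens_per_second: float) -> float:
    """Back-of-envelope cost per token: amortize the system's price
    over its lifetime token throughput."""
    lifetime_seconds = lifetime_years * 365 * 24 * 3600
    total_tokens = tokens_per_second * lifetime_seconds
    return system_cost_usd / total_tokens

# Hypothetical example: a $3M rack amortized over 4 years,
# serving 500,000 tokens per second.
per_token = cost_per_token(3_000_000, 4, 500_000)
print(f"${per_token * 1_000_000:.4f} per million tokens")
```

The arithmetic makes Huang’s point concrete: doubling throughput at the same system price halves the cost per token, which is why he frames speed itself as a cost-reduction mechanism.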
Huang told reporters that he presented the math because that’s what’s on the mind of hyperscale cloud and AI companies.
The company’s Blackwell Ultra systems, coming out this year, could provide data centers 50 times more revenue than its Hopper systems because they are so much faster at serving AI to multiple users, Nvidia says.
Investors worry about whether the four major cloud providers — Microsoft, Google, Amazon and Oracle — could slow down their torrid pace of capital expenditures centered around pricey AI chips. Nvidia doesn’t reveal prices for its AI chips, but analysts say Blackwell can cost $40,000 per GPU.
Already, the four largest cloud providers have bought 3.6 million Blackwell GPUs, under Nvidia’s new convention that counts each Blackwell as 2 GPUs. That’s up from 1.3 million Hopper GPUs, Blackwell’s predecessor, Nvidia said Tuesday.
The company decided to announce its roadmap for 2027’s Rubin Next and 2028’s Feynman AI chips, Huang said, because cloud customers are already planning expensive data centers and want to know the broad strokes of Nvidia’s plans.
“We know right now, as we speak, in a couple of years, several hundred billion dollars of AI infrastructure” will be built, Huang said. “You’ve got the budget approved. You got the power approved. You got the land.”
Huang dismissed the notion that custom chips from cloud providers could challenge Nvidia’s GPUs, arguing they’re not flexible enough for fast-moving AI algorithms. He also expressed doubt that many of the recently announced custom AI chips, known within the industry as ASICs, would make it to market.
“A lot of ASICs get canceled,” Huang said. “The ASIC still has to be better than the best.”
Huang said his focus is on making sure those big projects use the latest and greatest Nvidia systems.
“So the question is, what do you want for several hundred billion dollars?” Huang said.