Nvidia CEO Jensen Huang introduces new products as he delivers the keynote address at the GTC AI Conference in San Jose, California, on March 18, 2025.
Josh Edelson | AFP | Getty Images
At the end of Nvidia CEO Jensen Huang’s unscripted two-hour keynote on Tuesday, his message was clear: Get the fastest chips that the company makes.
Speaking at Nvidia’s GTC conference, Huang said questions customers have about the cost and return on investment of the company’s graphics processors, or GPUs, will go away with faster chips that can be digitally sliced and used to serve artificial intelligence to millions of people at the same time.
“Over the next 10 years, because we could see improving performance so dramatically, speed is the best cost-reduction system,” Huang said in a meeting with journalists shortly after his GTC keynote.
The company dedicated 10 minutes during Huang’s speech to explaining the economics of faster chips for cloud providers, complete with Huang doing envelope math out loud on each chip’s cost per token, a measure of how much it costs to generate one unit of AI output.
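To make that metric concrete, here is a minimal back-of-envelope sketch of the kind of cost-per-token math Huang was describing. Aside from the $40,000 chip price analysts cite later in this story, every number below (useful life, power cost, throughput) is a hypothetical placeholder, not a figure from the keynote:

```python
# Illustrative cost-per-token calculation.
# Only the chip price comes from analyst estimates cited in this
# article; all other numbers are made-up placeholders.

chip_cost_usd = 40_000           # analyst-estimated Blackwell GPU price
useful_life_years = 4            # assumed depreciation window
power_cost_per_year_usd = 3_000  # assumed electricity + cooling per GPU
tokens_per_second = 10_000       # assumed serving throughput per GPU

seconds_per_year = 365 * 24 * 3600

total_cost = chip_cost_usd + useful_life_years * power_cost_per_year_usd
total_tokens = tokens_per_second * seconds_per_year * useful_life_years

cost_per_million_tokens = total_cost / total_tokens * 1_000_000
print(f"Cost per million tokens: ${cost_per_million_tokens:.4f}")

# The point of the keynote math: doubling tokens_per_second halves the
# cost per token for the same hardware spend, so a faster chip can be
# the cheaper chip even at a higher sticker price.
```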
Huang told reporters that he presented the math because that is what is on the minds of hyperscale cloud and AI companies.
The company’s Blackwell Ultra systems, coming out this year, could provide data centers 50 times more revenue than its Hopper systems because they are so much faster at serving AI to multiple users, Nvidia says.
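The revenue side of that claim is the mirror image of the cost math: at a fixed price per token, the revenue a data center can generate scales with how many tokens per second its chips can serve. A short sketch, again with invented numbers rather than Nvidia’s, assuming continuous utilization:

```python
# Hypothetical illustration of throughput-driven revenue; the price
# and throughput figures are assumptions, not Nvidia's data.

price_per_million_tokens_usd = 2.00  # assumed price charged to customers

def annual_revenue(tokens_per_second: float) -> float:
    """Revenue from serving tokens continuously for one year."""
    tokens_per_year = tokens_per_second * 365 * 24 * 3600
    return tokens_per_year / 1_000_000 * price_per_million_tokens_usd

baseline_tps = 1_000        # assumed older-system throughput
faster_tps = 50 * baseline_tps  # a system serving tokens 50x faster

print(annual_revenue(baseline_tps))  # baseline annual revenue
print(annual_revenue(faster_tps))    # exactly 50x the baseline
```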
Investors worry about whether the four major cloud providers — Microsoft, Google, Amazon and Oracle — could slow down their torrid pace of capital expenditures centered around pricey AI chips. Nvidia does not reveal prices for its AI chips, but analysts say Blackwell can sell for $40,000 per GPU.
Already, the four largest cloud providers have bought 3.6 million Blackwell GPUs, under Nvidia’s new convention that counts each dual-die Blackwell package as two GPUs, or about 1.8 million physical chips. That is up from 1.3 million GPUs of Blackwell’s predecessor, Hopper, Nvidia said Tuesday.
The company decided to announce its roadmap for 2027’s Rubin Next and 2028’s Feynman AI chips, Huang said, because cloud customers are already planning expensive data centers and want to know the broad strokes of Nvidia’s plans.
“We know right now, as we speak, in a couple of years, several hundred billion dollars of AI infrastructure” will be built, Huang said. “You’ve got the budget approved. You got the power approved. You got the land.”
Huang dismissed the notion that custom chips from cloud providers could challenge Nvidia’s GPUs, arguing they’re not flexible enough for fast-moving AI algorithms. He also expressed doubt that many of the recently announced custom AI chips, known within the industry as ASICs, would make it to market.
“A lot of ASICs get canceled,” Huang said. “The ASIC still has to be better than the best.”
Huang said his focus is on making sure those big projects use the latest and greatest Nvidia systems.
“So the question is, what do you want for several $100 billion?” Huang said.
WATCH: CNBC’s full interview with Nvidia CEO Jensen Huang
