DeepSeek has rattled the U.S.-led AI ecosystem with its latest model, shaving hundreds of billions off chip leader Nvidia’s market cap. While the sector’s giants grapple with the fallout, smaller AI companies see an opportunity to scale with the Chinese startup.
Several AI-related startups told CNBC that DeepSeek’s emergence is a “massive” opportunity for them, rather than a threat.
“Developers are extremely keen to replace OpenAI’s expensive and closed models with open source models like DeepSeek R1…” said Andrew Feldman, CEO of artificial intelligence chip startup Cerebras Systems.
The company competes with Nvidia’s graphics processing units and offers cloud-based services through its own computing clusters. Feldman said the release of the R1 model spawned one of Cerebras’ largest-ever spikes in demand for its services.
“R1 shows that [AI market] growth will not be dominated by a single player — hardware and software moats do not exist for open-source models,” Feldman added.
Open source refers to software in which the source code is made freely available on the web for possible modification and redistribution. DeepSeek’s models are open source, unlike those of competitors such as OpenAI.
DeepSeek also claims its R1 reasoning model rivals the best American tech, despite being developed at lower costs and trained without cutting-edge graphics processing units, though industry watchers and rivals have questioned these assertions.
“Like in the PC and internet markets, falling prices help fuel global adoption. The AI customer base is on a similar secular growth path,” Feldman said.
Inference chips
DeepSeek could increase the adoption of new chip technologies by accelerating the AI cycle from the training to the “inference” phase, chip startups and industry experts said.
Inference refers to the act of using and applying AI to make predictions or decisions based on new information, rather than the building or training of the model.
“To put it simply, AI training is about building a tool, or algorithm, while inference is about actually deploying this tool for use in real applications,” said Phelix Lee, an equity analyst at Morningstar with a focus on semiconductors.
While Nvidia holds a dominant position in GPUs used for AI training, many competitors see room for expansion in the “inference” segment, where they promise higher efficiency for lower costs.
AI training is very compute-intensive, but inference can work with less powerful chips that are designed to perform a narrower range of tasks, Lee added.
A number of AI chip startups told CNBC that they were seeing more demand for inference chips and computing as clients adopt and build on DeepSeek’s open source model.
“[DeepSeek] has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models and this can be done at a fraction of the cost,” said Sid Sheth, CEO of AI chip startup d-Matrix.
“With the broad availability of small capable models, they have catalyzed the age of inference,” he told CNBC, adding that the company has recently seen a surge in interest from global customers looking to speed up their inference plans.
Robert Wachen, co-founder and COO of AI chipmaker Etched, said dozens of companies have reached out to the startup since DeepSeek released its reasoning models.
“Companies are [now] shifting their spend from training clusters to inference clusters,” he said.
“DeepSeek-R1 proved that inference-time compute is now the [state-of-the-art] approach for every major model vendor and thinking isn’t cheap – we’ll only need more and more compute capacity to scale these models for millions of users.”
Jevons Paradox
Analysts and industry experts agree that DeepSeek’s accomplishments are a boost for AI adoption and the wider AI chip industry.
“DeepSeek’s performance appears to be based on a series of engineering innovations that significantly lower inference costs while also improving training cost,” according to a report from Bain & Company.
“In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption,” it added.
This pattern reflects Jevons Paradox, a theory in which cost reductions in a new technology drive increased demand.
Financial services and investment firm Wedbush said in a research note last week that it continues to expect the use of AI across enterprise and retail consumers globally to drive demand.
“As the world is going to need more tokens [a unit of data that an AI model processes], Nvidia can’t supply enough chips to everyone, so it gives opportunities for us to sell into the market even more aggressively,” Madra said.