Nvidia to report earnings amid infrastructure spending, DeepSeek concerns

Nvidia is scheduled to report fourth-quarter financial results on Wednesday after the bell.

It’s expected to put the finishing touches on one of the most remarkable years from a large company ever. Analysts polled by FactSet expect $38 billion in sales for the quarter ended in January, which would be a 72% increase on an annual basis.

The January quarter will cap off the second straight fiscal year in which Nvidia’s sales more than doubled. It’s a breathtaking streak driven by the fact that Nvidia’s data center graphics processing units, or GPUs, are essential hardware for building and deploying artificial intelligence services like OpenAI’s ChatGPT. In the past two years, Nvidia stock has risen 478%, making it at times the most valuable U.S. company, with a market cap over $3 trillion.

But Nvidia’s stock has slowed in recent months as investors question where the chip company can go from here. 

It’s trading at the same price as it did last October, and investors are wary of any signs that Nvidia’s most important customers might be tightening their belts after years of big capital expenditures. This is particularly concerning in the wake of recent breakthroughs in AI out of China.

Much of Nvidia’s sales go to a handful of companies building massive server farms, usually to rent out to other companies. These cloud companies are typically called “hyperscalers.” Last February, Nvidia said a single customer accounted for 19% of its total revenue in fiscal 2024.

Morgan Stanley analysts estimated this month that Microsoft will account for nearly 35% of spending in 2025 on Blackwell, Nvidia’s latest AI chip. Google is at 32.2%, Oracle at 7.4% and Amazon at 6.2%.

This is why any sign that Microsoft or its rivals might pull back spending plans can shake Nvidia stock.

Last week, TD Cowen analysts said that they’d learned that Microsoft had canceled leases with private data center operators, slowed its process of negotiating to enter into new leases and adjusted plans to spend on international data centers in favor of U.S. facilities.

The report raised fears about the sustainability of AI infrastructure growth. That could mean less demand for Nvidia’s chips. TD Cowen’s Michael Elias said his team’s finding points to “a potential oversupply position” for Microsoft. Shares of Nvidia fell 4% on Friday.

Microsoft pushed back Monday, saying it still planned to spend $80 billion on infrastructure in 2025.

“While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions. This allows us to invest and allocate resources to growth areas for our future,” a spokesperson told CNBC.

Over the last month, most of Nvidia’s key customers touted large investments. Alphabet is targeting $75 billion in capital expenditures this year, Meta will spend as much as $65 billion and Amazon is aiming to spend $100 billion.

Analysts say about half of AI infrastructure capital expenditures ends up with Nvidia. Many hyperscalers dabble in AMD’s GPUs and are developing their own AI chips to lessen their dependence on Nvidia, but the company holds the bulk of the market for cutting-edge AI chips.

So far, these chips have been used primarily to train new cutting-edge AI models, a process that can cost hundreds of millions of dollars. After the AI is developed by companies like OpenAI, Google and Anthropic, warehouses full of Nvidia GPUs are required to serve those models to customers. That’s why Nvidia projects its revenue to continue growing.

Another challenge for Nvidia is last month’s emergence of Chinese startup DeepSeek, which released an efficient and “distilled” AI model. It performed well enough to suggest that billions of dollars of Nvidia GPUs aren’t needed to train and use cutting-edge AI. That temporarily tanked Nvidia’s stock, causing the company to lose almost $600 billion in market cap.

Nvidia CEO Jensen Huang will have an opportunity on Wednesday to explain why AI will continue to need even more GPU capacity even after last year’s massive build-out.

Recently, Huang has spoken about the “scaling law,” an observation from OpenAI in 2020 that AI models get better the more data and compute are used when creating them.

Huang said that DeepSeek’s R1 model points to a new wrinkle in the scaling law that Nvidia calls “Test Time Scaling.” Huang has contended that the next major path to AI improvement is applying more GPUs to the process of deploying AI, or inference. That allows chatbots to “reason,” or generate a lot of data in the process of thinking through a problem.

AI models are trained only a few times to create and fine-tune them. But AI models can be called millions of times per month, so using more compute at inference will require more Nvidia chips deployed to customers.

“The market responded to R1 as in, ‘oh my gosh, AI is finished,’ that AI doesn’t need to do any more computing anymore,” Huang said in a pretaped interview last week. “It’s exactly the opposite.”
