Nvidia (NVDA -4.10%) has soared over the last two years, thanks to its dominance in artificial intelligence (AI) -- a market set to reach $1 trillion by the end of the decade from about $200 billion today. Investors have piled into this leader, betting on the company's ability to win in this fast-growing market and deliver both earnings growth and share performance over time.

But news this week prompted investors to doubt Nvidia's prospects in the high-growth market, and Nvidia stock tumbled nearly 17% in one trading session. Chinese start-up DeepSeek announced the release of a large language model (LLM) that it trained for only two months and at a cost of less than $6 million. The idea here is that customers may not need Nvidia's most expensive and top-performing chips to train their LLMs -- and that could result in a drop in revenue for the tech giant.

Before jumping to conclusions, though, it's important to take a closer look at this story and consider the possible outcomes for Nvidia. Let's do that -- and figure out whether Nvidia is a sell or a buy on the DeepSeek news.

The most sought-after AI chips

First, a bit of background on Nvidia. The company sells the world's most sought-after -- and expensive -- AI chips, along with a variety of other AI products and services, and they have generated double-digit and triple-digit revenue growth in recent quarters. Revenue has reached record levels in the billions of dollars, and profitability on those sales is high, too, with Nvidia maintaining a gross margin of more than 70%. And the pace hasn't shown signs of letting up.

Nvidia has spoken of "staggering" demand for its new Blackwell architecture, and in recent quarters, big-tech customers have talked about increasing their spending on AI. For example, Meta Platforms, one of Nvidia's customers, has spoken about the need to lift its AI infrastructure spending this year. And last fall, Oracle co-founder Larry Ellison even said he and Tesla chief Elon Musk had "begged" Nvidia for more of its chips -- otherwise known as graphics processing units (GPUs).

All of this demonstrates Nvidia's strength in the AI market. Now it's time to look at the DeepSeek news.

As mentioned, the company trained its LLM quickly and at a cost far lower than the billions of dollars U.S. players have poured into their AI platforms. On its website, DeepSeek says its R1 model rivals OpenAI's o1 -- both are reasoning models, meaning they spend time thinking through a problem before answering.

As a Chinese company, DeepSeek didn't have access to Nvidia's latest chips because of export controls: The U.S. government has restricted the export of the highest-performance chips to China over security concerns. As a result, DeepSeek says it used chips Nvidia designed specifically to comply with the export rules.

What does this mean for Nvidia?

At first glance, it may seem U.S. companies have overspent and now may turn to cheaper chips -- from Nvidia or others -- to train their models. But there are a few things to remember, and these companies probably will consider them before making a move.

Did DeepSeek only spend $6 million?

Though DeepSeek says it spent less than $6 million, investors don't know whether that figure tells the whole story. Developing these models requires many experiments and steps, all of which are costly and may not be included in the announced figure. We also can't be 100% sure that DeepSeek didn't use more powerful AI chips -- in spite of the export ban -- or simply more AI chips at some point in the process.

So it's difficult to apply the DeepSeek strategy to every LLM training project out there. Also, each LLM and AI project is different, meaning companies can't use a "one-size-fits-all" technique.

The training of models is just one part of the AI story. There's also inferencing -- when a trained model reasons over new information and makes decisions -- and the application of AI to the real world through the creation of AI agents, for example. All of this involves GPUs and other products and services sold by Nvidia.

Nvidia's optimism

Nvidia doesn't seem overly worried about the DeepSeek news. In a statement, the company called DeepSeek's work "an excellent AI advancement" and said it showed how new models could be developed by "leveraging widely available models and compute that is fully export control compliant." This suggests those export-compliant Nvidia chips have done their job well -- and that there should be demand for more of them as DeepSeek or others advance new projects.

On top of this, Nvidia emphasized that inferencing "requires significant numbers of Nvidia GPUs and high-performance networking." This also suggests the need for Nvidia's products.

The worst, best, and most likely outcomes

What does all of this mean for Nvidia? The very worst outcome would be a sharp drop in demand for Nvidia chips, but that seems extremely unlikely, considering the points I've made above.

A more realistic worst-case outcome would be the following: Some customers may turn to lower-priced chips, hoping to repeat DeepSeek's success story. But that actually could result in higher volume for Nvidia, as more companies -- ones that hesitated earlier because of the cost -- launch AI projects. Nvidia's dominance in the field means these customers surely will come to the company for at least some of their needs.

In the best-case scenario, Nvidia will gain volume from smaller players, as I mentioned above -- and from business in China -- as these customers get in on Nvidia's lower-priced chips. At the same time, big tech companies will carry on with their current plans.

Finally, I think the most likely scenario is that business will continue as usual for Nvidia. I don't see the DeepSeek news as something that will prompt every major customer to overhaul its spending and change strategy.

Nvidia's top GPUs have proven their efficiency, and everyone wants to get to the finish line first in this highly competitive market. That's why it's a great idea to hold onto your shares of Nvidia -- and if you don't yet own the stock, consider buying it on this recent dip.