DeepSeek is an artificial intelligence (AI) research lab based in China. It was spun out of High-Flyer, one of the country's most successful hedge funds, in 2023; High-Flyer had been using AI for years to develop trading algorithms.

DeepSeek discovered a way to develop powerful large language models (LLMs) for a tiny fraction of the money being spent by America's leading AI companies. The news triggered a panic in the U.S. stock market on Monday as investors weighed the impact on chip suppliers like Nvidia (NVDA) and prominent developers like OpenAI (which is backed by Microsoft).

This could be a transformational moment in the AI race. Not only are DeepSeek's methods potentially valid, but there is at least one other Chinese AI start-up that seems to have produced similar results. Here's what it could mean for Nvidia and OpenAI.


AI models are quickly becoming commoditized

Ilya Sutskever is one of the co-founders of America's leading AI developer, OpenAI. He once believed data and computing power were the key ingredients to training the best AI models and producing the smartest AI software. This is known as pre-training scaling, and it meant the developers with the most financial resources could build the best data centers, buy the best chips, and win the AI race.

But in November 2024, he told Reuters that the results from this method had plateaued. OpenAI has since developed models with better "reasoning" skills, meaning they spend more time "thinking" before producing a response in the ChatGPT chatbot. This is known as test-time scaling, and models that use it (such as OpenAI's o1 and o3) are better at problem solving, bringing AI closer to human-level performance on academic tasks.

It has cost OpenAI around $20 billion to reach this point (money mostly raised from investors since 2015). But DeepSeek recently released its V3 model, which was reportedly trained for just $5.6 million, yet it's competitive with OpenAI's GPT-4o models across several performance benchmarks.

The U.S. government banned Nvidia from selling its latest graphics processing units (GPUs), such as the H100, to Chinese AI companies, so DeepSeek developed V3 using less powerful, export-compliant versions like the H800. To compensate for the lower performance, DeepSeek had to innovate on the software side by creating more efficient algorithms and data input methods.

The company also used a technique called distillation to help create V3. It involves training a small model on the outputs of a successful model like OpenAI's o1 to produce a similar final product. This strategy supercharges the speed with which an AI company can train a competitive LLM, and it could lead to commoditization. In other words, there could be hundreds of LLMs on the market in the future with similar capabilities, and they will be mostly interchangeable.
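DeepSeek has not published its exact training recipe, but the core idea behind distillation can be sketched in a few lines: the small "student" model is trained to match the softened output probabilities of the large "teacher" model, rather than learning from raw data alone. The logits below are purely illustrative, and this toy loss function is a minimal sketch of the general technique, not DeepSeek's implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw model scores to probabilities; a higher
    temperature softens (flattens) the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's softened output
    distribution and the student's. Training the student to
    minimize this pushes it to mimic the teacher."""
    p = softmax(teacher_logits, temperature)  # target distribution
    q = softmax(student_logits, temperature)  # student distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Hypothetical next-token scores for one training example:
teacher = [4.0, 1.5, 0.2]  # large teacher model's outputs
student = [3.0, 2.0, 0.5]  # smaller student model's outputs

loss = distillation_loss(student, teacher)
# The loss is zero only when the student exactly matches the teacher;
# gradient descent on this loss drives the student toward that match.
```

In practice this loss is computed over every token position in millions of teacher-generated responses, which is why a distilled model can reach near-teacher quality for a fraction of the original training cost.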

That could be a real threat to OpenAI and even Nvidia. OpenAI could lose the advantage it established thanks to its considerable financial resources, and since less LLM training will be required, Nvidia could suffer from reduced demand for GPUs.

DeepSeek rocked the tech sector with its low costs, but it's not alone

Training is only one side of the equation. There is also inference, the process by which a trained AI model turns prompts into responses. And as with any business, lower overall costs can translate into lower prices for customers.

As of this writing, DeepSeek charges just $0.14 per 1 million input tokens, which is 94% cheaper than OpenAI's rate of $2.50 per 1 million input tokens (input tokens measure the length of a user's prompt; a token is roughly a word or word fragment).

But DeepSeek isn't the only AI lab that seems to have cracked this code. Kai-Fu Lee, who used to run Alphabet's Google operations in China, launched an AI start-up called 01.ai. According to its website, its Yi models perform well against competing models from DeepSeek. The company charges just $0.10 per 1 million input tokens, which is even cheaper than its Chinese rival -- and substantially cheaper than OpenAI.

Nvidia (NVDA) key data points: current price $139.40 (up $0.55, or 0.40%); market cap $3.4T; day's range $137.93 - $143.44; 52-week range $66.25 - $153.13; volume 218,973,373 (avg. 244,376,206); gross margin 75.86%; dividend yield 0.02%.

What all of this means for OpenAI and Nvidia

I think OpenAI is in trouble if LLMs continue trending toward commoditization. Plus, its models are closed-source, so developers are locked into the company's ecosystem -- a far less desirable position once competitive open-source LLMs are widely available.

DeepSeek does use an open-source approach, which gives developers more freedom to tweak its models as necessary to build AI software. Developers can also download open-source models locally so they never have to share their sensitive data with the creator.

But while OpenAI faces uncertainty, Nvidia might actually benefit from plunging inference costs, which could offset some of the lost GPU demand on the training side.

Think about the progression of the cellphone. When we had to pay a fee every time we sent a text message or browsed the internet, we never used our phones as frequently as we do now. Unlimited plans with uncapped calls, texts, and data enable us to spend hours on our phones each day for a nominal monthly fee -- simply put, when costs came down, usage skyrocketed.

AI could follow a similar path, and as usage increases, companies will need more of Nvidia's GPUs to cover demand for inference. That will be especially true as reasoning capabilities evolve, because more thinking requires substantially more computational power.

The short-term picture is a little less certain. Will some of Nvidia's customers reduce their data center spending as they optimize their training methods like DeepSeek did? It's hard to say, but a new quarterly earnings season just started, so we should receive an update from almost every one of them over the next few weeks.