When it comes to semiconductor stocks, odds are that companies such as Nvidia and Advanced Micro Devices (AMD) are the first that come to mind. Both companies specialize in designing advanced chips called graphics processing units (GPUs). GPUs are monumentally important components for training generative AI and machine learning models.

For much of the last two years, Nvidia has been the undisputed leader in the chip realm thanks to its deep lineup of GPUs relative to what AMD has to offer. However, there is another part of Nvidia's compute and networking business that rarely gets talked about -- and yet it's this very feature that has allowed Nvidia to hold on to its market-leading position in data center GPUs.

Below, I'm going to explain what makes Nvidia such a disruptive force. Moreover, I'll detail AMD's latest move as it seeks to compete more heavily with Nvidia, and how the chip king is responding.

Nvidia has a hidden gem in its business, but...

GPUs are pieces of hardware housed in data centers, constantly running sophisticated computations around the clock. But how do these chips work, and what's actually powering them?

For Nvidia, the answer resides in its compute unified device architecture (CUDA). CUDA is a software platform that sits on top of Nvidia's GPUs. Although Nvidia's chips can technically run with other software, developers will be limited in what they can accomplish. For this reason, pairing CUDA with Nvidia GPUs is the optimal way to take advantage of Nvidia's large ecosystem of features.
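For readers who want a concrete sense of what that software layer looks like, here is a minimal, illustrative sketch of a CUDA C++ program (the kernel and variable names are my own, not anything taken from Nvidia's documentation). It adds two arrays on the GPU using the CUDA runtime API -- the kind of code that compiles with Nvidia's nvcc toolchain and runs natively only on Nvidia hardware unless it's ported to another platform such as AMD's ROCm.

```cpp
// Illustrative sketch only: a simple vector addition written against the CUDA runtime API.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// GPU kernel: each thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) memory.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hOut = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    // These cudaMalloc/cudaMemcpy calls are part of the CUDA runtime API
    // that ties the code to Nvidia's software stack.
    float *dA, *dB, *dOut;
    cudaMalloc(&dA, bytes);
    cudaMalloc(&dB, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough thread blocks to cover all n elements, then wait for the GPU.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(dA, dB, dOut, n);
    cudaDeviceSynchronize();

    // Copy the result back and spot-check it.
    cudaMemcpy(hOut, dOut, bytes, cudaMemcpyDeviceToHost);
    printf("out[0] = %f\n", hOut[0]);  // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dOut);
    free(hA); free(hB); free(hOut);
    return 0;
}
```

The arithmetic itself is trivial; the point is that years of code, libraries, and developer habits are built on APIs like these, and that accumulated investment is a big part of the switching cost that keeps customers inside Nvidia's ecosystem.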

Building an end-to-end suite of tightly integrated hardware and software has helped Nvidia capture 90% share of the data center GPU market. Nevertheless, AMD appears to have an answer up its sleeve, and it could be just the catalyst that leads to accelerated growth for the Nvidia rival.

Chess pieces sitting on a chessboard. Image source: Getty Images.

... AMD is raising the stakes

While Nvidia undoubtedly has an enormous lead over AMD, there are some subtle signs that the company may be losing its grip on market dominance. As I explained in this piece, sales from Nvidia's data center GPU business are showing notable signs of deceleration. At the same time, AMD's data center GPU business has scaled considerably -- it is now growing at rates similar to Nvidia's operation.

One reason for AMD's sudden surge is the overwhelming success of its MI300X accelerators, which boast Microsoft, Oracle, and Meta Platforms as customers. Although each of these big tech companies is also a major Nvidia customer, it's notable that they are diversifying their GPU clusters and beginning to migrate toward lower-cost alternatives offered by AMD.

In addition to its accelerators, AMD offers a software platform called ROCm. While CUDA has been more widely adopted than ROCm to date, I think the diverging trends in Nvidia's and AMD's data center businesses could signal that ROCm is poised for a breakout as AMD seeks to capture incremental share in the GPU landscape.

Nvidia's latest deal is a chess move for the ages

I don't think I'm the only one who's noticed AMD's rapid growth in the data center GPU market over the last year. In late December, Nvidia closed a $700 million acquisition of a company called Run:ai.

Run:ai is a start-up based in Israel that specializes in "efficient cluster resource utilization for AI workloads." I see the Run:ai acquisition as an incredibly savvy decision, as it underscores Nvidia's approach of remaining a tightly integrated ecosystem -- offering customers a "single fabric" on which AI workloads are trained. As a result, I think it's going to become even more challenging for customers to migrate away in favor of alternatives such as AMD's ROCm.

While I remain bullish on AMD and believe the company is making a lot of important strategic moves, I do not think ROCm is a "checkmate" move against Nvidia. For now, Nvidia remains the king of the GPU realm by a mile.