Following Nvidia's impressive forward outlook and CEO Jensen Huang's announcement of an anticipated "giant record year," the company's stock surged in extended trading on Wednesday, edging closer to a $1 trillion market capitalization.
The jump in sales reflects escalating demand for Nvidia's graphics processing units (GPUs), which power the artificial intelligence applications built by major industry players like Google, Microsoft, and OpenAI.
Driven by growing demand for AI chips in data centers, Nvidia has issued sales guidance of $11 billion for the current quarter, far surpassing analyst estimates of $7.15 billion.
In an interview with CNBC, Nvidia CEO Jensen Huang pointed to generative AI as a key driver, citing the slowdown in CPU scaling and the emergence of accelerated computing as the way forward. According to Huang, the arrival of this "killer app" further solidified those trends.
Nvidia believes it is capitalizing on a distinct shift in computer architecture that could drive even greater growth. Huang has suggested the market for data center components could eventually reach $1 trillion.
Traditionally, the central processing unit (CPU) was the most important component in computers and servers, with Intel dominating the CPU market and AMD serving as its primary rival.
However, the landscape has shifted with the rise of AI applications, which demand substantial computing power. The GPU has taken center stage, with the most advanced systems using as many as eight GPUs for every CPU, and Nvidia currently dominates the market for AI GPUs.
Huang envisions a transformative shift in which data centers move away from the old model of CPU-driven file retrieval: instead of primarily retrieving data, they will generate a significant portion of it using AI.
He continued, "So rather than having millions of CPUs, there will be a considerable reduction in their number, but they will be linked to millions of GPUs."
Nvidia's DGX systems, all-in-one AI computers designed for training, illustrate the trend: each system pairs eight of Nvidia's top-of-the-line H100 GPUs with just two CPUs.
In Google's A3 supercomputer configuration, a single high-end Xeon processor manufactured by Intel is combined with eight H100 GPUs provided by Nvidia.
This helps explain Nvidia's remarkable 14% growth in its data center business during the first quarter of the calendar year, while AMD's data center unit was flat and Intel's AI and data center business unit declined 39%.
Nvidia's GPUs also command far higher prices than most central processors: Intel's latest generation of Xeon CPUs lists for as much as $17,000, while a single Nvidia H100 GPU can sell for $40,000 on the secondary market.
As demand for AI chips continues to escalate, Nvidia can expect heightened competition. AMD poses a strong challenge with its own competitive GPU business, particularly in gaming, and Intel has its own lineup of GPUs as well.
Startups are also developing novel chip designs tailored specifically for AI, while mobile-focused companies such as Qualcomm and Apple keep pushing to run AI on everyday devices like smartphones, so that it can operate not just in massive server farms but also in portable hardware.
Major players like Google and Amazon are developing their own AI chips as well, further intensifying competition in the space.
For companies currently building applications like ChatGPT, however, Nvidia's high-end GPUs remain the preferred choice. Training these models on vast amounts of data requires substantial computational power and comes at significant expense, and the subsequent "inference" phase, in which the trained model generates text, images, or predictions, is also costly to run.