

Exploring the Strong Performance of NVIDIA

On June 5, the market capitalization of the US semiconductor giant NVIDIA surpassed $3 trillion (approximately 468 trillion yen) for the first time.


NVIDIA’s market capitalization first broke the $2 trillion mark in February this year. The company’s performance has been rapidly expanding, supported by semiconductors for generative AI. It has increased its presence as a driving force in the US stock market, surpassing $3 trillion just over three months after exceeding $2 trillion.


Sales to data centers for AI and other uses are driving the growth. In the financial results for February–April 2024 announced in May, net profit rose to $14.881 billion, about 7.3 times the year-earlier figure, and sales rose to $26.044 billion, about 3.6 times.


The Strength of NVIDIA: The Background


NVIDIA manufactures GPUs (graphics processing units). GPUs are chips originally designed for rendering video, images, and animation, and they have long been used to draw graphics smoothly in gaming PCs and other devices.


Now, demand for GPUs is expanding. The catalysts are "data centers" and "generative AI," exemplified by ChatGPT.


Until now, it was common for data centers to be equipped only with CPUs (Central Processing Units), but with the spread of AI, the trend is for data centers to be equipped with GPUs in addition to CPUs. However, only about 10-20% of data centers are currently equipped with GPUs.


This situation will change with the spread of generative AI.


Generative AI, such as image generation and natural language generation, relies on a process called "inference": running a trained AI model on new input to produce an answer. When you ask ChatGPT a question, the response you receive is the result of inference. Because inference runs every time a user makes a request, the cumulative computational load is enormous, which makes it necessary to install GPUs, which are well suited to large-scale calculation, alongside CPUs.
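As a rough illustration, inference means pushing an input through the fixed weights produced by training. The sketch below uses plain Python and made-up numbers rather than real GPU code; the weight values and the single dense layer are hypothetical stand-ins for a model with billions of parameters.

```python
# Minimal sketch of "inference": applying trained, fixed weights to a new input.
# The weights here are invented for illustration only.

def relu(x):
    """A common activation function: negative values become zero."""
    return max(0.0, x)

def infer(weights, biases, inputs):
    """One dense layer: each output is a weighted sum of all inputs plus a bias.
    Every multiply-add is independent of the others, which is exactly the kind
    of work a GPU can spread across thousands of parallel threads."""
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

weights = [[0.5, -0.2], [0.1, 0.4]]  # hypothetical "trained" parameters
biases = [0.0, 0.1]
print(infer(weights, biases, [1.0, 2.0]))  # run the model on one query
```

Each user query repeats this forward pass over the entire model, which is why aggregate inference demand translates directly into GPU demand.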


Generative AI is expected to become a core workload in data centers worldwide, and some forecasts hold that within 10 years most data centers around the world will be equipped with GPUs.


NVIDIA’s “CUDA”: The Standard for AI Developers


Unlike CPUs, GPUs excel at performing large numbers of calculations simultaneously in parallel, but drawing out that capability requires a development environment built for GPUs.
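The kind of work GPUs excel at can be sketched in plain Python. In an element-wise vector addition, each output element depends only on its own index, so no element has to wait for another; a real GPU exploits this by launching one lightweight thread per element. The serial loop below is only a conceptual stand-in for that idea, not GPU code.

```python
# Element-wise vector addition: a textbook GPU-friendly workload.
# out[i] depends only on a[i] and b[i] -- no element waits on another,
# so a GPU can assign one thread per index and compute them all at once.

def vector_add_serial(a, b):
    """What a single CPU core does: one element after another."""
    return [x + y for x, y in zip(a, b)]

# Conceptually, a GPU runs the loop body for every i simultaneously:
#   out[i] = a[i] + b[i]

a = list(range(8))
b = [10] * 8
print(vector_add_serial(a, b))  # [10, 11, 12, 13, 14, 15, 16, 17]
```

Neural network training and inference consist largely of operations with this same independence property, scaled up to billions of elements, which is why GPUs suit them so well.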


NVIDIA’s “CUDA” is exactly such a development environment for GPUs. Because CUDA has become the de facto standard among neural network researchers, many libraries have been built on top of it, and today, at least for the training process, there is effectively no alternative to CUDA.


CUDA is a development environment NVIDIA created for its own GPUs, so as a result, using NVIDIA hardware has become the standard for neural network training.


CUDA protects NVIDIA as a competitive “moat” that rivals cannot easily overcome.


Will NVIDIA Have a Near Monopoly for the Time Being?


According to the British research firm Omdia, NVIDIA held a 77% share of the global market for data center AI semiconductors in 2023. Its cutting-edge GPUs are highly sought after by companies developing AI.


For the time being, NVIDIA is expected to have a near monopoly on the GPU market for data centers. However, we will also be keeping an eye on the movements of other companies that deal with GPUs.