Archive
IDC Japan has released an analysis regarding the demand for hyperscale data centers in Japan. The analysis suggests that by the end of 2045, the demand for hyperscale data centers could reach approximately four times the domestic capacity of 2023.
Note: Hyperscale data centers are massive data centers used by mega-cloud service providers such as AWS, Google, and Microsoft to offer cloud services.
Amid the rapid growth of the cloud service business, several large-scale data centers in Japan are already being expanded, including in Inzai City, Chiba Prefecture. In addition, demand for generative AI functions within cloud services is growing, and the high-spec servers used for generative AI are often deployed in hyperscale data centers. As a result, demand for hyperscale data centers is increasing, and data center operators and real estate companies are constructing new facilities to expand supply.
Power Consumption by Data Centers Expected to Increase Sixfold by 2040
Data centers equipped for generative AI have HPC servers and GPU servers installed, which consume large amounts of electricity. Additionally, significant power is required for cooling. Consequently, with the proliferation of generative AI, power consumption is expected to increase explosively.
Regarding global data center power consumption, projections indicate that without energy-saving measures, it will exceed six times the 2022 levels, reaching 2,761 terawatt-hours by 2040.
Urgent Issues: Smooth Power Supply and Securing Renewable Energy Sources
In Japan, there is an urgent need to address the issue of power supply to meet the accelerating demand for data centers. With an increasing number of companies committing to carbon neutrality by 2050, more data center operators are securing renewable energy sources through Power Purchase Agreements (PPAs). The Electricity and Gas Market Surveillance Commission of the Ministry of Economy, Trade and Industry is also taking action: it is considering measures to guide the new data centers expected to accompany the expansion of generative AI usage toward regions rich in renewable energy.
“Ensuring power supply” and “securing renewable energy sources” are crucial issues for the rapidly growing domestic data center market, and we will continue to watch measures addressing them.
2024.06.26
On June 5, the market capitalization of the US semiconductor giant NVIDIA surpassed $3 trillion (approximately 468 trillion yen) for the first time.
NVIDIA’s market capitalization first broke the $2 trillion mark in February this year. The company’s performance has been rapidly expanding, supported by semiconductors for generative AI. It has increased its presence as a driving force in the US stock market, surpassing $3 trillion just over three months after exceeding $2 trillion.
Sales of data center products, including AI semiconductors, are boosting performance. In the financial results for February–April 2024 announced in May, net profit rose to $14.881 billion, about 7.3 times the figure for the same period last year, and sales rose to $26.044 billion, about 3.6 times.
The Strength of NVIDIA: The Background
NVIDIA manufactures GPUs (Graphics Processing Units). GPUs are chips originally designed for rendering video, images, and animation, and have long been used to draw graphics smoothly on gaming PCs and other devices.
Now, the demand for GPUs is expanding. The catalysts for this are “data centers” and “generative AI” represented by ChatGPT.
Until now, it was common for data centers to be equipped only with CPUs (Central Processing Units), but with the spread of AI, the trend is for data centers to be equipped with GPUs in addition to CPUs. However, only about 10-20% of data centers are currently equipped with GPUs.
This situation will change with the spread of generative AI.
Generative AI, such as image generation and natural language generation, requires a process called “inference,” in which a trained AI model is run to reach a conclusion. When you ask ChatGPT a question, the answer you get is the result of “inference.” While a single inference is far cheaper than training the model, serving inference to huge numbers of users adds up to an enormous amount of computation. It is therefore necessary to also install GPUs, which are suited to such large-scale calculation.
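To give a sense of why inference at scale is so compute-hungry, here is a back-of-the-envelope sketch in Python. It uses the common rule of thumb (an assumption here, not a figure from the article) that a dense model's forward pass costs roughly two floating-point operations per parameter per generated token, and the model sizes shown are hypothetical, chosen only for scale:

```python
# Rule-of-thumb illustration (an assumption, not a figure from the article):
# a dense neural network's forward pass -- i.e., inference -- costs roughly
# two floating-point operations per model parameter per generated token
# (one multiply and one add for each weight).

def flops_per_token(num_parameters: int) -> int:
    """Approximate forward-pass FLOPs needed to generate one token."""
    return 2 * num_parameters

# Illustrative model sizes (hypothetical, chosen only for scale).
for name, params in [("7B-parameter model", 7 * 10**9),
                     ("70B-parameter model", 70 * 10**9)]:
    gflops = flops_per_token(params) / 10**9
    print(f"{name}: ~{gflops:.0f} GFLOPs per token")
```

Multiply those per-token figures by the billions of queries a popular service handles, and the need for GPU-class parallel throughput becomes clear.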
In the future, generative AI is expected to become a primary workload in most data centers worldwide, and it is said that within 10 years most of the world's data centers will be equipped with GPUs.
NVIDIA’s “CUDA”: The Standard for AI Developers
Unlike CPUs, GPUs are good at performing large amounts of calculations simultaneously in parallel, but to bring out their capabilities, a development environment for GPUs is required.
NVIDIA’s “CUDA” is one such “development environment for GPUs,” and since CUDA has become the de facto standard among neural network researchers, many libraries have been created on top of it. Now, at least when it comes to the learning process, there is almost no choice but to use CUDA.
CUDA is a development environment that NVIDIA created for its own GPUs, so as a result, using NVIDIA hardware for neural network training has become the standard.
CUDA thus protects NVIDIA as a competitive “moat” that rivals cannot easily overcome.
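For readers unfamiliar with what a “development environment for GPUs” looks like in practice, the sketch below is a minimal CUDA C++ program (illustrative only; it requires NVIDIA's nvcc compiler and a CUDA-capable GPU to build and run). It adds two vectors with one GPU thread per element, the kind of data-parallel pattern GPUs excel at:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements; thousands of threads
// run in parallel, which is the source of the GPU's throughput.
__global__ void vector_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;               // one million elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *out;                  // unified memory: visible to CPU and GPU
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&out, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(a, b, out, n);
    cudaDeviceSynchronize();             // wait for the GPU to finish

    printf("out[0] = %.1f\n", out[0]);
    cudaFree(a); cudaFree(b); cudaFree(out);
    return 0;
}
```

Writing, tuning, and debugging kernels like this is what the CUDA toolchain supports, and the libraries built on top of it are what lock AI developers into NVIDIA hardware.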
Will NVIDIA Have a Near Monopoly for the Time Being?
According to the British research company Omdia, NVIDIA held a 77% share of the global market for data center AI semiconductors in 2023. The company's cutting-edge GPUs are highly sought after by companies developing AI.
For the time being, NVIDIA is expected to have a near monopoly on the GPU market for data centers. However, we will also be keeping an eye on the movements of other companies that deal with GPUs.
2024.06.12