TOPICS & NEWS
The Digital Agency has announced a policy to ease the selection requirements for providers of the government cloud shared by the central government and local governments.
The government cloud is a common cloud-service infrastructure used by the central government and local governments. The government has set a goal of making systems for 20 mission-critical municipal tasks, such as taxes and the national pension, available on the government cloud by the end of fiscal 2025.
The current rules, which require a single company to meet roughly 330 requirements, will be revised to allow participation by coalitions of companies. This may make it easier for domestic companies to enter a government cloud market that currently relies on foreign-capital providers.
Until now, few businesses have been able to satisfy the wide variety of selection requirements on their own. In the 2022 public offering, cloud services from US companies that met technical requirements such as security and business continuity dominated: only four providers were selected, namely Amazon Web Services (AWS), Google, Microsoft Japan, and Oracle Japan.
Domestic companies are unable to meet the requirements given the scale of their businesses and the nature of their services. Particular hurdles include building a structure that covers everything from system development to operational support, using multiple data centers, and providing a development environment in which artificial intelligence (AI) can perform machine learning.
Only giant IT (information technology) companies, known as “hyperscalers” such as AWS and Google, are able to achieve this on their own.
The agency is expected to announce new selection requirements for government cloud providers and start public offerings as early as August. While most of the current items will be retained in the new requirements, selected businesses will be allowed to provide services jointly with other companies, as long as they remain responsible for core technologies such as data management and authentication.
The selection of a government cloud provider is expected to be decided in late October.
Background of Easing Selection Requirements
Behind this easing of selection requirements lies the recommendation that “the selection criteria for cloud service providers that store and provide government cloud services should be reviewed.” Domestic companies such as Sakura Internet and Internet Initiative Japan are considered extremely likely to seize this new opportunity to enter the domestic cloud market.
Despite these amendments, however, the Digital Agency stated that the right of municipalities to select providers will be maintained. In other words, the changes the amendments bring to the actual selection process may be limited.
Many cloud service providers are requesting joint participation in the government cloud by multiple companies. The results of the selection process after the requirements are eased are likely to attract much anticipation and attention.
In this issue, we will first look at the current status and future intentions of the data center business of Japanese companies.
NTT plans to spend more than 1.5 trillion yen over the next five years to expand its data centers. India, where potential demand is expected from the expansion of major overseas IT companies and population growth, will see the largest increase: NTT plans to raise the number of locations there from the current 12 to about 24 by fiscal 2025. NTT also plans to increase its North American locations from 14 to 23.
SoftBank Corp., in collaboration with U.S. semiconductor giant NVIDIA, aims to build a platform for generative artificial intelligence and 5G/6G applications and introduce it to new AI data centers in Japan.
The platform is based on NVIDIA chip technology. To reduce costs and improve energy efficiency, SoftBank plans to build data centers that can host generative AI and wireless applications on a multi-tenant common server platform.
Kansai Electric Power
Kansai Electric Power Company (KEPCO), in partnership with U.S. data center operator CyrusOne, has begun developing data centers in Japan with the ambitious goal of reaching 900 MW of operation. The CyrusOne KEP joint venture is a hyperscale platform company focusing on the development of new data centers tailored to demand; by linking data center infrastructure with the broader power grid, it aims to enhance resiliency, efficiency, and smart development in the industry.
In this way, the data center business of Japanese domestic companies appears invigorated and is achieving positive growth.
So what is behind this?
Background of the activation of the data center industry
Behind this is the progress of digitization, exemplified by generative AI (Artificial Intelligence). In a data-driven society in which decisions are made based on data, data accumulates at an accelerating pace.
Akira Shimada, president of NTT, which is focusing on the data center business, said, “Looking 30 years ahead, we want to develop semiconductors that use light (instead of electrons). We will invest 100 billion yen per year in research and development. As a start, we plan to begin manufacturing related optical components after 2030. In addition to incorporating them into telecommunications equipment and servers, we also aim to apply them to more general electronic devices,” suggesting that the data center business is closely linked to semiconductor development.
Semiconductors that use light will consume far less electricity, which is in line with the times in terms of sustainability.
Junichi Miyagawa, President and CEO of SoftBank, said, “We are entering an era of coexistence with AI, with data processing and demand for electricity increasing rapidly. We aim to provide next-generation social infrastructure to support a digitalized society in Japan.”
Today, the development of generative AI (Artificial Intelligence) is remarkable. We may be at a turning point that calls for upgrading the services companies deploy as part of their growth strategies.
As cloud use accelerates, a so-called “return to on-prem” movement, in which in-house systems once moved to the cloud are brought back on-premises, is reportedly becoming apparent.
Last year, the Rakuten Group decided to return to on-prem. It is expanding the environment of its private cloud “One Cloud” and promoting the integration of the IT infrastructure used by the various businesses of its group companies. In principle, many systems currently running on public clouds will be shifted to One Cloud. In addition to improving cost efficiency by consolidating group-wide IT infrastructure into a private cloud, the company plans to accumulate IT infrastructure know-how for stable operation and enhanced security.
The private cloud will also serve as the basis for corporate IT services the group plans to launch, including eKYC identity verification, website access analysis, and electronic payment functions. These technologies were developed for use in the group’s own businesses, and preparations are underway to sell them externally as pay-as-you-go public cloud services.
With the spread of cloud-first policies, opportunities to introduce on-premises servers are said to be steadily decreasing for many companies. Yet the server market remains strong. At first glance this seems contradictory; what is behind it?
Background of the “return to on-premises”
The server market grew favorably in 2022, with a year-on-year increase of 10-20%.
Even in the early 2000s, when server virtualization began to become popular, it was said that servers would not sell as a result of server consolidation. However, in reality, this is not the case. Virtualization has made it easier to procure servers, and conversely, the introduction of various systems has become more active, leading to the demand for more servers.
Currently, with the tailwind of DX, IT investment is becoming active, and system utilization that was not possible before is spreading. Given the rapid increase in server resources required by the cloud, the expansion of the server market is rather natural.
On the other hand, one of the reasons behind the recent return to on-premises servers by general companies is that misunderstandings about the cloud have been cleared up through actual use. In retrospect, the cloud attracted great expectations for its ability to provide resources at extremely low cost and to reduce workload by offloading operations externally.
However, in reality, there are many cases in which unexpectedly high charges are billed as a result of using the cloud without understanding the characteristics of the cloud in terms of cost.
In terms of operations, the burden of looking after hardware disappears, but operating the system itself still remains. Cloud management requires different knowledge from on-premises, and since many companies now have systems both on-premises and in the cloud, dual management inevitably occurs. This is no small burden for busy IT departments.
There are also security issues. Existing legacy systems that handle highly confidential data cannot run on public clouds, so their on-premises environments cannot be retired. As a result, IT operations become more complex, leading to problems such as increased operational management loads.
As understanding of these “realities” has progressed, there is a swing back toward a style of coexistence with the cloud: systems once moved to the cloud are being returned on-premises, starting with those judged “not suited” to it.
The return to on-premises is progressing even now. However, companies that have experienced the cloud know its advantages. Is conventional on-premises IT infrastructure really what such companies should aim for?
Two approaches to the current “return to on-premises”
There are currently two approaches to the return to on-premises. One is to leave most data in the cloud and return only the data central to DX to on-premises. The other is to bring entire systems back on-premises. The latter is where problems lie.
What companies should aim for is clearly not a cost-driven three-tier configuration (a system configuration in which groups of servers and shared storage are connected by a network fabric) designed with single points of failure (SPOF). That does not mean the three-tier model is bad in every case, but even with appropriate countermeasures, the larger the scale, the more complex it becomes. No one wants to return to a situation in which every review requires discussions among the server, storage, and network teams, leading to long lead times, and in which issues such as hardware generation gaps cause high replacement costs.
What companies should aim for instead is a cloud-like virtualization platform. From that point of view, HCI (Hyper-Converged Infrastructure), which realizes cloud-like system infrastructure, is attracting attention, and its reputation has improved greatly. HCI implements pre-verified server, storage, and network functions in software, packages them in a single box, and is provided together with virtualization middleware, greatly simplifying the configuration of the IT infrastructure. Even without specialized knowledge, resources can be expanded simply by adding nodes, achieving scalability close to that of the cloud.
HCI, which can maintain a simple configuration, is likely to continue to attract attention in the future, in contrast to the 3-tier configuration, which increases in complexity as it grows in scale.
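This scale-out property can be illustrated with a toy capacity model; the per-node figures below are purely hypothetical examples for illustration, not vendor specifications.

```python
# Toy model of HCI scale-out: cluster resources grow linearly with node count.
# Per-node figures are hypothetical, not taken from any vendor.

def hci_cluster_capacity(nodes: int,
                         cores_per_node: int = 32,
                         ram_gb_per_node: int = 512,
                         storage_tb_per_node: float = 20.0) -> dict:
    """Aggregate capacity of an HCI cluster built from identical nodes."""
    return {
        "cores": nodes * cores_per_node,
        "ram_gb": nodes * ram_gb_per_node,
        "storage_tb": nodes * storage_tb_per_node,
    }

# Expanding from 4 to 6 nodes adds compute, memory, and storage together,
# without separately redesigning servers, storage, and network.
before = hci_cluster_capacity(4)
after = hci_cluster_capacity(6)
print(before["cores"], after["cores"])  # 128 192
```

The point of the model is that expansion is a single decision (add a node), in contrast to a three-tier design where each layer must be resized and re-reviewed separately.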
The Kansai region, centered on Osaka, is a rapidly growing market for data center businesses, second only to the Tokyo metropolitan area, which is the largest in the Asia-Pacific region. As corporate DX (digital transformation) progresses, there is a wide range of demand not only from local companies but also from domestic and foreign companies.
New data center rush
In 2019, NTT Communications (NTT Com) opened the largest data center in the Kansai region in Ibaraki City, Osaka Prefecture.
In February 2023, MC Digital Realty, a data center joint venture between Mitsubishi Corporation and Digital Realty, opened a new data center in Osaka.
In the past two months, Asian real estate company ESR Group has started construction of a 19.2 MW data center in the Osaka area, and Optage Inc. has announced plans to build a 14-story carrier-neutral data center, scheduled to open in January 2026.
Kansai Electric Power Co., Ltd. and U.S. company CyrusOne partner on data center development
U.S. data center developer and operator CyrusOne has partnered with Kansai Electric Power Co., Inc. (KEPCO), a Japanese energy company, to develop new data centers in Japan.
On May 22, Kansai Electric Power announced that it would establish a joint venture with CyrusOne to develop a data center business in Japan. The venture will invest at least 1 trillion yen ($7 billion) over the next 10 years to build large “hyperscale” data centers in the Kansai and Tokyo metropolitan areas, each with a power receiving capacity (an indicator of power consumption) of 50,000 kW or more, with development and operation planned to start in the summer.
Over 10 years, the venture aims to reach a total power receiving capacity of 900,000 kW or more, a scale that uses almost as much electricity as a single nuclear power plant generates.
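As a quick sanity check on that comparison, assuming a typical large nuclear reactor outputs roughly 1 GW (1,000,000 kW):

```python
# Rough comparison of the planned capacity with a nuclear reactor's output.
# A typical large reactor is assumed to produce roughly 1 GW (1,000,000 kW).
planned_capacity_kw = 900_000    # total power receiving capacity target
reactor_output_kw = 1_000_000    # assumed typical reactor output

ratio = planned_capacity_kw / reactor_output_kw
print(f"{ratio:.0%} of one reactor's output")  # 90% of one reactor's output
```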
Both companies will dispatch representative directors to the new company. The venture will combine the strengths of both partners: the Kansai Electric Power Group’s know-how in power supply to data centers and real estate acquisition, and CyrusOne’s sales capabilities with the IT (information technology) companies that are data center customers. The construction site for the first project in Kansai has already been secured, and construction is to start as soon as possible.
Attention to the data center industry in the Kansai region
The rush to build new data centers in the Kansai region continues, and demand for these facilities is expected to keep growing.
Kansai Electric Power’s hyperscale data center operation is likely to attract more attention to trends in the data center industry in the Kansai region.
With an increasing reliance on digital technology, the data center industry is experiencing impressive growth, is relatively immune to continued economic uncertainty, and is attracting attention from investors and financial institutions as a strong alternative asset class.
Expansion of market size
According to a report published by Global Market Insights Inc., the global cloud data center market was estimated to be worth 20 billion USD in 2022 and is projected to progress at a CAGR (compound annual growth rate) of 10% from 2023 to 2032, surpassing a 70 billion USD valuation by 2032.
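The compound-growth arithmetic behind such forecasts is easy to check. Note that 20 billion USD growing at exactly 10% per year for ten years comes to only about 52 billion USD, so the cited figures are evidently rounded; reaching 70 billion would imply a CAGR closer to 13%:

```python
# Compound annual growth rate (CAGR) arithmetic for the cited forecast.
start_value_busd = 20.0   # 2022 market size, billion USD
cagr = 0.10               # 10% per year
years = 10                # 2022 -> 2032

projected = start_value_busd * (1 + cagr) ** years
print(round(projected, 1))  # 51.9 -- below the 70 billion USD headline

# CAGR actually implied by growing 20 -> 70 billion USD over the same period:
implied_cagr = (70.0 / 20.0) ** (1 / years) - 1
print(round(implied_cagr, 3))  # 0.133, i.e. about 13.3% per year
```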
Efforts to promote the development of cloud computing technology are expected to be the main driver of market expansion. Projects powered by cloud computing offer integrated management, including automated problem resolution, end-to-end security management, and budgeting based on actual data usage.
Improving cloud computing infrastructure for e-administration practices has become a priority for governments of several countries, including India. These governments have also launched projects to expand their skill sets to advance digitization.
By deployment model, the public cloud data center segment was valued at over 5 billion USD in 2022 and is expected to see lucrative growth through 2032.
Remarkable growth, even in Japan
Japanese telecommunications company Nippon Telegraph and Telephone (NTT) has announced plans to invest 8 trillion yen ($59 billion USD) in data centers, artificial intelligence and other “growth areas” over the next five years.
Of that, at least 1.5 trillion yen ($11 billion USD) will be spent on expanding and upgrading data centers, while at least 3 trillion yen ($22 billion USD) will be invested in digital businesses, including AI and robotics.
The company said the spending is expected to boost earnings before interest, taxes, depreciation and amortization for the fiscal year ending March 2028 by around 20% compared to the previous fiscal year, which amounts to about 4 trillion yen ($29.4 billion USD).
The Nikkei Shimbun reports that President Akira Shimada said at a press conference, “We will invest in growth areas and expand our cash-generating capabilities.”
Despite this market expansion and growth forecast, there are also challenges weighing on the industry.
The Ukraine War, Labor Shortages, and Other Challenges Weighing on the Industry
Supply chain constraints, which have eased from the peak of the pandemic but have not fully resolved, are at risk of further flare-ups due to geopolitical tensions in Europe and the Asia-Pacific region.
For example, the war in Ukraine has limited the supply of neon, which is essential for semiconductor manufacturing, and this supply pressure is causing production delays. Labor shortages are another challenge, particularly serious in the data center field, where the lack of skills and human resources is becoming more pronounced.
Employers are finding it difficult not only to find talent but also to retain it, with many workers reportedly being hired away by competitors in a hot labor market.
Another challenge hindering data center growth is the broader global economic problem. Data centers are among the top three asset classes expected to see the most growth in debt balances over the next 12-24 months, as they rely on debt to finance construction and acquisitions. Profit margins are shrinking as interest rates rise. Higher interest rates and economic instability could make it harder for businesses to make large deals.
Market expansion continues; expectations are high for initiatives in each country
Despite these many challenges, the market size of the data center industry continues to expand. There is growing interest in how countries around the world will pursue further expansion while facing these challenges.
As competition in data center investment and development intensifies, it is desirable for DC operators, DC developers, and investors involved in the data center business (hereinafter “DC operators, etc.”) to utilize a CRE strategy in order to optimize the earnings of the data center business while avoiding competition with other companies as much as possible.
■ What is the CRE strategy?
“CRE” is an abbreviation of “Corporate Real Estate,” which began to attract attention in the United States in the 1960s. It refers to real estate owned by a company, and in addition to real estate such as offices, factories, and warehouses necessary for conducting business, it also includes recreation facilities, company housing, welfare facilities, and idle land.
The CRE strategy is a medium- to long-term management strategy that aims to improve corporate value by effectively utilizing such real estate.
Specific CRE strategies include reviewing current offices, renting surplus offices to third parties, and utilizing idle land to develop and construct real estate for various purposes so as to secure stable long-term rental income.
In 2008, the Ministry of Land, Infrastructure, Transport and Tourism issued the “Guidelines for Implementing CRE Strategy,” which triggered attention to CRE strategy in Japan. The guidelines define CRE strategy as “a way of thinking about maximizing the efficiency of real estate investment by reviewing corporate real estate from the perspective of management strategy, with a view to improving corporate value.” Although the data is somewhat dated, the scale of CRE in Japan is said to be 490 trillion yen*, and it is expected to be even larger when converted to current real estate prices.
*Based on the 2006 Basic Land Survey.
■ CRE strategy past and present
Before CRE strategy was clearly recognized, corporate real estate owners generally chose among only three schemes: (1) leaving property idle, (2) simple sale, and (3) simple leasing (of land or buildings). Part of the reason may be that balance sheet strategies were not actively used within companies and that corporate property owners are not real estate professionals.
As CRE strategy gradually gained recognition, real estate companies (developers, etc.) and construction companies began actively approaching corporate real estate owners, and a large number of businesses based on CRE strategies have been commercialized through proposals for effective utilization and joint ventures. It is also noteworthy that the enforcement of the Financial Instruments and Exchange Act (2007) facilitated the securitization of real estate and the entry of investors into the market.
■ Role of CRE strategy in data center investment and development
By applying the CRE strategy to data center investment and development, DC operators can enjoy the following benefits.
Understanding the intentions of corporate property owners
In addition to knowing whether an owner wants to sell a property, rent it out, make effective use of it themselves, or pursue a joint business, DC operators can also grasp the timing involved, allowing them to communicate with the owner on a basis of trust.
Avoiding Opportunity Loss and Lost Profits
Compared with real estate for other purposes, determining the suitability of a data center site requires a considerable amount of preliminary survey time. For example, when participating in a real estate auction, a decision may not be possible within the consideration period, and in some cases the investment is executed while accepting the risk of unconfirmed items. Under a CRE strategy, by contrast, the decision-making process of the parties involved makes it easier for DC operators to select properties suitable for data centers, thereby reducing investment risk.
Optimizing investment efficiency
Data center development requires huge investment not only for land acquisition but also for buildings and equipment. Utilizing a CRE strategy allows the timing of funding to be planned in advance through discussions among the parties concerned, making it possible to avoid the inefficient commitment of investment funds associated with premature acquisitions. In addition, in an SPC scheme for joint investment, the financing method and loan terms can be carefully considered to formulate an optimal financing strategy.
Alignment of Interests of Stakeholders and Exit Strategies
Developing a data center requires a huge amount of investment and a considerable period of time from completion to stable operation. Through repeated discussions that align the expectations of each party, a CRE strategy enables the separation of development, ownership, and management roles, and the construction of a final exit strategy scenario.
As described above, utilizing a CRE strategy in data center investment and development is a beneficial approach for maximizing profitability while minimizing risk.
In order to realize data center investment as soon as possible, it is important to know and become familiar with the trends of data centers, which will be in increasing demand in the future.
Demand for network communications is expected to grow significantly in the future due to the spread of telework, cloud computing, and the broader IT revolution. The latest trends concern not only the ability to process huge amounts of data with HPC and the accompanying high power consumption, but also wider perspectives.
This time, we will introduce the data center trends of 2023.
・Shift to hyperscale
In recent years, cloud services have been supported by HSDC (Hyperscale Data Centers) instead of conventional DC (Data Centers).
An HSDC is a large-scale facility built by companies that require huge amounts of data communication and storage, typified by mega-cloud companies such as GAFAM.
Megacloud companies demand data centers that are “suitably located”, “large scale” and “uniform quality”.
Until recently, HSDCs were facilities created for mega-cloud operators to install large numbers of servers, but now even SaaS operators smaller than GAFAM are starting to use them.
As HSDCs increase globally with the proliferation of cloud services, it has been predicted that the increased power consumption by data centers around the world will have a serious impact on the global environment.
However, in 2020, a joint study by Lawrence Berkeley National Laboratory and others reported that while DC processing capacity has increased about sixfold from 2010 to 2018, power consumption has increased only 6 percent overall.
The spread of HSDCs, which can process large amounts of data with comparatively little power, has held down the increase in overall DC power consumption.
HSDCs thus offer high energy-saving performance and help reduce the burden on the environment.
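The efficiency gain implied by those figures can be worked out directly; both inputs below are the rounded values reported by the study.

```python
# Implied energy-efficiency gain from the Berkeley Lab figures:
# processing capacity up ~6x (2010-2018) while power use rose only ~6%.
capacity_growth = 6.0    # ~sixfold increase in compute delivered
power_growth = 1.06      # ~6% increase in electricity consumed

efficiency_gain = capacity_growth / power_growth
print(round(efficiency_gain, 2))  # 5.66 -- roughly 5.7x more work per kWh
```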
It can be said that HSDCs will become indispensable facilities in the future, when sustainability will be required.
The decarbonization of DC can be divided into two main areas.
The first is to improve the efficiency of power use in facilities, including air conditioning and power supplies, and the second is to shift to renewable energy.
In recent years, global warming caused by CO2 emissions has become an issue, and the shift to renewable energy power is accelerating.
Although the spread of HSDCs has made it possible to somewhat curb the increase in power consumption, the power consumption itself will continue to increase.
It can be said that being a sustainable data center is a prerequisite for survival. In fact, major global DC operators have set a goal of 100% renewable energy deployment.
In Japan, a zero-emission DC project is underway in Ishikari City, taking advantage of its cold climate and proximity to renewable energy sources.
There is no doubt that operating on renewable energy will be the trend for data centers in the future.
・Edge computing market
Edge computing refers to distributed computing in which data processing and analysis is performed on devices such as IoT terminals and servers installed nearby.
Since data is processed and analyzed at the edge without being sent to the cloud, it has the advantage of high real-time performance and low communication delays due to the distributed load.
In recent years, the evolution of the IoT and AI has driven the need for instantaneous processing of large volumes of data.
The conventional cloud inevitably increases processing lead time when handling large volumes of data, and Edge DC is the answer to this problem.
In the future, it is expected that edge computing will be further developed to avoid processing delays that can become bottlenecks in the cloud due to increased data volume.
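The latency advantage of processing near the data source can be illustrated with a toy round-trip model; all figures below are illustrative assumptions, not measurements.

```python
# Toy round-trip latency model comparing cloud and edge processing.
# All latency figures are illustrative assumptions, not measurements.

def round_trip_ms(network_one_way_ms: float, processing_ms: float) -> float:
    """Total time to send data, process it, and receive the result."""
    return 2 * network_one_way_ms + processing_ms

# A distant cloud DC vs. a nearby edge DC, with identical processing time:
cloud_ms = round_trip_ms(network_one_way_ms=40.0, processing_ms=10.0)
edge_ms = round_trip_ms(network_one_way_ms=2.0, processing_ms=10.0)
print(cloud_ms, edge_ms)  # 90.0 14.0
```

Under these assumed numbers the edge path is dominated by processing time rather than network transit, which is the essence of the real-time advantage described above.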
Google, Microsoft, and others are also in the process of launching cloud edge solutions and exploring new needs.
Not only HSDCs but also edge data centers are potential investment targets.
We hope the data center trends outlined above will help you forecast future investments.