TOPICS & NEWS
2023.07.25
As cloud adoption accelerates, a so-called “return to on-premises” movement is reportedly emerging, in which in-house systems once moved out to the cloud are brought back on-premises.
Last year, the Rakuten Group decided on just such a return. It is expanding its private cloud “One Cloud” and consolidating the IT infrastructure used by the businesses of its group companies. In principle, many systems currently running on public clouds will be shifted to One Cloud. Beyond the cost efficiency gained by consolidating the entire group’s IT infrastructure onto a private cloud, the company plans to accumulate infrastructure know-how for stable operation and stronger security.
The private cloud will also serve as the foundation for the corporate IT services the group plans to launch, including eKYC identity verification, website access analytics, and electronic payment functions. These technologies were developed for use in the group’s own businesses, and preparations are underway to sell them externally as pay-as-you-go cloud services.
With the spread of cloud-first policies, opportunities to deploy on-premises servers would seem to be steadily shrinking for many companies. Yet the server market remains strong. At first glance this seems contradictory, so what lies behind it?
Background of the “return to on-premises”
The server market appears to have grown favorably in 2022, with a year-on-year increase of 10-20%.
Even in the early 2000s, when server virtualization began to gain popularity, it was said that server consolidation would hurt server sales. In reality, that was not the case: virtualization made servers easier to procure, which in turn spurred the deployment of more systems and drove demand for even more servers.
Today, with the tailwind of digital transformation (DX), IT investment is active, and uses of systems that were previously impractical are spreading. Given the rapidly growing server resources that the cloud itself requires, the expansion of the server market is only natural.
At the same time, one reason behind the recent return to on-premises servers among ordinary companies is that misconceptions about the cloud have been dispelled through actual use. In retrospect, the cloud attracted great expectations: resources available at extremely low cost, and lighter workloads from outsourcing operations.
In reality, however, there are many cases in which companies use the cloud without understanding its cost characteristics and are billed unexpectedly high charges.
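To make the cost mechanics concrete, the following is a minimal sketch of a monthly bill estimate. All unit prices here are illustrative assumptions for the example, not any provider’s actual rates; the point is that pay-per-use line items such as always-on compute and data egress add up quickly.

```python
# Hypothetical monthly cloud-bill estimator.
# All unit prices below are illustrative assumptions, not actual
# provider rates; substitute your provider's price sheet.

HOURS_PER_MONTH = 730

VM_PRICE_PER_HOUR = 0.20      # assumed on-demand rate per VM (USD)
EGRESS_PRICE_PER_GB = 0.10    # assumed data-transfer-out rate (USD)
STORAGE_PRICE_PER_GB = 0.025  # assumed block-storage rate (USD/month)

def monthly_cost(vm_count: int, egress_gb: float, storage_gb: float) -> float:
    """Estimate a month's bill for always-on VMs plus egress and storage."""
    compute = vm_count * VM_PRICE_PER_HOUR * HOURS_PER_MONTH
    egress = egress_gb * EGRESS_PRICE_PER_GB
    storage = storage_gb * STORAGE_PRICE_PER_GB
    return compute + egress + storage

# A modest setup left running 24/7 with heavy outbound traffic:
print(f"${monthly_cost(vm_count=10, egress_gb=20_000, storage_gb=5_000):,.2f}")
# -> $3,585.00: egress alone ($2,000) rivals the compute line item ($1,460).
```

Under these assumed rates, running the fleet around the clock instead of shutting it down off-hours, or overlooking outbound data transfer, is enough to make the bill diverge sharply from the initial estimate.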
In terms of operations, the burden of babysitting hardware is gone, but operating the system itself remains. Managing the cloud requires different knowledge from managing on-premises systems, and since many companies now run systems both on-premises and in the cloud, dual management is unavoidable. This is no small burden for busy IT departments.
There are also security issues. Legacy systems that handle highly confidential data cannot be run on public clouds, so they cannot be retired from on-premises. As a result, IT operations become more complex, increasing the operational management burden.
As understanding of these realities has spread, the pendulum is swinging back toward a style of coexistence with the cloud: systems once moved out to the cloud are being returned on-premises, starting with those judged a poor fit.
The return to on-premises is progressing right now. But companies that have experienced the cloud also know its advantages. Is conventional on-premises IT infrastructure really what such companies should aim for?
Two approaches to the current “return to on-premises”
There are currently two approaches to the return to on-premises. One is to leave the servers in the cloud and bring back on-premises only the data, the key asset in DX. The other is to bring the entire system back on-premises. The issue lies with the latter.
Clearly, the answer is not a 3-tier configuration (groups of servers and shared storage connected by a network fabric) designed around single points of failure (SPOF) with an emphasis on cost alone. That is not to say the 3-tier model is wrong for every case. Even with appropriate countermeasures in place, however, the larger the scale, the more complex it becomes. No one wants to return to a situation where every change review requires coordination among the server, storage, and network teams, stretching out lead times, and where issues such as mismatched hardware generations drive up replacement costs.
What companies should aim for instead is a cloud-like virtualization platform. From that standpoint, HCI (hyperconverged infrastructure), which realizes cloud-like system infrastructure, is currently attracting attention, and its reputation has improved greatly. HCI implements pre-validated server, storage, and network functions in software, packages them in a single box, and is delivered together with virtualization middleware, greatly simplifying the configuration of the IT infrastructure. Even without specialist knowledge, resources can be expanded simply by adding nodes, achieving scalability close to that of the cloud.
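To illustrate that scale-out model, here is a minimal sketch of capacity planning for an HCI cluster. The per-node specifications and the replication factor are assumptions for the example, not any vendor’s figures; the point is that capacity grows simply by adding identical nodes, with part of the raw storage consumed by data redundancy.

```python
# Minimal sketch of HCI scale-out capacity planning.
# Per-node specs and the replication factor are illustrative
# assumptions, not any particular vendor's figures.

from dataclasses import dataclass

@dataclass
class HciNode:
    cpu_cores: int = 32
    ram_gb: int = 512
    raw_storage_tb: float = 20.0

def cluster_capacity(nodes: int, replication_factor: int = 2) -> dict:
    """Aggregate usable resources for a cluster of identical nodes.

    Raw storage is divided by the replication factor because HCI
    software typically keeps redundant copies of data across nodes.
    """
    n = HciNode()
    return {
        "cpu_cores": nodes * n.cpu_cores,
        "ram_gb": nodes * n.ram_gb,
        "usable_storage_tb": nodes * n.raw_storage_tb / replication_factor,
    }

# Scaling out is just adding nodes; no separate SAN redesign is needed.
for size in (3, 4, 6):
    print(size, cluster_capacity(size))
# 3 {'cpu_cores': 96,  'ram_gb': 1536, 'usable_storage_tb': 30.0}
# 4 {'cpu_cores': 128, 'ram_gb': 2048, 'usable_storage_tb': 40.0}
# 6 {'cpu_cores': 192, 'ram_gb': 3072, 'usable_storage_tb': 60.0}
```

Contrast this with a 3-tier design, where growing the cluster means separately re-sizing servers, shared storage, and the network fabric; in the HCI model the unit of growth is the node itself.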
In contrast to the 3-tier configuration, whose complexity grows with scale, HCI maintains a simple configuration and is likely to keep attracting attention.