How Cloud Technologies Affect Data Centers



    When it comes to changes in the field of cloud computing and data centers, the trends speak for themselves.

    We at 1cloud are working on our own cloud service and simply cannot ignore the changes taking place around us. The volume of cloud usage continues to grow: according to Gartner, in 2016 cloud resources will account for a significant share of all IT budget spending.

    Moreover, as Cisco notes, by 2018, 78% of all workloads will be handled by cloud data centers, and 59% of the total cloud load will fall on SaaS. Today it is impossible to imagine the operation of a large bank or telecom operator without processing huge volumes of data that must be stored, processed, and transmitted. All of this is driving significant and rapid changes in how data centers work.

    Virtualization can be considered the starting point of this new era: it became a key factor in improving the efficiency of hardware utilization. As it soon became clear, almost everything can be virtualized: servers, storage systems, telephony, and mail services.

    The first organizations to move from understanding to concrete action were fast-growing Internet companies, which at the time had to cope with explosive growth of their customer bases and meet exacting infrastructure requirements.

    Then came the first cloud service providers, whose products were virtualized servers and software available for rent. The third wave of virtualization came from the corporate market, for which affordable and proven solutions had already appeared by then.

    As a result, new technological approaches to organizing data center IT infrastructure began to develop. The first worth mentioning is the emergence of management and orchestration systems for infrastructure services, which turn a set of virtualized servers into platforms for delivering cloud services such as SaaS, PaaS, and IaaS. All of this noticeably increased the utilization of servers and network connections in the data center.

    However, the main driving force behind the data center market is the converged infrastructure (CI) model. Strictly speaking, it should be called not so much a driving force as a probable form factor of future data centers. At its core, converged infrastructure is a way of consolidating various IT components into a single optimized computing solution. In general, it includes servers, network equipment, storage systems, and the necessary management software.

    CI enables a business to centralize IT resource management, consolidate systems, and reduce costs. These goals are achieved by creating a pool of compute nodes and other resources shared among applications. Where previously each service or application implied a dedicated computing resource, the converged model optimizes their use.
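
    To make the pooling idea more concrete, below is a minimal sketch of how applications might draw CPU and RAM from a shared pool of nodes instead of each owning a dedicated server. The node sizes, application names, and placement logic are illustrative assumptions, not a description of any particular product.

from dataclasses import dataclass

@dataclass
class Node:
    # A physical host contributing capacity to the shared pool.
    name: str
    free_cpu: int  # vCPUs still available
    free_ram: int  # GiB of RAM still available

class ResourcePool:
    def __init__(self, nodes):
        self.nodes = nodes

    def allocate(self, app, cpu, ram):
        # Place the application on the first node with enough spare capacity.
        for node in self.nodes:
            if node.free_cpu >= cpu and node.free_ram >= ram:
                node.free_cpu -= cpu
                node.free_ram -= ram
                return f"{app} -> {node.name}"
        return f"{app} -> no capacity left in the pool"

# Three nodes serve many applications instead of one dedicated server per application.
pool = ResourcePool([Node("node-1", 32, 128), Node("node-2", 32, 128), Node("node-3", 32, 128)])
for app, cpu, ram in [("billing", 8, 32), ("crm", 4, 16), ("analytics", 16, 64)]:
    print(pool.allocate(app, cpu, ram))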

    Flash memory technologies can be considered the catalyst for the development of converged infrastructure.

    A natural reaction of data center architects to the appearance of numerous non-volatile memory technologies (whose read speeds are many times higher than those of disk drives) was to build clusters of flash memory chips managed by a controller chip that emulates a disk controller. SSDs have been replacing hard drives in critical locations of data centers for several years now.



    In the server segment, SSDs began to displace small, low-latency internal SAS drives. Elsewhere, large flash pools have become an alternative to, or a front end for, large high-capacity disk arrays. There are also periodic proposals to replace "cold" data storage (slow, high-capacity disks holding rarely used data) with flash arrays.
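
    To make the hot/cold distinction concrete, here is a purely illustrative sketch of a simple tiering policy: frequently read data stays on flash, rarely read data goes to high-capacity disks. The access threshold and object names are arbitrary example values, not parameters of any real storage system.

# Hypothetical cutoff between "hot" and "cold" data, chosen only for illustration.
ACCESSES_PER_DAY_THRESHOLD = 10

def choose_tier(accesses_per_day: int) -> str:
    # Frequently read objects go to flash; rarely read objects go to slow, high-capacity disks.
    return "flash (SSD)" if accesses_per_day >= ACCESSES_PER_DAY_THRESHOLD else "cold storage (HDD array)"

# Example objects with invented access rates.
objects = {"user-sessions": 5000, "billing-db": 300, "monthly-report-2014": 2}
for name, reads_per_day in objects.items():
    print(f"{name}: {reads_per_day} reads/day -> {choose_tier(reads_per_day)}")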

    It should also be understood that the cloud has a serious impact on network solutions. Cloud technologies have increased many companies' dependence on data centers, so modern data centers must operate without service failures or interruptions, providing availability close to 100%.
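
    To give a feel for what "close to 100%" means in practice, the short calculation below converts an availability percentage into the downtime it allows per year. The availability levels listed are common illustrative values, not the SLA of any specific provider.

MINUTES_PER_YEAR = 365 * 24 * 60

# Convert an availability percentage into the yearly downtime budget it implies.
for availability in (99.0, 99.9, 99.95, 99.99):
    downtime_minutes = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% availability -> about {downtime_minutes:.0f} minutes of downtime per year")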

    According to Statista.com, just a couple of years ago, in 2012, annual data center traffic was 1 exabyte, and by 2015 it approached 3 exabytes. Experts expect data center traffic to reach 8.6 exabytes per year in 2019.



    In this regard, data center architects need to be able to predict load correctly at the construction stage, and this is a completely non-trivial task. Infrastructure that is oversized from the start will be unprofitable to operate and will simply warm the air in the data center.
     
    Architects must also understand that working with converged systems and multi-tenant platforms imposes specific requirements on power and cooling, which must be easily scalable and cost-effective. It is the need to maintain low temperatures that drives global companies to open data centers in the most unusual places.

    The Swedish provider Bahnhof AB built a data center in Stockholm in a former bunker at a depth of 30 meters. The space cut into the rock was kept as intact as possible during reconstruction, although it was slightly expanded to fit an office.



    The Ice Cube project, located next to the Amundsen-Scott polar station not far from the South Pole, can rival it. This data center is considered the southernmost in the world and processes the large volumes of data generated by the station's research instruments.

    Natural features are often used to achieve the maximum cooling effect in data center operation. For example, the Green Mountain data center, located in a Norwegian fjord, is cooled by air flows generated in narrow passages between the mountains. This approach is environmentally friendly and significantly reduces the energy consumed by cooling plants.

    In addition to underground data centers, there are other unusual solutions. American entrepreneurs have decided to launch a network of floating data centers. Arnold Magcale and Daniel Kekai believe that such data centers will be better protected from natural disasters such as earthquakes; moreover, in an unforeseen situation they can be moved from place to place. The startup spent six years building a test data center, and the project included patenting a cooling system that uses the water the data center floats on.

    These unusual solutions, in which the forces of nature help data centers run, provide unprecedented flexibility and scalability, since system capacity can be increased without additional investment in purchasing or upgrading cooling systems.

    In conclusion, I want to note that all the changes taking place in infrastructure are interconnected, and a breakthrough in one area, for example storage devices, very often triggers a chain of other developments. Ultimately, all of these efforts are aimed at improving the competitiveness and financial performance of data centers that provide cloud services to end users, allowing them to upgrade without stopping the service and to implement optimal solutions.

