Real innovations: what to expect from the data center market in 2019?
Data center construction is considered one of the fastest-growing industries. Progress in this area is enormous, but whether any breakthrough technologies will reach the market in the near future is an open question. In this article we review the main innovative trends in data center construction worldwide and try to answer it.
A course toward hyperscale
The development of information technology has created demand for very large data centers. Hyperscale infrastructure is needed mostly by cloud service providers and social networks: Amazon, Microsoft, IBM, Google and other major players. In April 2017 there were 320 such data centers in the world, and by December there were already 390. According to forecasts by Synergy Research, the number of hyperscale data centers should grow to 500 by 2020. Most of them are located in the United States, and this trend continues despite the rapid pace of construction in the Asia-Pacific region noted by analysts at Cisco Systems.
All hyperscale data centers are corporate facilities and do not rent out rack space. They are used to build public clouds, Internet of Things and artificial intelligence services, and to serve other niches that require processing huge volumes of data. Their owners actively experiment with higher power density per rack, bare-bones (unpackaged) servers, liquid cooling, raised room temperatures and a variety of specialized solutions. Given the growing popularity of cloud services, hyperscale will remain the main driver of industry growth for the foreseeable future: here we can expect interesting technological solutions from leading manufacturers of IT equipment and engineering systems.
Another noticeable trend is the exact opposite: in recent years a huge number of micro data centers have been built. Research and Markets forecasts that this market will grow from $2 billion in 2017 to $8 billion by 2022, driven by the Internet of Things and the Industrial Internet of Things. Large data centers are too far away from field automation systems, and they handle tasks that do not require the readings of every one of millions of sensors. Primary data processing is best done where the data is generated, with only the useful information then sent over long routes to the cloud. A special term was coined for this phenomenon: edge computing. In our opinion, this is the second most important trend in data center construction that is bringing innovative products to the market.
The battle for PUE
Large data centers consume a huge amount of electricity and generate heat that must somehow be removed. Conventional cooling systems account for up to 40% of a facility's energy consumption, and refrigeration compressors are considered the main enemy in the fight to reduce energy costs. Solutions based on so-called free cooling, which allow compressors to be switched off completely or partially, are gaining popularity. In the classical scheme, chiller systems use water or aqueous solutions of polyhydric alcohols (glycols) as the heat-transfer medium. In the cold season the chiller's compressor-condenser unit does not switch on, which significantly reduces energy consumption. More interesting solutions are based on a dual-circuit air-to-air scheme with rotary heat exchangers, with or without an adiabatic cooling section. Experiments are also being conducted with direct cooling by outside air, but these solutions are hardly innovative: like classical systems, they rely on air cooling of IT equipment, and the practical efficiency limit of this approach has already been reached.
A further reduction in PUE (the ratio of a facility's total energy consumption to the energy consumption of its IT equipment) will come from the growing popularity of liquid cooling. It is worth recalling Microsoft's project to create modular underwater data centers, as well as Google's concept of floating data centers. The ideas of the technology giants are still far from industrial implementation, but less fantastic liquid cooling systems are already working at facilities ranging from Top500 supercomputers to micro data centers.
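To make the metric concrete, PUE follows directly from the definition above. A minimal sketch in Python; the power figures are hypothetical, chosen only to illustrate the arithmetic:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    An ideal facility, where every watt goes to IT equipment, would score 1.0;
    cooling, lighting and power-conversion losses push the value higher.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical site: 1,000 kW of IT load plus 300 kW of cooling and overhead.
print(round(pue(1300.0, 1000.0), 2))  # 1.3

# The same IT load with only 30 kW of overhead gives the "fantastic"
# figure cited for the best liquid-cooled systems.
print(round(pue(1030.0, 1000.0), 2))  # 1.03
```

Note that lowering PUE only reduces overhead relative to the IT load; it says nothing about how efficiently the IT equipment itself uses energy.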
With contact (direct-to-chip) cooling, special heat sinks through which liquid circulates are installed on the equipment. Immersion cooling systems use a dielectric working fluid (usually mineral oil) and are built either as a common sealed tank or as individual enclosures for compute modules. At first glance, two-phase (boiling) systems look similar to immersion ones: they also bring a dielectric fluid into contact with the electronics. The fundamental difference is that the working fluid begins to boil at around 34 °C (or slightly higher). As we know from physics, boiling absorbs energy and the temperature stops rising; with further heating the liquid evaporates, i.e., a phase transition occurs. In the upper part of the sealed enclosure the vapor contacts a condenser, condenses, and the droplets return to the common tank. Liquid cooling systems achieve fantastic PUE values (around 1.03) but require serious modifications to computing hardware and cooperation between manufacturers. Today they are considered the most innovative and promising approach.
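The appeal of the phase transition is how much heat it soaks up per gram of fluid. A rough back-of-the-envelope sketch: the latent-heat figure below is an assumption in the range typical of engineered dielectric coolants that boil near 34 °C, not a value from the article, so check the datasheet of any real fluid before reusing these numbers.

```python
# Assumed latent heat of vaporization, J/kg, for a hypothetical dielectric
# coolant boiling near 34 degrees C. Real fluids differ; this is only an
# order-of-magnitude placeholder for the calculation.
LATENT_HEAT_J_PER_KG = 140_000


def evaporation_rate_kg_s(heat_load_w: float) -> float:
    """Mass of coolant that must boil off per second to carry away
    heat_load_w purely via the phase transition (sensible heating ignored,
    since the boiling fluid stays at a constant temperature)."""
    return heat_load_w / LATENT_HEAT_J_PER_KG


# A 1 kW server module boils off only a few grams of coolant per second;
# the condenser at the top of the sealed tank returns it as droplets.
print(f"{evaporation_rate_kg_s(1000) * 1000:.1f} g/s")
```

This constant-temperature behavior is exactly why two-phase systems can hold electronics at the boiling point regardless of load fluctuations, unlike single-phase immersion where the oil temperature rises with the load.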
Many interesting technological approaches have been applied to building modern data centers. Manufacturers offer integrated hyperconverged solutions, software-defined networks are being built, and data centers themselves are becoming software-defined. To increase facility efficiency, operators install not only innovative cooling systems but also DCIM-class hardware and software, which optimizes the operation of the engineering infrastructure based on data from many sensors. Some innovations have not lived up to expectations: modular container solutions, for example, have not replaced traditional data centers built of concrete or prefabricated metal structures, although they are actively used where computing power needs to be deployed quickly. At the same time, traditional data centers themselves are becoming modular, but on a completely different level. Progress in the industry is very fast, albeit without technological leaps: the innovations mentioned here first appeared on the market several years ago. In this sense 2019 will be no exception and will bring no obvious breakthroughs. In the digital age, even the most fantastic invention quickly becomes a common technical solution.