"Program" data center

Hello everyone. Today we would like to discuss the technologies and tools that make the network part of a data center software-defined.

First of all, when building an SDDC, you should think about macro-virtualization technology. It implies a working tandem of a virtual machine (VM) management system, a DCIM (Data Center Infrastructure Management) system, and SNMP (Simple Network Management Protocol) adapters. Using the adapters, DCIM collects and aggregates information on the state of the data center's engineering infrastructure and on the availability of free rack space. The system gives the most complete picture of what is happening in the provider's data center "here and now." Adding information from the VM management system to the DCIM data makes it possible to identify trouble spots in the data center (temperatures exceeding threshold values, insufficient power supply, etc.) and shift compute load to a particular physical zone of the data center (or to another data center). The company can migrate heavily loaded virtual machines closer to the appropriate elements of the engineering infrastructure, so that the zones where "server passions" run hottest stay the coolest.
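
As a rough illustration of this feedback loop, here is a minimal sketch in Python. It is only a sketch: the temperature threshold, the sensor OID, the SNMP community, and the migration step are all assumptions for illustration, not references to any particular DCIM or VM-management product (which expose their own interfaces).

```python
# A conceptual sketch of the macro-virtualization loop: DCIM-style SNMP
# polling of rack sensors feeds a decision to migrate VMs out of
# overheating zones. Threshold, OID, and hosts below are hypothetical.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

TEMP_LIMIT_C = 35.0                         # assumed per-rack threshold
RACK_SENSOR_OID = "1.3.6.1.4.1.99999.1.1"   # hypothetical sensor OID


def rack_temperature(host: str) -> float:
    """Poll one rack's temperature sensor over SNMP v2c."""
    error, status, _, var_binds = next(getCmd(
        SnmpEngine(), CommunityData("public"),
        UdpTransportTarget((host, 161)), ContextData(),
        ObjectType(ObjectIdentity(RACK_SENSOR_OID))))
    if error or status:
        raise RuntimeError(f"SNMP poll of {host} failed")
    return float(var_binds[0][1])  # sensor assumed to report degrees C


def rebalance(racks: dict[str, list[str]]) -> None:
    """racks maps an SNMP host (one per rack) to the VMs hosted there."""
    for host, vms in racks.items():
        if rack_temperature(host) > TEMP_LIMIT_C:
            for vm in vms:
                # Placeholder for a real VM-manager call (vMotion etc.).
                print(f"migrating {vm} away from overheating rack {host}")
```

In a real deployment the DCIM system does the polling and aggregation, and the VM management system performs the migration; the sketch only shows how the two data sources combine into one decision.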

Thus, the provider's physical data centers, combined through macro-virtualization technology, are transformed into a single "ecosystem" with, among other things, increased reliability. The proverb "you cannot break a broom, but you can break its twigs one by one" is appropriate here: even if the engineering infrastructure of each individual data center is only moderately reliable, together they provide full mutual redundancy.


Naturally, macro-virtualization alone is not enough for an SDDC. The concept of a software-defined data center embodies a steady trend: reducing the physical complexity of the infrastructure and transferring that complexity into the virtual, software layer. One manifestation of this approach is the idea of Network Function Virtualization (NFV). There are now both open-source and commercial virtual implementations of network devices: switches, routers, load balancers, firewalls, and so on. In the foreseeable future we can expect that only servers (doubling as distributed storage systems) and high-performance switches with a limited set of functions will remain in the data center, and such an infrastructure will satisfy the vast majority of the requirements of modern software.

Why is this trend steady, and what are the benefits of NFV? Here is how executives of 220 companies from various sectors of the economy, polled by the SDNCentral portal, answer this question (see Fig. 1).


Fig. 1. Advantages of NFV (source: SDNCentral Report on Network Virtualization 2014)

The survey data show that flexibility leads by almost a three-fold margin: a virtual device is much easier to procure, scale, and so on than a physical one. The indicators of capital and operating costs, although noted by the respondents, are not in the leading positions.

Looking at the survey results in more detail, the following advantages can be noted:
  • the ability to create a personal network environment for each application or subscriber in the cloud infrastructure;
  • a smaller range of device types;
  • lower costs of maintaining and operating the IT infrastructure: virtual network devices "live" in a virtualized environment on standard, uniform servers, so their maintenance does not require a large stock of spare parts or separate rack space;
  • shorter delivery times: NFV means software and licenses, which are much simpler to deliver than hardware;
  • ease of replication and scaling, both of which are straightforward to automate;
  • fast and easy recovery: backups of virtual network device configurations, or simply images of their virtual machines, can be restored within minutes on other servers.

Thanks to NFV, a provider can furnish a subscriber with the required number and range of network devices in a matter of minutes, and the subscriber can then configure them independently to meet their own requirements. Incidentally, full control over the settings of rented network devices is what qualitatively distinguishes this service from the option where the provider configures the network: the latter gives the customer no control over the settings, greatly complicates network operation for the operator, and as a result costs everyone far more.

Wherever physical and virtual networks meet, the problem of managing and configuring such a "hybrid" network immediately arises. Traditional network management methods, as a rule, fall short both in functionality and in the range of equipment they support, so the IT industry began looking for "workarounds." Indeed, instead of reconfiguring the physical network for each individual case, why not leave it alone, configuring it once and demanding only one thing from it: reliability? The result was a new type of network, overlay networks, that is, logical networks created on top of physical ones. From a networking point of view this had been done before; the well-known IEEE 802.1Q standard (VLAN) fits this definition. The difference is that overlay protocols work over routed networks and provide far more labels for identifying networks (typically around 16 million, thanks to a 24-bit identifier). In general, an overlay network is built using software or hardware switches (gateways) and tunneling protocols (VXLAN, NVGRE, STT, GENEVE).
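
To make the encapsulation idea concrete, here is a minimal sketch of how VXLAN wraps an ordinary Ethernet frame in UDP. Python with the Scapy library is an assumed tool choice on our part, and all addresses and the VNI value are invented examples; the article itself does not prescribe any implementation.

```python
# A minimal sketch of VXLAN encapsulation using Scapy (assumed tooling;
# all addresses and the VNI below are hypothetical examples).
from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# The original ("inner") frame between two virtual machines.
inner = Ether(src="52:54:00:aa:bb:01", dst="52:54:00:aa:bb:02") / \
        IP(src="10.0.0.1", dst="10.0.0.2")

# Encapsulation: the inner frame travels as a UDP payload between the
# tunnel endpoints; the 24-bit VNI (here 5001) selects one of roughly
# 16 million logical networks. 4789 is the standard VXLAN UDP port.
outer = Ether() / \
        IP(src="192.168.1.10", dst="192.168.1.20") / \
        UDP(sport=49152, dport=4789) / \
        VXLAN(vni=5001) / \
        inner

outer.show()  # prints the layered structure of the encapsulated frame
```

The outer headers are plain Ethernet, IP, and UDP, which is exactly why the underlying physical network needs no changes: it simply routes UDP datagrams between tunnel endpoints.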

Thanks to overlay networks, a virtual environment administrator can set up tunnels between virtual machines without having to configure any physical switches. An overlay network makes it possible to deliver the services applications need on top of any reliable network infrastructure. Its advantages:
  • connectivity between virtual machines in different network segments and even in different data centers;
  • improved network stability, since a routed network can serve as the transport;
  • the ability to operate within a virtual computing infrastructure and to interact with NFV devices;
  • full compatibility with the existing network infrastructure, since all overlay protocols are based on encapsulation and use the standard Ethernet frame format.

The technology's drawbacks include its reliance on multicast (the physical network infrastructure must support it) and increased server utilization caused by the encapsulation and decapsulation of overlay traffic.

Another building block of a software-defined network environment is Software-Defined Networking (SDN). Conceived to extend the NFV approach to the high-performance network switches that tie servers into a single infrastructure, SDN keeps winning new supporters.


Fig. 2. Software Defined Network Architecture

The main idea of SDN is the separation of the control and forwarding functions of the network infrastructure. All the "intelligence" is concentrated on a separate hardware and software platform, a dedicated SDN controller, which determines how the network operates based on the rules it is given. The switches, meanwhile, perform only elementary actions on packets and shed most of their intellectual functions. Traffic management, that is, the interaction between the controller and the switches, is carried out using special protocols (the most promising and actively developed being OpenFlow) that operate on the concept of a "flow." Through them, various actions are applied to traffic: blocking, allowing, redirecting, and so on. SDN provides flexible network management and greatly simplifies administration.
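
To make the "flow" concept more tangible, here is a minimal controller sketch written against the open-source Ryu framework (an assumed choice; the article names no specific controller, and the port numbers below are hypothetical). When a switch connects, the controller installs a single rule redirecting all traffic from one port to another:

```python
# A minimal OpenFlow 1.3 controller sketch using the Ryu framework
# (assumed tooling; port numbers below are hypothetical).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class RedirectApp(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath                     # the switch that connected
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Flow rule: anything arriving on port 1 is redirected to port 2.
        # The switch merely executes this action; the decision lives here,
        # in the controller.
        match = parser.OFPMatch(in_port=1)
        actions = [parser.OFPActionOutput(2)]
        inst = [parser.OFPInstructionActions(
            ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=10, match=match, instructions=inst))
```

Such an application is launched with the standard ryu-manager tool; the switch itself holds no logic beyond the rules the controller pushes to it.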

But the main "highlight" of SDN lies elsewhere: the SDN controller should offer means of integration with orchestration systems and, in the future, with applications themselves. This will make it possible to manage network resources based on requests coming from information systems. For example, the network could dynamically allocate wider bandwidth for the duration of a video conferencing session and then redistribute it in favor of other applications. The understanding that the network carries not just packets and flows but applications is one of the key features of SDN, and it is what secures its future in the corporate world. And it is the centralized SDN architecture that makes this task feasible.
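
As a purely hypothetical sketch of such a northbound integration (the controller address, URL path, and JSON schema below are invented for illustration; no real controller API is implied), an orchestrator might request extra bandwidth for a session like this:

```python
# Hypothetical northbound REST call: an orchestrator asks the SDN
# controller to guarantee bandwidth for a video-conferencing flow for
# one hour. The endpoint and JSON fields are invented for illustration.
import requests

CONTROLLER = "https://sdn-controller.example.com"  # hypothetical address

reservation = {
    "src": "10.0.0.15",        # conferencing server (example address)
    "dst": "10.0.0.87",        # meeting-room endpoint (example address)
    "min_bandwidth_mbps": 50,  # guaranteed floor for the session
    "duration_seconds": 3600,  # released automatically afterwards
}

resp = requests.post(f"{CONTROLLER}/api/bandwidth-reservations",
                     json=reservation, timeout=10)
resp.raise_for_status()
print("reservation id:", resp.json().get("id"))
```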

Speed bumps


SDN, NFV, DCIM, and related technologies interest many Russian IT service providers, yet there are still no full-fledged SDDC implementations in our country. There are several fundamental reasons for this.

To begin with, there are no ready-made solutions on the market that integrate a DCIM system with virtual machine management software. A company will have to solve this problem on its own, relying on its IT specialists or on a partner system integrator's team. Choosing the DCIM system itself also presents certain difficulties. This software is currently offered by two groups of manufacturers. The first comprises vendors that have historically specialized in solutions for data center engineering infrastructure. When building a system for managing a data center's physical assets, they work "from the bottom up," starting with the collection of detailed data on the state of the engineering components.

The second group consists of manufacturers developing solutions for integrated IT infrastructure management. Such systems offer broad functionality and are intended for detailed inventories, planning equipment placement in the data center, forecasting, operational monitoring of power consumption, and so on; that is, they approach the problem "from the top down." Which solution to choose depends on the specific situation: the company must decide which system suits its infrastructure and development plans, run an RFI, define KPIs, draw up a short list, and carry out trial testing. All this translates into significant time and labor costs, so it is no surprise that many providers shelve the task, postponing the transition to a software-defined environment.

As for NFV, SDN, and overlay networks, their adoption is held back by their novelty and by the lack of complete, ready-to-deploy solutions. Company data centers are inhabited by familiar, long-established network hardware whose pros and cons IT specialists know well, unlike the behavior of virtualized network devices. A paradigm shift requires additional investment, while companies have already spent money building a traditional network. Nor should one count too heavily on open-source SDN controllers: for the most part, open-source SDN software is a "blank" that still needs to be finished, above all by programmers, which not every company can afford. Market expectations of "cheap" SDN switches have not yet materialized either: the TCAM (Ternary Content-Addressable Memory) needed to support the required number of flows is an expensive component and drives up the price. In addition, vendors, for understandable reasons, produce solutions that are far from "open": no serious manufacturer will miss the chance to tie a customer down with proprietary extensions.

On the other hand, hardware solutions will sooner or later have to be replaced because of physical wear and obsolescence, so when planning the further development of a company's IT infrastructure it is worth taking into account the prospect of expanding its virtual layer. It makes sense to prepare for this process in advance by testing various components and implementation options.
