What we know about high-density servers

    RLX Technologies Server Blade

    Comments on some Habr posts made me wonder whether people really understand high-density servers and their capabilities. The purpose of this post is to bring some clarity to the issue. It is also planned as the first in a series of articles on HPC (high-performance computing).

    High-density servers are most in demand for building cluster-type supercomputers, virtualization and cloud platforms, parallel-access storage systems, analytical computing systems, search engines, and so on. They are used primarily where no other technology can meet all the requirements. Let's look at the available solutions and their pros and cons.

    Blade Server


    In the West, data center space has long been in short supply, so it is not surprising that high-density servers first appeared there. The pioneer was RLX Technologies, which in 2000 developed a system that fit 24 blades into 3U. The main customers for these first blade servers were the military and NASA. The startup was later acquired by HP, but the most important thing had already been done: the high-density server was born.

    The pioneers were followed by the giants: Intel, IBM, and HP, and then DELL, SUN, Supermicro, Fujitsu, Cisco, HDS, and others.

    Apart from high density, the main difference between blade systems and rack servers is the integration of the servers with the supporting infrastructure: networking, monitoring, management, cooling, and power. All of this sits in one enclosure and, where possible, includes fault-tolerance features. The unifying element is the backplane, usually a passive board, to which all components of the blade system connect. Chassis height varies from 3U to 10U. The highest-density solutions are HP Blade and DELL PowerEdge, at 3.2 servers per 1U. Almost all manufacturers build their blades only on processors of the x86/x64 family, although solutions based on RISC, MIPS, and ARM processors also exist.

    By the way, server density in the RLX Technologies solution was even higher. That was possible because it used single-core Celeron processors, which today are found mainly in desktop thin clients. The heat dissipation of modern processors is much higher, and it is precisely this that prevents today's solutions from increasing density further.
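    As a quick sanity check on those density figures, here is a minimal sketch that computes servers per rack unit. The RLX figures (24 blades in 3U) are from this article; the assumption that the densest modern example, the Dell PowerEdge M1000e with M420 blades, is a 10U chassis holding 32 blades is mine, taken from the standard spec:

```python
# Servers per rack unit, using the figures quoted in the text.
systems = {
    "RLX (2000, Celeron blades)": (24, 3),   # (servers, rack units)
    "Dell M1000e with M420":      (32, 10),  # assumed 10U / 32 blades
}

for name, (servers, units) in systems.items():
    print(f"{name}: {servers / units:.1f} servers per 1U")

# RLX (2000, Celeron blades): 8.0 servers per 1U
# Dell M1000e with M420: 3.2 servers per 1U
```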

    What are the benefits of blade servers? Let's highlight the main points:
    1. Everything is housed in one chassis.
    2. The monitoring and management system offers more advanced features than rack servers provide.
    3. Several types of networks are available in each blade server: Ethernet (100 Mb/s, 1 Gb/s, 10 Gb/s), Fibre Channel (2, 4, 8, 16 Gb/s), InfiniBand (SDR, DDR, QDR, FDR).
    4. The built-in cooling and power components include fault-tolerance features.
    5. Hot-swappable replacement parts.
    6. The ability to organize a shared disk storage system for all installed blades.
    7. High density per cabinet.

    What are the weaknesses? The main disadvantages that I consider significant:
    1. The high price of a partially populated chassis. Only at around 70% occupancy does the price approach that of comparable rack servers.
    2. Limited options for expanding a blade server's configuration.
    3. A server blade cannot be used as a standalone unit outside the chassis.
    4. Not all network interfaces can be used at the same time.
    5. In some cases, it is impossible to build a non-blocking network between the blade servers and the outside world.
    6. Components are restricted by thermal envelope (for example, the most top-end processors cannot be installed because of overheating).
    7. Proprietary technology: once you buy equipment from one manufacturer, you will keep buying only from them.
    8. Increased demands on engineering infrastructure (power and cooling).

    Let's consider the structure of a blade system using a solution from Dell as an example: the Dell PowerEdge M1000e.

    Dell PowerEdge M1000e High Density Server

    Blade servers can have from two to four processors. Depending on the number and type of processors, from 8 to 32 blades can be installed in a single chassis. Each blade can carry 1GbE, 10GbE, 8 Gb/s FC, or InfiniBand (SDR, DDR, QDR, FDR) interfaces; 1GbE ports are fitted as standard.

    Depending on the blade form factor, one or two mezzanine interface modules can be installed. Each mezzanine module can carry four 1GbE ports or two ports of any other interface.

    To build a fault-tolerant design, switches are installed in the chassis in pairs. Up to three pairs can be installed, and each pair must consist of identical switches. Accordingly, various combinations are possible (a sketch illustrating this constraint follows the list):
    • First pair (A): 1GbE;
    • Second pair (B): 1GbE, 10GbE, 8 Gb/s FC, IB (SDR, DDR, QDR, FDR);
    • Third pair (C): 1GbE, 10GbE, 8 Gb/s FC, IB (SDR, DDR, QDR, FDR).
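    As a minimal sketch of that rule (the names and data layout are illustrative assumptions of mine, not anything from Dell), a check that each switch pair is populated identically and with an interface type allowed for its fabric might look like this:

```python
# Hypothetical model of the fabric rules described above: three switch
# pairs (A, B, C); pair A is 1GbE only; pairs B and C may be 1GbE,
# 10GbE, 8 Gb/s FC, or InfiniBand; both slots of a pair must match.

ALLOWED = {
    "A": {"1GbE"},
    "B": {"1GbE", "10GbE", "8Gb FC", "IB"},
    "C": {"1GbE", "10GbE", "8Gb FC", "IB"},
}

def validate(chassis: dict[str, tuple[str, str]]) -> list[str]:
    """Return a list of rule violations for a proposed switch layout."""
    errors = []
    for fabric, (slot1, slot2) in chassis.items():
        if slot1 != slot2:
            errors.append(f"fabric {fabric}: switches differ ({slot1} vs {slot2})")
        if slot1 not in ALLOWED[fabric]:
            errors.append(f"fabric {fabric}: {slot1} is not allowed here")
    return errors

# Valid: 1GbE on A, Fibre Channel on B, InfiniBand on C.
print(validate({"A": ("1GbE", "1GbE"),
                "B": ("8Gb FC", "8Gb FC"),
                "C": ("IB", "IB")}))      # -> []

# Invalid: pair A must be 1GbE, and pair B's switches must match.
print(validate({"A": ("10GbE", "10GbE"),
                "B": ("8Gb FC", "IB"),
                "C": ("IB", "IB")}))
```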

    Also for fault tolerance, two remote monitoring and management modules are installed. These modules allow you to control any blade remotely: powering it on, changing BIOS settings, selecting a boot source, installing an OS from internal media or from the administrator's local media, right up to full remote KVM access.
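    For remote power control of this kind, a standard tool such as ipmitool can usually be used. The sketch below is only an illustration: the management address and credentials are placeholders, and whether your particular management module speaks IPMI over LAN is an assumption to verify.

```python
# Minimal sketch: query and change a server's power state over IPMI
# using the standard ipmitool CLI. Host, user, and password are
# placeholders, not real values.
import subprocess

BMC = ["ipmitool", "-I", "lanplus",
       "-H", "192.0.2.10",      # management module address (placeholder)
       "-U", "admin", "-P", "password"]

def power(action: str) -> str:
    """Run 'chassis power <action>' (status, on, off, cycle)."""
    result = subprocess.run(BMC + ["chassis", "power", action],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()

print(power("status"))   # e.g. "Chassis Power is on"
```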

    One of the boot options is booting from an SD card. Two such cards can be installed in a blade, with the ability to boot from either; they can also be combined into a mirror.

    The only module without redundancy is the KVM module. However, its failure does not take away the ability to connect and manage over the network.

    With M420 blades, the density reaches 3.2 servers per 1U.

    M420 Server Blades

    TWIN server


    An alternative to blade systems in density terms is their younger sibling, the TWIN server. The technology was developed by Intel and handed over to Supermicro in 2006 for market promotion. The first TWIN servers appeared in 2007: a 1U enclosure holding two servers with a single power supply, with all the connectors brought out at the rear of the servers.

    An alternative to blade systems in density terms: their younger sibling, the TWIN

    Over the past six years this layout has won recognition, and the range has expanded considerably. TWIN servers are now available in 1U, 2U, and 4U variants that take from 2 to 8 dual-socket servers. Some manufacturers also offer versions in which one four-socket server replaces a pair of two-socket ones. The main pros and cons are listed below.

    Pros of TWIN servers:
    1. Everything is housed in one chassis.
    2. Several types of networks are available in each server: Ethernet (100 Mb/s, 1 Gb/s, 10 Gb/s), InfiniBand (SDR, DDR, QDR, FDR).
    3. In some models, the built-in cooling and power components include fault-tolerance features.
    4. A number of TWIN servers offer hot swapping of all plug-in components.
    5. Standard PCIe expansion cards can be used.
    6. The ability to organize a shared disk storage system.
    7. High density per cabinet.
    8. The price is lower than that of comparable blade and rack servers.

    Minuses:
    1. External network switches are required.
    2. A server node cannot be used as a standalone unit outside the chassis.
    3. In some cases, components are restricted by thermal envelope (for example, the most top-end processors cannot be installed because of overheating).
    4. When a cabinet is fully packed with TWIN servers, the demands on engineering infrastructure (power and cooling) increase.
    5. Server density is lower than that of blades.

    As the pros and cons show, TWIN servers and blade servers are not so much competitors as an organic complement to each other.

    Prominent representatives of TWIN servers are the Dell C6000 series. They are built as a 2U chassis with two power supplies and room for two, three, or four server modules. Two or three PCIe expansion cards can be installed in each server.

    The Dell C6000 series, a prominent representative of TWIN servers
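    A quick calculation with the figures just quoted shows why TWIN density trails blades, as minus 5 above noted (the blade figure is the 3.2 servers/1U quoted earlier for M420 blades):

```python
# Density check: a fully populated 2U Dell C6000 with four server
# modules versus the 3.2 servers/1U quoted for M420 blades.
c6000_nodes, c6000_units = 4, 2
print(f"C6000: {c6000_nodes / c6000_units:.1f} servers per 1U")  # 2.0
print("M420 blades: 3.2 servers per 1U")
```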

    Microserver


    Our story would not be complete without the latest trend in data center server design: the microserver. These are single-socket servers minimized for size and power consumption; do not expect serious performance from them. One representative of this server class is the Supermicro server shown in the figure.

    Microservers may be applicable as an alternative to virtualization

    As the figure shows, this solution already achieves a density of 4 servers per 1U. The emergence of this server class is driven by the modest server requirements of most client applications. Microservers can serve as an alternative to virtualization where, for one reason or another, an application is not recommended to be virtualized. They are also suitable for typical low-load office tasks.

    Conclusion


    I have tried not to go into the details of each individual manufacturer; those details can be studied directly on the manufacturers' websites.

    It is difficult to draw an unambiguous conclusion about which of the solutions described above best suits a particular customer. For example, the density of blade servers is much higher than that of TWIN servers, which lets you place more servers in a minimum of space. On the other hand, the latency of 10GbE modules in a blade chassis can be higher than that of 10GbE PCIe cards in TWIN servers, and the peak processor performance in high-density blades can be lower than in TWIN solutions. Arguing about the advantage of one high-density solution over another makes sense only in the context of a specific task. We are ready to share our opinion with anyone interested, taking their specific tasks into account.
