NVMe over Fabrics, Fibre Channel and others

    The impending death of Fibre Channel is talked about only slightly less often than the death of tape drives. Back when speeds were limited to 4 Gbps, the then-new iSCSI was already expected to replace FC (even if a sane budget only stretched to 1 Gbps, with 10 Gbps "just around the corner"). Time passed, 10 Gbit Ethernet remained too expensive a pleasure and could not deliver low latency either. iSCSI has become fairly widespread as a protocol between servers and storage systems, but it never managed to displace FC completely.

    Recent years have shown that Fibre Channel infrastructure continues to develop rapidly: interface speeds keep growing, and talk of its imminent demise is clearly premature. In the spring of this year (2016), the Gen 6 standard was announced, doubling the maximum speed from 16GFC to 32GFC. Beyond the traditional performance increase, the technology gained a number of other innovations.

    The standard allows four FC lanes to be combined into a single 128GFC channel for connecting switches to each other over a high-speed ISL link. Forward Error Correction (FEC) was already available as an option in fifth-generation FC products, but in Gen 6 its support became mandatory. At such high speeds, not only does the probability of errors grow (the BER for Gen 6 is 10⁻⁶), but the impact of errors on performance grows even more because of the need to resend frames. FEC allows the receiving side to correct errors without repeated requests to retransmit the frame, which results in a more "even" data transfer rate. Energy efficiency has not been neglected either: to reduce power consumption, copper ports can be disabled entirely, and optical ports can cut power by up to 60%.
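
    To make the idea of "correct on the receiving side without a resend" concrete, here is a toy sketch of forward error correction using a classic Hamming(7,4) code in Python. The code actually used on Gen 6 links is far stronger and implemented in hardware; this only illustrates the principle that the receiver can locate and repair a corrupted bit on its own.

```python
# Toy illustration of forward error correction: a Hamming(7,4) code lets the
# receiver locate and fix a single flipped bit without asking the sender to
# retransmit the frame. Not the actual FEC used on FC Gen 6 links.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct a single-bit error (if any) and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]    # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]    # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]    # parity check over positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3  # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1         # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
codeword = hamming74_encode(data)
codeword[5] ^= 1                            # simulate a bit error on the wire
assert hamming74_decode(codeword) == data   # recovered without a resend
```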

    Still, the strength of FC technology is its low latency (70% lower than that of the widely used 8GFC generation). It is this combination of low latency and high throughput that makes 32GFC a fitting choice for connecting all-flash arrays. NVMe systems, which place the highest demands on the storage network infrastructure, are appearing on the horizon more and more often, and 32GFC has every chance of earning a worthy place there.

    FC Gen 6 chips, adapters and the Brocade G620 switch were announced in spring along with the standard itself, and more recently new directors (chassis switches) of the Brocade X6 Director family were announced. In its maximum configuration (8 slots), a chassis supports up to 384 32GFC ports plus 32 128GFC ports, with a total bandwidth of 16 Tbit/s. Depending on the chassis, you can install 8 or 4 line cards: either FC32-48 (48 32GFC ports) or SX6 multi-protocol cards (16 32GFC ports, 16 1/10GbE ports and two 40GbE ports). SX6 blades allow IP networks to be used for inter-switch links. Unfortunately, an in-chassis upgrade is not possible, and the good old DCX-8510 cannot be brought up to 32GFC, but the X6 line is declared to support future Gen 7 cards.
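
    As a quick sanity check of the quoted aggregate figure, the per-port numbers do add up to roughly 16 Tbit/s:

```python
# Rough check of the quoted aggregate bandwidth for a fully populated X6 chassis.
fc32_ports = 384                 # 8 x FC32-48 blades, 48 ports of 32GFC each
fc128_ports = 32                 # 128GFC ISL ports
total_gbit = fc32_ports * 32 + fc128_ports * 128
print(total_gbit)                # 16384 Gbit/s, i.e. about 16 Tbit/s
```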

    Considerable attention has been paid not only to hardware capabilities but also to management. Brocade Fabric Vision with IO Insight technology enables proactive monitoring of the entire I/O path, not only to physical servers but also from individual virtual machines down to specific LUNs on the storage system. When many different applications are consolidated on one storage system, analyzing the performance of the whole stack becomes quite complicated, and collecting metrics at the switch level can greatly simplify the search for a problem. Custom alerts help you respond quickly to potential issues and prevent performance degradation of key applications.
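
    As a rough illustration of the kind of threshold-based alerting that switch-level metrics make possible, here is a hypothetical sketch in Python. The metric names, flows and thresholds are invented for illustration and are not the Fabric Vision / IO Insight API.

```python
# Hypothetical sketch of threshold-based alerting on per-VM/per-LUN I/O metrics
# collected at the switch level. Names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class IOFlowStats:
    vm: str                  # initiator (virtual machine)
    lun: str                 # target LUN on the storage array
    read_latency_ms: float
    write_latency_ms: float
    iops: int

def check_flow(stats, max_latency_ms=5.0, min_iops=100):
    """Return a list of alert strings for a single VM-to-LUN flow."""
    alerts = []
    if max(stats.read_latency_ms, stats.write_latency_ms) > max_latency_ms:
        alerts.append(f"{stats.vm} -> {stats.lun}: latency above {max_latency_ms} ms")
    if stats.iops < min_iops:
        alerts.append(f"{stats.vm} -> {stats.lun}: IOPS dropped below {min_iops}")
    return alerts

flows = [
    IOFlowStats("vm-sql01", "LUN-17", read_latency_ms=1.2, write_latency_ms=7.4, iops=4200),
    IOFlowStats("vm-web03", "LUN-02", read_latency_ms=0.8, write_latency_ms=0.9, iops=80),
]

for flow in flows:
    for alert in check_flow(flow):
        print("ALERT:", alert)
```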

    Of course, Fibre Channel is not the only game in town: Mellanox has announced the upcoming release of the BlueField chip family. These are systems-on-chip (SoC) with NVMe over Fabrics support and an integrated ConnectX-5 controller. The chip supports InfiniBand at up to EDR speeds (100 Gb/s), as well as 10/25/40/50/100 Gb Ethernet. BlueField is aimed both at NVMe all-flash arrays and at servers connecting over NVMe over Fabrics. The use of such specialized devices is expected to improve server efficiency, which is especially important for HPC. Used as a network controller for NVMe storage, it eliminates the need for PCI Express switches and powerful processors. Some may say that such specialized devices run counter to the ideology of software-defined storage and commodity hardware; I think that as long as we get the chance to lower the price of the solution and optimize performance, it is the right approach. The first BlueField deliveries are promised for early 2017.


    In the near term, the number of NVMe storage systems will grow steadily. Connecting servers through a PCI Express switch delivers maximum speed but has a number of drawbacks, so the published version 1.0 of the "NVM Express over Fabrics" standard arrived just in time. Either an FC or an RDMA fabric can be used as the transport; the latter, in turn, can be physically implemented on top of InfiniBand, iWARP or RoCE.
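
    For context, here is a minimal sketch of what attaching an NVMe over Fabrics namespace over an RDMA transport looks like from a Linux host. It assumes nvme-cli with fabrics support is installed; the target address and NQN are placeholders, and an FC transport would use its own addressing instead.

```python
# Minimal sketch: discover and connect to an NVMe over Fabrics target over RDMA.
# Assumes Linux with nvme-cli (fabrics support) installed; the address and NQN
# below are placeholders, not values from the article.
import subprocess

TARGET_ADDR = "10.0.0.5"                          # placeholder target IP
TARGET_NQN = "nqn.2016-06.example:nvme-target-1"  # placeholder subsystem NQN

# Ask the target which subsystems it exports over the RDMA transport.
subprocess.run(["nvme", "discover", "-t", "rdma",
                "-a", TARGET_ADDR, "-s", "4420"], check=True)

# Connect to one subsystem; its namespaces then show up as /dev/nvmeXnY.
subprocess.run(["nvme", "connect", "-t", "rdma", "-n", TARGET_NQN,
                "-a", TARGET_ADDR, "-s", "4420"], check=True)
```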

    RDMA transport over InfiniBand is more likely to prevail in HPC systems, as well as wherever there is room for do-it-yourself solutions. There is nothing negative in that phrase, but Fibre Channel has been a recognized enterprise standard for many years, and the probability of running into problems with it is much lower than with RDMA. This applies both to compatibility with a wide range of application software and to ease of management. All of this comes at a price, which the enterprise market watches closely.

    At one time, some vendors predicted great success for FCoE, a technology that promised to unify the storage network with the regular data network, but in practice it never won a significant share of the market. Today the topic of Ethernet-connected NVMe storage carrying NVMe over Fabrics traffic via RoCE (RDMA over Converged Ethernet) is developing quite actively. It may well achieve more than FCoE did, but I am sure we will still see more than one generation of Fibre Channel devices. It is far too early to say "at last you can run everything over Ethernet": yes, it is often possible, but far from certain to be cheaper.

    Today, if an FC network is already deployed, it rarely makes sense to bring in alternative solutions; it is better to upgrade equipment to Gen 5 or Gen 6, and the effect will be noticeable even with a partial upgrade. Even if the existing storage system does not support the maximum speed, upgrading the storage network often reduces latency and increases the overall performance of the entire complex.

    Trinity engineers will be happy to advise you on server virtualization, storage, workstations, applications and networks.

    Visit the popular Trinity Tech Forum or request a consultation.

    Other Trinity articles can be found on the Trinity blog and hub. Subscribe!
