HP Networking Convergence. Part 1

    Hewlett-Packard Networking Convergence
    Part 1 - a theoretical review.

    “... The main driver behind convergence is the desire to reduce the
    cost of building very complex and expensive systems while gaining
    new qualities in the final product or service, or broadening their
    range.”

    Leonid Kolpachev

    Historically, networks have been divided into two large blocks: storage area networks and local area networks, that is, data networks. So what is convergence and what is its purpose? The purpose of convergence is to combine these two infrastructures into a single, common network. Why? To cut costs: both capital costs (less equipment is needed) and operational costs (with less, and uniform, equipment the network is easier and cheaper to maintain). Are there reasons not to consolidate? Of course there are: if cost reduction is not on the agenda, you can continue to develop the two infrastructures in parallel; technologically, both kinds of network meet modern requirements today.
    Let's talk about the technologies used to build converged network solutions and about the different options for building converged data center networks on HP equipment. In this first part, I will briefly recall the theory: what FC and FCoE are.

    FC (Fibre Channel) is a high-speed protocol for connecting servers to various storage systems, designed to provide reliable, bidirectional data transfer. It can carry data over long distances (up to 10 km) and supports encapsulation of the SCSI, FICON and TCP/IP protocols.
    One of FC's main functions is flow control of storage traffic. To guarantee lossless delivery, the Buffer-to-Buffer (B2B) credit mechanism is used. Briefly and somewhat simplified, it works as follows: during link initialization the switch assigns the traffic source a certain number of credits, which are consumed as frames are transmitted. When the sender's credit count reaches zero, transmission stops until the switch sends an R_RDY primitive back to the server or storage system; the sender's credit count is then incremented and transmission resumes. Device addressing in FC/FCoE uses two identifiers: the WWN (World Wide Name), a unique identifier assigned to the FC device at manufacture, and the FC_ID, assigned by the fabric when the device registers with it. FC_IDs are carried in FC frame headers, and traffic in the fabric is switched by them. Dynamic routing in FC is done with the FSPF protocol (analogous to OSPF in IP); it supports multipath routing and works only inside the fabric. Access control in FC is based on VSANs (virtual SAN networks, analogous to VLANs in Ethernet) and on zoning, which restricts access to resources in much the same way as access control lists (ACLs) do in Ethernet.
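    The B2B credit exchange described above can be sketched as a small simulation (a toy model of my own, not real FC logic; the class and method names are illustrative):

```python
class FCLink:
    """Toy model of Buffer-to-Buffer (B2B) flow control on one FC link."""

    def __init__(self, initial_credits):
        # Credits are granted by the receiver during link initialization.
        self.credits = initial_credits
        self.sent = 0

    def send_frame(self):
        """Sender transmits one frame only if it holds at least one credit."""
        if self.credits == 0:
            return False          # must wait for an R_RDY from the receiver
        self.credits -= 1
        self.sent += 1
        return True

    def receive_r_rdy(self):
        """Receiver freed a buffer and returned an R_RDY primitive."""
        self.credits += 1


link = FCLink(initial_credits=2)
print(link.send_frame())  # True  (credits 2 -> 1)
print(link.send_frame())  # True  (credits 1 -> 0)
print(link.send_frame())  # False (no credits: transmission pauses)
link.receive_r_rdy()      # receiver returns a credit
print(link.send_frame())  # True  (transmission resumes)
```

    Because a frame is never sent without a free receive buffer on the other side, no frame is ever dropped for lack of buffering, which is exactly the lossless property FC relies on.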
    Now a few words about what FCoE is. It is a technology for encapsulating FC frames in Ethernet. FCoE is the protocol on which data center network convergence is based; it is an attempt to "reuse" the existing standards of local area networks and storage networks to meet the needs of both data networks and storage networks.
    To ensure lossless data transfer, FCoE must run on top of a fundamentally different transport, so the so-called Lossless Ethernet (sometimes called Converged Enhanced Ethernet, CEE) was devised. It adds per-class flow control (Priority-based Flow Control, PFC), management of traffic-processing priorities (Enhanced Transmission Selection, ETS) and network congestion management (Congestion Notification). In this way Lossless Ethernet provides, in part, the same reliability that the standard B2B mechanism provides in FC. FCoE can also work without it, but providing the required level of reliability then becomes much harder.
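    The key difference from the plain Ethernet PAUSE frame is that PFC pauses individual priority classes rather than the whole link. A toy illustration (my own naming and simplification, not real DCB frame handling; the use of priority 3 for storage traffic is a common convention, not a requirement):

```python
def forward(frames, paused_priorities):
    """Forward frames whose 802.1p priority class is not currently paused.

    frames: list of (priority, payload) tuples.
    paused_priorities: set of priority values under a PFC pause.
    Pausing only the storage class leaves the other classes flowing,
    which is what lets FCoE be lossless without stopping all traffic.
    """
    forwarded, held = [], []
    for prio, payload in frames:
        (held if prio in paused_priorities else forwarded).append((prio, payload))
    return forwarded, held


frames = [(3, "FCoE frame"), (0, "web traffic"), (3, "FCoE frame 2")]
fwd, held = forward(frames, paused_priorities={3})
print(len(fwd), len(held))  # 1 2: storage frames are held, web traffic flows
```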
    The FC data path is simple: the application generates SCSI commands, they enter the FC stack, are "wrapped" into the protocol and handed to the Host Bus Adapter, then enter the FC network, where switches forward them by FC_ID to the corresponding storage system. The same happens in the opposite direction. The FCoE data path differs slightly: application-level traffic is immediately split into two parts. Requests to the storage system go as SCSI commands in FC, while access to network resources goes through the TCP/IP stack. On a converged network adapter the two streams converge, are wrapped in Ethernet, and are passed on to the network. Further on, ordinary Ethernet traffic is processed in the standard way, while FCoE traffic arrives at an FCoE-capable switch, which unwraps the FC frames and switches them by FC_ID toward the storage network.
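    The encapsulation layering just described can be shown schematically. This is an illustration only (headers are dicts, not real wire formats, and the function name is mine); the one concrete detail is the EtherType 0x8906, which is the value registered for FCoE:

```python
def encapsulate_fcoe(scsi_payload: bytes, src_fcid: int, dst_fcid: int,
                     src_mac: str, dst_mac: str) -> dict:
    """Wrap a SCSI payload into an FC frame, then into an Ethernet frame."""
    fc_frame = {
        "s_id": src_fcid,        # source FC_ID, used for switching in the fabric
        "d_id": dst_fcid,        # destination FC_ID
        "payload": scsi_payload,
    }
    ethernet_frame = {
        "src_mac": src_mac,
        "dst_mac": dst_mac,      # MAC of the next FCoE hop
        "ethertype": 0x8906,     # EtherType registered for FCoE
        "payload": fc_frame,     # the whole FC frame rides inside Ethernet
    }
    return ethernet_frame


frame = encapsulate_fcoe(b"SCSI READ(10)", 0x010203, 0x040506,
                         "0e:fc:00:01:02:03", "00:11:22:33:44:55")
print(hex(frame["ethertype"]))   # 0x8906
print(hex(frame["payload"]["d_id"]))  # destination FC_ID used by the fabric
```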
    Now briefly about the main port types used in FC/FCoE: ports between fabric switches are called E-ports; ports between the fabric and the traffic consumers/generators are F-ports and N-ports, respectively; and the port connecting a proxy switch to the fabric is an NP-port. In FCoE the ports are named similarly, with the letter V (for Virtual) prefixed: VN, VE, VNP.
    To briefly summarize some basic FC/FCoE concepts:
    • When a device connects to the FC network, the fabric registers it and assigns it an FC_ID, by which traffic from that N-port will then be switched; this is the process of the so-called fabric login. At this point the B2B (buffer-to-buffer) credit mechanism is also initialized.
    • A VSAN is used for logical partitioning of the fabric based on physical ports, in effect for virtualization. As I said, it is essentially the analogue of a VLAN in Ethernet.
    • Zoning is the access control mechanism in FC/FCoE, an analogue of a bidirectional ACL, which allows devices to be isolated from one another.
    • By analogy with Ethernet: a VSAN is a virtual network, and zoning is the ACL, the access control list that restricts access inside that VSAN.
    • Routing in FC/FCoE is done with the FSPF protocol, which is essentially the same as OSPF in IP. It runs only on fabric ports (E-ports).
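    To make the VSAN/zoning analogy concrete, here is a toy access check (my own illustration; in a real fabric, zoning is enforced by the switches, most commonly on WWPN membership):

```python
def can_communicate(wwpn_a, wwpn_b, vsan_of, zones):
    """Return True if two devices are allowed to talk to each other.

    vsan_of: maps a device WWPN to its VSAN number (fabric partitioning).
    zones:   list of sets of WWPNs; members of one zone may see each other.
    """
    # Devices in different VSANs sit in fully isolated virtual fabrics.
    if vsan_of[wwpn_a] != vsan_of[wwpn_b]:
        return False
    # Within a VSAN, zoning acts like a bidirectional ACL.
    return any(wwpn_a in z and wwpn_b in z for z in zones)


vsan_of = {"srv1": 10, "stor1": 10, "srv2": 20}
zones = [{"srv1", "stor1"}]
print(can_communicate("srv1", "stor1", vsan_of, zones))  # True: same VSAN, same zone
print(can_communicate("srv2", "stor1", vsan_of, zones))  # False: different VSANs
```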

    For FCoE (the data plane protocol) to work normally, a control plane is needed. In FCoE it is implemented by FIP (FCoE Initialization Protocol), which provides discovery, fabric login services, and so on. Keep in mind that these are two different protocols, even though both are defined in the same FC-BB-5 standard.
    A FIP Snooping Bridge (FSB) is a switch that sits between the fabric and the end devices (Nodes) and monitors the process of a Node's login to the fabric (for example, it checks which VLAN and FC-MAP the frames carry and whether they match what the fabric assigned).
    An FCF (FCoE Forwarder) is, in effect, the fabric: it implements all FC services (Nodes log in to it, receive their FC_ID, and so on) and switches traffic between Nodes. The differences between an FCF and an FSB follow from what has already been said: the FCF implements all FC services and switches FC traffic according to FC_ID and its configuration, while the FSB only listens to traffic, supports the Lossless Ethernet standards, and polices the process of a Node's login to the fabric. An FSB cannot operate without a fabric; it always needs an FCF upstream.
    Concluding the theoretical part, let's look at two important mechanisms, NPV and NPIV: what they are and why they are needed. A switch in NPV mode is a proxy that hides the allocation of several FC_IDs behind a single N-port. Its NP-port connects to an F-port of the fabric and acts as a proxy for the N-ports behind the NPV switch, which is especially important because the number of FC switch domains in a fabric is limited. The mechanism that allows several FC_IDs to be allocated to a single N-port is called NPIV (N-Port ID Virtualization); each WWPN corresponds one-to-one to an N-Port ID. Where and why is this needed? First of all, where several applications share one Host Bus Adapter's access to the FC fabric and access to resources must be differentiated. Most often NPV switches are ToR switches concentrating rack traffic, or blade switches. The NPV switch itself logs in to the fabric, and the logins (FLOGI) of the nodes behind it are converted into FDISC requests, so FC traffic is proxied. This saves Domain IDs, since only the one switch counts against the limit, which lets the network scale better. In addition, this mechanism makes it easier for the switch to interoperate with equipment from other manufacturers.
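    The FLOGI-to-FDISC translation performed by an NPV switch can be sketched as follows (a toy model; the class and names are mine, and FC_ID assignment is drastically simplified):

```python
class NPVProxy:
    """Toy NPV switch: one uplink FLOGI, node logins proxied as FDISC."""

    def __init__(self, fabric_assigner):
        self.assign_fcid = fabric_assigner   # the fabric hands out FC_IDs
        self.uplink_fcid = None
        self.node_fcids = {}                 # WWPN -> FC_ID

    def login_to_fabric(self):
        # The NPV switch performs the only FLOGI, on its NP-port.
        self.uplink_fcid = self.assign_fcid()

    def node_login(self, wwpn):
        # A node's FLOGI is converted into FDISC: thanks to NPIV, one
        # N-port holds many FC_IDs and no extra Domain ID is consumed.
        if self.uplink_fcid is None:
            raise RuntimeError("NPV switch must log in to the fabric first")
        self.node_fcids[wwpn] = self.assign_fcid()
        return self.node_fcids[wwpn]


# A trivial stand-in for the fabric's FC_ID allocator.
_counter = iter(range(0x010001, 0x010100))
proxy = NPVProxy(fabric_assigner=lambda: next(_counter))
proxy.login_to_fabric()
print(hex(proxy.node_login("50:01:43:80:aa:bb:cc:01")))  # per-node FC_ID via FDISC
```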
    A few words about how to assemble a complete converged solution from HP equipment. Hewlett-Packard has an extensive portfolio of data center switches supporting FC/FCoE, first of all the 5900CP converged switch with full FC/FCoE support. This switch is not new (it is well "run in") and offers reversible airflow direction, low port latency, high performance, 40G uplink ports, and stacking into an IRF fabric (up to 9 units, with 320 Gbps of stacking bandwidth). The stack fully realizes the pay-as-you-grow concept: you add equipment to the stack as your needs grow, instead of paying the full amount up front. The switch supports converged transceivers that can operate in two modes, Ethernet and FC/FCoE.
    This diagram shows what your converged data center might look like: a 5900v virtual switch runs in the blade chassis and connects to a 5900-series ToR switch; the ToR connects to the data center switching core (a 12500, 12900 or 11900), and traffic flows within and between sites through HSR 6600 or 6800 series routers.
    Finally, I'll remind you once again of a key point of HP Networking's licensing policy: the switches ship with full-featured software and do not require a license to enable FC/FCoE functionality, nor TRILL, SPB, DCB, etc.
