FCoE: The Future of Fibre Channel

    Given the noise around Fibre Channel over Ethernet (FCoE) and the announcements of its support by almost every manufacturer in our industry, it looks as though this transport intends to completely supplant existing Fibre Channel networks in the near future. The standardization effort culminated in ratification in 2009, and many manufacturers, including NetApp (one of the first) and our long-standing partner Cisco, are actively bringing FCoE products to market.

    If you are using Fibre Channel technology today, you should understand and prepare for this new technology. In this article, I will try to answer some important questions about FCoE, such as:
    • What is FCoE?
    • Why exactly FCoE?
    • What are the features of FCoE?
    • What should we prepare for in the future?

    What is FCoE?


    Fibre Channel over Ethernet, or FCoE, is a new protocol (transport) defined by a standard developed in the T11 committee. (T11 is the committee of the International Committee for Information Technology Standards, INCITS, responsible for Fibre Channel.) FCoE transfers Fibre Channel frames over Ethernet by encapsulating them in Ethernet jumbo frames. The standard was fully ratified in 2009.
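
    To make the encapsulation more concrete, here is a minimal illustrative sketch in Python (not part of any real FCoE stack) of how a Fibre Channel frame is wrapped in an Ethernet frame carrying the FCoE EtherType 0x8906. The SOF/EOF code points, reserved-field sizes, and MAC addresses used below are simplified assumptions for illustration, not values taken from this article.

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType value assigned to FCoE

    def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes,
                             sof: int = 0x2E, eof: int = 0x41) -> bytes:
        """Wrap a complete FC frame (header + payload + FC CRC) into an FCoE
        Ethernet frame. Layout simplified from FC-BB-5: version/reserved bytes,
        SOF, the FC frame, EOF, and reserved padding; the Ethernet FCS that a
        real NIC appends is omitted here."""
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        fcoe_header = bytes(13) + bytes([sof])   # version 0 + reserved bits, then SOF code
        fcoe_trailer = bytes([eof]) + bytes(3)   # EOF code + reserved padding
        return eth_header + fcoe_header + fc_frame + fcoe_trailer

    frame = encapsulate_fc_frame(dst_mac=bytes.fromhex("0efc00000001"),   # illustrative destination MAC
                                 src_mac=bytes.fromhex("000c29aabbcc"),   # illustrative CNA MAC
                                 fc_frame=bytes(2148))                    # placeholder max-size FC frame
    print(len(frame))  # 2180 bytes; the FCoE payload alone exceeds the classic 1500-byte MTU

    The last line hints at why jumbo frames are needed, a point discussed in more detail later in the article.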

    Why FCoE?


    The idea behind the creation of FCoE was I/O consolidation: allowing different types of traffic to coexist safely on one "wire", which reduces the variety of hardware, simplifies cable management, cuts the number of adapters required per host, and lowers power consumption.

    Figure 1) Reducing complexity using FCoE.




    The force driving FCoE forward is the need to reduce total cost of ownership (TCO) while preserving existing infrastructure, backward compatibility, and familiar procedures and processes. By converging Fibre Channel and Ethernet and eliminating the need for separate network technologies, FCoE promises a significant reduction in the complexity of the network, and, given the rapidly falling prices of 10Gb Ethernet infrastructure, it also reduces cost.

    Initially, most FCoE deployments were done at the host and switch level, while storage systems continued to use native Fibre Channel instead of FCoE. This helps protect the large infrastructure investments that have been made in FC over the years.
    The great advantage of FCoE is that it provides a smooth migration from FC, as an interface, to Ethernet (while retaining FC as a protocol). You can expand or replace part of your FC network with Ethernet switches, transitioning from one network technology (FC) to the other (Ethernet) as it becomes necessary.
    In the long run, if FCoE is successful, you can choose a storage system that natively supports FCoE when upgrading your infrastructure or building a new data center. NetApp has announced native FCoE support, both in the protocol and in target HBAs, on its systems, and will continue to support Fibre Channel on all of them.
    Recently, NetApp and Cisco announced the completion of certification for the industry's first end-to-end FCoE solution, from host to storage, for server virtualization environments running VMware vSphere.

    FCoE implementation


    There are two possible ways to deploy FCoE in your IT system:
    • Use a hardware initiator in the form of a converged network adapter (CNA) and a hardware target, which is similar to the existing Fibre Channel model.


    Figure 2) CNA supports both FC and Ethernet on a single device, reducing the number of network devices needed.

    CNA manufacturers are mostly companies familiar to us from FC HBAs, such as QLogic, Emulex, and Brocade, but traditional Ethernet NIC vendors, such as Intel and Broadcom, are likely to join them as well. Both groups actively participate in the T11 working group (FC-BB-5) that is developing the FCoE standards.
    • Use a software initiator and target with a regular 10 Gigabit Ethernet (10GbE) NIC.

    In December 2007, Intel released a software initiator to help develop FCoE solutions for Linux. Various Linux distributions are expected to ship with the FCoE initiator software, making Linux FCoE-ready in much the same way that all operating systems today are iSCSI-ready.
    I think such a software solution will be fast enough for its relatively low price compared to a purely hardware solution. In practice, we buy servers with a certain performance margin "for growth", anticipating growth in our tasks and applications, so we usually have some spare CPU capacity on them. The iSCSI market confirms this, including in virtual infrastructures.


    Figure 3) Stack of FCoE software initiator.

    What stays the same?


    For those who already use Fibre Channel, FCoE still requires configuring zoning and LUN mapping, as well as the usual fabric tasks such as Registered State Change Notification (RSCN) and Fabric Shortest Path First (FSPF). This means that migrating to FCoE will be relatively simple and familiar. Any change is painful, but when a new protocol can reuse established procedures, processes, and know-how, the transition to Ethernet becomes easier, and this will be a big advantage of FCoE.
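
    For readers coming to this from the Ethernet side, the following toy Python sketch (with made-up WWPNs, zone names, and LUN numbers, not tied to any vendor's API) illustrates the kind of access control that zoning and LUN mapping express: a zone defines which ports may see each other, and the LUN map limits which LUNs a given initiator may access.

    # Toy illustration only: WWPNs, zone names, and LUN numbers are made up.
    zones = {
        "zone_esx01_netapp": {
            "10:00:00:00:c9:aa:bb:01",   # host initiator port (WWPN)
            "50:0a:09:81:00:11:22:33",   # storage target port (WWPN)
        },
    }

    lun_map = {
        # initiator WWPN -> LUNs that the storage system exposes to it
        "10:00:00:00:c9:aa:bb:01": {0, 1},
    }

    def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
        """Two ports may talk only if at least one zone contains both of them."""
        return any({wwpn_a, wwpn_b} <= members for members in zones.values())

    def can_access_lun(initiator: str, target: str, lun: int) -> bool:
        """Zoning must let the ports see each other, and the LUN map must
        expose the requested LUN to that initiator."""
        return can_communicate(initiator, target) and lun in lun_map.get(initiator, set())

    print(can_access_lun("10:00:00:00:c9:aa:bb:01", "50:0a:09:81:00:11:22:33", 1))  # True
    print(can_access_lun("10:00:00:00:c9:aa:bb:01", "50:0a:09:81:00:11:22:33", 5))  # False

    With FCoE these same administrative concepts carry over unchanged; only the transport underneath them changes.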

    How is FCoE different from iSCSI?


    FCoE does not use TCP/IP, which iSCSI does, and therefore differs in several ways:
    • Use of the Ethernet pause frame
    • Use of per-priority pause (PFC)
    • No TCP retransmissions (timeouts)
    • No IP routing capability
    • No "broadcast storms" (ARP is not used)



    Figure 4) Comparison of various block protocols.

    Since FCoE does not use the IP layer at all, FCoE is not routable. However, this does not mean it cannot be routed under any circumstances: FCoE traffic can be routed, if necessary, with workarounds such as FCIP.

    iSCSI can be used on a network that is subject to packet loss and does not necessarily require 10GbE. FCoE requires 10GbE and a lossless network, with infrastructure components that correctly handle pause frames and priority-based flow control (PFC) for the different traffic classes corresponding to different priorities. The idea behind PFC is to give high-priority traffic an advantage at times of high link load, while low-priority traffic is paused in favor of the high-priority class.
    10GbE switches will also require support for Data Center Ethernet (DCE), a set of Ethernet extensions that includes classes of service, better congestion control, and improved management capabilities. FCoE also requires jumbo frame support, since an FC frame carries up to 2112 bytes of payload and cannot be fragmented in transit; iSCSI does not require jumbo frames.
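
    As a rough mental model, the toy Python simulation below shows the behaviour that PFC provides: when the queue for the traffic class carrying FCoE fills up, only that class is paused instead of frames being dropped, while other classes keep flowing. The queue limit and priority numbers are arbitrary assumptions for illustration, not values from any standard; the last two lines also spell out the simple arithmetic behind the jumbo-frame requirement.

    from collections import deque

    QUEUE_LIMIT = 4      # frames a per-class queue may hold before pausing (arbitrary)
    FCOE_PRIORITY = 3    # 802.1p priority assumed to carry FCoE traffic (illustrative)

    class PfcPort:
        """Toy receive port with one queue per 802.1p priority class."""
        def __init__(self):
            self.queues = {prio: deque() for prio in range(8)}
            self.paused = {prio: False for prio in range(8)}

        def receive(self, prio: int, frame: str) -> None:
            if self.paused[prio]:
                # A well-behaved sender holds traffic for a paused class,
                # so nothing is ever dropped on that class.
                raise RuntimeError(f"priority {prio} is paused; sender must wait")
            self.queues[prio].append(frame)
            # Instead of dropping frames on congestion (classic Ethernet behaviour),
            # PFC pauses only the congested priority class.
            if len(self.queues[prio]) >= QUEUE_LIMIT:
                self.paused[prio] = True

    port = PfcPort()
    for i in range(QUEUE_LIMIT):
        port.receive(FCOE_PRIORITY, f"fcoe-frame-{i}")
    port.receive(0, "best-effort-frame")               # other classes keep flowing
    print(port.paused[FCOE_PRIORITY], port.paused[0])  # True False

    # Why jumbo frames: a maximum-size FC frame carries up to 2112 bytes of payload
    # plus FC and FCoE headers, which cannot fit into a standard 1500-byte MTU.
    FC_MAX_PAYLOAD = 2112
    print(FC_MAX_PAYLOAD > 1500)                       # True -> "baby jumbo" MTU needed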

    Choosing between FCoE and iSCSI


    The stricter FCoE-specific infrastructure requirements compared to iSCSI may affect which protocol you choose. In some cases, the choice of protocol is determined by which one is supported by the software manufacturer.
    Other than that, you might prefer iSCSI if your goals are:
    • Low cost
    • Ease of use

    iSCSI will run on your current infrastructure with minimal changes. The network infrastructure requirements for FCoE may mean that you will need converged network adapters (CNAs) and new switches that support DCE. (In the future, you may be able to use existing NICs in combination with a software initiator, just as is done for iSCSI today.)
    Since iSCSI runs on top of TCP/IP, managing such a network will be more familiar, making it easier to deploy and administer. FCoE does not use TCP/IP; its administration is more like administering a traditional FC SAN, which can be quite difficult if you are unfamiliar with FC SAN administration.
    In other words, you might prefer FCoE if you already have significant experience with Fibre Channel SANs (FC SANs), especially if your requirements include:
    • Support for mission-critical applications
    • High data availability
    • The highest possible performance

    This does not mean that iSCSI cannot meet these requirements. However, Fibre Channel has already proven itself over a long period of use in such systems; FCoE offers an identical set of capabilities and is fully compatible with existing FC SANs. It simply replaces the physical layer of Fibre Channel with 10GbE.
    The performance advantage of FCoE over iSCSI still needs to be confirmed. Both use 10GbE, but TCP/IP can add latency for iSCSI, possibly giving FCoE a slight edge in an otherwise similar environment.
    These principles are consistent with current deployment practice for both iSCSI and FC SANs. So far, the most successful application of iSCSI has been storage consolidation in Windows environments, mainly over 1GbE. iSCSI is typically used in the auxiliary and backup data centers of large organizations, in the main data centers of smaller companies, and in remote offices.
    Fibre Channel systems dominate large data centers and large organizations and are typically used for mission-critical applications on UNIX and Windows systems. Common examples include data warehousing, data mining, enterprise resource planning, and OLTP.

    What will happen to Fibre Channel?


    With all this noise around FCoE, what will happen to Fibre Channel? Will there be a move to 16Gb FC, or will FCoE now set the course? Will Ethernet continue to develop further (40GbE and 100GbE)? As current roadmaps show, 16Gb FC is scheduled for 2011. FCIA's latest press releases claim strong support for 16Gb FC development alongside FCoE. I think 16Gb FC will undoubtedly appear, but the big question is how quickly the market will adopt it relative to FCoE. To date, 8Gb FC, which has existed for several years, has clearly not universally replaced 4Gb FC. Many equipment manufacturers, as well as customers with large FC networks, are already actively reorienting themselves toward FCoE as a more promising and more economically viable solution.

    What do you need to do?


    What you need to do depends on your situation. If you have invested heavily in Fibre Channel and do not need to upgrade in the next few years, it is probably best to do nothing. If you plan to upgrade in the next year or two, pay serious attention to FCoE. Apparently, the current manufacturers of FC switches intend to move their users to Ethernet and may stop building their own FC switches.

    Technology can solve many problems, but the interaction between groups in large organizations is clearly not one of them. One of the problems encountered when implementing iSCSI in large companies was the conflict of responsibility between network administrators and storage administrators. In a traditional FC infrastructure, the storage network admins are fully responsible for the FC fabric and own all rights to it; with iSCSI, it is managed by the company's network admins. If FCoE is successful, these groups will have to work more closely with each other than ever, and, paradoxically, this may turn out to be the biggest problem FCoE faces in corporate IT infrastructures.

    Conclusions


    Although FCoE creates certain difficulties in deciding where and how to apply it, its long-term prospects and advantages are clear. By consolidating your networks onto one Ethernet fabric, you can significantly reduce both capital and administration costs without sacrificing the ability to choose the protocol that best suits the needs of your applications.
    Whatever you choose, iSCSI, FCoE, or a combination of the two, NetApp storage systems support all of these storage protocols simultaneously on the same system. Choosing NetApp storage can provide security and investment protection as you develop your IT infrastructure further.
     
