
FlexPod DataCenter: Direct-Attached Storage
In a previous article, I talked about the "non-FlexPod DC" architecture, which can be supported from a single source through the Cisco Solution Support for Critical Infrastructure (SSCI) program. Its main feature is the absence of Nexus series switches; if you add them, this architecture can become a full FlexPod DataCenter.
Here we will talk about a new network design for FlexPod DataCenter with direct attachment of NetApp storage to the UCS domain. The difference from the standard FlexPod DataCenter architecture is that the Nexus switches are not located between UCS and NetApp, but "on top" of UCS.
Although NetApp FAS series storage could previously be connected directly to the Fabric Interconnect (FI), the FlexPod DataCenter architecture did not officially allow for such a design. The direct-attached design is now supported and validated as a FlexPod DataCenter architecture.

The overall design of the FC and FCoE network with direct connection.
Description of the connection scheme in the image above
Simultaneous FC and FCoE connections are shown for two reasons:
- Because it is actually possible to do this, and it works.
- To show that FC and/or FCoE can be used.
The Ethernet connection between the two NetApp FAS controllers is shown for two reasons:
- To show that these are two nodes of one storage system (if there were more nodes, cluster switches would also appear in the picture).
- An external cluster link is a mandatory component of the Clustered Data ONTAP operating system.
The FC link from FI to Nexus switch is depicted for two reasons:
- For the future, when the NetApp storage is moved over to the Nexus switches and the FI gains access to its LUNs through them. The scheme then becomes more scalable, and more UCS domains can be added.
- To let servers outside the UCS domain consume storage resources, for example UCS rack servers (UCS C-Series) not connected to the FI, or servers from other vendors.
With direct attachment and the iSCSI protocol, as well as direct attachment and the FCP protocol, there are no problems with fault tolerance or load balancing across links, thanks to the multipathing built into these protocols.
But for NAS protocols with direct attachment (NFSv2/NFSv3 and CIFSv1/CIFSv2) there is no load balancing or multipathing inside the protocols themselves; normally this is delegated to lower-level mechanisms such as LACP and vPC, and since the FI does not support vPC, fault tolerance for the Ethernet network has to be built in some other way. For example, fault tolerance can be provided at the virtual switch level (which may raise performance concerns for such a switch) or by active-passive failover of an aggregated network link without LACP (which will not balance traffic across all available links); for the latter, the aggregated link (ifgrp) on the storage side must be configured in single-mode.
The issue of direct attachment for NAS protocols is not so acute with NFSv4 and CIFS (SMB) 3.0, since both protocols have finally gained a form of multipathing, but it requires support for these protocols on both the client side and the storage system (all FAS systems running cDOT support NFSv4 and SMB 3.0).
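As an illustration of the single-mode option mentioned above, here is a minimal sketch of creating such an interface group in the clustered Data ONTAP CLI; the node and port names (node1, a0a, e0e, e0f) are placeholders for this example only:
network port ifgrp create -node node1 -ifgrp a0a -distr-func mac -mode singlemode
network port ifgrp add-port -node node1 -ifgrp a0a -port e0e
network port ifgrp add-port -node node1 -ifgrp a0a -port e0f
In single-mode only one link of the group is active at any given moment, which is exactly why this scheme gives fault tolerance but does not balance traffic across the links.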
To configure FCoE and CIFS/NFS traffic on top of one link, you will need:
- Firstly, you need Cisco UCS firmware version 2.1 or higher.
- Secondly, you need storage with 10 Gb CNA/UTA ports.
Next, go through the settings:
On the NetApp storage side, the ports must be switched to CNA mode (CNA/UTA ports are required; regular 1/10 Gb Ethernet ports do not support this) using the ucadmin command on the storage (a reboot of the storage controllers will be required). The system will then present "virtual" Ethernet ports and "virtual" FC ports separately (even though a single physical port carries one such "virtual" Ethernet port and one "virtual" FC port). These ports are configured separately, like regular physical ports.
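For reference, a minimal sketch of this step in the clustered ONTAP CLI, assuming ucadmin is used as the short form of system node hardware unified-connect and that node1 and adapter 0e are placeholder names:
ucadmin show
ucadmin modify -node node1 -adapter 0e -mode cna -type target
(reboot the controller for the new port personality to take effect)
After the reboot the physical port is typically presented as a "virtual" Ethernet port (e0e) and a "virtual" FC target port (0e), each configured on its own.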
On the FI, FC mode must be set to the "Switching mode" state in the Fabric A/B settings on the Equipment tab. This setting requires a reboot of the FI.
If you need to convert some ports to FC mode, go to the Fabric A/B settings on the Equipment tab, select Configure Unified Ports, and in the wizard choose the required number of FC ports (they are allocated from the right). The FI reboots again.
After the FI reboots, on the Equipment tab the converged or FC ports need to be switched to Appliance port mode; after a few seconds the port comes online. Then reconfigure the port as an FCoE Storage Port or an FC Storage Port; in the right pane you will see the port type "Unified Storage". It then becomes possible to select a VSAN and a VLAN for such a port. An important point: the previously created VSAN must have "FC zoning" enabled on the FI in order for zoning to be performed.
Zoning configuration on FI:
SAN-> Storage Cloud-> Fabric X-> VSANs-> Create “NetApp-VSAN-600” ->
VSAN ID: 600
FCoE VLAN ID: 3402
FC Zoning Settings: FC Zoning -> Enabled
SAN-> Policies-> vHBA Templates-> Create “vHBA-T1” -> VSAN “NetApp-VSAN-600”
SAN-> Policies-> Storage Connection Policies-> Create "My-NetApp-Conn" -> Zoning Type-> Single Initiator Single Target (or Single Initiator Multiple Targets if necessary) -> Create->
FC Target Endpoint: the WWPN of the NetApp LIF (starts with 20:)
SAN-> Policies-> SAN Connectivity Policies->
Create “iGroup1” -> Select vHBA Initiators “vHBA-T1”
Select Storage Connectivity Policy: “My-NetApp-Conn”
When creating a Server Profile, use the created policies and vHBA template.
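To check the result, you can look up the target WWPNs on the storage and verify the zoning from the NX-OS shell of the FI. A hedged sketch: the SVM name vs1 is a placeholder, the VSAN number is taken from the example above:
network interface show -vserver vs1 -data-protocol fcp (on the storage: shows the FC LIF WWPNs used as FC Target Endpoints)
connect nxos a (from the UCS Manager CLI: opens the NX-OS shell of fabric A)
show flogi database vsan 600 (the NetApp target ports should be logged into the fabric)
show zoneset active vsan 600 (the zones generated by UCS Manager should be active)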
Why wasn't direct attachment available in FlexPod DataCenter before?
The fact is that in its new solutions NetApp often follows the principle of "better to overdo it than to underdo it." So, when Clustered ONTAP first came to the world of FAS storage, dedicated Nexus 5k switches (not cheap) were mandatory for the cluster interconnect and were used exclusively for this task. Over time, this configuration was revised and tested, and switchless configurations were added. In the same way, it took time to test and debug direct attachment of storage systems: first, new Cisco UCS Manager based FC zoning topologies with direct attachment of storage to the UCS domain became available with firmware Release 2.1(1a), and only then did the design appear in the FlexPod architecture.
Why are Nexus switches needed?
From an architectural point of view, the configuration with storage directly attached to the FI differs least from the original FlexPod design (where Nexus switches acted as a bridge between the storage and the UCS domain) when block access protocols, FC/FCoE/iSCSI, are used. However, even in the new design the Nexus series switches remain an essential component of the architecture:
- firstly, because FlexPod must somehow be integrated into the existing infrastructure and provide access for clients;
- secondly, direct attachment will constrain the scalability of the architecture in the future;
- thirdly, unlike direct SAN attachment (FC/FCoE/iSCSI), the Ethernet network design for NAS requires an intermediate link between the storage system and the UCS domain to balance the load across network links and, above all, to provide fault tolerance;
- fourthly, external Ethernet client access to the FlexPod must be fault tolerant.
The need for switches with NAS protocols
The difference between the SAN and NAS designs is that with block protocols the multipathing and balancing mechanisms work at the FC/FCoE/iSCSI protocol level, whereas the NFS/CIFS (SMB) protocols lack these mechanisms. Multipathing and balancing must therefore be performed at the Ethernet level by means of vPC and LACP, i.e. by the switch, which is the decisive reason for its presence in such designs.
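For reference, a minimal sketch of this combination, with the usual caveat that all names and numbers (vPC domain 10, port-channel 20, Ethernet1/10, node1, a0a, e0e/e0f) are placeholders and the vPC peer-link and peer-keepalive configuration is omitted.
On each Nexus switch:
feature lacp
feature vpc
vpc domain 10
interface port-channel 20
  switchport mode trunk
  vpc 20
interface Ethernet1/10
  switchport mode trunk
  channel-group 20 mode active
On the NetApp side, the interface group is created in LACP mode:
network port ifgrp create -node node1 -ifgrp a0a -distr-func port -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0e
network port ifgrp add-port -node node1 -ifgrp a0a -port e0f
Together, LACP on the storage side and vPC on the switch side provide both link-level fault tolerance and balancing of NAS traffic across all links, which is exactly what NFS/CIFS cannot do on their own.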
How to reduce costs in the first stage
Despite the fact that Nexus switches are an indispensable component of the FlexPod DataCenter architecture, direct attachment reduces the cost of the solution at the first stage of commissioning, i.e. it lets you avoid buying the Nexus switches at the very beginning. In other words, you can first build a non-FlexPod DC and buy the switches later, spreading the budget more thinly over time and arriving at the more scalable FlexPod DataCenter architecture when it becomes necessary.
Subsequently, the network design can be converted to the more scalable one, and thanks to the duplication of components this can be done without stopping the complex.

The limitation of the network design shown in Fig. 2 is two links from one storage controller to the switches when FCoE and Ethernet traffic run over them simultaneously. If you need more links from the storage system, you will have to separate FCP and Ethernet traffic onto separate, dedicated ports and links.
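As a hedged sketch of that separation (node and adapter names are placeholders): a converged port can be dedicated entirely to FC by switching its personality back from cna to fc and rebooting the controller, for example:
ucadmin modify -node node1 -adapter 0f -mode fc -type target
Ethernet (NFS/CIFS/iSCSI) traffic then stays on the remaining CNA or regular Ethernet ports, while this port carries FCP only.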
Benefits of FlexPod configurations.
New configurations and designs
New documents have been released on the implementation of the data center architecture: FlexPod Express, Select and DataCenter with Clustered Data ONTAP (cDOT):
- Nexus 6000/9000: FlexPod Datacenter with VMware vSphere 5.1U1 and Cisco Nexus 6000 Series Switch Design Guide , FlexPod Datacenter with VMware vSphere 5.5 and Cisco Nexus 9000 Series Switches
- MetroCluster on cDOT (MCC): MetroCluster in Clustered Data ONTAP 8.3 Verification Tests Using Oracle Workloads
- All-Flash FAS (AFF): FlexPod Datacenter with NetApp All-Flash FAS and VMware Horizon (with View) , FlexPod Datacenter with VMware vSphere 5.5 Update 1 and All-Flash FAS .
- FlexPod Express: FlexPod Express with VMware vSphere 5.5: Large Configuration , FlexPod Express with VMware vSphere 5.5 Update 1: Small and Medium configs Implementation Guide .
- Cisco Security: FlexPod Datacenter with Cisco Secure Enclaves
- Cisco ACI : FlexPod Data Center with Microsoft SharePoint 2013 and Cisco Application Centric Infrastructure (ACI) Design Guide , FlexPod Datacenter with VMware vSphere 5.1 U1 and Cisco ACI Design Guide , FlexPod and Cisco ACI
- FlexPod Select: FlexPod Select for High-Performance Oracle RAC NVA Design
- FlexPod + Veeam Install Guide: How Veeam provides availability for Cisco and NetApp converged infrastructures
Please send messages about errors in the text via private message.
Comments and additions, on the contrary, are welcome in the comments.