
Installing the Cisco Nexus 1000v in VMware vSphere 5.x
This article is a translation of two articles by AJ Cruz:
Nexus 1000v Part 1 of 2 (Theory)
Nexus 1000v Part 2 of 2 (Installation & Operation)
It partially overlaps with these articles:
Installing Nexus 1000V on vSphere 5.1 (Part One)
Installing Nexus 1000V on vSphere 5.1 (Part Two)
However, the presentation style differs, as do some details, such as installing the VEM into the hypervisor kernel through VMware Update Manager.
Nexus 1000v Part 1 of 2 (Theory)
This is the first of two posts about my research into the Nexus 1000v switch.
Here I will cover Nexus 1000v theory; in the second part I will describe installation and operation.
As the post goes on I will use terminology that needs defining, so let me get a few terms out of the way right now:
- vDS
- vSphere Distributed Switch
- Virtual Distributed Switch
- DVS
- Distributed Virtual Switch
All of the above are names used for the vSphere Distributed Switch on the Internet and in various documentation.
They all mean the same thing and refer to the virtual switch as a whole.
The Cisco Nexus 1000v is one implementation of a vDS. vSphere also has a built-in vDS, which I do not have much experience with.
Let's figure out what a vDS is, and specifically the Nexus 1000v. First, let's look at something we are all familiar with. The image below shows a Nexus 7004 switch:

Pretty standard. We have a pair of Supervisor Engines in slots 1 and 2, with the Sup in slot 1 active. We also have a pair of I/O modules (IOMs, line cards, Ethernet modules, interface cards) in slots 3 and 4. The lines out to the LAN are there to show how we might connect to the rest of our infrastructure.
Now look at the vDS graphical representation:

We have the same four modules, but now they run as software under VMware (the "V" in vDS) and are distributed across two different ESXi hosts (the "D" in vDS).
VSM - Virtual Supervisor Module. A virtual machine appliance that acts as the "brain" of the vDS (like a Supervisor Engine). The VSM can run either on an ESXi host or on dedicated hardware.
VEM - Virtual Ethernet Module. Software on each ESXi host participating in the vDS; the analog of a line card in a standard modular switch.
SVS - Server Virtualization Switch. The SVS configuration defines how the VEMs communicate with their parent VSM. In general terms it is (at least in my head) something like the backplane traces of a standard modular switch.
Let's dig a little deeper and see how virtual machines "connect" to the vDS and how the vDS connects to the rest of the world. Here is another image as an illustration:

It does not really matter whether port profiles belong to the VEM or the VSM; I have drawn them on the VEM to make the operation easier to understand. So what is a port profile?
In a virtual switch (logically enough) we do not configure a physical interface. In fact, the virtual interface a VM connects to does not even exist until the VM's adapter has been configured. So what we need is a container to put all of this in. The container holds configuration information (VLAN, QoS (Quality of Service) policy, etc.) and acts as a sort of funnel or aggregation point for virtual machines.
When we edit a virtual machine's settings and choose the network connection for its adapter, we see a drop-down list containing (using the image above) items "A" and "B".
Why don't we see the two other port profiles (X and Y)?
vEthernet port profile - the type of port profile used to connect virtual machines. It appears in the network connection drop-down list of a virtual machine's adapter settings.
Ethernet port profile - the type of port profile used for the physical uplinks of an ESXi host. It is not shown in the VM adapter settings.
So virtual machines are assigned to vethernet port profiles, while physical network adapters are assigned to ethernet port profiles. How does a vethernet port profile know which ethernet port profile to use for outbound traffic?
The 1000v does not run spanning tree. Ethernet port profiles must therefore be configured with unique VLANs; in other words, a given VLAN must map to exactly one ethernet port profile. The 1000v will let you configure multiple ethernet port profiles with the same VLAN, but that will cause problems down the road.
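To make that rule concrete, here is a purely illustrative sketch (the profile names and VLANs are hypothetical, not from this lab): each VLAN is carried by exactly one ethernet port profile.
! Hypothetical uplink profiles - VLAN 10 and VLAN 20 are each carried by only one of them
port-profile type ethernet Uplink-A
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10
  no shutdown
  state enabled
port-profile type ethernet Uplink-B
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 20
  no shutdown
  state enabled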
This does not mean we cannot have uplink redundancy. As you can see in the last image, two network adapters are assigned to the same ethernet port profile. Redundancy can be achieved with LACP or with vPC host mode (MAC pinning). I don't want to go too deep into how MAC pinning works, but it basically does what it sounds like: the MAC address of a given virtual machine is pinned to one of the physical uplink ports. Spreading the VMs across the physical network adapters (which happens automatically in vPC host mode) provides a measure of load distribution.
vPC Host Mode - NOT vPC!!! Get vPC out of your head. vPC host mode = MAC pinning. If you have worked with VMware before, this is the same idea as "Route based on originating virtual port ID".
It is a load-balancing method that requires no special configuration on the upstream switch. As mentioned, MAC addresses are simply pinned to one physical port or another.
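As a rough sketch of the two options (hypothetical profile names; the MAC-pinning command actually used in this lab appears in part 2), the redundancy mode is set inside the ethernet port profile with a channel-group command:
! Option 1 - LACP (the upstream switch must be configured to match)
port-profile type ethernet Uplink-LACP
  channel-group auto mode active
! Option 2 - vPC host mode / MAC pinning (no upstream switch configuration needed)
port-profile type ethernet Uplink-Pinned
  channel-group auto mode on mac-pinning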
Next I would like to go a little deeper into the SVS connection. Originally, the VEM (ESXi host) and the VSM had to be Layer 2 adjacent. Recent versions of the 1000v support Layer 3 deployments, and this is the recommended deployment method. The VEM-to-VSM connection is IP; in a Layer 3 deployment all control traffic is sent over UDP port 4785. However, this does not happen automatically. We need to configure a vmkernel vethernet port profile with the "capability l3control" command. This is what tells the VEM to encapsulate everything related to the control plane in UDP 4785 and send it to the VSM.
Now we can see that we have a "chicken or egg" problem, especially if the VSM runs on top of a VEM.
System VLAN - allows the configured VLAN's traffic to be forwarded end to end, which means immediate network access even when there is no connection to the VSM; vmkernel ports must be configured with a system VLAN. Note that the "system vlan <#>" command is set on both the vethernet port profile and the ethernet port profile.
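Putting the last few ideas together, here is a minimal sketch (illustrative names and VLAN; part 2 builds the real lab configuration step by step): the system VLAN appears both on the vmkernel vethernet profile carrying control traffic and on the uplink ethernet profile that carries that VLAN.
! Sketch only - vmkernel/control vethernet profile
port-profile type vethernet L3-Control
  vmware port-group
  switchport mode access
  switchport access vlan 101
  capability l3control
  system vlan 101
  no shutdown
  state enabled
! ...and the uplink ethernet profile carrying the same VLAN
port-profile type ethernet Uplink-Mgmt
  vmware port-group
  switchport mode access
  switchport access vlan 101
  system vlan 101
  no shutdown
  state enabled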
The last thing I want to mention is how edge ports appear on the Nexus 1000v. I mentioned that virtual interfaces do not exist until a VM is connected. Once a VM is assigned a vethernet port profile in its network settings and comes online, a veth interface is created on the Nexus 1000v. A veth interface is the equivalent of a physical host-facing edge port on an ordinary physical switch. The veth interface inherits the configuration of its parent vethernet port profile.
Ethernet port profiles are tied to physical network adapters in the vSphere GUI (more on this in the next post). Inside the Nexus 1000v they show up as modular Ethernet interfaces: the first number is the module number, the second is the port number (vmnic). Using the last image as a reference, vmnic3 would show up on the 1000v as Eth3/4: 3 because that VEM is module 3, and 4 because it is the fourth vmnic on the host. However, we still do not configure anything on these interfaces; everything is done in the port profile.
A final note on VEM numbering. The VSMs are always modules 1 and 2, so VEMs start at module 3. By default, VEMs are numbered sequentially as ESXi hosts are added and come online.
Now that we have a good theoretical foundation for the 1000v, we are ready to move on to the second post in this series.
Nexus 1000v Part 2 of 2 (Installation and Operation)
In part 2 of this series, we will go through the installation of the Cisco Nexus 1000v switch and consider some basic operations.
I assume that the reader has basic knowledge of VMware networking and the architecture and operations of 1000v.
If you would like to study the theory of 1000v first, see part one: “Nexus 1000v Part 1 of 2 (Theory)”
Note that I am starting from a working ESXi environment, including vCenter. If you want to see how I set everything up, see my post "My VMware Lab".
I am starting with my ESXi networking on two standard vSwitches: vSwitch0 for management/VMkernel connections (including one VM port group for the management VLAN) and vSwitch1 for VM traffic.
I have four network cards; vmnic0 and vmnic1 are assigned to vSwitch0, and vmnic2 and vmnic3 are assigned to vSwitch1:

Here are some details about my setup:
VLAN101 - 10.1.1.0/24 and VLAN102 - 10.1.2.0/24
vCenter Appliance - 10.1.2.50
ESXi host - 10.1.1.52
Default gateways - .254
My goal for this installation is to replace vSwitch1 with the Nexus 1000v and confirm that my vCenter and Win2008R2 VM still respond to pings once they are on the vDS.
Installing the Nexus 1000v consists of five basic steps:
- 1. Install/provision the VSM virtual machines
- 2. Register the Nexus 1000v plug-in in vCenter
- 3. Configure VSM-vCenter communication (the SVS connection)
- 4. Install the VEM software on each ESXi host and bring the modules online
- 5. Migrate the host networking configuration
We can install the 1000v either manually or with the Cisco Java installer. Since the Java installer is the recommended method, I will use it in this demo. The Java installer performs steps 1-3, and potentially step 4 (which would make it a complete installation).
Steps 1-3:
Browse to the VSM installer application and double-click the installer icon:

Select the complete installation and the Custom radio button:

Click Next, enter the vCenter Server IP address and credentials, and click Next.
The installer will deploy two VSMs regardless of whether you have one ESXi host or several; if you only have one host, enter the same information for host 2. Fill in the installer with all the required information:

You can type all the fields in yourself or pick the values with the [Browse] button; if you type them, just make sure you do not make a mistake. The installer does not validate the input until you leave the screen, so if you do make a mistake you have to start over.
"-1" and "-2" will automatically be appended to the virtual machine names for VSM 1 and VSM 2, respectively.
Select the .OVA file from the Nexus installation directory. Note that the OVA with "1010" in the name is for the Nexus 1010.
We will deploy Layer 3, which is the preferred method.
In a Layer 3 deployment the Control and Packet interfaces are ignored, so I simply assign them all to the management VLAN.
Once you have finished, click [Next].
While things are moving along, this is a good time to arrange your windows. I like to split the screen in two, leaving vCenter on the left side so I can watch what is happening in vSphere while the 1000v installer does its job.
You will see a summary of everything on the screen. If it all looks good, click Next. Take a break and let ESXi do its magic while the installation runs. When the installation finishes, a confirmation screen is displayed:

Do not close this window yet - let's first verify the steps completed so far. The installer has just performed steps 1 through 3; let's check them ourselves. First, we can see in the image above that there are now two new virtual machines, N1Kv-1 and N1Kv-2, so we know step 1 is complete.
To check step 2 in vCenter, click “Plug-ins” in the menu and then “Manage Plug-ins”.

We see that a new plugin has been installed for the Nexus 1000V. Close this window.
To verify step 3, we will look both on the 1000v and inside vSphere.
SSH to the 1000v and look at the end of the running configuration:
N1Kv# sh run
!Command: show running-config
!Time: Sat Aug 31 04:57:40 2013
version 4.2(1)SV2(1.1a)
svs switch edition essential
-----output omitted------
svs-domain
domain id 1
control vlan 1
packet vlan 1
svs mode L3 interface mgmt0
svs connection vcenter
protocol vmware-vim
remote ip address 10.1.2.50 port 80
vmware dvs uuid "8f 99 26 50 21 ce f8 b2-97 7e 6d 49 a2 b6 9f d8" datacenter-name MYDC
admin user n1kUser
max-ports 8192
connect
vservice global type vsg
tcp state-checks invalid-ack
tcp state-checks seq-past-window
no tcp state-checks window-variation
no bypass asa-traffic
vnm-policy-agent
registration-ip 0.0.0.0
shared-secret **********
log-level
N1Kv#
Here we see the svs connection vcenter configuration; the installer has set up the connection for us.
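As an additional check from the VSM side (my addition, not part of the original walkthrough), the state of the vCenter connection and the SVS domain can be displayed directly; exact output varies by software version:
N1Kv# show svs connections
N1Kv# show svs domain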
Now let's take a look at vCenter to make sure it has created a new vDS. In vCenter, press CTRL + SHIFT + N or go to Home -> Inventory -> Networking.
Expand the tree to verify that the 1000v vDS was created successfully:

Now let's move on to step 4 - installing the VEM software on the ESXi host. If we look at the installer, we see there is an option to "install VIB and add module". So what is a VIB?
VIB stands for vSphere Installation Bundle. It is simply a piece of software or an application that we can install on ESXi; in our case, the Nexus 1000v VEM software.
One caveat: for this step the 1000v installer relies on VMware Update Manager (VUM). Since I do not have VUM, I cannot use the installer for it. So close the installer for now - we will do this step manually.
Step 4
First we need to copy the VIB file to the ESXi host. You can do this any way you like; SCP works. I usually just upload it to my datastore through vCenter. To do this, go to vCenter and press CTRL + SHIFT + H to get back to Home -> Inventory -> Hosts & Clusters. Go to the "Configuration" tab, then "Storage". Right-click the datastore and choose "Browse Datastore". At the root level, click the "Upload files to this datastore" button and then "Upload File".
Browse to the VIB file, select it (I just chose the latest version), and click [Open].
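Alternatively, if you prefer SCP and SSH is already enabled on the host (enabling it is covered just below), you could copy the file from your workstation straight into the host's /tmp. A hedged example using the VIB filename and host address from this lab:
scp cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib root@10.1.1.52:/tmp/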

Enable SSH and the ESXi Shell on the host, then SSH directly to the ESXi host.
If you are not familiar with this process, see: Using the ESXi Shell in ESXi 5.x
In the CLI, go to the datastore and copy the VIB file to /tmp/ so we can work with it.
~ #
~ # ls
altbootbank dev local.tgz proc store usr vmupgrade
bin etc locker productLocker tardisks var
bootbank lib mbr sbin tardisks.noauto vmfs
bootpart.gz lib64 opt scratch tmp vmimages
~ #
~ # cd /vmfs
/vmfs #
/vmfs # ls
devices volumes
/vmfs #
/vmfs # cd volumes
/vmfs/volumes #
/vmfs/volumes # ls
2c12e47f-6088b41c-d660-2d3027a4ae4d 521e1a46-2fa17fa3-cb7d-000c2956018f datastore1 (1)
3d762271-7f5b622d-3cfa-b4a79357ee70 521e1a4d-e203d122-8854-000c2956018f shared
521b4217-727150b0-5b58-000c2908bf12 521e1a4e-8824f799-5681-000c2956018f
/vmfs/volumes #
/vmfs/volumes # cd shared
/vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 #
/vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # ls
6001.18000.080118-1840_amd64fre_Server_en-us-KRMSXFRE_EN_DVD.iso
Cisco_bootbank_cisco-vem-v152-esx_4.2.1.2.1.1a.0-3.1.1.vib
N1Kv-1
N1Kv-2
NSC
Win2008R2_1
cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib
nexus-1000v.4.2.1.SV2.1.1a.iso
vCenter Appliance
/vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # cp cross_cisco-vem-v152-4.2.1.2.1.1a.0-3.1.1.vib /tmp/
/vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 #
Now go to /tmp and install the VIB:
/vmfs/volumes/521b4217-727150b0-5b58-000c2908bf12 # cd /tmp
/tmp #
/tmp # esxcli software vib install -v /tmp/*.vib
Installation Result
Message: Operation finished successfully.
Reboot Required: false
VIBs Installed: Cisco_bootbank_cisco-vem-v152-esx_4.2.1.2.1.1a.0-3.1.1
VIBs Removed:
VIBs Skipped:
/tmp #
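Before going back to the VSM, you could also confirm the VEM software from the host itself. A couple of quick checks (my addition; output omitted, and version strings will differ by release):
/tmp # esxcli software vib list | grep cisco
/tmp # vem status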
At this point we are halfway through step 4. Now we need to get the VEM module to connect to the VSM and come online. To do that, we will create a VMkernel port group backed by a port profile with "capability l3control". Before that, let's check the module status on the 1000v:
N1Kv# sh mod
Mod Ports Module-Type Model Status
--- ----- -------------------------------- ------------------ ------------
1 0 Virtual Supervisor Module Nexus1000V active *
2 0 Virtual Supervisor Module Nexus1000V ha-standby
Mod Sw Hw
--- ------------------ ------------------------------------------------
1 4.2(1)SV2(1.1a) 0.0
2 4.2(1)SV2(1.1a) 0.0
Mod MAC-Address(es) Serial-Num
--- -------------------------------------- ----------
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
2 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
1 10.1.1.10 NA NA
2 10.1.1.10 NA NA
* this terminal session
N1Kv#
We see the two VSM modules, 1 and 2. No VEM is online yet.
First, let's create an ethernet port profile for the management uplink VLAN.
But first, let's arrange our windows again. I like to split the screen so I can watch what happens in vCenter while I work on the 1000v. In vCenter, press CTRL + SHIFT + N to go back to the Networking view. With that window open, let's create the port profile:
N1Kv# conf t
Enter configuration commands, one per line. End with CNTL/Z.
N1Kv(config)# port-profile type ethernet MGMT-Uplink
N1Kv(config-port-prof)# vmware port
N1Kv(config-port-prof)# switchport mode access
N1Kv(config-port-prof)# switchport access vlan 101
N1Kv(config-port-prof)# no shutdown
N1Kv(config-port-prof)# system vlan 101
N1Kv(config-port-prof)# state enable
N1Kv(config-port-prof)#
As soon as we enter "state enable", the port profile immediately appears in vCenter.

Now let's create a vethernet port profile for the control traffic. This is the profile we will add "capability l3control" to:
N1Kv(config-port-prof)# port-profile type vethernet VLAN101
N1Kv(config-port-prof)# vmware port
N1Kv(config-port-prof)# switchport mode access
N1Kv(config-port-prof)# switchport access vlan 101
N1Kv(config-port-prof)# no shutdown
N1Kv(config-port-prof)# system vlan 101
N1Kv(config-port-prof)# capability l3control
Warning: Port-profile 'VLAN101' is configured with 'capability l3control'. Also configure the corresponding access vlan as a system vlan in:
* Port-profile 'VLAN101'.
* Uplink port-profiles that are configured to carry the vlan
N1Kv(config-port-prof)# 2013 Aug 31 06:56:59 N1Kv %MSP-1-CAP_L3_CONTROL_CONFIGURED: Profile is configured with capability l3control. Also configure the corresponding VLAN as system VLAN in this port-profile and uplink port-profiles that are configured to carry the VLAN to ensure no traffic loss.
N1Kv(config-port-prof)# state enable
N1Kv(config-port-prof)#
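If you want to double-check the profile from the VSM side as well (my addition, not shown in the original screenshots), it can be displayed directly; output omitted:
N1Kv(config-port-prof)# show port-profile name VLAN101
N1Kv(config-port-prof)# show running-config port-profile VLAN101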

Now, for the last piece, we need to move a vmnic into the ethernet port profile and move one of our vmkernel connections into the vethernet port profile.
Right-click on vDS and click “Add Host”:

Select the ESXi host and the vmnic to move. I pick only one of the two adapters on vSwitch0 so that I do not lose connectivity to ESXi; also, my iSCSI is bound to vmnic0, so I cannot move it right now. On the right, select the MGMT-Uplink port profile from the drop-down list and click [Next]:

On the next screen, select the vmkernel port to migrate to the 1000v. I am going to use vmk0 (Management), and from the drop-down list I select the vethernet port profile "VLAN101", then click [Next]:

For now, do not worry about migrating virtual machines; just click [Next].
The next screen shows a visual representation of the vDS. Click [Finish].
Go back to the 1000v terminal and you should see:
N1Kv(config-port-prof)# 2013 Aug 31 07:19:28 N1Kv %VEM_MGR-2-VEM_MGR_DETECTED: Host esx2 detected as module 3
2013 Aug 31 07:19:28 N1Kv %VEM_MGR-2-MOD_ONLINE: Module 3 is online
When vCenter is done, let's check the 1000v console:
N1Kv(config-port-prof)# sh mod
Mod Ports Module-Type Model Status
--- ----- -------------------------------- ------------------ ------------
1 0 Virtual Supervisor Module Nexus1000V active *
2 0 Virtual Supervisor Module Nexus1000V ha-standby
3 248 Virtual Ethernet Module NA ok
Mod Sw Hw
--- ------------------ ------------------------------------------------
1 4.2(1)SV2(1.1a) 0.0
2 4.2(1)SV2(1.1a) 0.0
3 4.2(1)SV2(1.1a) VMware ESXi 5.1.0 Releasebuild-799733 (3.1)
Mod MAC-Address(es) Serial-Num
--- -------------------------------------- ----------
1 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
2 00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8 NA
3 02-00-0c-00-03-00 to 02-00-0c-00-03-80 NA
Mod Server-IP Server-UUID Server-Name
--- --------------- ------------------------------------ --------------------
1 10.1.1.10 NA NA
2 10.1.1.10 NA NA
3 10.1.1.52 564d33c8-ba44-2cce-c463-65954956018f 10.1.1.52
* this terminal session
N1Kv(config-port-prof)#
Done. Module 3 is online; step 4 is complete.
Step 5.
Now let's create our VM VLAN, the ethernet port profile for the VM uplink, and the vethernet port profile for our VMs:
N1Kv(config-port-prof)# vlan 102
N1Kv(config-vlan)# name SERVERS
N1Kv(config-vlan)# port-profile type ethernet VM-Uplink
N1Kv(config-port-prof)# vmware port
N1Kv(config-port-prof)# switchport mode access
N1Kv(config-port-prof)# switchport access vlan 102
N1Kv(config-port-prof)# no shutdown
N1Kv(config-port-prof)# state enable
N1Kv(config-port-prof)#
N1Kv(config-port-prof)# port-profile type vethernet VLAN102
N1Kv(config-port-prof)# vmware port
N1Kv(config-port-prof)# switchport mode access
N1Kv(config-port-prof)# switchport access vlan 102
N1Kv(config-port-prof)# no shutdown
N1Kv(config-port-prof)# state enable
N1Kv(config-port-prof)#

Next, we need to move the physical NICs to the VM-Uplink ethernet port profile. To make the change as painless as possible, I will move one NIC, migrate the virtual machines, and then move the other NIC.
Now that the ESXi host has already been added to the vDS, we can right-click the N1Kv switch and choose "Manage Hosts" instead of adding hosts.
Select one of the vmnics on vSwitch1, select the VM-Uplink port group, and click [Next].

Click [Next] two more times, then [Finish].
Now we are ready to migrate the VMs. I'm going to start a continuous ping to my Win2008R2 VM.
Press CTRL + SHIFT + H to return to Home -> Inventory -> Hosts & Clusters.
I right-click my VM and go to "Edit Settings". Then I select the VLAN102 vethernet port profile from the network drop-down list and click [OK].

C:\Users\acruz>ping -t 10.1.2.21
Pinging 10.1.2.21 with 32 bytes of data:
Reply from 10.1.2.21: bytes=32 time=20ms TTL=127
Reply from 10.1.2.21: bytes=32 time=18ms TTL=127
Reply from 10.1.2.21: bytes=32 time=16ms TTL=127
Reply from 10.1.2.21: bytes=32 time=13ms TTL=127
Reply from 10.1.2.21: bytes=32 time=13ms TTL=127
Reply from 10.1.2.21: bytes=32 time=17ms TTL=127
Reply from 10.1.2.21: bytes=32 time=123ms TTL=127
Reply from 10.1.2.21: bytes=32 time=11ms TTL=127
Reply from 10.1.2.21: bytes=32 time=11ms TTL=127
Reply from 10.1.2.21: bytes=32 time=19ms TTL=127
Reply from 10.1.2.21: bytes=32 time=19ms TTL=127
Reply from 10.1.2.21: bytes=32 time=17ms TTL=127
Ping statistics for 10.1.2.21:
Packets: Sent = 16, Received = 16, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 11ms, Maximum = 123ms, Average = 33ms
C:\Users\acruz>
We see one slower reply, but I did not lose a single ping.
I do the same for my vCenter appliance and any other virtual machines until my standard vSwitch1 is empty:

Our final configuration step is to move vmnic2 to the 1000v VM-Uplink port profile, but before doing that, let's configure the VM-Uplink port profile for load balancing; otherwise we will run into problems.
I do not want to do any special configuration on my upstream switch (LACP), so I will use MAC pinning.
N1Kv(config-port-prof)# port-profile VM-Uplink
N1Kv(config-port-prof)# channel-group auto mode on mac-pinning
N1Kv(config-port-prof)#
For demonstration purposes I am going to move vmnic2 a different way. Instead of right-clicking the vDS and selecting "Manage Hosts", let's click on "vSphere Distributed Switch" here.
We see a view of our vDS. In the upper-right corner, click "Manage Physical Adapters...":

Scroll down to the VM-Uplink port group and click "<Click to add NIC>".

Select the physical adapter you want to add and click [OK].

Click [Yes] to remove vmnic2 from vSwitch1 and attach it to the N1Kv.
Click [OK] and after a moment the vmnic is added:

As a final step, I click on the standard vSwitch view and remove vSwitch1.
Some more useful information: on the N1Kv console you can run "show interface status", just like on a regular switch, to see all of the 1000v's ports.
You can run "show interface virtual" to see all the veth ports and which hosts they are on.
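In the same spirit (my addition, not from the original article), since the uplinks use MAC pinning you can also look at the port channel that the 1000v created automatically for them; output omitted:
N1Kv# show port-channel summary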
That's all. Enjoy exploring the Nexus 1000v.