Using Open vSwitch with DPDK to Transfer Data Between Virtual Machines for Network Function Virtualization

Original author: Ashok Emani
The Data Plane Development Kit (DPDK) provides high-performance packet processing libraries and userspace drivers. Starting with Open vSwitch (OVS) version 2.4, the DPDK-optimized vHost path is available in OVS (DPDK support itself has been available in OVS since version 2.2).

Using DPDK in OVS brings significant performance benefits. As in other DPDK applications, network throughput (the number of packets transmitted per second) increases dramatically while latency drops significantly. In addition, some of the most performance-critical OVS code paths have been optimized using the DPDK packet processing libraries.

In this document, we look at configuring OVS with DPDK for communication between virtual machines, with each vhost-user port connected to a separate virtual machine. We then run a simple iperf3 bandwidth test and compare the results against an OVS configuration without DPDK to evaluate the benefits of OVS with DPDK.



Open vSwitch can be installed with the standard package managers on common Linux* distributions, but DPDK support is not enabled by default, so you need to build Open vSwitch with DPDK before proceeding.

For detailed instructions on installing and using OVS with DPDK, see the documentation shipped with the OVS sources. In this document, we cover the main steps, focusing on the DPDK vhost-user port use case.

Requirements for OVS and DPDK


Before compiling DPDK and OVS, make sure that all the build prerequisites listed in the DPDK and OVS documentation are met.

The development tool packages on standard Linux distributions typically satisfy most of these requirements. For example, on yum-based (or dnf-based) distributions, you can install them with the following command:

yum install "@Development Tools" automake tunctl kernel-tools "@Virtualization Platform" "@Virtualization" pciutils hwloc numactl

Also, make sure that QEMU version 2.2.0 or later is installed on your system, as described in the DPDK vhost-user prerequisites documentation.
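
You can check the installed QEMU version with:

qemu-system-x86_64 --version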

Build the DPDK target environment for OVS


To build OVS with DPDK, download the DPDK source code and prepare the target environment. For more information about building and using DPDK, see the DPDK documentation. The main steps are shown in the following code fragment:

# Download and unpack the DPDK 2.1.0 sources
curl -O http://dpdk.org/browse/dpdk/snapshot/dpdk-2.1.0.tar.gz
tar -xvzf dpdk-2.1.0.tar.gz
cd dpdk-2.1.0
export DPDK_DIR=`pwd`
# Build DPDK as a single combined library so that OVS can link against it
sed 's/CONFIG_RTE_BUILD_COMBINE_LIBS=n/CONFIG_RTE_BUILD_COMBINE_LIBS=y/' -i config/common_linuxapp
# Configure the ivshmem target, then rebuild with debug and optimization flags
make install T=x86_64-ivshmem-linuxapp-gcc
cd x86_64-ivshmem-linuxapp-gcc
EXTRA_CFLAGS="-g -Ofast" make -j10

Build OVS with DPDK


With the DPDK target environment built, you can download the latest OVS source code and build it with DPDK enabled. The standard documentation for building OVS with DPDK ships with the OVS sources; here we cover only the basic steps.

git clone https://github.com/openvswitch/ovs.git
cd ovs
export OVS_DIR=`pwd`
./boot.sh
./configure --with-dpdk="$DPDK_DIR/x86_64-ivshmem-linuxapp-gcc/" CFLAGS="-g -Ofast"
make 'CFLAGS=-g -Ofast -march=native' -j10

We now have a complete OVS build with DPDK support enabled. All the standard OVS utilities are in the $OVS_DIR/utilities/ folder, and the database utilities (ovsdb-tool, ovsdb-server) are in the $OVS_DIR/ovsdb/ folder. We will use these utilities in the following steps.
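
To quickly confirm the build, you can query the version string directly from the build tree (no OVS daemons need to be running for this):

$OVS_DIR/utilities/ovs-vsctl --version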

Creating an OVS Database and Running ovsdb-server


Before starting the main OVS process, ovs-vswitchd, you need to initialize the OVS database and start ovsdb-server. The following commands show how to clear out any previous OVS state and create a new OVS database and ovsdb-server instance.

# Stop any running OVS processes and remove stale state
pkill -9 ovs
rm -rf /usr/local/var/run/openvswitch
rm -rf /usr/local/etc/openvswitch/
rm -f /usr/local/etc/openvswitch/conf.db
mkdir -p /usr/local/etc/openvswitch
mkdir -p /usr/local/var/run/openvswitch
cd $OVS_DIR
# Create a new database from the schema and start ovsdb-server in the background
./ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db ./vswitchd/vswitch.ovsschema
./ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
./utilities/ovs-vsctl --no-wait init
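
At this point ovs-vsctl can already talk to the database socket (ovs-vswitchd is not required for this), so you can verify that ovsdb-server is up:

./utilities/ovs-vsctl show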

Configure host and network adapters to use OVS with DPDK


DPDK requires hugepage support in the host operating system, and the network adapters must be bound to a DPDK userspace poll mode driver (PMD).
To enable hugepages and the VFIO userspace driver, add the following parameters to GRUB_CMDLINE_LINUX in /etc/default/grub, then regenerate the GRUB configuration and reboot the system:

# Parameters to append to GRUB_CMDLINE_LINUX in /etc/default/grub:
default_hugepagesz=1G hugepagesz=1G hugepages=16 hugepagesz=2M hugepages=2048 iommu=pt intel_iommu=on isolcpus=1-13,15-27
# Then regenerate the GRUB configuration and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot

Depending on the amount of memory available in the system, you can adjust the number and size of the hugepages. The isolcpus parameter isolates the listed CPUs from the Linux scheduler so that DPDK-based applications can be pinned to them.
After rebooting the system, check the kernel command line and the allocated hugepages, for example with the following commands.
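
cat /proc/cmdline
grep Huge /proc/meminfo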



Now mount the hugepage filesystems and load the vfio-pci userspace driver.

mkdir -p /mnt/huge
mkdir -p /mnt/huge_2mb
mount -t hugetlbfs hugetlbfs /mnt/huge
mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
modprobe vfio-pci
cp $DPDK_DIR/tools/dpdk_nic_bind.py /usr/bin/.
dpdk_nic_bind.py --status
# Bind the adapter to vfio-pci; replace 05:00.1 with your adapter's PCI address
dpdk_nic_bind.py --bind=vfio-pci 05:00.1

The dpdk_nic_bind.py --status command lists the network adapters and the drivers they are bound to, so you can verify that the adapter is now bound to vfio-pci.



If your intended use case involves only traffic between virtual machines and no physical network adapters, you can skip the vfio-pci steps above.

Launch ovs-vswitchd


With the OVS database initialized and the host configured for DPDK, you can now start the main ovs-vswitchd process. In the command below, -c 0x2 is the coremask selecting the CPU core for the DPDK EAL threads, -n 4 sets the number of memory channels, and --socket-mem 2048 preallocates 2048 MB of hugepage memory for DPDK.

modprobe openvswitch
$OVS_DIR/vswitchd/ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 2048 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile --detach
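
To confirm that the daemon started, you can check for the running process:

ps -C ovs-vswitchd -o pid,args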

Creating a bridge and DPDK vhost-user ports for use between virtual machines


In our test setup, we create a bridge and add two DPDK vhost-user ports. Optionally, you can also add the vfio-pci physical network adapter configured earlier (the dpdk0 port below).

$OVS_DIR/utilities/ovs-vsctl show
# datapath_type=netdev selects the userspace (DPDK) datapath
$OVS_DIR/utilities/ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
$OVS_DIR/utilities/ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
$OVS_DIR/utilities/ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser
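
For each dpdkvhostuser port, OVS creates a UNIX domain socket in its run directory; these are the socket paths the QEMU commands below connect to. You can confirm that the sockets exist:

ls /usr/local/var/run/openvswitch/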

After these commands, ovs-vsctl show displays the final OVS configuration: bridge br0 with the dpdk0, vhost-user1, and vhost-user2 ports attached.



Using DPDK vhost-user ports with virtual machines


Creating the virtual machines is beyond the scope of this document. Once you have two virtual machine images (for example, f21vm1.qcow2 and f21vm2.qcow2), the following commands show how to attach the DPDK vhost-user ports created earlier. Note that vhost-user networking requires the guest memory to be backed by shared hugepages, which is what the memory-backend-file object with share=on provides.

qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc
qemu-system-x86_64 -m 1024 -smp 4 -cpu host -hda ~/f21vm2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net none \
-chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user2 \
-netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
-device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet1 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc

Simple DPDK vhost-user performance test between virtual machines using iperf3


Log in to the virtual machines and configure static IP addresses on the same subnet for the two network adapters, then install iperf3 and run a simple network test: start iperf3 in server mode (iperf3 -s) in one virtual machine and run the iperf3 client from the other.
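
A minimal sketch of the guest-side setup, assuming the virtio adapter appears as eth0 inside each guest and using an arbitrary private subnet (adjust the interface name and addresses to your environment):

# in VM1 (iperf3 server)
ip addr add 192.168.100.1/24 dev eth0
ip link set eth0 up
iperf3 -s

# in VM2 (iperf3 client)
ip addr add 192.168.100.2/24 dev eth0
ip link set eth0 up
iperf3 -c 192.168.100.1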



Repeat the performance test with a standard OVS build (without DPDK)


In the previous sections, we built and ran OVS-DPDK directly from the $OVS_DIR folder without installing it on the system. To repeat the test with a standard OVS build (without DPDK), you can simply install it with the distribution's package manager. For example, on yum-based (or dnf-based) systems, use the following commands:

pkill -9 ovs
yum install openvswitch
rm -f /etc/openvswitch/conf.db
mkdir -p /var/run/openvswitch
ovsdb-tool create /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
ovs-vsctl --no-wait init
ovs-vswitchd unix:/var/run/openvswitch/db.sock --pidfile --detach
ovs-vsctl add-br br0
ovs-vsctl show

At this stage, we have a fresh OVS database configured and a running ovs-vswitchd process without DPDK.
For instructions on setting up the two virtual machines with tap devices attached to the non-DPDK OVS bridge (br0), see the standard OVS and QEMU documentation. Then start the virtual machines using the same images as before, for example:

qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c1.qcow2 -boot c -enable-kvm -no-reboot -nographic -net nic,macaddr=00:11:22:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown
qemu-system-x86_64 -m 512 -smp 4 -cpu host -hda ~/f21vm1c2.qcow2 -boot c -enable-kvm -no-reboot -nographic -net nic,macaddr=00:11:23:EE:EE:EE -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown
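
The /etc/ovs-ifup and /etc/ovs-ifdown helper scripts referenced above attach the tap device to the bridge on VM start and remove it on shutdown. They are not shown in the original listing; a minimal sketch, assuming the bridge is named br0, might look like this:

#!/bin/sh
# /etc/ovs-ifup (illustrative): bring the tap device up and add it to br0
ip link set $1 up
ovs-vsctl add-port br0 $1

#!/bin/sh
# /etc/ovs-ifdown (illustrative): remove the tap device from br0
ovs-vsctl del-port br0 $1
ip link set $1 down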

Repeat the simple iperf3 performance test performed earlier and compare the results; actual numbers on your system may vary depending on its configuration.



In our comparison, OVS with DPDK showed a significant increase in performance. Both performance tests were run on the same system; the only difference was that one used the standard OVS build and the other used OVS with DPDK.

Conclusion


Open vSwitch version 2.4 adds optimized DPDK support, which translates into a very significant performance boost. In this article, we showed how to build and use OVS with DPDK, set up a simple OVS bridge with DPDK vhost-user ports connecting virtual machines, and demonstrated the performance improvement with an iperf3 benchmark comparing OVS with and without DPDK.
