Data Plane Development Kit
The DPDK is a set of libraries and drivers for fast packet processing, which runs mostly in Linux userland. These libraries provide the so-called "Environment Abstraction Layer" (EAL), which hides the details of the underlying environment and provides a standard programming interface. Common use cases are specialised solutions such as network function virtualization and advanced high-throughput network switching. The DPDK uses a run-to-completion model for fast data plane performance and accesses devices via polling to eliminate the latency of interrupt processing, at the tradeoff of higher CPU consumption. It was designed to run on any processor; the first supported CPU was Intel x86, and support has since been extended to IBM Power 8, EZchip TILE-Gx and ARM.
Ubuntu currently supports DPDK version 2.2 and provides some infrastructure to ease its use.
Prerequisites
This package is currently compiled for the lowest possible CPU requirements, which still requires at least SSE3 to be supported by the CPU.
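A quick way to verify this on a given host (purely a convenience check; SSE3 is reported as the "pni" flag in /proc/cpuinfo):

grep -qw pni /proc/cpuinfo && echo "SSE3 supported" || echo "SSE3 not supported"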
The list of network cards supported by upstream DPDK can be found at supported NICs. However, many of those are disabled by default in the upstream project as they are not yet in a stable state. The subset of network cards that DPDK has enabled in the package, as available in Ubuntu 16.04, is:
Intel
Chelsio
- cxgbe (Terminator 5)
Cisco
- enic (UCS Virtual Interface Card)
Paravirtualization
- virtio-net (QEMU)
Other
On top of that, the package experimentally enables the following two PMD drivers, as they represent (virtual) devices that are very accessible to end users:
Paravirtualization
- xenvirt (Xen)
Other
- pcap (file or kernel driver)
Cards have to be unassigned from their kernel driver and instead be assigned to uio_pci_generic or vfio-pci. uio_pci_generic is older and usually easier to get working.
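Both drivers are kernel modules shipped in the linux-image-extra-<VERSION> package (see also the note in the configuration file example below) and are not always installed by default. A minimal sketch to install and load them manually, assuming the running kernel is the one to be used:

sudo apt-get install linux-image-extra-$(uname -r)
sudo modprobe uio_pci_generic
sudo modprobe vfio-pci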
The newer vfio-pci requires that you activate the following kernel parameters to enable the IOMMU:
iommu=pt intel_iommu=on
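To make these parameters persistent, one common way (assuming the system boots via GRUB; adapt this to your boot loader) is to append them to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and then regenerate the boot configuration and reboot:

sudo update-grub
sudo reboot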
On top of that, for vfio-pci you then have to configure and assign the IOMMU groups accordingly.
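Whether and how devices are grouped can be inspected via sysfs, for example by listing the IOMMU group of each PCI device:

find /sys/kernel/iommu_groups/ -type l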
Note: In a virtio-based environment it is enough to "unassign" devices from the kernel driver. Without that, DPDK will refuse to use the device, to avoid issues with the kernel and DPDK working on the device at the same time. Since DPDK can work directly on virtio devices, it is not required to assign e.g. uio_pci_generic to those devices.
Manual configuration and status checks can be done via sysfs or with the tool dpdk_nic_bind:
dpdk_nic_bind --help

Usage:
------

     dpdk_nic_bind [options] DEVICE1 DEVICE2 ....

where DEVICE1, DEVICE2 etc, are specified via PCI "domain:bus:slot.func" syntax
or "bus:slot.func" syntax. For devices bound to Linux kernel drivers, they may
also be referred to by Linux interface name e.g. eth0, eth1, em0, em1, etc.

Options:
    --help, --usage:
        Display usage information and quit

    -s, --status:
        Print the current status of all known network interfaces.
        For each device, it displays the PCI domain, bus, slot and function,
        along with a text description of the device. Depending upon whether the
        device is being used by a kernel driver, the igb_uio driver, or no
        driver, other relevant information will be displayed:
        * the Linux interface name e.g. if=eth0
        * the driver being used e.g. drv=igb_uio
        * any suitable drivers not currently using that device
            e.g. unused=igb_uio
        NOTE: if this flag is passed along with a bind/unbind option, the status
        display will always occur after the other operations have taken place.

    -b driver, --bind=driver:
        Select the driver to use or "none" to unbind the device

    -u, --unbind:
        Unbind a device (Equivalent to "-b none")

    --force:
        By default, devices which are used by Linux - as indicated by having
        routes in the routing table - cannot be modified. Using the --force
        flag overrides this behavior, allowing active links to be forcibly
        unbound.
        WARNING: This can lead to loss of network connection and should be
        used with caution.

Examples:
---------

To display current device status:
        dpdk_nic_bind --status

To bind eth1 from the current driver and move to use igb_uio
        dpdk_nic_bind --bind=igb_uio eth1

To unbind 0000:01:00.0 from using any driver
        dpdk_nic_bind -u 0000:01:00.0

To bind 0000:02:00.0 and 0000:02:00.1 to the ixgbe kernel driver
        dpdk_nic_bind -b ixgbe 02:00.0 02:00.1
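For example, to hand the first port of the X540 card used throughout this guide over to uio_pci_generic (the PCI ID here is just the one from the examples below; adapt it to your system):

sudo dpdk_nic_bind --bind=uio_pci_generic 0000:04:00.0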
DPDK Device configuration
The package dpdk provides init scripts that ease configuration of device assignment and huge pages. It also makes them persistent across reboots.
The following is an example of the file /etc/dpdk/interfaces configuring two ports of a network card: one with uio_pci_generic and the other one with vfio-pci.
# <bus>         Currently only "pci" is supported
# <id>          Device ID on the specified bus
# <driver>      Driver to bind against (vfio-pci or uio_pci_generic)
#
# Be aware that the two DPDK compatible drivers uio_pci_generic and vfio-pci are
# part of linux-image-extra-<VERSION> package.
# This package is not always installed by default - for example in cloud-images.
# So please install it in case you run into missing module issues.
#
# <bus> <id>     <driver>
pci 0000:04:00.0 uio_pci_generic
pci 0000:04:00.1 vfio-pci
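Assuming the init scripts shipped with the dpdk package are used to apply this file, a restart of that service should pick up the new assignment (the service name is that of the package):

sudo service dpdk restart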
Cards are identified by their PCI-ID. If you are unsure, you can use the tool dpdk_nic_bind to show the currently available devices and the drivers they are assigned to.
dpdk_nic_bind --status

Network devices using DPDK-compatible driver
============================================
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2' drv=uio_pci_generic unused=ixgbe

Network devices using kernel driver
===================================
0000:02:00.0 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eth0 drv=tg3 unused=uio_pci_generic *Active*
0000:02:00.1 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eth1 drv=tg3 unused=uio_pci_generic
0000:02:00.2 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eth2 drv=tg3 unused=uio_pci_generic
0000:02:00.3 'NetXtreme BCM5719 Gigabit Ethernet PCIe' if=eth3 drv=tg3 unused=uio_pci_generic
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2' if=eth5 drv=ixgbe unused=uio_pci_generic

Other network devices
=====================
<none>
DPDK HugePage configuration
DPDK makes heavy use of huge pages to eliminate pressure on the TLB. Therefore hugepages have to be configured in your system.
The dpdk package has a config file and scripts that try to ease hugepage configuration for DPDK in the form of /etc/dpdk/dpdk.conf. If you have more consumers of hugepages than just DPDK in your system, or very special requirements for how your hugepages are going to be set up, you likely want to allocate/control them yourself. If not, this can be a great simplification when getting DPDK configured for your needs.
Here is an example configuring 1024 hugepages of 2M each and four 1G pages:
NR_2M_PAGES=1024
NR_1G_PAGES=4
As shown this supports configuring 2M and the larger 1G hugepages (or a mix of both). It will make sure there are proper hugetlbfs mountpoints for DPDK to find both sizes no matter what your default huge page size is. The config file itself holds more details on certain corner cases and a few hints if you want to allocate hugepages manually via a kernel parameter.
Which size you want depends on your needs: 1G pages are certainly more effective regarding TLB pressure, but there have been reports of them fragmenting inside the DPDK memory allocations. Also, it can be harder to find enough free space to set up a certain amount of 1G pages later in the lifecycle of a system.
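The config file also hints at allocating hugepages manually via kernel parameters instead. A sketch of a kernel command line matching the example above (1G pages in particular are best reserved at boot, before memory fragments) could look like:

default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024

The resulting allocation can be checked at any time, for example with:

grep Huge /proc/meminfo                                        # counters for the default hugepage size
cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages     # per-size view via sysfs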
Compile DPDK Applications
Currently there are not a lot of consumers of the DPDK library that are stable and released. OpenVswitch-DPDK is an exception to that (see below), but in general it is very likely that you will want or have to compile an application against the library.
You will often find guides that tell you to fetch the DPDK sources, build them to your needs and eventually build your application based on DPDK by setting RTE_* values for the build system. Since Ubuntu provides an already compiled DPDK for you, you can skip all that. To simplify setting the proper variables, you can source the file /usr/share/dpdk/dpdk-sdk-env.sh before building your application. Here is an excerpt building the l2fwd example application delivered with the dpdk-doc package:
sudo apt-get install dpdk-dev libdpdk-dev
. /usr/share/dpdk/dpdk-sdk-env.sh
make -C /usr/share/dpdk/examples/l2fwd
Depending on what you build, it might be a good idea to install all of the DPDK build dependencies before the make:
sudo apt-get build-dep dpdk
OpenVswitch-DPDK
Being a library, DPDK doesn't do a lot on its own, so it depends on emerging projects making use of it. One consumer of the library that is already bundled in the Ubuntu 16.04 release is OpenVswitch with DPDK support, in the package openvswitch-switch-dpdk.
Here is an example of how to install and configure a basic OpenVswitch using DPDK for later use via libvirt/qemu-kvm:
sudo apt-get install openvswitch-switch-dpdk
sudo update-alternatives --set ovs-vswitchd /usr/lib/openvswitch-switch-dpdk/ovs-vswitchd-dpdk
echo "DPDK_OPTS='--dpdk -c 0x1 -n 4 -m 2048 --vhost-owner libvirt-qemu:kvm --vhost-perm 0664'" | sudo tee -a /etc/default/openvswitch-switch
sudo service openvswitch-switch restart
Please remember that you have to assign devices to DPDK compatible drivers (see above) before restarting.
The section --vhost-owner libvirt-qemu:kvm --vhost-perm 0664 will set up vhost_user ports with owner/permissions compatible with Ubuntu's way of running qemu-kvm/libvirt with reduced privileges for more security.
Please note that the section -m 2048 is the most basic NUMA setup for a single socket system. If you have multiple sockets, you might want to define how the memory is split among them, for example -m 1024, 1024. Please be aware that DPDK will try to work only with memory local to the network cards it uses (for performance reasons). That said, if you have multiple NUMA nodes but all network cards on one of them, you should consider spreading your cards across nodes. If that is not possible, at least allocate your memory to the node where the cards reside, for example in a two node system all to node #2: -m 0, 2048. You can use the tool lstopo from the package hwloc-nox to see on which socket your cards are located.
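For example (the tool is part of the hwloc-nox package mentioned above; depending on the variant installed the binary may be called lstopo or lstopo-no-graphics):

sudo apt-get install hwloc-nox
lstopo-no-graphics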
The OpenVswitch you now started supports all port types OpenVswitch usually does, plus DPDK port types. Here is an example of how to create a bridge and - instead of a normal external port - add an external DPDK port to it:
ovs-vsctl add-br ovsdpdkbr0 -- set bridge ovsdpdkbr0 datapath_type=netdev
ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk
The enablement of DPDK in Open vSwitch has changed with version 2.6. So for users of releases >=16.10, as well as for users of the Ubuntu Cloud Archive >=newton, the enablement has changed compared to Ubuntu 16.04. The options formerly passed via DPDK_OPTS are now configured via ovs-vsctl into the Open vSwitch configuration database.
The same example as above would in the new way look like:
# Enable DPDK
ovs-vsctl set Open_vSwitch . "other_config:dpdk-init=true"
# run on core 0
ovs-vsctl set Open_vSwitch . "other_config:dpdk-lcore-mask=0x1"
# Allocate 2G huge pages (not Numa node aware)
ovs-vsctl set Open_vSwitch . "other_config:dpdk-alloc-mem=2048"
# group/permissions for vhost-user sockets (required to work with libvirt/qemu)
ovs-vsctl set Open_vSwitch . \
    "other_config:dpdk-extra=--vhost-owner libvirt-qemu:kvm --vhost-perm 0666"
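As with the DPDK_OPTS-based configuration, these options are generally only picked up when ovs-vswitchd (re)starts, so a service restart as shown above is typically needed afterwards:

sudo service openvswitch-switch restart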
Please see the associated upstream documentation and the man page of the vswitch configuration as provided by the package for more details:
/usr/share/doc/openvswitch-common/INSTALL.DPDK.md.gz
/usr/share/doc/openvswitch-common/INSTALL.DPDK-ADVANCED.md.gz
man ovs-vswitchd.conf.db
OpenVswitch DPDK to KVM Guests
If you are not building some sort of SDN switch or NFV on top of DPDK, it is very likely that you want to forward traffic to KVM guests. The good news is that, with the new qemu/libvirt/dpdk/openvswitch versions in Ubuntu 16.04, this no longer requires manually appending command line strings. This chapter covers a basic configuration of how to connect a KVM guest to an OpenVswitch-DPDK instance.
The guest has to be backed by shared hugepages for DPDK/vhost_user to work. To ensure in general that libvirt/qemu-kvm finds a proper hugepage mountpoint, you can just enable KVM_HUGEPAGES in /etc/default/qemu-kvm. Afterwards, restart the service to pick up the changed configuration:
sed -ri -e 's,(KVM_HUGEPAGES=).*,\11,' /etc/default/qemu-kvm
service qemu-kvm restart
Backing a guest with hugepages is now also supported via recent libvirt; just add the following snippet to your virsh XML (or the equivalent libvirt interface you use). Those XMLs can also be used as templates to easily spawn guests with "uvt-kvm create".
<numa>
  <cell id='0' cpus='0' memory='6291456' unit='KiB' memAccess='shared'/>
</numa>
[...]
<memoryBacking>
  <hugepages>
    <page size="2" unit="M" nodeset="0"/>
  </hugepages>
</memoryBacking>
The new and recommended way to get traffic to a KVM guest is using vhost_user. This will cause DPDK to create a socket that qemu will connect the guest to. Here is an example of how to add such a port to the bridge you created above:
ovs-vsctl add-port ovsdpdkbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
This will create a vhost_user socket at /var/run/openvswitch/vhost-user-1.
To let libvirt/kvm consume this socket and create a guest virtio network device for it, add a snippet like this to your guest definition as the network definition:
<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user-1' mode='client'/>
  <model type='virtio'/>
</interface>
DPDK in KVM Guests
If you have no access to DPDK supported network cards you can still work with DPDK by using its support for virtio. To do so you have to create guests backed by hugepages (see above).
On top of that, at least SSE3 is required. The default CPU model that qemu/libvirt uses only goes up to SSE2, so you will have to define a model that passes the proper feature flags - and of course have a host system that supports them. An example can be found in the following snippet for your virsh XML (or the equivalent virsh interface you use).
<cpu mode='host-passthrough'>
This example is rather invasive, as it passes through all host features. That in turn makes the guest not very migratable, as the target would need all those features as well. A "softer" way is to just add sse3 to the default model, like in the following example:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>qemu64</model>
  <feature policy='require' name='ssse3'/>
</cpu>
Also, virtio nowadays supports multiqueue, which DPDK in turn can exploit for better speed. To modify a normal virtio definition to have multiple queues, add the following to your interface definition. This is about enhancing a normal virtio NIC to have multiple queues, to later on be consumed e.g. by DPDK in the guest.
<driver name="vhost" queues="4"/>
Tuning Openvswitch-DPDK
DPDK has plenty of options - in combination with Openvswitch-DPDK the two most commonly used are:
ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=2
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6
The first selects how many rx queues are to be used for each DPDK interface, while the second controls how many and where to run PMD threads. The example above will utilize two rx queues and run PMD threads on CPUs 1 and 2 (the mask 0x6 is binary 110, i.e. cores 1 and 2). See the referenced links to "EAL Command-line Options" and "OpenVswitch DPDK installation" at the end of this document for more.
As usual with tunings you have to know your system and workload really well - so please verify any tunings with workloads matching your real use case.
Support and Troubleshooting
DPDK is a fast-evolving project. In any search for support and further guides, it is highly recommended to first check whether they apply to the current version.
- For OpenVswitch-DPDK: OpenStack Mailing Lists
- Known issues in DPDK Launchpad Area
- Join the IRC channels #DPDK or #openvswitch on freenode.
Issues are often due to missing small details in the general setup. Later on, these missing details cause problems which can be hard to track down to their root cause. A common case seems to be the "could not open network device dpdk0 (No such device)" issue. This occurs rather late when setting up a port in Open vSwitch with DPDK, but the root cause most of the time is very early in the setup and initialization. Here is an example of how the proper initialization of a device looks - this can be found in the syslog/journal when starting Open vSwitch with DPDK enabled:
ovs-ctl[3560]: EAL: PCI device 0000:04:00.1 on NUMA socket 0
ovs-ctl[3560]: EAL: probe driver: 8086:1528 rte_ixgbe_pmd
ovs-ctl[3560]: EAL: PCI memory mapped at 0x7f2140000000
ovs-ctl[3560]: EAL: PCI memory mapped at 0x7f2140200000
If this is missing - whether due to ignored cards, failed initialization or other reasons - there will later on be no DPDK device to refer to. Unfortunately, the logging is spread across syslog/journal and the openvswitch log. To allow some cross-checking, here is an example of what can be found in these logs, relative to the entered command.
#Note: This log was taken with dpdk 2.2 and openvswitch 2.5
Captions:
CMD: that you enter
SYSLOG: (Including EAL and OVS Messages)
OVS-LOG: (Openvswitch messages)

#PREPARATION Bind an interface to DPDK UIO drivers, make Hugepages available, enable DPDK on OVS

CMD: sudo service openvswitch-switch restart

SYSLOG:
2016-01-22T08:58:31.372Z|00003|daemon_unix(monitor)|INFO|pid 3329 died, killed (Terminated), exiting
2016-01-22T08:58:33.377Z|00002|vlog|INFO|opened log file /var/log/openvswitch/ovs-vswitchd.log
2016-01-22T08:58:33.381Z|00003|ovs_numa|INFO|Discovered 12 CPU cores on NUMA node 0
2016-01-22T08:58:33.381Z|00004|ovs_numa|INFO|Discovered 1 NUMA nodes and 12 CPU cores
2016-01-22T08:58:33.381Z|00005|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connecting...
2016-01-22T08:58:33.383Z|00006|reconnect|INFO|unix:/var/run/openvswitch/db.sock: connected
2016-01-22T08:58:33.386Z|00007|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.5.0

OVS-LOG:
systemd[1]: Stopping Open vSwitch...
systemd[1]: Stopped Open vSwitch.
systemd[1]: Stopping Open vSwitch Internal Unit...
ovs-ctl[3541]: * Killing ovs-vswitchd (3329)
ovs-ctl[3541]: * Killing ovsdb-server (3318)
systemd[1]: Stopped Open vSwitch Internal Unit.
systemd[1]: Starting Open vSwitch Internal Unit...
ovs-ctl[3560]: * Starting ovsdb-server
ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=7.12.1
ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=2.5.0 "external-ids:system-id=\"e7c5ba80-bb14-45c1-b8eb-628f3ad03903\"" "system-type=\"Ubuntu\"" "system-version=\"16.04-xenial\""
ovs-ctl[3560]: * Configuring Open vSwitch system IDs
ovs-ctl[3560]: 2016-01-22T08:58:31Z|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to /var/run/openvswitch
ovs-vswitchd: ovs|00001|dpdk|INFO|No -vhost_sock_dir provided - defaulting to /var/run/openvswitch
ovs-ctl[3560]: EAL: Detected lcore 0 as core 0 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 1 as core 1 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 2 as core 2 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 3 as core 3 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 4 as core 4 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 5 as core 5 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 6 as core 0 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 7 as core 1 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 8 as core 2 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 9 as core 3 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 10 as core 4 on socket 0
ovs-ctl[3560]: EAL: Detected lcore 11 as core 5 on socket 0
ovs-ctl[3560]: EAL: Support maximum 128 logical core(s) by configuration.
ovs-ctl[3560]: EAL: Detected 12 lcore(s)
ovs-ctl[3560]: EAL: VFIO modules not all loaded, skip VFIO support...
ovs-ctl[3560]: EAL: Setting up physically contiguous memory...
ovs-ctl[3560]: EAL: Ask a virtual area of 0x100000000 bytes
ovs-ctl[3560]: EAL: Virtual area found at 0x7f2040000000 (size = 0x100000000)
ovs-ctl[3560]: EAL: Requesting 4 pages of size 1024MB from socket 0
ovs-ctl[3560]: EAL: TSC frequency is ~2397202 KHz
ovs-vswitchd[3592]: EAL: TSC frequency is ~2397202 KHz
ovs-vswitchd[3592]: EAL: Master lcore 0 is ready (tid=fc6cbb00;cpuset=[0])
ovs-vswitchd[3592]: EAL: PCI device 0000:04:00.0 on NUMA socket 0
ovs-vswitchd[3592]: EAL: probe driver: 8086:1528 rte_ixgbe_pmd
ovs-vswitchd[3592]: EAL: Not managed by a supported kernel driver, skipped
ovs-vswitchd[3592]: EAL: PCI device 0000:04:00.1 on NUMA socket 0
ovs-vswitchd[3592]: EAL: probe driver: 8086:1528 rte_ixgbe_pmd
ovs-vswitchd[3592]: EAL: PCI memory mapped at 0x7f2140000000
ovs-vswitchd[3592]: EAL: PCI memory mapped at 0x7f2140200000
ovs-ctl[3560]: EAL: Master lcore 0 is ready (tid=fc6cbb00;cpuset=[0])
ovs-ctl[3560]: EAL: PCI device 0000:04:00.0 on NUMA socket 0
ovs-ctl[3560]: EAL: probe driver: 8086:1528 rte_ixgbe_pmd
ovs-ctl[3560]: EAL: Not managed by a supported kernel driver, skipped
ovs-ctl[3560]: EAL: PCI device 0000:04:00.1 on NUMA socket 0
ovs-ctl[3560]: EAL: probe driver: 8086:1528 rte_ixgbe_pmd
ovs-ctl[3560]: EAL: PCI memory mapped at 0x7f2140000000
ovs-ctl[3560]: EAL: PCI memory mapped at 0x7f2140200000
ovs-vswitchd[3592]: PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
ovs-vswitchd[3592]: PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x1528
ovs-ctl[3560]: PMD: eth_ixgbe_dev_init(): MAC: 4, PHY: 3
ovs-ctl[3560]: PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x1528
ovs-ctl[3560]: Zone 0: name:<RG_MP_log_history>, phys:0x83fffdec0, len:0x2080, virt:0x7f213fffdec0, socket_id:0, flags:0
ovs-ctl[3560]: Zone 1: name:<MP_log_history>, phys:0x83fd73d40, len:0x28a0c0, virt:0x7f213fd73d40, socket_id:0, flags:0
ovs-ctl[3560]: Zone 2: name:<rte_eth_dev_data>, phys:0x83fd43380, len:0x2f700, virt:0x7f213fd43380, socket_id:0, flags:0
ovs-ctl[3560]: * Starting ovs-vswitchd
ovs-ctl[3560]: * Enabling remote OVSDB managers
systemd[1]: Started Open vSwitch Internal Unit.
systemd[1]: Starting Open vSwitch...
systemd[1]: Started Open vSwitch.
CMD: sudo ovs-vsctl add-br ovsdpdkbr0 -- set bridge ovsdpdkbr0 datapath_type=netdev

SYSLOG:
2016-01-22T08:58:56.344Z|00008|memory|INFO|37256 kB peak resident set size after 24.5 seconds
2016-01-22T08:58:56.346Z|00009|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports recirculation
2016-01-22T08:58:56.346Z|00010|ofproto_dpif|INFO|netdev@ovs-netdev: MPLS label stack length probed as 3
2016-01-22T08:58:56.346Z|00011|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath supports unique flow ids
2016-01-22T08:58:56.346Z|00012|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_state
2016-01-22T08:58:56.346Z|00013|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_zone
2016-01-22T08:58:56.346Z|00014|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_mark
2016-01-22T08:58:56.346Z|00015|ofproto_dpif|INFO|netdev@ovs-netdev: Datapath does not support ct_label
2016-01-22T08:58:56.360Z|00016|bridge|INFO|bridge ovsdpdkbr0: added interface ovsdpdkbr0 on port 65534
2016-01-22T08:58:56.361Z|00017|bridge|INFO|bridge ovsdpdkbr0: using datapath ID 00005a4a1ed0a14d
2016-01-22T08:58:56.361Z|00018|connmgr|INFO|ovsdpdkbr0: added service controller "punix:/var/run/openvswitch/ovsdpdkbr0.mgmt"

OVS-LOG:
ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl add-br ovsdpdkbr0 -- set bridge ovsdpdkbr0 datapath_type=netdev
systemd-udevd[3607]: Could not generate persistent MAC address for ovs-netdev: No such file or directory
kernel: [50165.886554] device ovs-netdev entered promiscuous mode
kernel: [50165.901261] device ovsdpdkbr0 entered promiscuous mode

CMD: sudo ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk

SYSLOG:
2016-01-22T08:59:06.369Z|00019|memory|INFO|peak resident set size grew 155% in last 10.0 seconds, from 37256 kB to 95008 kB
2016-01-22T08:59:06.369Z|00020|memory|INFO|handlers:4 ports:1 revalidators:2 rules:5
2016-01-22T08:59:30.989Z|00021|dpdk|INFO|Port 0: 8c:dc:d4:b3:6d:e9
2016-01-22T08:59:31.520Z|00022|dpdk|INFO|Port 0: 8c:dc:d4:b3:6d:e9
2016-01-22T08:59:31.521Z|00023|dpif_netdev|INFO|Created 1 pmd threads on numa node 0
2016-01-22T08:59:31.522Z|00001|dpif_netdev(pmd16)|INFO|Core 0 processing port 'dpdk0'
2016-01-22T08:59:31.522Z|00024|bridge|INFO|bridge ovsdpdkbr0: added interface dpdk0 on port 1
2016-01-22T08:59:31.522Z|00025|bridge|INFO|bridge ovsdpdkbr0: using datapath ID 00008cdcd4b36de9
2016-01-22T08:59:31.523Z|00002|dpif_netdev(pmd16)|INFO|Core 0 processing port 'dpdk0'

OVS-LOG:
ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl add-port ovsdpdkbr0 dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a79ebc0 hw_ring=0x7f211a7a6c00 dma_addr=0x81a7a6c00
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f211a78a6c0 sw_sc_ring=0x7f211a786580 hw_ring=0x7f211a78e800 dma_addr=0x81a78e800
ovs-vswitchd[3595]: PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a79ebc0 hw_ring=0x7f211a7a6c00 dma_addr=0x81a7a6c00
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a76e4c0 hw_ring=0x7f211a776500 dma_addr=0x81a776500
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a756440 hw_ring=0x7f211a75e480 dma_addr=0x81a75e480
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a73e3c0 hw_ring=0x7f211a746400 dma_addr=0x81a746400
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a726340 hw_ring=0x7f211a72e380 dma_addr=0x81a72e380
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a70e2c0 hw_ring=0x7f211a716300 dma_addr=0x81a716300
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a6f6240 hw_ring=0x7f211a6fe280 dma_addr=0x81a6fe280
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a6de1c0 hw_ring=0x7f211a6e6200 dma_addr=0x81a6e6200
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a6c6140 hw_ring=0x7f211a6ce180 dma_addr=0x81a6ce180
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a6ae0c0 hw_ring=0x7f211a6b6100 dma_addr=0x81a6b6100
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a696040 hw_ring=0x7f211a69e080 dma_addr=0x81a69e080
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a67dfc0 hw_ring=0x7f211a686000 dma_addr=0x81a686000
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f211a665e40 hw_ring=0x7f211a66de80 dma_addr=0x81a66de80
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Using simple tx code path
ovs-vswitchd[3595]: PMD: ixgbe_set_tx_function(): Vector tx enabled.
ovs-vswitchd[3595]: PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f211a78a6c0 sw_sc_ring=0x7f211a786580 hw_ring=0x7f211a78e800 dma_addr=0x81a78e800
ovs-vswitchd[3595]: PMD: ixgbe_set_rx_function(): Vector rx enabled, please make sure RX burst size no less than 4 (port=0).
CMD: sudo ovs-vsctl add-port ovsdpdkbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser

OVS-LOG:
2016-01-22T09:00:35.145Z|00026|dpdk|INFO|Socket /var/run/openvswitch/vhost-user-1 created for vhost-user port vhost-user-1
2016-01-22T09:00:35.145Z|00003|dpif_netdev(pmd16)|INFO|Core 0 processing port 'dpdk0'
2016-01-22T09:00:35.145Z|00004|dpif_netdev(pmd16)|INFO|Core 0 processing port 'vhost-user-1'
2016-01-22T09:00:35.145Z|00027|bridge|INFO|bridge ovsdpdkbr0: added interface vhost-user-1 on port 2

SYSLOG:
ovs-vsctl: ovs|00001|vsctl|INFO|Called as ovs-vsctl add-port ovsdpdkbr0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
ovs-vswitchd[3595]: VHOST_CONFIG: socket created, fd:46
ovs-vswitchd[3595]: VHOST_CONFIG: bind to /var/run/openvswitch/vhost-user-1

Eventually we can see the poll thread in top:
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 3595 root      10 -10 4975344 103936   9916 S 100.0  0.3  33:13.56 ovs-vswitchd
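If you hit the "could not open network device" problem described above, a few quick and generic checks (standard tools only, not a definitive fix) usually narrow down which step of the setup went wrong:

dpdk_nic_bind --status                      # is the card bound to a DPDK-compatible driver?
grep Huge /proc/meminfo                     # are hugepages allocated and still free?
ovs-vsctl get Open_vSwitch . other_config   # (OVS >= 2.6) did the dpdk-* options reach the database?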