This driver relies on the Linux AF_XDP socket to rx/tx Ethernet packets.
Under development: it should work, but has not been thoroughly tested.
Because of AF_XDP restrictions, the MTU is limited to below PAGE_SIZE (4096 bytes on most systems) minus 256 bytes, and there are additional limitations depending upon specific Linux device drivers. As a rule of thumb, an MTU of 3000 bytes or less should be safe.
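For illustration, the Linux interface MTU can be capped to a value within this bound before the AF_XDP interface is created (the interface name is only an example):
```
# hedged example: cap the Linux interface MTU at a safe value for AF_XDP
~# ip link set dev enp216s0f0 mtu 3000
```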
Furthermore, upon UMEM creation, the kernel allocates a physically-contiguous structure whose size is proportional to the number of 4KB pages contained in the UMEM. That allocation might fail when the number of buffers allocated by VPP is too high. That number can be controlled with the buffers { buffers-per-numa } configuration option. Finally, note that because of this limitation, this plugin is unlikely to be compatible with the use of 1GB hugepages.
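For illustration, a smaller buffer pool can be requested in the VPP startup configuration (the value below is only an example):
```
buffers {
  # reduce the number of buffers allocated per NUMA node so the UMEM
  # fits in a single physically-contiguous kernel allocation
  buffers-per-numa 8192
}
```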
Interrupt and adaptive modes are supported, but by default they are limited to single-threaded (no worker) configurations because of a kernel limitation prior to 5.6. You can bypass this limitation at interface creation time by adding the no-syscall-lock parameter, but you must be sure that your kernel can support it, otherwise you will experience double-frees. See https://lore.kernel.org/bpf/BYAPR11MB365382C5DB1E5FCC53242609C1549@BYAPR11MB3653.namprd11.prod.outlook.com/ for more details.
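A hedged sketch of an interrupt-mode setup on a kernel recent enough for no-syscall-lock to be safe (the interface names are only examples):
```
# create the AF_XDP interface without the syscall lock (requires a fixed kernel, see above)
~# vppctl create interface af_xdp host-if enp216s0f0 no-syscall-lock
# switch the resulting VPP interface to interrupt mode
~# vppctl set interface rx-mode enp216s0f0/0 interrupt
```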
When setting the number of queues on a Mellanox NIC with ethtool -L, you must use twice the amount of configured queues: it looks like the Linux driver will create separate RX queues and TX queues (but all queues can be used for both RX and TX, the NIC will just not send any packet on "pure" TX queues. Confused? So am I.). For example, if you set combined 2, you will effectively have to create 4 RX queues in AF_XDP if you want to be sure to receive all packets.
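A hedged sketch of the corresponding commands (interface name and queue counts are only illustrative):
```
# "combined 2" on the Mellanox side effectively exposes 4 usable queues,
# so request 4 RX queues when creating the AF_XDP interface
~# ethtool -L enp216s0f0 combined 2
~# vppctl create interface af_xdp host-if enp216s0f0 num-rx-queues 4
```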
This driver supports Linux kernel 5.4 and later. Kernels older than 5.4 are missing unaligned buffer support.
The Linux kernel interface must be up and have enough queues before creating the VPP AF_XDP interface, otherwise Linux will deny creating the AF_XDP socket. The AF_XDP interface will claim NIC RX queues starting from 0, up to the requested number of RX queues (only 1 by default). It means all packets destined to NIC RX queues [0, num_rx_queues[ will be received by the AF_XDP interface, and only those. Depending on your configuration, there will usually be several RX queues (typically 1 per core) and packets are spread across queues by RSS. In order to receive consistent traffic, you must program the NIC dispatching accordingly. The simplest way to get all the packets is to specify num-rx-queues all to grab all available queues, or to reconfigure the Linux kernel driver to use only num_rx_queues RX queues (i.e. all NIC queues will be associated with the AF_XDP socket):
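```
# for example, reduce the NIC to a single combined queue to match the default of
# 1 RX queue on the VPP side (interface name and queue count are only illustrative)
~# ethtool -L enp216s0f0 combined 1
```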
Additionally, the VPP AF_XDP interface will use a MAC address generated at creation time instead of the Linux kernel interface MAC. As Linux kernel interfaces are not in promiscuous mode by default (see below), this results in a useless configuration where the VPP AF_XDP interface only receives packets destined to the Linux kernel interface MAC, just to drop them because the destination MAC does not match the VPP AF_XDP interface MAC. If you want to use the Linux interface MAC for the VPP AF_XDP interface, you can change it afterwards in VPP:
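```
# hedged example: copy the Linux interface MAC onto the VPP AF_XDP interface
# (the VPP interface name "enp216s0f0/0" and the MAC address are placeholders,
#  use the actual MAC shown by "ip link show enp216s0f0")
~# vppctl set interface mac address enp216s0f0/0 02:fe:3f:1e:3c:de
```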
Finally, if you wish to receive all packets and not only the packets destined to the Linux kernel interface MAC, you need to set the Linux kernel interface in promiscuous mode:
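```
# put the underlying Linux interface in promiscuous mode (interface name is only an example)
~# ip link set dev enp216s0f0 promisc on
```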
When creating an AF_XDP interface, be aware that it will receive all packets arriving on NIC RX queues [0, num_rx_queues[. You need to configure the Linux kernel NIC driver properly to ensure that only intended packets will arrive on these queues. There is no way to filter the packets after the fact using e.g. netfilter or eBPF.
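As a hedged illustration, assuming the NIC driver supports ntuple flow steering, the dispatching could be programmed with ethtool before packets reach VPP (port and queue numbers are only examples):
```
# enable ntuple filtering and steer VXLAN traffic (UDP port 4789) to RX queue 0
~# ethtool -K enp216s0f0 ntuple on
~# ethtool -N enp216s0f0 flow-type udp4 dst-port 4789 action 0
```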
A custom XDP program can also be attached at interface creation time with the prog parameter:
```
~# vppctl create int af_xdp host-if enp216s0f0 num-rx-queues 4 prog extras/bpf/af_xdp.bpf.o
```
In that case it will replace any previously attached program. A custom XDP program example is provided in extras/bpf/.
AF_XDP relies on the Linux kernel NIC driver to rx/tx packets. To reach high performance (tens of Mpps), the Linux kernel NIC driver must support zero-copy mode and its RX path must run on a dedicated core in the NUMA node where the NIC is physically connected.
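A hedged sketch of pinning the kernel RX path, assuming the RX queue IRQ (here 128) was identified in /proc/interrupts and core 2 sits on the NIC's NUMA node:
```
# pin the NIC RX queue interrupt to a dedicated core on the NIC's NUMA node
# (IRQ number and core id are placeholders)
~# echo 2 > /proc/irq/128/smp_affinity_list
```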