FD.io VPP
VPP provides a binary API scheme to allow a wide variety of client codes to program data-plane tables. As of this writing, there are hundreds of binary APIs.
Messages are defined in *.api files. Today, there are about 50 API files, with more arriving as folks add programmable features. The API file compiler sources reside in src/tools/vppapigen.
From src/vnet/interface.api, here's a typical request/response message definition:
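A sketch of that definition is shown below; field names and types have evolved across releases, so treat it as illustrative rather than authoritative:

```
/* Sketch of the sw_interface_set_flags definition in
 * src/vnet/interface.api.  "autoreply" tells vppapigen to generate
 * the corresponding ..._reply message with a retval field. */
autoreply define sw_interface_set_flags
{
  u32 client_index;   /* opaque value identifying the sending client */
  u32 context;        /* client-chosen cookie, echoed back in the reply */
  u32 sw_if_index;    /* software interface index to operate on */
  /* 1 = up, 0 = down */
  u8 admin_up_down;
};
```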
To a first approximation, the API compiler renders this definition into build-root/.../vpp/include/vnet/interface.api.h as follows:
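A sketch of the generated structures (the exact output depends on the release and build options):

```c
/* Sketch of the generated layout in interface.api.h.  VL_API_PACKED
 * wraps the struct in a packed attribute so the wire format has no
 * padding. */
typedef VL_API_PACKED (struct _vl_api_sw_interface_set_flags
{
  u16 _vl_msg_id;               /* message id, assigned at registration */
  u32 client_index;
  u32 context;
  u32 sw_if_index;
  u8 admin_up_down;
}) vl_api_sw_interface_set_flags_t;

typedef VL_API_PACKED (struct _vl_api_sw_interface_set_flags_reply
{
  u16 _vl_msg_id;
  u32 context;                  /* echoed from the request */
  i32 retval;                   /* 0 => success */
}) vl_api_sw_interface_set_flags_reply_t;
```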
To change the admin state of an interface, a binary api client sends a vl_api_sw_interface_set_flags_t to VPP, which will respond with a vl_api_sw_interface_set_flags_reply_t message.
Multiple layers of software, transport types, and shared libraries implement a variety of features:
- API message allocation, tracing, pretty-printing, and replay.
- Message transport via global shared memory, pairwise/private shared memory, and sockets.
- Barrier synchronization of worker threads across thread-unsafe message handlers.
Correctly coded message handlers know nothing about the transport used to deliver messages to/from VPP. It's reasonably straightforward to use multiple API message transport types simultaneously.
For historical reasons, binary API messages are (putatively) sent in network byte order. As of this writing, we're seriously considering whether that choice makes sense.
Since binary API messages are always processed in order, we allocate messages using a ring allocator whenever possible. This scheme is extremely fast when compared with a traditional memory allocator, and doesn't cause heap fragmentation. See src/vlibmemory/memory_shared.c vl_msg_api_alloc_internal().
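Conceptually, the ring allocator works something like the sketch below; the names (ring_slot_t, ring_msg_alloc) are hypothetical and heavily simplified, not the actual VPP code:

```c
#include <stdlib.h>

/* Conceptual sketch only -- hypothetical names, not the VPP code.
 * Because messages are consumed in order, the slot at the write index
 * is normally free again by the time the allocator wraps around to it;
 * if not, fall back to the heap. */
typedef struct
{
  int in_use;                   /* cleared when the message is freed */
  unsigned char data[256];      /* fixed-size message slot */
} ring_slot_t;

#define RING_SIZE 64
static ring_slot_t ring[RING_SIZE];
static unsigned next_slot;

void *
ring_msg_alloc (size_t nbytes)
{
  ring_slot_t *s = &ring[next_slot];

  if (nbytes <= sizeof (s->data) && !s->in_use)
    {
      s->in_use = 1;
      next_slot = (next_slot + 1) % RING_SIZE;
      return s->data;
    }
  /* Slot still pending, or the message is too big: use the heap. */
  return malloc (nbytes);
}
```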
Regardless of transport, binary API messages always follow a msgbuf_t header:
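The header is declared along these lines (a sketch of the msgbuf_ structure in src/vlibapi/api_common.h):

```c
/* Message header which precedes every binary API message (sketch). */
typedef struct msgbuf_
{
  svm_queue_t *q;               /* message allocated in this shmem ring */
  u32 data_len;                 /* message length not including header */
  u32 gc_mark_timestamp;        /* message garbage collector mark */
  u8 data[0];                   /* actual message begins here */
} msgbuf_t;
```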
This structure makes it easy to trace messages without having to decode them - simply save data_len bytes - and allows vl_msg_api_free() to rapidly dispose of message buffers:
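In outline, the fast path looks something like this sketch: back up from the caller's pointer to the msgbuf_t header, and if the message came from a shared-memory ring, clearing the queue pointer is all it takes to hand the slot back:

```c
/* Sketch of the vl_msg_api_free() fast path. */
void
vl_msg_api_free_sketch (void *a)
{
  msgbuf_t *rv;

  /* Recover the msgbuf_t header that precedes the message data. */
  rv = (msgbuf_t *) (((u8 *) a) - offsetof (msgbuf_t, data));

  if (rv->q)
    {
      /* Ring-allocated: clearing q marks the slot reusable. */
      rv->q = 0;
      rv->gc_mark_timestamp = 0;
      return;
    }

  /* Otherwise the message came from the API segment heap; the real
   * code frees it back to that heap (details omitted here). */
}
```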
It's extremely important that VPP can capture and replay sizeable binary API traces. System-level issues involving hundreds of thousands of API transactions can be re-run in a second or less. Partial replay allows one to binary-search for the point where the wheels fall off. One can add scaffolding to the data plane, to trigger when complex conditions obtain.
With binary API trace, print, and replay, system-level bug reports of the form "after 300,000 API transactions, the VPP data-plane stopped forwarding traffic, FIX IT!" can be solved offline.
More often than not, one discovers that a control-plane client misprograms the data plane after a long time or under complex circumstances. Without direct evidence, the reflexive diagnosis is "it's a data-plane problem!"
See src/vlibmemory/memory_vlib.c vl_msg_api_process_file(), and src/vlibapi/api_shared.c. See also the debug CLI command "api trace".
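Typical debug CLI usage looks something like the transcript below; the exact option syntax and the saved-file location vary by release, so check the CLI help on the image you're running:

```
vpp# api trace on
vpp# api trace save my-api-trace
vpp# api trace replay /tmp/my-api-trace
```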
Establishing a binary API connection to VPP from a C-language client is easy:
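A minimal connect sequence looks roughly like the sketch below. The shared-memory region name "/vpe-api" is the conventional default; my_client_main_t is a hypothetical application structure, shown only to illustrate what is typically memorized after connecting:

```c
/* Sketch; assumes the usual vlibmemory / vlibapi client includes. */
typedef struct
{
  svm_queue_t *vl_input_queue;  /* VPP's binary API input queue */
  u32 my_client_index;          /* our client index, assigned by VPP */
} my_client_main_t;

my_client_main_t my_client_main;

int
connect_to_vpp (char *name)
{
  api_main_t *am = &api_main;
  my_client_main_t *cm = &my_client_main;

  /* 32: depth of the VPP -> client message queue */
  if (vl_client_connect_to_vlib ("/vpe-api", name, 32) < 0)
    return -1;

  /* Memorize VPP's input queue address and our client index */
  cm->vl_input_queue = am->shmem_hdr->vl_input_queue;
  cm->my_client_index = am->my_client_index;
  return 0;
}
```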
32 is a typical value for client_message_queue_length. VPP cannot block when it needs to send an API message to a binary API client, and the VPP-side binary API message handlers are very fast. When sending asynchronous messages, make sure to scrape the binary API rx ring with some enthusiasm.
Calling vl_client_connect_to_vlib spins up a binary API message RX pthread:
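In outline, the rx pthread is a loop around vl_msg_api_queue_handler (a sketch based on src/vlibmemory/memory_client.c):

```c
/* Outline of the binary API rx thread (sketch).  The setjmp/longjmp
 * arrangement lets the library terminate the thread cleanly at
 * disconnect time. */
static void *
rx_thread_fn (void *arg)
{
  api_main_t *am = &api_main;
  memory_client_main_t *mm = &memory_client_main;
  svm_queue_t *q = am->vl_input_queue;

  if (setjmp (mm->rx_thread_jmpbuf) == 0)
    {
      mm->rx_thread_jmpbuf_valid = 1;
      while (1)
        vl_msg_api_queue_handler (q);
    }
  pthread_exit (0);
}
```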
To handle the binary API message queue yourself, use vl_client_connect_to_vlib_no_rx_pthread.
In turn, vl_msg_api_queue_handler(...) uses mutex/condvar signalling to wake up, process VPP -> client traffic, then sleep. VPP supplies a condvar broadcast when the VPP -> client API message queue transitions from empty to nonempty.
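In outline, the handler looks something like this sketch; svm_queue_sub() does the mutex/condvar dance, blocking until VPP posts a message pointer on the queue:

```c
/* Sketch of the client-side queue handler loop. */
void
vl_msg_api_queue_handler_sketch (svm_queue_t * q)
{
  uword msg;

  /* svm_queue_sub blocks on the queue's condvar until a message
   * pointer is available, then copies it into 'msg'. */
  while (!svm_queue_sub (q, (u8 *) & msg, SVM_Q_WAIT, 0))
    vl_msg_api_handler ((void *) msg);
}
```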
VPP checks its own binary API input queue at a very high rate. VPP invokes message handlers in "process" context [aka cooperative multitasking thread context] at a variable rate, depending on data-plane packet processing requirements.
To disconnect from VPP, call vl_client_disconnect_from_vlib. Please arrange to call this function if the client application terminates abnormally. VPP makes every effort to hold a decent funeral for dead clients, but VPP can't guarantee to free leaked memory in the shared binary API segment.
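For example, a client can hook vl_client_disconnect_from_vlib into an atexit handler and into its fatal-signal handlers; the sketch below shows one way to arrange that:

```c
#include <signal.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch: make a best effort to disconnect cleanly, even when the
 * client dies on a fatal signal.  Assumes the usual vlibmemory client
 * includes for vl_client_disconnect_from_vlib(). */
static void
disconnect_at_exit (void)
{
  vl_client_disconnect_from_vlib ();
}

static void
fatal_signal_handler (int signum)
{
  vl_client_disconnect_from_vlib ();
  _exit (1);
}

static void
install_cleanup_hooks (void)
{
  atexit (disconnect_at_exit);
  signal (SIGINT, fatal_signal_handler);
  signal (SIGTERM, fatal_signal_handler);
}
```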
The point of the exercise is to send binary API messages to VPP, and to receive replies from VPP. Many VPP binary APIs comprise a client request message and a simple status reply. For example, to set the admin status of an interface, one codes:
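The send side looks roughly like the sketch below; the send_set_flags wrapper is illustrative, while the allocation, byte-order, and send calls follow the usual client-library pattern:

```c
/* Sketch: bring an interface administratively up.  Error handling and
 * reply processing omitted. */
static void
send_set_flags (u32 sw_if_index)
{
  api_main_t *am = &api_main;
  vl_api_sw_interface_set_flags_t *mp;

  mp = vl_msg_api_alloc (sizeof (*mp));
  clib_memset (mp, 0, sizeof (*mp));
  mp->_vl_msg_id = clib_host_to_net_u16 (VL_API_SW_INTERFACE_SET_FLAGS);
  mp->client_index = am->my_client_index;
  mp->sw_if_index = clib_host_to_net_u32 (sw_if_index);
  mp->admin_up_down = 1;        /* 1 = admin up, 0 = admin down */

  /* Hand the message to VPP's binary API input queue */
  vl_msg_api_send_shmem (am->shmem_hdr->vl_input_queue, (u8 *) & mp);
}
```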
Key points:
- Use vl_msg_api_alloc to allocate message buffers.
- Allocated message buffers are not initialized, and must be presumed to contain trash.
- Don't forget to set the _vl_msg_id field!
- As of this writing, binary API message IDs and data are sent in network byte order.
- The client-library global data structure api_main keeps track of the pointers and handles needed to communicate with VPP.
Unless you've made other arrangements (see vl_client_connect_to_vlib_no_rx_pthread), messages are received on a separate rx pthread. Synchronization with the client application main thread is the responsibility of the application!
Set up message handlers about as follows:
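In outline, the setup looks something like the sketch below. It assumes the client has already included the generated API header with vl_typedefs, vl_endianfun, and vl_printfun defined, so the message typedefs and the generated endian/print functions exist; foreach_client_api_reply_msg and the handler body are illustrative:

```c
/* Sketch of client-side handler registration.  The application
 * supplies the vl_api_..._t_handler function; the endian and print
 * functions are generated from the API definitions. */
static void
vl_api_sw_interface_set_flags_reply_t_handler
  (vl_api_sw_interface_set_flags_reply_t * mp)
{
  i32 retval = clib_net_to_host_u32 (mp->retval);
  if (retval != 0)
    clib_warning ("set_flags failed: %d", retval);
}

#define foreach_client_api_reply_msg                                   \
_(SW_INTERFACE_SET_FLAGS_REPLY, sw_interface_set_flags_reply)

static void
setup_message_handlers (void)
{
#define _(N,n)                                                  \
  vl_msg_api_set_handlers (VL_API_##N, #n,                      \
                           vl_api_##n##_t_handler,              \
                           vl_noop_handler,                     \
                           vl_api_##n##_t_endian,               \
                           vl_api_##n##_t_print,                \
                           sizeof (vl_api_##n##_t), 1);
  foreach_client_api_reply_msg;
#undef _
}
```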
The key API used to establish message handlers is vl_msg_api_set_handlers, which sets values in multiple parallel vectors in the api_main_t structure. As of this writing, not all vector element values can be set through the API. You'll see sporadic API message registrations followed by minor adjustments of this form:
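A representative fragment looks something like this (the specific message ids are illustrative; am points at the api_main_t structure):

```c
  /* Mark selected messages as thread-safe so their handlers can run
   * without a worker-thread barrier sync. */
  am->is_mp_safe[VL_API_IP_ADD_DEL_ROUTE] = 1;
  am->is_mp_safe[VL_API_GET_NODE_GRAPH] = 1;
```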