FD.io VPP  v18.01.2-1-g9b554f3
Vector Packet Processing
Example setup

VPP-memif master icmp_responder slave

The libmemif example apps use the default memif socket file: /run/vpp/memif.sock.

Run VPP and the icmpr-epoll example (the default example when running in a container).

Other examples work similarly to icmpr-epoll. A brief explanation can be found in Examples.

VPP-side config:

DBGvpp# create memif id 0 master
DBGvpp# set int state memif0/0 up
DBGvpp# set int ip address memif0/0 192.168.1.1/24

icmpr-epoll:

conn 0 0

A memif in slave mode will try to connect every 2 seconds. If connection establishment is successful, a message will be printed:

INFO: memif connected!

Error messages like "unmatched interface id" are printed only in debug mode.

Verify the connection status using the show command in icmpr-epoll:

show
MEMIF DETAILS
==============================
interface index: 0
 interface ip: 192.168.1.2
 interface name: memif_connection
 app name: ICMP_Responder
 remote interface name: memif0/0
 remote app name: VPP 17.10-rc0~132-g62f9cdd
 id: 0
 secret:
 role: slave
 mode: ethernet
 socket filename: /run/vpp/memif.sock
 rx queues:
  queue id: 0
  ring size: 1024
  buffer size: 2048
 tx queues:
  queue id: 0
  ring size: 1024
  buffer size: 2048
 link: up
interface index: 1
 no connection

Use the sh memif command in VPP:

DBGvpp# sh memif
interface memif0/0
 remote-name "ICMP_Responder"
 remote-interface "memif_connection"
 id 0 mode ethernet file /run/vpp/memif.sock
 flags admin-up connected
 listener-fd 12 conn-fd 13
 num-s2m-rings 1 num-m2s-rings 1 buffer-size 0
 master-to-slave ring 0:
  region 0 offset 32896 ring-size 1024 int-fd 16
  head 0 tail 0 flags 0x0000 interrupts 0
 slave-to-master ring 0:
  region 0 offset 0 ring-size 1024 int-fd 15
  head 0 tail 0 flags 0x0001 interrupts 0

Send ping from VPP to icmpr-epoll:

DBGvpp# ping 192.168.1.2
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=.1888 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=.1985 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=.1813 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=.1929 ms

Statistics: 5 sent, 4 received, 20% packet loss

Multiple queues: VPP-memif slave icmp_responder master

Run icmpr-epoll as in the previous example setup. Run VPP with a startup config enabling 2 worker threads. Example startup.conf:

unix {
 interactive
 nodaemon
 full-coredump
}

cpu {
 workers 2
}

VPP-side config:

DBGvpp# create memif id 0 slave rx-queues 2 tx-queues 2
DBGvpp# set int state memif0/0 up
DBGvpp# set int ip address memif0/0 192.168.1.1/24

icmpr-epoll:

conn 0 1

When the connection is established, a message will be printed:

INFO: memif connected!

Error messages like "unmatched interface id" are printed only in debug mode.

Verify the connection status using the show command in icmpr-epoll:

show
MEMIF DETAILS
==============================
interface index: 0
 interface ip: 192.168.1.2
 interface name: memif_connection
 app name: ICMP_Responder
 remote interface name: memif0/0
 remote app name: VPP 17.10-rc0~132-g62f9cdd
 id: 0
 secret:
 role: master
 mode: ethernet
 socket filename: /run/vpp/memif.sock
 rx queues:
  queue id: 0
  ring size: 1024
  buffer size: 2048
  queue id: 1
  ring size: 1024
  buffer size: 2048
 tx queues:
  queue id: 0
  ring size: 1024
  buffer size: 2048
  queue id: 1
  ring size: 1024
  buffer size: 2048
 link: up
interface index: 1
 no connection

Use the sh memif command in VPP:

DBGvpp# sh memif
interface memif0/0
 remote-name "ICMP_Responder"
 remote-interface "memif_connection"
 id 0 mode ethernet file /run/vpp/memif.sock
 flags admin-up slave connected
 listener-fd -1 conn-fd 12
 num-s2m-rings 2 num-m2s-rings 2 buffer-size 2048
 slave-to-master ring 0:
  region 0 offset 0 ring-size 1024 int-fd 14
  head 0 tail 0 flags 0x0000 interrupts 0
 slave-to-master ring 1:
  region 0 offset 32896 ring-size 1024 int-fd 15
  head 0 tail 0 flags 0x0000 interrupts 0
 master-to-slave ring 0:
  region 0 offset 65792 ring-size 1024 int-fd 16
  head 0 tail 0 flags 0x0001 interrupts 0
 master-to-slave ring 1:
  region 0 offset 98688 ring-size 1024 int-fd 17
  head 0 tail 0 flags 0x0001 interrupts 0

Send ping from VPP to icmpr-epoll:

DBGvpp# ping 192.168.1.2
64 bytes from 192.168.1.2: icmp_seq=2 ttl=64 time=.1439 ms
64 bytes from 192.168.1.2: icmp_seq=3 ttl=64 time=.2184 ms
64 bytes from 192.168.1.2: icmp_seq=4 ttl=64 time=.1458 ms
64 bytes from 192.168.1.2: icmp_seq=5 ttl=64 time=.1687 ms

Statistics: 5 sent, 4 received, 20% packet loss

icmp_responder master icmp_responder slave

This setup creates a connection between two applications using libmemif. Traffic functionality is the same as when connected to VPP: the app can receive ARP/ICMP requests and transmit responses.

Run two instances of icmpr-epoll example.

If not running in a container, make sure the folder /run/vpp/ exists before creating the memif master.

Instance 1 will run in master mode, instance 2 in slave mode. The second argument of the conn command selects the role (1 = master, 0 = slave).

instance 1:

conn 0 1

instance 2:

conn 0 0

Within 2 seconds, both instances should print the connected! message:

INFO: memif connected!

Check the peer interface names using the show command.