VPP Container Test Bench
This project spins up a pair of Docker containers, both running Ubuntu 20.04 “Focal Fossa” (x86_64) along with VPP. At run-time, the containers create various links between each other, using both the Linux networking stack and VPP, and then send some simple traffic back and forth (i.e. ICMP echo/ping requests and HTTP GET requests).
The intent is to provide a relatively simple example of connecting containers via VPP, which others can use as a springboard for their own projects and examples. Besides Docker and a handful of common Linux command-line utilities, not much else is required to build this example, since most of the dependencies are bundled inside the containers themselves.
Instructions - Short Version
An Ubuntu 20.04 LTS Linux distro, running on x86_64 hardware, is required for these labs. If your current workstation/PC/laptop/etc. is unable to run such a setup natively, you will need to arrange one yourself, for example through the use of virtual machines via tools like VirtualBox or Vagrant. As this can be a time consuming task for readers new to virtual machines, it is left as an exercise for the reader, since it is impractical to provide support for such a task in this narrow/focused set of labs and tutorials.
That being said, it is quite probable that these labs will work on other flavors/distros of Linux, since the bulk of the work takes place inside containers that are always based on an Ubuntu 20.04 baseline. However, for the sake of these labs, any other setup is not supported.
Replicate the file listings at the end of this document (File Listings). Alternatively, you can acquire a copy of these files by cloning the VPP repo and navigating to the docs/usecases/vpp_testbench/src path, which saves you the hassle of copy-pasting and naming the files. Once that’s done, open a shell and navigate to the location housing the project files. To build the project, simply run:
make
To start up the containers and have all the initialization logic take place, run:
make start
To trigger some basic traffic tests, run:
make poll
To terminate the containers and clean up associated resources, run:
make stop
To launch an interactive shell for the “client” container, run:
make shell_client
To launch an interactive shell for the “server” container, run:
make shell_server
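Putting those targets together, a typical end-to-end session (assuming the project files are already in place in the current directory) might look something like the following:

# Build the images, bring up the testbench, run the basic traffic tests,
# then poke around in the client container before tearing everything down.
make
make start
make poll
make shell_client   # type "exit" (or press Ctrl-D) when done exploring
make stop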
Instructions - Long Version
Directory Structure and File Purposes
First, let’s quickly review the purpose of the various files used in this project.
vpp_testbench_helpers.sh: shell variables and helper functions re-used by the other scripts in this project. Intended to be sourced, not run directly (a short example of this follows the list). Some of the helper functions are used at run-time within the containers, while others are intended to be run in the default namespace on the host operating system to help with run-time configuration/bring-up of the testbench.

Dockerfile.vpp_testbench: used to build the various Docker images used in this project (i.e. so VPP, our test tools, etc., are all encapsulated within containers rather than being deployed to the host OS).

Dockerfile.vpp_testbench.dockerignore: a “permit-list” to restrict what files we permit to be included in our Docker images (helps keep image size down and provides some minor security benefits at build time, at least in general).

entrypoint_client.sh: entrypoint script used by the “client” Docker container when it is launched.

entrypoint_server.sh: entrypoint script used by the “server” Docker container when it is launched.

Makefile: top-level script; used to trigger the artifact and Docker image builds, and provides rules for starting/testing/stopping the containers, etc.
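As a small illustration of the “sourced, not executed” design of vpp_testbench_helpers.sh, the Makefile reads shared values out of it by sourcing it in a throwaway bash instance and calling one of its helper functions. You can do the same thing manually from the host:

# Source the helpers and query the health probe port that the Makefile
# passes down to "docker build" (prints 8123).
bash -c ". vpp_testbench_helpers.sh; host_only_get_docker_health_probe_port"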
Getting Started
First, we’ll assume you are running on an Ubuntu 20.04 x86_64 setup (either on a bare metal host or in a virtual machine), and have acquired a copy of the project files (either by cloning the VPP git repository, or duplicating them from File Listings). Now, just run make. The process should take a few minutes as it pulls the baseline Ubuntu Docker image, applies system/package updates/upgrades via apt, and installs VPP.
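If the build succeeds, the two locally tagged images defined in the Makefile should now exist. A quick sanity check (names and tags come from the Makefile listing below) might be:

# Expect to see vpp-testbench-client:local and vpp-testbench-server:local.
docker images | grep vpp-testbench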
Next, one can start up the containers used by the project via make start.
From this point forward, most testing, experiments, etc., will likely involve modifying/extending the poll_containers definition inside Makefile (it is probably easiest to just have it invoke a shell script that you write for your own testing; a rough sketch follows below). Once you’ve completed various test runs, the entire deployment can be cleaned up via make stop, and the whole process of starting, testing, stopping, etc., can be repeated as needed.
In addition to starting up the containers, make start will establish various types of links/connections between the two containers, making use of both the Linux network stack and VPP to handle the “plumbing” involved. This allows the reader to experiment with the various connection types: for example, using vppctl to configure or trace packets going over the VPP-managed links, or using traditional Linux command line utilities like tcpdump, iproute2, ping, etc., to accomplish similar tasks over the links managed purely by the Linux network stack. Later labs will also encourage readers to compare the two types of links (perhaps some performance metrics/profiling, or similar tasks). This testbench project is effectively intended as a baseline workspace upon which one may design and run the labs (or your own projects and examples, whichever works for you).
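For example, after make start and make shell_client, a few commands one might try from inside the client container are sketched below (the vc wrapper, interface names, and addresses all come from the entrypoint/helper scripts in the File Listings; treat this as a starting point rather than a prescribed test plan):

vc show interface                 # vppctl wrapper defined in the entrypoint script
vc show bridge-domain 1000 detail # tap0 and memif0/0 share this bridge domain
ping -c 3 169.254.10.2            # server, over the Linux-managed VXLAN link
curl http://169.254.12.2:8000     # server, over the VPP-managed tap/memif path
tcpdump -ni vxlan-vid-42 -c 10    # observe overlay traffic with Linux tools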
Labs
Future Labs
Note: Coming soon.
Lab: Writing your First CLI Application (Querying Statistics)
Lab: MACSWAP Plugin Revisited
File Listings
Makefile
################################################################################
# @brief:      Makefile for building the VPP testbench example.
# @author:     Matthew Giassa.
# @copyright:  (C) Cisco 2021.
################################################################################
#------------------------------------------------------------------------------#
# Constants and settings.
#------------------------------------------------------------------------------#
SHELL=/bin/bash
.DEFAULT_GOAL := all

# Image names.
# TODO: semver2 format if we want to publish these to a registry.
DOCKER_CLIENT_IMG      := vpp-testbench-client
DOCKER_CLIENT_REL      := local
DOCKER_CLIENT_IMG_FULL := $(DOCKER_CLIENT_IMG):$(DOCKER_CLIENT_REL)
DOCKER_SERVER_IMG      := vpp-testbench-server
DOCKER_SERVER_REL      := local
DOCKER_SERVER_IMG_FULL := $(DOCKER_SERVER_IMG):$(DOCKER_SERVER_REL)
# Docker build-time settings (and run-time settings as well).
DOCKER_HEALTH_PROBE_PORT := $(shell bash -c ". vpp_testbench_helpers.sh; host_only_get_docker_health_probe_port")

#------------------------------------------------------------------------------#
# Functions.
#------------------------------------------------------------------------------#
#------------------------------------------------------------------------------#
# Cleanup running containers, Docker networks, etc.; from previous runs.
define cleanup_everything
	# Terminate the containers.
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_kill_testbench_client_container $(DOCKER_CLIENT_IMG_FULL); \
		host_only_kill_testbench_server_container $(DOCKER_SERVER_IMG_FULL); \
		"

	# Cleanup Docker bridge network.
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_destroy_docker_networks; \
		"
endef

#------------------------------------------------------------------------------#
# Launch our containers and connect them to a private Docker network for
# testing.
define launch_testbench
	# Create Docker bridge network.
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_create_docker_networks; \
		"

	# Launch the containers.
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_run_testbench_client_container $(DOCKER_CLIENT_IMG_FULL); \
		host_only_run_testbench_server_container $(DOCKER_SERVER_IMG_FULL); \
		"

	# Entrypoint scripts will bring up the various links.
	# Use "docker ps" to check status of containers, see if their health
	# probes are working as expected (i.e. "health"), etc.
endef

#------------------------------------------------------------------------------#
# Goals.
#------------------------------------------------------------------------------#
#------------------------------------------------------------------------------#
# Default goal.
.PHONY: all
all: docker
	@echo Done.

#------------------------------------------------------------------------------#
# Build all docker images.
.PHONY: docker
docker: Dockerfile.vpp_testbench Dockerfile.vpp_testbench.dockerignore \
        entrypoint_client.sh entrypoint_server.sh \
        vpp_testbench_helpers.sh
	# Client image.
	DOCKER_BUILDKIT=1 docker build \
		--file Dockerfile.vpp_testbench \
		--build-arg HEALTHCHECK_PORT=$(DOCKER_HEALTH_PROBE_PORT) \
		--tag $(DOCKER_CLIENT_IMG_FULL) \
		--target client_img \
		.
	# Server image.
	DOCKER_BUILDKIT=1 docker build \
		--file Dockerfile.vpp_testbench \
		--build-arg HEALTHCHECK_PORT=$(DOCKER_HEALTH_PROBE_PORT) \
		--tag $(DOCKER_SERVER_IMG_FULL) \
		--target server_img \
		.

#------------------------------------------------------------------------------#
# Execute end-to-end test via containers.
.PHONY: test
test:
	# Cleanup anything from previous runs.
	$(call cleanup_everything)

	# Launch our testbench.
	$(call launch_testbench)

	# Final cleanup.
	$(call cleanup_everything)

#------------------------------------------------------------------------------#
# For manually cleaning up a test that fails partway through its execution.
.PHONY: clean
clean:
	$(call cleanup_everything)

#------------------------------------------------------------------------------#
# For manually launching our testbench for interactive testing.
.PHONY: start
start:
	$(call launch_testbench)

#------------------------------------------------------------------------------#
# For manually stopping (and cleaning up) our testbench.
.PHONY: stop
stop:
	$(call cleanup_everything)

#------------------------------------------------------------------------------#
# Create an interactive shell session connected to the client container (for
# manual testing). Typically preceded by "make start", and concluded with
# "make stop" after exiting the shell.
.PHONY: shell_client
shell_client:
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_shell_client_container; \
		"

#------------------------------------------------------------------------------#
# Create an interactive shell session connected to the server container (for
# manual testing). Typically preceded by "make start", and concluded with
# "make stop" after exiting the shell.
.PHONY: shell_server
shell_server:
	bash -c "\
		. vpp_testbench_helpers.sh; \
		host_only_shell_server_container; \
		"
Dockerfile.vpp_testbench
#------------------------------------------------------------------------------#
# @brief:      Dockerfile for building the VPP testbench project.
# @author:     Matthew Giassa <mgiassa@cisco.com>
# @copyright:  (C) Cisco 2021.
#------------------------------------------------------------------------------#
# Baseline image both client and server inherit from.
FROM ubuntu:focal as baseline

# System packages.
RUN apt update -y && \
    DEBIAN_FRONTEND="noninteractive" apt install -y tzdata termshark && \
    apt install -y \
        apt-transport-https \
        axel \
        bash \
        binutils \
        bridge-utils \
        ca-certificates \
        coreutils \
        curl \
        gnupg \
        htop \
        iftop \
        iproute2 \
        iptables \
        iputils-ping \
        netcat \
        net-tools \
        nload \
        nmap \
        procps \
        python3 \
        python3-dev \
        python3-pip \
        sudo \
        wget \
        tcpdump \
        vim \
    && \
    apt clean -y
# Python packages.
RUN python3 -m pip install \
    scapy

# VPP.
RUN bash -c "curl -L https://packagecloud.io/fdio/master/gpgkey | apt-key add -" && \
    bash -c "echo \"deb [trusted=yes] https://packagecloud.io/fdio/release/ubuntu focal main\" >> /etc/apt/sources.list.d/99fd.io.list" && \
    apt update && \
    apt install -y \
        vpp \
        vpp-plugin-core \
        vpp-plugin-dpdk \
    && \
    apt clean -y

# Used by client/server entrypoint scripts.
ADD vpp_testbench_helpers.sh /


#------------------------------------------------------------------------------#
# Client image.
FROM baseline as client_img
# Enable a health probe.
ARG HEALTHCHECK_PORT=8080
ENV HEALTHCHECK_PORT_RUNTIME="${HEALTHCHECK_PORT}"
HEALTHCHECK CMD curl --fail "http://localhost:$HEALTHCHECK_PORT_RUNTIME" || exit 1
# Image-specific overrides.
ADD ./entrypoint_client.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]


#------------------------------------------------------------------------------#
# Server image.
FROM baseline as server_img
# Enable a health probe.
ARG HEALTHCHECK_PORT=8080
ENV HEALTHCHECK_PORT_RUNTIME="${HEALTHCHECK_PORT}"
HEALTHCHECK CMD curl --fail "http://localhost:$HEALTHCHECK_PORT_RUNTIME" || exit 1
# Image-specific overrides.
ADD ./entrypoint_server.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
Dockerfile.vpp_testbench.dockerignore
#------------------------------------------------------------------------------#
# @brief:      Dockerfile permit/deny-list for building the VPP testbench
#              project.
# @author:     Matthew Giassa <mgiassa@cisco.com>
# @copyright:  (C) Cisco 2021.
#------------------------------------------------------------------------------#
# Ignore everything by default. Permit-list only.
*

# Entrypoint scripts and other artifacts.
!entrypoint_client.sh
!entrypoint_server.sh
!vpp_testbench_helpers.sh
vpp_testbench_helpers.sh
#!/bin/bash
################################################################################
# @brief:      Helper functions for the VPP testbench project.
#              NOTE: functions prefixed with "host_only" are functions
#              intended to be executed on the host OS, **outside** of the
#              Docker containers. These are typically functions for bring-up
#              (i.e. creating the Docker networks, launching/terminating the
#              Docker containers, etc.). If a function is not prefixed with
#              "host_only", assume that the function/value/etc. is intended
#              for use within the Docker containers. We could maybe re-factor
#              this in the future so "host_only" functions live in a separate
#              file.
# @author:     Matthew Giassa <mgiassa@cisco.com>
# @copyright:  (C) Cisco 2021.
################################################################################

# Meant to be sourced, not executed directly.
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
    echo "This script is intended to be sourced, not run. Aborting."
    false
    exit
fi

#------------------------------------------------------------------------------#
# For tests using the Linux kernel network stack.
#------------------------------------------------------------------------------#
# Health check probe port for all containers.
export DOCKER_HEALTH_PROBE_PORT="8123"
# Docker bridge network settings.
export CLIENT_BRIDGE_IP_DOCKER="169.254.0.1"
export SERVER_BRIDGE_IP_DOCKER="169.254.0.2"
export BRIDGE_NET_DOCKER="169.254.0.0/24"
export BRIDGE_GW_DOCKER="169.254.0.254"
# Overlay IP addresses.
export CLIENT_VXLAN_IP_LINUX="169.254.10.1"
export SERVER_VXLAN_IP_LINUX="169.254.10.2"
export MASK_VXLAN_LINUX="24"
export VXLAN_ID_LINUX="42"
# IANA (rather than Linux legacy port value).
export VXLAN_PORT="4789"

# Docker network we use to bridge containers.
export DOCKER_NET="vpp-testbench-net"
# Docker container names for client and server (runtime aliases).
export DOCKER_CLIENT_HOST="vpp-testbench-client"
export DOCKER_SERVER_HOST="vpp-testbench-server"
# Some related variables have to be computed at the last second, so they
# are not all defined up-front.
export CLIENT_VPP_NETNS_DST="/var/run/netns/${DOCKER_CLIENT_HOST}"
export SERVER_VPP_NETNS_DST="/var/run/netns/${DOCKER_SERVER_HOST}"

# VPP options.
# These can be arbitrarily named.
export CLIENT_VPP_HOST_IF="vpp1"
export SERVER_VPP_HOST_IF="vpp2"
# Putting VPP interfaces on separate subnet from Linux-stack i/f.
export CLIENT_VPP_MEMIF_IP="169.254.11.1"
export SERVER_VPP_MEMIF_IP="169.254.11.2"
export VPP_MEMIF_NM="24"
export CLIENT_VPP_TAP_IP_MEMIF="169.254.12.1"
export SERVER_VPP_TAP_IP_MEMIF="169.254.12.2"
export VPP_TAP_NM="24"
# Bridge domain ID (for VPP tap + VXLAN interfaces). Arbitrary.
export VPP_BRIDGE_DOMAIN_TAP="1000"

# VPP socket path. Make it one level "deeper" than the "/run/vpp" that is used
# by default, so our containers don't accidentally connect to an instance of
# VPP running on the host OS (i.e. "/run/vpp/vpp.sock"), and hang the system.
export VPP_SOCK_PATH="/run/vpp/containers"

#------------------------------------------------------------------------------#
# @brief:      Converts an integer value representation of a VXLAN ID to a
#              VXLAN IPv4 multicast address (string representation). This
#              effectively sets the first octet to "239" and the remaining 3x
#              octets to the IP-address equivalent of a 24-bit value.
#              Assumes that it's never supplied an input greater than what a
#              24-bit unsigned integer can hold.
function vxlan_id_to_mc_ip()
{
    if [ $# -ne 1 ]; then
        echo "Sanity failure."
        false
        exit
    fi

    local id="${1}"
    local a b c d ret
    a="239"
    b="$(((id>>16) & 0xff))"
    c="$(((id>>8) & 0xff))"
    d="$(((id) & 0xff))"
    ret="${a}.${b}.${c}.${d}"

    echo "${ret}"
    true
}
# Multicast address for VXLAN. Treat the lower three octets as the 24-bit
# representation of the VXLAN ID for ease-of-use (use-case specific, not
# necessarily an established rule/protocol).
MC_VXLAN_ADDR_LINUX="$(vxlan_id_to_mc_ip ${VXLAN_ID_LINUX})"
export MC_VXLAN_ADDR_LINUX

#------------------------------------------------------------------------------#
# @brief:      Get'er function (so makefile can re-use common values from this
#              script, and propagate them down to the Docker build operations
#              and logic within the Dockerfile; "DRY").
function host_only_get_docker_health_probe_port()
{
    echo "${DOCKER_HEALTH_PROBE_PORT}"
}

#------------------------------------------------------------------------------#
# @brief:      Creates the Docker bridge network used to connect the
#              client and server testbench containers.
function host_only_create_docker_networks()
{
    # Create network (bridge for VXLAN). Don't touch 172.16/12 subnet, as
    # Docker uses it by default for its own overlay functionality.
    docker network create \
        --driver bridge \
        --subnet=${BRIDGE_NET_DOCKER} \
        --gateway=${BRIDGE_GW_DOCKER} \
        "${DOCKER_NET}"
}

#------------------------------------------------------------------------------#
# @brief:      Destroys the Docker bridge network for connecting the
#              containers.
function host_only_destroy_docker_networks()
{
    docker network rm "${DOCKER_NET}" || true
}

#------------------------------------------------------------------------------#
# @brief:      Bringup/dependency helper for VPP.
function host_only_create_vpp_deps()
{
    # Create area for VPP sockets and mount points, if it doesn't already
    # exist. Our containers need access to this path so they can see each
    # others' respective sockets so we can bind them together via memif.
    sudo mkdir -p "${VPP_SOCK_PATH}"
}

#------------------------------------------------------------------------------#
# @brief:      Launches the testbench client container.
function host_only_run_testbench_client_container()
{
    # Sanity check.
    if [ $# -ne 1 ]; then
        echo "Sanity failure."
        false
        exit
    fi

    # Launch container. Mount the local PWD into the container too (so we can
    # backup results).
    local image_name="${1}"
    docker run -d --rm \
        --cap-add=NET_ADMIN \
        --cap-add=SYS_NICE \
        --cap-add=SYS_PTRACE \
        --device=/dev/net/tun:/dev/net/tun \
        --device=/dev/vfio/vfio:/dev/vfio/vfio \
        --device=/dev/vhost-net:/dev/vhost-net \
        --name "${DOCKER_CLIENT_HOST}" \
        --volume="$(pwd):/work:rw" \
        --volume="${VPP_SOCK_PATH}:/run/vpp:rw" \
        --network name="${DOCKER_NET},ip=${CLIENT_BRIDGE_IP_DOCKER}" \
        --workdir=/work \
        "${image_name}"
}

#------------------------------------------------------------------------------#
# @brief:      Launches the testbench server container.
function host_only_run_testbench_server_container()
{
    # Sanity check.
    if [ $# -ne 1 ]; then
        echo "Sanity failure."
        false
        exit
    fi

    # Launch container. Mount the local PWD into the container too (so we can
    # backup results).
    local image_name="${1}"
    docker run -d --rm \
        --cap-add=NET_ADMIN \
        --cap-add=SYS_NICE \
        --cap-add=SYS_PTRACE \
        --device=/dev/net/tun:/dev/net/tun \
        --device=/dev/vfio/vfio:/dev/vfio/vfio \
        --device=/dev/vhost-net:/dev/vhost-net \
        --name "${DOCKER_SERVER_HOST}" \
        --volume="${VPP_SOCK_PATH}:/run/vpp:rw" \
        --network name="${DOCKER_NET},ip=${SERVER_BRIDGE_IP_DOCKER}" \
        "${image_name}"
}

#------------------------------------------------------------------------------#
# @brief:      Terminates the testbench client container.
function host_only_kill_testbench_client_container()
{
    docker kill "${DOCKER_CLIENT_HOST}" || true
    docker rm "${DOCKER_CLIENT_HOST}" || true
}

#------------------------------------------------------------------------------#
# @brief:      Terminates the testbench server container.
function host_only_kill_testbench_server_container()
{
    docker kill "${DOCKER_SERVER_HOST}" || true
    docker rm "${DOCKER_SERVER_HOST}" || true
}

#------------------------------------------------------------------------------#
# @brief:      Launches an interactive shell in the client container.
function host_only_shell_client_container()
{
    docker exec -it "${DOCKER_CLIENT_HOST}" bash --init-file /entrypoint.sh
}

#------------------------------------------------------------------------------#
# @brief:      Launches an interactive shell in the server container.
function host_only_shell_server_container()
{
    docker exec -it "${DOCKER_SERVER_HOST}" bash --init-file /entrypoint.sh
}

#------------------------------------------------------------------------------#
# @brief:      Determines the network namespace or "netns" associated with a
#              running Docker container, and then creates a network interface
#              in the default/host netns, and moves it into the netns
#              associated with the container.
function host_only_move_host_interfaces_into_container()
{
    # NOTE: this is only necessary if we want to create Linux network
    # interfaces while working in the default namespace, and then move them
    # into container network namespaces.
    # In earlier versions of this code, we did such an operation, but now we
    # just create the interfaces inside the containers themselves (requires
    # CAP_NET_ADMIN, or privileged containers, which we avoid). This is left
    # here as it's occasionally useful for debug purposes (or might become a
    # mini-lab itself).

    # Make sure netns path exists.
    sudo mkdir -p /var/run/netns

    # Mount container network namespaces so that they are accessible via "ip
    # netns". Ignore "START_OF_SCRIPT": just used to make
    # linter-compliant text indentation look nicer.
    DOCKER_CLIENT_PID=$(docker inspect -f '{{.State.Pid}}' ${DOCKER_CLIENT_HOST})
    DOCKER_SERVER_PID=$(docker inspect -f '{{.State.Pid}}' ${DOCKER_SERVER_HOST})
    CLIENT_VPP_NETNS_SRC=/proc/${DOCKER_CLIENT_PID}/ns/net
    SERVER_VPP_NETNS_SRC=/proc/${DOCKER_SERVER_PID}/ns/net
    sudo ln -sfT "${CLIENT_VPP_NETNS_SRC}" "${CLIENT_VPP_NETNS_DST}"
    sudo ln -sfT "${SERVER_VPP_NETNS_SRC}" "${SERVER_VPP_NETNS_DST}"

    # Move these interfaces into the namespaces of the containers and assign an
    # IPv4 address to them.
    sudo ip link set dev "${CLIENT_VPP_HOST_IF}" netns "${DOCKER_CLIENT_NETNS}"
    sudo ip link set dev "${SERVER_VPP_HOST_IF}" netns "${DOCKER_SERVER_NETNS}"
    docker exec ${DOCKER_CLIENT_HOST} ip a
    docker exec ${DOCKER_SERVER_HOST} ip a

    # Bring up the links and assign IP addresses. This must be done
    # **after** moving the interfaces to a new netns, as we might have a
    # hypothetical use case where we assign the same IP to multiple
    # interfaces, which would be a problem. This collision issue isn't a
    # problem if the interfaces are in separate network namespaces, though.
}
entrypoint_client.sh
#!/bin/bash
################################################################################
# @brief:      Launcher/entrypoint script plus helper functions for "client
#              side" container in the VPP testbench.
# @author:     Matthew Giassa <mgiassa@cisco.com>
# @copyright:  (C) Cisco 2021.
################################################################################

################################################################################
# Dependencies.
################################################################################
# Import common settings for server and client. This is supplied via the
# Dockerfile build.
# shellcheck disable=SC1091
. vpp_testbench_helpers.sh

################################################################################
# Globals.
################################################################################
# VPP instance socket.
export VPP_SOCK=/run/vpp/vpp.testbench-client.sock
# Alias for vppctl that uses the correct socket name.
export VPPCTL="vppctl -s ${VPP_SOCK}"
# Our "Docker bridge network". Don't change this value.
export NET_IF_DOCKER="eth0"
# Name of link associated with our VXLAN.
export LINK_VXLAN_LINUX="vxlan-vid-${VXLAN_ID_LINUX}"

################################################################################
# Function definitions.
################################################################################
#------------------------------------------------------------------------------#
# @brief:      Alias for vppctl (knowing which API socket to use).
function vc()
{
    vppctl -s "${VPP_SOCK}" "${@}"
}

#------------------------------------------------------------------------------#
# @brief:      Used to initialize/configure the client container once it's up and
#              running.
function context_create()
{
    set -x
    echo "Running client. Host: $(hostname)"
    local mtu

    # Setup VXLAN overlay.
    ip link add "${LINK_VXLAN_LINUX}" \
        type vxlan \
        id "${VXLAN_ID_LINUX}" \
        dstport "${VXLAN_PORT}" \
        local "${CLIENT_BRIDGE_IP_DOCKER}" \
        group "${MC_VXLAN_ADDR_LINUX}" \
        dev "${NET_IF_DOCKER}" \
        ttl 1
    ip link set "${LINK_VXLAN_LINUX}" up
    ip addr add "${CLIENT_VXLAN_IP_LINUX}/${MASK_VXLAN_LINUX}" dev "${LINK_VXLAN_LINUX}"

    # Get MTU of interface. VXLAN must use a smaller value due to overhead.
    mtu="$(cat /sys/class/net/${NET_IF_DOCKER}/mtu)"

    # Decrease VXLAN MTU. This should already be handled for us by iproute2, but
    # just being cautious.
    ip link set dev "${LINK_VXLAN_LINUX}" mtu "$((mtu - 50))"

    # Bring-up VPP and create tap interfaces and VXLAN tunnel.
    vpp \
        unix '{' log /tmp/vpp1.log full-coredump cli-listen ${VPP_SOCK} '}' \
        api-segment '{' prefix vpp1 '}' \
        api-trace '{' on '}' \
        dpdk '{' uio-driver uio_pci_generic no-pci '}'

    # Wait for VPP to come up.
    while ! ${VPPCTL} show log; do
        sleep 1
    done

    # Bring up the memif interface and assign an IP to it.
    ${VPPCTL} create interface memif id 0 slave
    sleep 1
    ${VPPCTL} set int state memif0/0 up
    ${VPPCTL} set int ip address memif0/0 "${CLIENT_VPP_MEMIF_IP}/${VPP_MEMIF_NM}"

    # Create VPP-controlled tap interface bridged to the memif.
    ${VPPCTL} create tap id 0 host-if-name vpp-tap-0
    sleep 1
    ${VPPCTL} set interface state tap0 up
    ip addr add "${CLIENT_VPP_TAP_IP_MEMIF}/${VPP_TAP_NM}" dev vpp-tap-0
    ${VPPCTL} set interface l2 bridge tap0 "${VPP_BRIDGE_DOMAIN_TAP}"
    ${VPPCTL} set interface l2 bridge memif0/0 "${VPP_BRIDGE_DOMAIN_TAP}"
}

#------------------------------------------------------------------------------#
# @brief:      Used to shutdown/cleanup the client container.
function context_destroy()
{
    # OS will reclaim interfaces and resources when container is terminated.
    :
}

#------------------------------------------------------------------------------#
# @brief:      Client worker loop to keep the container alive. Just idles.
function context_loop()
{
    # Sleep indefinitely (to keep container alive for testing).
    tail -f /dev/null
}

#------------------------------------------------------------------------------#
# @brief:      Launches a minimalistic web server via netcat. The Dockerfile
#              associated with this project is configured to treat the web server
#              replying with "200 OK" as a sort of simple health probe.
function health_check_init()
{
    while true; do
        echo -e "HTTP/1.1 200 OK\n\nHOST:$(hostname)\nDATE:$(date)" \
            | nc -l -p "${DOCKER_HEALTH_PROBE_PORT}" -q 1
    done
}

#------------------------------------------------------------------------------#
# @brief:      Main/default entry point.
function main()
{
    # Make sure we always cleanup.
    trap context_destroy EXIT

    # Bring up interfaces.
    context_create

    # Enable health check responder.
    health_check_init &

    # Enter our worker loop.
    context_loop
}

#------------------------------------------------------------------------------#
# Script is generally intended to be sourced and individual functions called.
# If just run as a standalone script, assume it's being used as the entrypoint
# for a Docker container.
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
    # Being run. Launch main.
    main "${@}"
else
    # Being sourced. Do nothing.
    :
fi
entrypoint_server.sh
#!/bin/bash
################################################################################
# @brief:      Launcher/entrypoint script plus helper functions for "server
#              side" container in the VPP testbench.
# @author:     Matthew Giassa <mgiassa@cisco.com>
# @copyright:  (C) Cisco 2021.
################################################################################

################################################################################
# Dependencies.
################################################################################
# Import common settings for server and client. This is supplied via the
# Dockerfile build.
# shellcheck disable=SC1091
. vpp_testbench_helpers.sh

################################################################################
# Globals.
################################################################################
# VPP instance socket.
export VPP_SOCK=/run/vpp/vpp.testbench-server.sock
# Alias for vppctl that uses the correct socket name.
export VPPCTL="vppctl -s ${VPP_SOCK}"
# Our "Docker bridge network". Don't change this value.
export NET_IF_DOCKER="eth0"
# Name of link associated with our VXLAN.
export LINK_VXLAN_LINUX="vxlan-vid-${VXLAN_ID_LINUX}"

################################################################################
# Function definitions.
################################################################################
#------------------------------------------------------------------------------#
# @brief:      Alias for vppctl (knowing which API socket to use).
function vc()
{
    vppctl -s "${VPP_SOCK}" "${@}"
}

#------------------------------------------------------------------------------#
# @brief:      Used to initialize/configure the server container once it's up and
#              running.
function context_create()
{
    set -x
    echo "Running server. Host: $(hostname)"
    local mtu

    # Setup VXLAN overlay.
    ip link add "${LINK_VXLAN_LINUX}" \
        type vxlan \
        id "${VXLAN_ID_LINUX}" \
        dstport "${VXLAN_PORT}" \
        local "${SERVER_BRIDGE_IP_DOCKER}" \
        group "${MC_VXLAN_ADDR_LINUX}" \
        dev "${NET_IF_DOCKER}" \
        ttl 1
    ip link set "${LINK_VXLAN_LINUX}" up
    ip addr add "${SERVER_VXLAN_IP_LINUX}/${MASK_VXLAN_LINUX}" dev "${LINK_VXLAN_LINUX}"

    # Get MTU of interface. VXLAN must use a smaller value due to overhead.
    mtu="$(cat /sys/class/net/${NET_IF_DOCKER}/mtu)"

    # Decrease VXLAN MTU. This should already be handled for us by iproute2, but
    # just being cautious.
    ip link set dev "${LINK_VXLAN_LINUX}" mtu "$((mtu - 50))"

    # Bring-up VPP and create tap interfaces and VXLAN tunnel.
    vpp \
        unix '{' log /tmp/vpp1.log full-coredump cli-listen ${VPP_SOCK} '}' \
        api-segment '{' prefix vpp1 '}' \
        api-trace '{' on '}' \
        dpdk '{' uio-driver uio_pci_generic no-pci '}'

    # Wait for VPP to come up.
    while ! ${VPPCTL} show log; do
        sleep 1
    done

    # Bring up the memif interface and assign an IP to it.
    ${VPPCTL} create interface memif id 0 master
    sleep 1
    ${VPPCTL} set int state memif0/0 up
    ${VPPCTL} set int ip address memif0/0 "${SERVER_VPP_MEMIF_IP}/${VPP_MEMIF_NM}"

    # Create VPP-controlled tap interface bridged to the memif.
    ${VPPCTL} create tap id 0 host-if-name vpp-tap-0
    sleep 1
    ${VPPCTL} set interface state tap0 up
    ip addr add "${SERVER_VPP_TAP_IP_MEMIF}/${VPP_TAP_NM}" dev vpp-tap-0
    ${VPPCTL} set interface l2 bridge tap0 "${VPP_BRIDGE_DOMAIN_TAP}"
    ${VPPCTL} set interface l2 bridge memif0/0 "${VPP_BRIDGE_DOMAIN_TAP}"
}

#------------------------------------------------------------------------------#
# @brief:      Used to shutdown/cleanup the server container.
function context_destroy()
{
    # OS will reclaim interfaces and resources when container is terminated.
    :
}

#------------------------------------------------------------------------------#
# @brief:      Server worker loop to keep the container alive. Just idles.
function context_loop()
{
    # Sleep indefinitely (to keep container alive for testing).
    tail -f /dev/null
}

#------------------------------------------------------------------------------#
# @brief:      Launches a minimalistic web server via netcat. The Dockerfile
#              associated with this project is configured to treat the web server
#              replying with "200 OK" as a sort of simple health probe.
function health_check_init()
{
    while true; do
        echo -e "HTTP/1.1 200 OK\n\nHOST:$(hostname)\nDATE:$(date)" \
            | nc -l -p "${DOCKER_HEALTH_PROBE_PORT}" -q 1
    done
}

#------------------------------------------------------------------------------#
# @brief:      Launches a minimalistic web server via netcat. This instance is
#              meant to bind to the Linux VXLAN tunnel we create.
function web_server_vxlan_linux()
{
    while true; do
        echo -e "HTTP/1.1 200 OK\n\nHOST:$(hostname)\nDATE:$(date)\nHello from the Linux interface." \
            | nc -l -s "${SERVER_VXLAN_IP_LINUX}" -p 8000 -q 1
    done
}

#------------------------------------------------------------------------------#
# @brief:      Launches a minimalistic web server via netcat. This instance is
#              meant to bind to the VPP VXLAN tunnel we create.
function web_server_vpp_tap()
{
    while true; do
        echo -e "HTTP/1.1 200 OK\n\nHOST:$(hostname)\nDATE:$(date)\nHello from the VPP interface." \
            | nc -l -s "${SERVER_VPP_TAP_IP_MEMIF}" -p 8000 -q 1
    done
}

#------------------------------------------------------------------------------#
# @brief:      Main/default entry point.
function main()
{
    # Make sure we always cleanup.
    trap context_destroy EXIT

    # Bring up interfaces.
    context_create

    # Enable health check responder.
    health_check_init &

    # Bring up test web servers.
    web_server_vxlan_linux &
    web_server_vpp_tap &

    # Enter our worker loop.
    context_loop
}

#------------------------------------------------------------------------------#
# Script is generally intended to be sourced and individual functions called.
# If just run as a standalone script, assume it's being used as the entrypoint
# for a Docker container.
if [ "${BASH_SOURCE[0]}" -ef "$0" ]; then
    # Being run. Launch main.
    main "${@}"
else
    # Being sourced. Do nothing.
    :
fi