3.11. shared¶
3.11.1. container suite¶
3.11.1.1. Construct container on all DUTs¶
Construct 1 CNF of a specific technology on all DUT nodes.

Arguments:
- nf_chains - Total number of chains (Optional). Type: integer, default value: ${1}
- nf_nodes - Total number of nodes per chain (Optional). Type: integer, default value: ${1}
- nf_chain - Chain ID (Optional). Type: integer, default value: ${1}
- nf_node - Node ID (Optional). Type: integer, default value: ${1}
- auto_scale - If True, use the same number of dataplane threads for the network function as for the DUT; otherwise use a single physical core for every network function. Type: boolean
- pinning - Set to True if CPU pinning should be done when starting containers. Type: boolean, default value: ${False}

Example:

| Construct container on all DUTs | 1 | 1 | 1 | 1 | ${True} |
${duts}= Get Matches ${nodes} DUT*
FOR ${dut} IN @{duts}
\ Run Keyword Construct container on DUT ${dut} ${nf_chains} ${nf_nodes} ${nf_chain} ${nf_node} ${auto_scale} ${pinning}
3.11.1.2. Construct container on DUT¶
Construct 1 CNF of a specific technology on a specific DUT.

Arguments:
- dut - DUT node to construct the CNF on. Type: string
- nf_chains - Total number of chains (Optional). Type: integer, default value: ${1}
- nf_nodes - Total number of nodes per chain (Optional). Type: integer, default value: ${1}
- nf_chain - Chain ID (Optional). Type: integer, default value: ${1}
- nf_node - Node ID (Optional). Type: integer, default value: ${1}
- auto_scale - If True, use the same number of dataplane threads for the network function as for the DUT; otherwise use a single physical core for every network function. Type: boolean
- pinning - Set to True if CPU pinning should be done when starting containers. Type: boolean, default value: ${False}

Example:

| Construct container on DUT | DUT1 | 1 | 1 | 1 | 1 | ${True} |
${nf_dtcr_status} ${value}= Run Keyword And Ignore Error Variable Should Exist ${nf_dtcr}
${nf_dtcr}= Run Keyword If '${nf_dtcr_status}' == 'PASS' Set Variable ${nf_dtcr} ELSE Set Variable ${1}
${nf_dtc}= Run Keyword If ${pinning} Set Variable If ${auto_scale} ${cpu_count_int} ${nf_dtc}
${nf_id}= Evaluate (${nf_chain} - ${1}) * ${nf_nodes} + ${nf_node}
${env}= Create List DEBIAN_FRONTEND=noninteractive
${dut1_uuid_length} = Get Length ${DUT1_UUID}
${root}= Run Keyword If ${dut1_uuid_length} Get Docker Mergeddir ${nodes['DUT1']} ${DUT1_UUID} ELSE Set Variable ${EMPTY}
${node_arch}= Get Node Arch ${nodes['${dut}']}
${name}= Set Variable ${dut}_${container_group}${nf_id}${DUT1_UUID}
${mnt}= Create List ${root}/tmp/:/mnt/host/ ${root}/tmp/vpp_sockets/${name}/:/run/vpp/ ${root}/dev/vfio/:/dev/vfio/ ${root}/usr/bin/vpp:/usr/bin/vpp ${root}/usr/bin/vppctl:/usr/bin/vppctl ${root}/usr/lib/${node_arch}-linux-gnu/:/usr/lib/${node_arch}-linux-gnu/ ${root}/usr/share/vpp/:/usr/share/vpp/
${nf_cpus}= Set Variable ${None}
${nf_cpus}= Run Keyword If ${pinning} Get Affinity NF ${nodes} ${dut} nf_chains=${nf_chains} nf_nodes=${nf_nodes} nf_chain=${nf_chain} nf_node=${nf_node} vs_dtc=${cpu_count_int} nf_dtc=${nf_dtc} nf_dtcr=${nf_dtcr}
&{cont_args}= Create Dictionary name=${name} node=${nodes['${dut}']} mnt=${mnt} env=${env} root=${root} page_size=${page_size}
Run Keyword If ${pinning} Set To Dictionary ${cont_args} cpuset_cpus=${nf_cpus}
Run Keyword ${container_group}.Construct container &{cont_args}
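The Evaluate step above flattens (chain, node) coordinates into a single 1-based index used in the container name. A minimal Python sketch of that mapping (function name hypothetical):

    def nf_id(nf_chain: int, nf_nodes: int, nf_node: int) -> int:
        # Mirror of the Evaluate expression: (chain - 1) * nodes_per_chain + node.
        return (nf_chain - 1) * nf_nodes + nf_node

    # Two chains of two NFs each yield container IDs 1..4:
    assert [nf_id(c, 2, n) for c in (1, 2) for n in (1, 2)] == [1, 2, 3, 4]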
3.11.1.3. Construct chain of containers¶
Construct 1 chain of 1..N CNFs on selected/all DUT nodes.

Arguments:
- dut - DUT node to start the containers on. Run on all nodes if None. Type: string or None
- nf_chains - Total number of chains. Type: integer
- nf_nodes - Total number of nodes per chain. Type: integer
- nf_chain - Chain ID. Type: integer
- auto_scale - If True, use the same number of dataplane threads for the network function as for the DUT; otherwise use a single physical core for every network function. Type: boolean
- pinning - Set to True if CPU pinning should be done when starting containers. Type: boolean, default value: ${False}

Example:

| Construct chain of containers | 1 | 1 | 1 | ${True} |
FOR ${nf_node} IN RANGE 1 ${nf_nodes}+1
\ Run Keyword If '${dut}' == '${None}' Construct container on all DUTs nf_chains=${nf_chains} nf_nodes=${nf_nodes} nf_chain=${nf_chain} nf_node=${nf_node} auto_scale=${auto_scale} pinning=${pinning} ELSE Construct container on DUT ${dut} nf_chains=${nf_chains} nf_nodes=${nf_nodes} nf_chain=${nf_chain} nf_node=${nf_node} auto_scale=${auto_scale} pinning=${pinning}
3.11.1.4. Construct chains of containers¶
Construct 1..N chains of 1..N CNFs on selected/all DUT nodes.

Arguments:
- dut - DUT node to start the containers on. Run on all nodes if None. Type: string or None
- nf_chains - Total number of chains (Optional). Type: integer, default value: ${1}
- nf_nodes - Total number of nodes per chain (Optional). Type: integer, default value: ${1}
- auto_scale - If True, use the same number of dataplane threads for the network function as for the DUT; otherwise use a single physical core for every network function. Type: boolean
- pinning - Set to True if CPU pinning should be done when starting containers. Type: boolean, default value: ${True}

Example:

| Construct chains of containers | 1 | 1 |
FOR ${nf_chain} IN RANGE 1 ${nf_chains}+1
\ Construct chain of containers dut=${dut} nf_chains=${nf_chains} nf_nodes=${nf_nodes} nf_chain=${nf_chain} auto_scale=${auto_scale} pinning=${pinning}
3.11.1.5. Acquire all '${group}' containers¶
Acquire all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Acquire all containers
3.11.1.6. Create all '${group}' containers¶
Create/deploy all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Create all containers
3.11.1.7. Start VPP in all '${group}' containers¶
Start VPP on all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Start VPP In All Containers
3.11.1.8. Restart VPP in all '${group}' containers¶
Restart VPP on all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Restart VPP In All Containers
3.11.1.9. Configure VPP in all '${group}' containers¶
Configure VPP on all container(s) in a specific container group on all DUT nodes.

Test (or broader scope) variables read:
- container_chain_topology - Topology type used for configuring CNF (VPP) in container. Type: string
${group}
${dut1_if2} = Get Variable Value \${dut1_if2} ${None}
${dut2_if2} = Get Variable Value \${dut2_if2} ${None}
Run Keyword If '${container_chain_topology}' == 'chain_ip4' ${group}.Configure VPP In All Containers ${container_chain_topology} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_mac=${TG_pf2_mac}[0] nodes=${nf_nodes} ELSE IF '${container_chain_topology}' == 'chain_ipsec' ${group}.Configure VPP In All Containers ${container_chain_topology} tg_pf1_ip4=${tg_if1_ip4} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_ip4=${tg_if2_ip4} tg_pf2_mac=${TG_pf2_mac}[0] dut1_if1_ip4=${dut1_if1_ip4} dut1_if2_ip4=${dut1_if2_ip4} dut2_if1_ip4=${dut2_if1_ip4} dut2_if2_ip4=${dut2_if2_ip4} raddr_ip4=${raddr_ip4} laddr_ip4=${laddr_ip4} nodes=${nodes} nf_nodes=${nf_nodes} ELSE IF '${container_chain_topology}' == 'pipeline_ip4' ${group}.Configure VPP In All Containers ${container_chain_topology} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_mac=${TG_pf2_mac}[0] nodes=${nf_nodes} ELSE IF '${container_chain_topology}' == 'cross_horiz' ${group}.Configure VPP In All Containers ${container_chain_topology} dut1_if=${DUT1_${int}2}[0] dut2_if=${DUT2_${int}2}[0] ELSE ${group}.Configure VPP In All Containers ${container_chain_topology}
3.11.1.10. Stop all '${group}' containers¶
Stop all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Stop all containers
3.11.1.11. Destroy all '${group}' containers¶
Destroy all container(s) in a specific container group on all DUT nodes.
${group}
Run Keyword ${group}.Destroy all containers
3.11.1.12. Verify VPP in all '${group}' containers¶
Verify that VPP is running inside containers in a specific container group on all DUT nodes. Retries up to 120 times with one second between retries.
${group}
Run Keyword ${group}.Verify VPP in all containers
3.11.1.13. Start containers for test¶
Start containers for test.

Arguments:
- dut - DUT node to start the containers on. Run on all nodes if None. Type: string or None
- nf_chains - Total number of chains. Type: integer
- nf_nodes - Total number of nodes per chain. Type: integer
- auto_scale - If True, use the same number of dataplane threads for the network function as for the DUT; otherwise use a single physical core for every network function. Type: boolean
- pinning - Set to True if CPU pinning should be done when starting containers. Type: boolean, default value: ${False}

Example:

| Start containers for test | 1 | 1 |
Set Test Variable @{container_groups} @{EMPTY}
Set Test Variable ${container_group} CNF
Set Test Variable ${nf_nodes}
Import Library resources.libraries.python.ContainerUtils.ContainerManager engine=${container_engine} WITH NAME ${container_group}
Construct chains of containers dut=${dut} nf_chains=${nf_chains} nf_nodes=${nf_nodes} auto_scale=${auto_scale} pinning=${pinning}
Acquire all '${container_group}' containers
Create all '${container_group}' containers
Configure VPP in all '${container_group}' containers
Start VPP in all '${container_group}' containers
Append To List ${container_groups} ${container_group}
Save VPP PIDs
3.11.1.14. Start vswitch in container on DUT¶
Configure and start vswitch in container.

Arguments:
- dut - DUT node on which to install vswitch. Type: string
- phy_cores - Number of physical cores to use. Type: integer
- rx_queues - Number of RX queues. Type: integer

Example:

| Start vswitch in container on DUT | DUT1 | 1 | 1 |
Set Test Variable ${container_group} VSWITCH
Import Library resources.libraries.python.ContainerUtils.ContainerManager engine=${container_engine} WITH NAME VSWITCH
Construct container on DUT ${dut} nf_chains=${1} nf_nodes=${1} nf_chain=${1} nf_node=${1} auto_scale=${False} pinning=${False}
Acquire all '${container_group}' containers
Create all '${container_group}' containers
${cpu_count_int} Convert to Integer ${phy_cores}
${dp_count_int} Convert to Integer ${phy_cores}
${smt_used}= Is SMT enabled ${nodes['${dut}']['cpuinfo']}
${dp_count_int}= Run keyword if ${smt_used} Evaluate int(${cpu_count_int}*2) ELSE Set variable ${dp_count_int}
${rxq_ratio} = Get Variable Value \${rxq_ratio} ${1}
${rxq_count_int}= Run Keyword If ${rx_queues} Set variable ${rx_queues} ELSE Evaluate int(${dp_count_int}/${rxq_ratio})
${rxq_count_int}= Run keyword if ${rxq_count_int} == 0 Set variable ${1} ELSE Set variable ${rxq_count_int}
VSWITCH.Configure VPP in all containers chain_vswitch rxq=${rxq_count_int} n_instances=${n_instances} node=${dut} dut1_if1=${DUT1_${int}1}[0] dut1_if2=${DUT1_${int}2}[0] dut2_if1=${DUT2_${int}1}[0] dut2_if2=${DUT2_${int}2}[0] dut2_if2_ip4=${dut2_if2_ip4} tg_pf1_ip4=${tg_if1_ip4} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_ip4=${tg_if2_ip4} tg_pf2_mac=${TG_pf2_mac}[0] nodes=${nodes}
Start VPP in all '${container_group}' containers
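The thread and queue sizing in Start vswitch in container on DUT follows a simple rule: the dataplane thread count doubles the physical core count when SMT is enabled, and the RX queue count divides the dataplane threads by ${rxq_ratio}, with a floor of one queue. A small illustrative Python sketch (names hypothetical):

    def vswitch_counts(phy_cores, smt_used, rx_queues=None, rxq_ratio=1):
        # Dataplane threads: one per physical core, two with SMT.
        dp_count = phy_cores * 2 if smt_used else phy_cores
        # RX queues: explicit override, else dataplane threads / ratio.
        rxq_count = rx_queues if rx_queues else dp_count // rxq_ratio
        return dp_count, max(rxq_count, 1)

    assert vswitch_counts(2, smt_used=True) == (4, 4)
    assert vswitch_counts(1, smt_used=False, rxq_ratio=2) == (1, 1)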
3.11.1.15. Start vswitch in container¶
Configure and start vswitch in container on all DUTs.

Arguments:
- phy_cores - Number of physical cores to use. Type: integer
- rx_queues - Number of RX queues. Type: integer

Example:

| Start vswitch in container | 1 | 1 |
FOR ${dut} IN @{duts}
\ Run Keyword Start vswitch in container on DUT ${dut} ${phy_cores} ${rx_queues}
Append To List ${container_groups} ${container_group}
Save VPP PIDs
3.11.2. default suite¶
3.11.2.1. Call Resetter¶
Check for the presence of the test variable ${resetter}. If it exists (and is not None), call the resetter (as a Python callable). This is usually used to reset any state on the DUT before the next trial.

TODO: Move to a more specific library if needed.

Example:

| Call Resetter |
${resetter} = Get Resetter
Run Keyword If $resetter Evaluate $resetter()
3.11.2.2. Configure crypto device on all DUTs¶
Verify if Crypto QAT device virtual functions are initialized on all DUTs. If the parameter force_init is set to True, then try to initialize/disable them.

Arguments:
- crypto_type - Crypto device type - HW_DH895xcc or HW_C3xxx; default value: HW_DH895xcc. Type: string
- numvfs - Number of VFs to initialize, 0 - disable the VFs; default value: ${32}. Type: integer
- force_init - Force to initialize. Type: boolean

Example:

| Configure crypto device on all DUTs | HW_DH895xcc | ${32} |
FOR ${dut} IN @{duts}
\ Crypto Device Verify ${nodes['${dut}']} ${crypto_type} ${numvfs} force_init=${force_init}
3.11.2.3. Configure kernel module on all DUTs¶
Verify if a specific kernel module is loaded on all DUTs. If the parameter force_load is set to True, then try to load it.

Arguments:
- module - Module to verify. Type: string
- force_load - Try to load the module. Type: boolean

Example:

| Configure kernel module on all DUTs | ${True} |
Verify Kernel Module on All DUTs ${nodes} ${module} force_load=${force_load}
3.11.2.4. Get Keyname for DUT¶
Get the keyname for the DUT in the keyname list. Returns the lowercase keyname value.

Arguments:
- dutx - DUT to find the keyname for. Type: dict
- dut_keys - DUT keynames to search. Type: list

Example:

| Get Keyname for DUT | ${dutx} | ${duts} |
FOR ${key} IN @{dut_keys}
\ ${found_key} ${value}= Run Keyword And Ignore Error Dictionaries Should Be Equal ${nodes['${key}']} ${dutx}
\ Run Keyword If '${found_key}' == 'PASS' Exit For Loop
Run Keyword If '${found_key}' != 'PASS' Fail Keyname for ${dutx} not found
${keyname}= Convert To Lowercase ${key}
Return From Keyword ${keyname}
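The loop above amounts to a reverse lookup of a node dictionary in the topology; a hedged Python equivalent (helper name hypothetical):

    def keyname_for_dut(dutx, nodes, dut_keys):
        # Return the lowercase topology key whose node dict equals dutx.
        for key in dut_keys:
            if nodes[key] == dutx:
                return key.lower()
        raise KeyError(f"Keyname for {dutx} not found")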
3.11.2.5. Create base startup configuration of VPP on all DUTs¶
Create base startup configuration of VPP on all DUTs.
FOR ${dut} IN @{duts}
\ Import Library resources.libraries.python.VppConfigGenerator WITH NAME ${dut}
\ Run Keyword ${dut}.Set Node ${nodes['${dut}']} node_key=${dut}
\ Run Keyword ${dut}.Add Unix Log
\ Run Keyword ${dut}.Add Unix CLI Listen
\ Run Keyword ${dut}.Add Unix CLI No Pager
\ Run Keyword ${dut}.Add Unix Nodaemon
\ Run Keyword ${dut}.Add Unix Coredump
\ Run Keyword ${dut}.Add Socksvr ${SOCKSVR_PATH}
\ Run Keyword ${dut}.Add Main Heap Size ${${heap_size_mult}*${2}}G
\ Run Keyword ${dut}.Add Main Heap Page Size ${page_size}
\ Run Keyword ${dut}.Add Default Hugepage Size ${page_size}
\ Run Keyword ${dut}.Add Statseg Size 2G
\ Run Keyword ${dut}.Add Statseg Page Size ${page_size}
\ Run Keyword ${dut}.Add Statseg Per Node Counters on
\ Run Keyword ${dut}.Add Plugin disable default
\ Run Keyword ${dut}.Add Plugin enable @{plugins_to_enable}
\ Run Keyword ${dut}.Add IP6 Hash Buckets 2000000
\ Run Keyword ${dut}.Add IP6 Heap Size 4G
\ Run Keyword ${dut}.Add Graph Node Variant ${GRAPH_NODE_VARIANT}
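For orientation, the steps above assemble a VPP startup.conf along the lines of the fragment below. This is a hedged sketch assuming page_size=2M, heap_size_mult=${1}, stock socket/log paths, and a single example plugin; the exact paths, sizes and plugin list depend on the suite:

    # Hypothetical startup.conf fragment produced by the keyword above.
    STARTUP_CONF = """
    unix {
      log /var/log/vpp/vpp.log
      cli-listen /run/vpp/cli.sock
      cli-no-pager
      nodaemon
      full-coredump
    }
    socksvr { socket-name /run/vpp/api.sock }
    memory {
      main-heap-size 2G
      main-heap-page-size 2M
      default-hugepage-size 2M
    }
    statseg {
      size 2G
      page-size 2M
      per-node-counters on
    }
    plugins {
      plugin default { disable }
      plugin dpdk_plugin.so { enable }
    }
    ip6 {
      hash-buckets 2000000
      heap-size 4G
    }
    """
    print(STARTUP_CONF)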
3.11.2.6. Add worker threads to all DUTs¶
Setup worker threads in the VPP startup configuration on all DUTs. Based on the SMT configuration of the DUT, the keyword will also automatically map the sibling logical cores when SMT is enabled. The keyword automatically sets the appropriate test TAGs in the format mTnC, where m = logical core count and n = physical core count. The user can manually override the RXQ, RXD and TXD parameters if needed.

Arguments:
- phy_cores - Number of physical cores to use. Type: integer
- rx_queues - Number of RX queues. Type: integer
- rxd - Number of RX descriptors. Type: integer
- txd - Number of TX descriptors. Type: integer

Example:

| Add worker threads to all DUTs | ${1} | ${1} |
FOR ${dut} IN @{duts}
\ &{compute_resource_info}= Get Affinity Vswitch ${nodes} ${dut} ${phy_cores} rx_queues=${rx_queues} rxd=${rxd} txd=${txd}
\ Set Test Variable &{compute_resource_info}
\ Create compute resources variables
\ Run Keyword ${dut}.Add CPU Main Core ${cpu_main}
\ Run Keyword If ${cpu_count_int} > 0 ${dut}.Add CPU Corelist Workers ${cpu_wt}
\ Run Keyword ${dut}.Add Buffers Per Numa ${buffers_numa}
3.11.2.7. Create compute resources variables¶
Create compute resources variables.

_NOTE:_ This KW sets various suite variables based on computed resources.
${variables}= Get Dictionary Keys ${compute_resource_info}
FOR ${variable} IN @{variables}
\ ${value}= Get From Dictionary ${compute_resource_info} ${variable}
\ Set Test Variable ${${variable}} ${value}
Run Keyword If ${dp_count_int} > 1 Set Tags MTHREAD ELSE Set Tags STHREAD
Set Tags ${dp_count_int}T${cpu_count_int}C
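The tagging logic reduces to a couple of comparisons. A minimal Python sketch of the tags derived above (assuming the dataplane thread count doubles the physical cores under SMT, as described in Add worker threads to all DUTs):

    def thread_tags(phy_cores, smt_used):
        # mTnC: m = logical (dataplane) cores, n = physical cores.
        dp_count = phy_cores * 2 if smt_used else phy_cores
        mode = "MTHREAD" if dp_count > 1 else "STHREAD"
        return [mode, f"{dp_count}T{phy_cores}C"]

    assert thread_tags(2, smt_used=True) == ["MTHREAD", "4T2C"]
    assert thread_tags(1, smt_used=False) == ["STHREAD", "1T1C"]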
3.11.2.8. Add NAT to all DUTs¶
Add NAT configuration to all DUTs.

Arguments:
- nat_mode - NAT mode; default value: deterministic. Type: string

Example:

| Add NAT to all DUTs | nat_mode=endpoint-dependent |
FOR ${dut} IN @{duts}
\ Run Keyword ${dut}.Add NAT value=${nat_mode}
3.11.2.9. Add NAT max translations per thread to all DUTs¶
Add NAT maximum number of translations per thread configuration.

Arguments:
- max_translations_per_thread - NAT maximum number of translations per thread. Type: string

Example:

| Add NAT max translations per thread to all DUTs | max_translations_per_thread=2048 |
FOR ${dut} IN @{duts}
\ Run Keyword ${dut}.Add NAT max translations per thread value=${max_translations_per_thread}
3.11.2.10. Write startup configuration on all VPP DUTs¶
Write VPP startup configuration without restarting VPP.
FOR ${dut} IN @{duts}
\ Run Keyword ${dut}.Write Config
3.11.2.11. Apply startup configuration on all VPP DUTs¶
Write VPP startup configuration and restart VPP on all DUTs.

Arguments:
- with_trace - Enable packet trace after VPP restart. Type: boolean

Example:

| Apply startup configuration on all VPP DUTs | False |
FOR ${dut} IN @{duts}
\ Run Keyword ${dut}.Apply Config
Save VPP PIDs
Enable Coredump Limit VPP on All DUTs ${nodes}
Update All Interface Data On All Nodes ${nodes} skip_tg=${True}
Run Keyword If ${with_trace} VPP Enable Traces On All Duts ${nodes}
3.11.2.12. Apply startup configuration on VPP DUT¶
Write VPP startup configuration and restart VPP on the DUT.

Arguments:
- dut - DUT on which to apply the configuration. Type: string
- with_trace - Enable packet trace after VPP restart. Type: boolean
Run Keyword ${dut}.Apply Config
Save VPP PIDs on DUT ${dut}
Enable Coredump Limit VPP on DUT ${nodes['${dut}']}
${dutnode}= Copy Dictionary ${nodes}
Keep In Dictionary ${dutnode} ${dut}
Update All Interface Data On All Nodes ${dutnode} skip_tg=${True}
Run Keyword If ${with_trace} VPP Enable Traces On Dut ${nodes['${dut}']}
3.11.2.13. Save VPP PIDs¶
Get PIDs of VPP processes from all DUTs in topology and set them as a test variable. The PIDs are stored as dictionary items where the key is the host and the value is the PID.
${setup_vpp_pids}= Get VPP PIDs ${nodes}
${keys}= Get Dictionary Keys ${setup_vpp_pids}
FOR ${key} IN @{keys}
\ ${pid}= Get From Dictionary ${setup_vpp_pids} ${key}
\ Run Keyword If $pid is None FAIL No VPP PID found on node ${key}
Set Test Variable ${setup_vpp_pids}
3.11.2.14. Save VPP PIDs on DUT¶
Get the PID of the VPP process from the DUT and set it as a test variable. The PID is stored as a dictionary item where the key is the host and the value is the PID.
${vpp_pids}= Get VPP PID ${nodes['${dut}']}
Run Keyword If ${vpp_pids} is None FAIL No VPP PID found on node ${nodes['${dut}']['host']}
${status} ${message}= Run Keyword And Ignore Error Variable Should Exist ${setup_vpp_pids}
${setup_vpp_pids}= Run Keyword If '${status}' == 'FAIL' Create Dictionary ${nodes['${dut}']['host']}=${vpp_pids} ELSE Set To Dictionary ${setup_vpp_pids} ${nodes['${dut}']['host']}=${vpp_pids}
Set Test Variable ${setup_vpp_pids}
3.11.2.15. Verify VPP PID in Teardown¶
Check if the VPP PIDs on all DUTs are the same at the end of the test as they were at the beginning. If they are not, only a message is printed to console and to log. The test will not fail.
${teardown_vpp_pids}= Get VPP PIDs ${nodes}
${err_msg}= Catenate ${SUITE NAME} - ${TEST NAME} \nThe VPP PIDs are not equal!\nTest Setup VPP PIDs: ${setup_vpp_pids}\nTest Teardown VPP PIDs: ${teardown_vpp_pids}
${rc} ${msg}= Run Keyword And Ignore Error Dictionaries Should Be Equal ${setup_vpp_pids} ${teardown_vpp_pids}
Run Keyword And Return If '${rc}'=='FAIL' Log ${err_msg} console=yes level=WARN
3.11.3. interfaces suite¶
3.11.3.1. Set single interfaces in path up¶
Set UP state on single physical VPP interfaces in path on all DUT nodes and set maximal MTU.

Arguments:
- pf - NIC physical function (physical port). Type: integer

Example:

| Set single interfaces in path up | 1 |
FOR ${dut} IN @{duts}
\ Set interfaces in path up on node on PF ${dut} ${pf}
All VPP Interfaces Ready Wait ${nodes} retries=${60}
3.11.3.2. Set interfaces in path up¶
Set UP state on VPP interfaces in path on all DUT nodes and set maximal MTU.

Arguments:
- validate - Validate interfaces are up. Type: boolean
FOR ${dut} IN @{duts}
\ Set interfaces in path up on node ${dut}
Run Keyword If ${validate} All VPP Interfaces Ready Wait ${nodes} retries=${60}
3.11.3.3. Set interfaces in path up on node¶
Set UP state on VPP interfaces in path on the specified DUT node and set maximal MTU.

Arguments:
- dut - DUT node on which to set the interfaces up. Type: string

Example:

| Set interfaces in path up on node | DUT1 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ Set interfaces in path up on node on PF ${dut} ${pf}
3.11.3.4. Set interfaces in path up on node on PF¶
Set UP state on VPP interfaces in path on the specified DUT node and NIC PF and set maximal MTU.

Arguments:
- dut - DUT node on which to set the interfaces up. Type: string
- pf - NIC physical function (physical port). Type: integer

Example:

| Set interfaces in path up on node on PF | DUT1 | 1 |
${_chains} ${value}= Run Keyword And Ignore Error Variable Should Exist @{${dut}_${int}${pf}_1}
${_id}= Set Variable If '${_chains}' == 'PASS' _1 ${EMPTY}
FOR ${if} IN @{${dut}_${int}${pf}${_id}}
\ Set Interface State ${nodes['${dut}']} ${if} up
\ VPP Set Interface MTU ${nodes['${dut}']} ${if}
3.11.3.5. Pre-initialize layer driver¶
Pre-initialize driver-based interfaces on each DUT.

Arguments:
- driver - NIC driver used in test [vfio-pci|avf|rdma-core]. Type: string

Example:

| Pre-initialize layer driver | vfio-pci |
Run Keyword Pre-initialize layer ${driver} on all DUTs
3.11.3.6. Pre-initialize layer tap on all DUTs¶
Pre-initialize tap driver. Currently no operation.
No operation
3.11.3.7. Pre-initialize layer vhost on all DUTs¶
Pre-initialize vhost driver. Currently no operation.
No operation
3.11.3.8. Pre-initialize layer vfio-pci on all DUTs¶
Pre-initialize vfio-pci driver by adding related sections to startup config on all DUTs.
${index}= Get Index From List ${TEST TAGS} DPDK
Run Keyword If ${index} >= 0 Return From Keyword
FOR ${dut} IN @{duts}
\ Stop VPP Service ${nodes['${dut}']}
\ Unbind PCI Devices From Other Driver ${nodes['${dut}']} vfio-pci @{${dut}_pf_pci}
\ Run keyword ${dut}.Add DPDK Dev @{${dut}_pf_pci}
\ Run Keyword If ${dpdk_no_tx_checksum_offload} ${dut}.Add DPDK No Tx Checksum Offload
\ Run Keyword ${dut}.Add DPDK Log Level debug
\ Run Keyword ${dut}.Add DPDK Uio Driver vfio-pci
\ Run Keyword ${dut}.Add DPDK Dev Default RXQ ${rxq_count_int}
\ Run Keyword If not ${jumbo} ${dut}.Add DPDK No Multi Seg
\ Run Keyword If ${nic_rxq_size} > 0 ${dut}.Add DPDK Dev Default RXD ${nic_rxq_size}
\ Run Keyword If ${nic_txq_size} > 0 ${dut}.Add DPDK Dev Default TXD ${nic_txq_size}
\ Run Keyword If '${crypto_type}' != '${None}' ${dut}.Add DPDK Cryptodev ${dp_count_int}
\ Run Keyword ${dut}.Add DPDK Max Simd Bitwidth ${GRAPH_NODE_VARIANT}
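Taken together, these steps populate a dpdk section of startup.conf roughly like the sketch below (a hedged example with placeholder PCI addresses, 2 RX queues, 1024 descriptors and jumbo frames disabled; cryptodev and SIMD bitwidth entries omitted):

    # Hypothetical dpdk section written by the steps above.
    DPDK_CONF = """
    dpdk {
      dev 0000:18:00.0
      dev 0000:18:00.1
      dev default {
        num-rx-queues 2
        num-rx-desc 1024
        num-tx-desc 1024
      }
      log-level debug
      uio-driver vfio-pci
      no-multi-seg
      no-tx-checksum-offload
    }
    """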
3.11.3.9. Pre-initialize layer avf on all DUTs¶
Pre-initialize avf driver. Currently no operation.
No operation
3.11.3.10. Pre-initialize layer af_xdp on all DUTs¶
Pre-initialize af_xdp driver.
FOR ${dut} IN @{duts}
\ Set Interface State PCI ${nodes['${dut}']} ${${dut}_pf_pci} state=up
\ Set Interface Channels ${nodes['${dut}']} ${${dut}_pf_pci} num_queues=${rxq_count_int} channel=combined
3.11.3.11. Pre-initialize layer rdma-core on all DUTs¶
Pre-initialize rdma-core driver.
FOR ${dut} IN @{duts}
\ Run Keyword If ${jumbo} Set Interface MTU ${nodes['${dut}']} ${${dut}_pf_pci} mtu=9200 ELSE Set Interface MTU ${nodes['${dut}']} ${${dut}_pf_pci} mtu=1518
FOR ${dut} IN @{duts}
\ Set Interface Flow Control ${nodes['${dut}']} ${${dut}_pf_pci} rxf="off" txf="off"
3.11.3.12. Pre-initialize layer mlx5_core on all DUTs¶
Pre-initialize mlx5_core driver.
FOR ${dut} IN @{duts}
\ Run Keyword If ${jumbo} Set Interface MTU ${nodes['${dut}']} ${${dut}_pf_pci} mtu=9200 ELSE Set Interface MTU ${nodes['${dut}']} ${${dut}_pf_pci} mtu=1518
\ Set Interface Flow Control ${nodes['${dut}']} ${${dut}_pf_pci} rxf="off" txf="off"
3.11.3.13. Initialize layer driver¶
Initialize driver-based interfaces on all DUTs. Interfaces are brought up.

Arguments:
- driver - NIC driver used in test [vfio-pci|avf|rdma-core]. Type: string
- validate - Validate interfaces are up. Type: boolean

Example:

| Initialize layer driver | vfio-pci |
FOR ${dut} IN @{duts}
\ Initialize layer driver on node ${dut} ${driver}
Set Test Variable ${int} vf
Set interfaces in path up validate=${validate}
3.11.3.14. Initialize layer driver on node¶
Initialize driver-based interfaces on the DUT.

Arguments:
- dut - DUT node. Type: string
- driver - NIC driver used in test [vfio-pci|avf|rdma-core]. Type: string

Example:

| Initialize layer driver on node | DUT1 | vfio-pci |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ ${_vf}= Copy List ${${dut}_${int}${pf}}
\ ${_ip4_addr}= Copy List ${${dut}_${int}${pf}_ip4_addr}
\ ${_ip4_prefix}= Copy List ${${dut}_${int}${pf}_ip4_prefix}
\ ${_mac}= Copy List ${${dut}_${int}${pf}_mac}
\ ${_pci}= Copy List ${${dut}_${int}${pf}_pci}
\ ${_vlan}= Copy List ${${dut}_${int}${pf}_vlan}
\ Set Test Variable ${${dut}_vf${pf}} ${_vf}
\ Set Test Variable ${${dut}_vf${pf}_ip4_addr} ${_ip4_addr}
\ Set Suite Variable ${${dut}_vf${pf}_ip4_prefix} ${_ip4_prefix}
\ Set Test Variable ${${dut}_vf${pf}_mac} ${_mac}
\ Set Test Variable ${${dut}_vf${pf}_pci} ${_pci}
\ Set Test Variable ${${dut}_vf${pf}_vlan} ${_vlan}
\ Run Keyword Initialize layer ${driver} on node ${dut} ${pf}
3.11.3.15. Initialize layer tap on node¶
Initialize tap interfaces on the DUT.

Arguments:
- dut - DUT node. Type: string
- pf - TAP ID (logical port). Type: integer

Example:

| Initialize layer tap on node | DUT1 | 0 |
Create Namespace ${nodes['${dut}']} tap${${pf}-1}_namespace
${tap_feature_mask}= Create Tap feature mask gso=${enable_gso}
${_tap}= Add Tap Interface ${nodes['${dut}']} tap${${pf}-1} host_namespace=tap${${pf}-1}_namespace num_rx_queues=${rxq_count_int} rxq_size=${nic_rxq_size} txq_size=${nic_txq_size} tap_feature_mask=${tap_feature_mask}
${_mac}= Get Interface MAC ${nodes['${dut}']} tap${pf}
${_tap}= Create List ${_tap}
${_mac}= Create List ${_mac}
Vhost User Affinity ${nodes['${dut}']} ${${dut}_pf${pf}}[0] skip_cnt=${${CPU_CNT_MAIN}+${CPU_CNT_SYSTEM}+${cpu_count_int}}
Set Test Variable ${${dut}_vf${pf}} ${_tap}
Set Test Variable ${${dut}_vf${pf}_mac} ${_mac}
3.11.3.16. Initialize layer vhost on node¶
Initialize vhost interfaces on the DUT.

Arguments:
- dut - DUT node. Type: string
- pf - VHOST ID (logical port). Type: integer

Example:

| Initialize layer vhost on node | DUT1 | 0 |
${virtio_feature_mask}= Create Virtio feature mask gso=${enable_gso}
${_vhost}= Vpp Create Vhost User Interface ${nodes['${dut}']} /var/run/vpp/sock-${pf}-1 is_server=${True} virtio_feature_mask=${virtio_feature_mask}
${_mac}= Get Interface MAC ${nodes['${dut}']} vhost${pf}
${_vhost}= Create List ${_vhost}
${_mac}= Create List ${_mac}
Set Test Variable ${${dut}_vf${pf}} ${_vhost}
Set Test Variable ${${dut}_vf${pf}_mac} ${_mac}
3.11.3.17. Initialize layer vfio-pci on node¶
Initialize vfio-pci interfaces on DUT on NIC PF. Currently no operation. Arguments: - dut - DUT node. Type: string - pf - NIC physical function (physical port). Type: integer Example: | Initialize layer vfio-pci on node | DUT1 | 1 |
No operation
3.11.3.18. Initialize layer avf on node¶
Initialize AVF (Intel) interfaces on the DUT on the NIC PF.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer

Example:

| Initialize layer avf on node | DUT1 | 1 |
FOR ${vf} IN RANGE 0 ${nic_vfs}
\ ${_avf}= VPP Create AVF Interface ${nodes['${dut}']} ${${dut}_vf${pf}}[${vf}] num_rx_queues=${rxq_count_int} rxq_size=${nic_rxq_size} txq_size=${nic_txq_size}
\ ${_ip4}= Get Interface IP4 ${nodes['${dut}']} ${_avf}
\ ${_ip4_prefix}= Get Interface IP4 Prefix Length ${nodes['${dut}']} ${_avf}
\ ${_mac}= Get Interface MAC ${nodes['${dut}']} ${_avf}
\ ${_pci}= Get Interface PCI Addr ${nodes['${dut}']} ${_avf}
\ ${_vlan}= Get Interface VLAN ${nodes['${dut}']} ${_avf}
\ Set List Value ${${dut}_vf${pf}} ${vf} ${_avf}
\ Set List Value ${${dut}_vf${pf}_ip4_addr} ${vf} ${_ip4}
\ Set List Value ${${dut}_vf${pf}_ip4_prefix} ${vf} ${_ip4_prefix}
\ Set List Value ${${dut}_vf${pf}_mac} ${vf} ${_mac}
\ Set List Value ${${dut}_vf${pf}_pci} ${vf} ${_pci}
\ Set List Value ${${dut}_vf${pf}_vlan} ${vf} ${_vlan}
3.11.3.19. Initialize layer af_xdp on node¶
Initialize AF_XDP (eBPF) interfaces on the DUT on the NIC PF.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer

Example:

| Initialize layer af_xdp on node | DUT1 | 1 |
${_af_xdp}= VPP Create AF XDP Interface ${nodes['${dut}']} ${${dut}_vf${pf}}[0] num_rx_queues=${65535} rxq_size=${nic_rxq_size} txq_size=${nic_txq_size}
${cpu_skip_cnt}= Evaluate ${CPU_CNT_SYSTEM}+${CPU_CNT_MAIN}
${cpu_skip_cnt}= Evaluate ${cpu_skip_cnt}+${cpu_count_int}
${cpu_skip_cnt}= Evaluate ${cpu_skip_cnt}+(${pf}-${1})*${rxq_count_int}
Set Interface IRQs Affinity ${nodes['${dut}']} ${_af_xdp} cpu_skip_cnt=${cpu_skip_cnt} cpu_cnt=${rxq_count_int}
Set List Value ${${dut}_vf${pf}} 0 ${_af_xdp}
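The three Evaluate steps compute how many CPUs to skip before pinning this PF's IRQs: the system and main cores first, then the VPP worker cores, then the IRQ cores already taken by preceding PFs. A worked Python example (function name hypothetical):

    def af_xdp_irq_skip(system_cnt, main_cnt, workers, pf, rxq_count):
        # Skip system + main + worker cores, plus IRQ cores of earlier PFs.
        return system_cnt + main_cnt + workers + (pf - 1) * rxq_count

    # With 1 system core, 1 main core, 2 workers and 2 RX queues per PF,
    # PF2's IRQ cores start right after PF1's:
    assert af_xdp_irq_skip(1, 1, 2, pf=1, rxq_count=2) == 4
    assert af_xdp_irq_skip(1, 1, 2, pf=2, rxq_count=2) == 6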
3.11.3.20. Initialize layer rdma-core on node¶
Initialize rdma-core (Mellanox VPP) interfaces on the DUT on the NIC PF.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer

Example:

| Initialize layer rdma-core on node | DUT1 | 1 |
${_rdma}= VPP Create Rdma Interface ${nodes['${dut}']} ${${dut}_vf${pf}}[0] num_rx_queues=${rxq_count_int} rxq_size=${nic_rxq_size} txq_size=${nic_txq_size}
Set List Value ${${dut}_vf${pf}} 0 ${_rdma}
3.11.3.21. Initialize layer mlx5_core on node¶
Initialize mlx5_core interfaces on DUT on NIC PF. Currently no operation.
No operation
3.11.3.22. Initialize layer interface¶
Create physical interface variables on all DUTs.

Arguments:
- count - Number of untagged interface variables. Type: integer

Example:

| Initialize layer interface | 1 |
FOR ${dut} IN @{duts}
\ Initialize layer interface on node ${dut} count=${count}
3.11.3.23. Initialize layer interface on node¶
Create physical interface variables on the specified DUT node.

Arguments:
- dut - DUT node. Type: string
- count - Number of baseline interface variables. Type: integer

Example:

| Initialize layer interface on node | DUT1 | 1 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ Initialize layer interface on node on PF ${dut} ${pf} count=${count}
3.11.3.24. Initialize layer interface on node on PF¶
Create baseline interface variables on the specified DUT node and NIC PF.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer
- count - Number of baseline interface variables. Type: integer

Example:

| Initialize layer interface on node on PF | DUT1 | 1 | 1 |
FOR ${id} IN RANGE 1 ${count} + 1
\ Set Test Variable ${${dut}_${int}${pf}_${id}} ${${dut}_${int}${pf}}
3.11.3.25. Initialize layer bonding¶
Create bonded interfaces and variables on all DUTs' interfaces.

Arguments:
- bond_mode - Link bonding mode. Type: string
- lb_mode - Load balance mode. Type: string
- count - Number of bond interface variables. Type: integer

Example:

| Initialize layer bonding | xor | l34 | 1 |
FOR ${dut} IN @{duts}
\ Initialize layer bonding on node ${dut} bond_mode=${bond_mode} lb_mode=${lb_mode} count=${count}
Set Test Variable ${int} bond
3.11.3.26. Initialize layer bonding on node¶
Create a bonded interface and variables across the east and west interfaces of the DUT node.

Arguments:
- dut - DUT node. Type: string
- bond_mode - Link bonding mode. Type: string
- lb_mode - Load balance mode. Type: string
- count - Number of bond interface variables. Type: integer

Example:

| Initialize layer bonding on node | DUT1 | xor | l34 | 1 |
${if_index}= VPP Create Bond Interface ${nodes['${dut}']} ${bond_mode} load_balance=${lb_mode} mac=00:00:00:01:01:01
Set Interface State ${nodes['${dut}']} ${if_index} up
VPP Add Bond Member ${nodes['${dut}']} ${${dut}_${int}1_1} ${if_index}
VPP Add Bond Member ${nodes['${dut}']} ${${dut}_${int}2_1} ${if_index}
FOR ${id} IN RANGE 1 ${count} + 1
\ Set Test Variable ${${dut}_bond1_${id}} ${if_index}
\ Set Test Variable ${${dut}_bond2_${id}} ${if_index}
3.11.3.27. Initialize layer dot1q¶
Create dot1q interfaces and variables on all DUTs.

Arguments:
- count - Number of chains. Type: integer
- vlan_per_chain - Whether to create a VLAN subinterface for each chain. Type: boolean
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer dot1q | 1 | True | 1 |
FOR ${dut} IN @{duts}
\ Initialize layer dot1q on node ${dut} count=${count} vlan_per_chain=${vlan_per_chain} start=${start}
Set Test Variable ${int} dot1q
3.11.3.28. Initialize layer dot1q on node¶
Create dot1q interfaces and variables on the DUT node.

Arguments:
- dut - DUT node. Type: string
- count - Number of chains. Type: integer
- vlan_per_chain - Whether to create a VLAN subinterface for each chain. Type: boolean
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer dot1q on node | DUT1 | 1 | True | 1 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ Initialize layer dot1q on node on PF ${dut} pf=${pf} count=${count} vlan_per_chain=${vlan_per_chain} start=${start}
3.11.3.29. Initialize layer dot1q on node on PF¶
Create dot1q interfaces and variables on the DUT node's interfaces.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer
- count - Number of chains. Type: integer
- vlan_per_chain - Whether to create a VLAN subinterface for each chain. Type: boolean
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer dot1q on node on PF | DUT1 | 3 | True | 2 |
FOR ${id} IN RANGE ${start} ${count} + 1
\ ${_dot1q}= Initialize layer dot1q on node on PF for chain dut=${dut} pf=${pf} id=${id} vlan_per_chain=${vlan_per_chain}
\ ${_dot1q}= Set Variable If '${_dot1q}' == '${NONE}' ${${dut}_dot1q${pf}_1}[0] ${_dot1q}
\ ${_dot1q}= Create List ${_dot1q}
\ Set Test Variable ${${dut}_dot1q${pf}_${id}} ${_dot1q}
3.11.3.30. Initialize layer dot1q on node on PF for chain¶
Optionally create a tag-popping subinterface per chain. Return interface indices for dot1q layer interfaces, or Nones if subinterfaces are not created.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer
- id - Positive index of the chain. Type: integer
- vlan_per_chain - Whether to create a VLAN subinterface for each chain. Type: boolean

Example:

| Initialize layer dot1q on node on PF for chain | DUT1 | 1 | 1 | True |
Return From Keyword If ${id} != ${1} and not ${vlan_per_chain} ${NONE}
${_default}= Evaluate ${pf} * ${100} + ${id} - ${1}
${_vlan}= Get Variable Value \${${dut}_pf${pf}_vlan}
${_vlan}= Set Variable If '${_vlan}[0]' != '${NONE}' ${_vlan}[0] ${_default}
${_name} ${_index}= Create Vlan Subinterface ${nodes['${dut}']} ${${dut}_${int}${pf}_${id}}[0] ${_vlan}
Set Interface State ${nodes['${dut}']} ${_index} up
Configure L2 tag rewrite method on interfaces ${nodes['${dut}']} ${_index} TAG_REWRITE_METHOD=pop-1
Return From Keyword ${_index}
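When the topology file supplies no VLAN tag, the default is derived from the PF number and chain ID; a one-line Python equivalent of the Evaluate step above:

    def default_vlan(pf, chain_id):
        # Default tag: 100 per PF, consecutive per chain (pf * 100 + id - 1).
        return pf * 100 + chain_id - 1

    assert [default_vlan(1, i) for i in (1, 2, 3)] == [100, 101, 102]
    assert default_vlan(2, 1) == 200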
3.11.3.31. Initialize layer ip4vxlan¶
Create VXLAN interfaces and variables on all DUTs' interfaces.

Arguments:
- count - Number of VXLAN interfaces. Type: integer
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer ip4vxlan | 3 | 2 |
FOR ${dut} IN @{duts}
\ Initialize layer ip4vxlan on node ${dut} count=${count} start=${start}
Set Test Variable ${int} ip4vxlan
3.11.3.32. Initialize layer ip4vxlan on node¶
Setup VXLANoIPv4 between TG and DUTs and DUT to DUT by connecting physical and VXLAN interfaces on each DUT. All interfaces are brought up. IPv4 addresses with prefix /24 are configured on interfaces towards the TG. VXLAN sub-interfaces have the same IPv4 address as the interfaces.

Arguments:
- dut - DUT node. Type: string
- count - Number of VXLAN interfaces. Type: integer
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer ip4vxlan on node | DUT1 | 3 | 2 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ Initialize layer ip4vxlan on node on PF ${dut} pf=${pf} count=${count} start=${start}
3.11.3.33. Initialize layer ip4vxlan on node on PF¶
Setup VXLANoIPv4 between TG and DUTs and DUT to DUT by connecting physical and VXLAN interfaces on each DUT. All interfaces are brought up. IPv4 addresses with prefix /24 are configured on interfaces towards the TG. VXLAN sub-interfaces have the same IPv4 address as the interfaces.

Arguments:
- dut - DUT node. Type: string
- pf - NIC physical function (physical port). Type: integer
- count - Number of VXLAN interfaces. Type: integer
- start - ID of the first chain, allows adding chains during test. Type: integer

Example:

| Initialize layer ip4vxlan on node on PF | DUT1 | 3 | 2 |
Run Keyword If "${start}" == "1" VPP Interface Set IP Address ${nodes['${dut}']} ${${dut}_${int}${pf}_1}[0] 172.${pf}6.0.1 24
FOR ${id} IN RANGE ${start} ${count} + 1
\ ${_subnet}= Evaluate ${id} - 1
\ ${_vni}= Evaluate ${id} - 1
\ ${_ip4vxlan}= Create VXLAN interface ${nodes['${dut}']} ${_vni} 172.${pf}6.0.1 172.${pf}7.${_subnet}.2
\ ${_prev_mac}= Set Variable If '${dut}' == 'DUT1' ${TG_pf1_mac}[0] ${DUT1_pf2_mac}[0]
\ ${_next_mac}= Set Variable If '${dut}' == 'DUT1' and ${duts_count} == 2 ${DUT2_pf1_mac}[0] ${TG_pf2_mac}[0]
\ ${_even}= Evaluate ${pf} % 2
\ ${_mac}= Set Variable If ${_even} ${_prev_mac} ${_next_mac}
\ VPP Add IP Neighbor ${nodes['${dut}']} ${${dut}_${int}${pf}_${id}}[0] 172.${pf}6.${_subnet}.2 ${_mac}
\ VPP Route Add ${nodes['${dut}']} 172.${pf}7.${_subnet}.0 24 gateway=172.${pf}6.${_subnet}.2 interface=${${dut}_${int}${pf}_${id}}[0]
\ Set VXLAN Bypass ${nodes['${dut}']} ${${dut}_${int}${pf}_${id}}[0]
\ ${_ip4vxlan}= Create List ${_ip4vxlan}
\ Set Test Variable ${${dut}_ip4vxlan${pf}_${id}} ${_ip4vxlan}
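The addressing scheme above embeds the PF number into the second octet (172.{pf}6.x.x for the local /24, 172.{pf}7.x.x for the remote side) and derives both the VNI and the subnet from the chain ID. A hedged Python sketch of the per-chain parameters (helper name hypothetical):

    def vxlan_params(pf, chain_id):
        # VNI and subnet are both chain_id - 1, as in the Evaluate steps above.
        subnet = vni = chain_id - 1
        return {
            "vni": vni,
            "local_ip": f"172.{pf}6.0.1",          # address set on the PF
            "remote_ip": f"172.{pf}7.{subnet}.2",  # VXLAN tunnel destination
            "route": f"172.{pf}7.{subnet}.0/24",
            "gateway": f"172.{pf}6.{subnet}.2",
        }

    assert vxlan_params(1, 1)["local_ip"] == "172.16.0.1"
    assert vxlan_params(2, 3)["remote_ip"] == "172.27.2.2"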
3.11.3.34. Configure vhost interfaces¶
Create two Vhost-User interfaces on the defined VPP node.

Arguments:
- ${dut_node} - DUT node. Type: dictionary
- ${sock1} - Socket path for the first Vhost-User interface. Type: string
- ${sock2} - Socket path for the second Vhost-User interface. Type: string
- ${vhost_if1} - Name of the first Vhost-User interface (Optional). Type: string
- ${vhost_if2} - Name of the second Vhost-User interface (Optional). Type: string
- ${is_server} - Server side of the connection (Optional). Type: boolean
- ${virtio_feature_mask} - Enabled Virtio feature flags (Optional). Type: integer

_NOTE:_ This KW sets the following test case variables:
- ${${vhost_if1}} - First Vhost-User interface.
- ${${vhost_if2}} - Second Vhost-User interface.

Example:

| Configure vhost interfaces | ${nodes['DUT1']} | /tmp/sock1 | /tmp/sock2 |
${vhost_1}= Vpp Create Vhost User Interface ${dut_node} ${sock1} is_server=${is_server} virtio_feature_mask=${virtio_feature_mask}
${vhost_2}= Vpp Create Vhost User Interface ${dut_node} ${sock2} is_server=${is_server} virtio_feature_mask=${virtio_feature_mask}
${vhost_1_key}= Get Interface By SW Index ${dut_node} ${vhost_1}
${vhost_2_key}= Get Interface By SW Index ${dut_node} ${vhost_2}
${vhost_1_mac}= Get Interface MAC ${dut_node} ${vhost_1_key}
${vhost_2_mac}= Get Interface MAC ${dut_node} ${vhost_2_key}
Set Interface State ${dut_node} ${vhost_1} up
Set Interface State ${dut_node} ${vhost_2} up
Set Test Variable ${${vhost_if1}} ${vhost_1}
Set Test Variable ${${vhost_if2}} ${vhost_2}
Set Test Variable ${${vhost_if1}_mac} ${vhost_1_mac}
Set Test Variable ${${vhost_if2}_mac} ${vhost_2_mac}
3.11.3.35. Get Vhost dump¶
Get vhost-user dump.

Arguments:
- dut - DUT node data. Type: dictionary
${vhost_dump}= Vhost User Dump ${dut}
Return From Keyword ${vhost_dump}
3.11.4. memif suite¶
3.11.4.1. Set up memif interfaces on DUT node¶
Create two Memif interfaces on the given VPP node.

Arguments:
- dut_node - DUT node. Type: dictionary
- filename1 - Socket filename for the 1st Memif interface. Type: string
- filename2 - Socket filename for the 2nd Memif interface. Type: string
- mid - Memif interface ID. Type: integer, default value: ${1}
- memif_if1 - Name of the first Memif interface (Optional). Type: string, default value: memif_if1
- memif_if2 - Name of the second Memif interface (Optional). Type: string, default value: memif_if2
- rxq - RX queues; 0 means do not set (Optional). Type: integer, default value: ${1}
- txq - TX queues; 0 means do not set (Optional). Type: integer, default value: ${1}
- role - Memif role (Optional). Type: string, default value: SLAVE

_NOTE:_ This KW sets the following test case variables:
- ${${memif_if1}} - 1st Memif interface.
- ${${memif_if2}} - 2nd Memif interface.

Example:

| Set up memif interfaces on DUT node | ${nodes['DUT1']} | sock1 | sock2 | 1 |
| Set up memif interfaces on DUT node | ${nodes['DUT2']} | sock1 | sock2 | 1 | dut2_memif_if1 | dut2_memif_if2 | 1 | 1 | SLAVE |
| Set up memif interfaces on DUT node | ${nodes['DUT2']} | sock1 | sock2 | 1 | rxq=0 | txq=0 | dcr_uuid=_a5730a0a-2ba1-4fe9-91bd-79b9828e968e |
${sid_1}= Evaluate (${mid}*2)-1
${sid_2}= Evaluate (${mid}*2)
${memif_1}= Create memif interface ${dut_node} ${filename1}${mid}${DUT1_UUID}-${sid_1} ${mid} ${sid_1} rxq=${rxq} txq=${txq} role=${role}
${memif_2}= Create memif interface ${dut_node} ${filename2}${mid}${DUT1_UUID}-${sid_2} ${mid} ${sid_2} rxq=${rxq} txq=${txq} role=${role}
Set Interface State ${dut_node} ${memif_1} up
Set Interface State ${dut_node} ${memif_2} up
Set Test Variable ${${memif_if1}} ${memif_1}
Set Test Variable ${${memif_if2}} ${memif_2}
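Each memif chain ID maps to a consecutive pair of socket IDs; a one-line Python equivalent of the two Evaluate steps above:

    def memif_socket_ids(mid):
        # Chain mid owns socket IDs (2*mid - 1, 2*mid).
        return (mid * 2 - 1, mid * 2)

    assert memif_socket_ids(1) == (1, 2)
    assert memif_socket_ids(3) == (5, 6)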
3.11.4.2. Set up single memif interface on DUT node¶
Create a single Memif interface on the given VPP node.

Arguments:
- dut_node - DUT node. Type: dictionary
- filename - Socket filename for the Memif interface. Type: string
- mid - Memif interface ID (Optional). Type: integer
- sid - Memif socket ID (Optional). Type: integer
- memif_if - Name of the Memif interface (Optional). Type: string
- rxq - RX queues (Optional). Type: integer
- txq - TX queues (Optional). Type: integer
- role - Memif role (Optional). Type: string

_NOTE:_ This KW sets the following test case variable:
- ${${memif_if}} - Memif interface.

Example:

| Set up single memif interface on DUT node | ${nodes['DUT1']} | sock1 | 1 | dut1_memif_if1 | 1 | 1 | SLAVE |
${memif}= Create memif interface ${dut_node} ${filename}${mid}-${sid} ${mid} ${sid} rxq=${rxq} txq=${txq} role=${role}
Set Interface State ${dut_node} ${memif} up
Set Test Variable ${${memif_if}} ${memif}
3.11.5. suite_setup suite¶
3.11.5.1. Create suite topology variables¶
Create suite topology variables.

_NOTE:_ This KW sets various suite variables based on the filtered topology. All variables are also set in the backward-compatible format dut{m}_if{n} (where the value type is string). The list type allows accessing physical interfaces in the same way as virtual interfaces (e.g. SR-IOV). This keeps abstracted compatibility between the existing L1 and L2 KW libraries and the underlying physical topology.
- duts - List of DUT nodes (name as seen in topology file).
- duts_count - Number of DUT nodes.
- int - Interface type (layer). Type: string
- dut{n} - DUTx node. Type: dictionary
- dut{m}_pf{n} - Nth interface of Mth DUT. Type: list
- dut{m}_pf{n}_mac - Nth interface of Mth DUT - MAC address. Type: list
- dut{m}_pf{n}_vlan - Nth interface of Mth DUT - VLAN id. Type: list
- dut{m}_pf{n}_pci - Nth interface of Mth DUT - PCI address. Type: list
- dut{m}_pf{n}_ip4_addr - Nth interface of Mth DUT - IPv4 address. Type: list
- dut{m}_pf{n}_ip4_prefix - Nth interface of Mth DUT - IPv4 prefix. Type: list

Arguments:
- @{actions} - Additional setup action. Type: list
${variables}= Get Dictionary Keys ${topology_info}
FOR ${variable} IN @{variables}
\ ${value}= Get From Dictionary ${topology_info} ${variable}
\ Set Suite Variable ${${variable}} ${value}
FOR ${action} IN @{actions}
\ Run Keyword Additional Suite setup Action For ${action}
3.11.5.2. Setup suite topology interfaces¶
Common suite setup for one to multiple link tests.
Compute the path for testing on the given topology nodes in circular topology, based on the interface model provided as an argument, and set the corresponding suite variables.

Arguments:
- ${actions} - Additional setup action. Type: list
Start Suite Setup Export
${nic_model_list}= Create list ${nic_name}
&{info}= Compute Circular Topology ${nodes} filter_list=${nic_model_list} nic_pfs=${nic_pfs} always_same_link=${False} topo_has_tg=${True}
Set suite variable &{topology_info} &{info}
Create suite topology variables @{actions}
Finalize Suite Setup Export
3.11.5.3. Setup suite topology interfaces with no TG¶
Common suite setup for single link tests with no traffic generator node.
Compute the path for testing on the given topology nodes in circular topology, based on the interface model provided as an argument, and set the corresponding suite variables.

Arguments:
- ${actions} - Additional setup action. Type: list
Start Suite Setup Export
${nic_model_list}= Create list ${nic_name}
&{info}= Compute Circular Topology ${nodes} filter_list=${nic_model_list} nic_pfs=${nic_pfs} always_same_link=${True} topo_has_tg=${False}
Set suite variable &{topology_info} &{info}
Create suite topology variables @{actions}
Finalize Suite Setup Export
3.11.5.4. Setup suite topology interfaces with no DUT¶
Common suite setup for single link tests with no device under test node.
Compute the path for testing on the given topology nodes in circular topology, based on the interface model provided as an argument, and set the corresponding suite variables.

Arguments:
- ${actions} - Additional setup action. Type: list
Start Suite Setup Export
${nic_model_list}= Create list ${nic_name}
&{info}= Compute Circular Topology ${nodes} filter_list=${nic_model_list} nic_pfs=${nic_pfs} always_same_link=${True} topo_has_tg=${True} topo_has_dut=${False}
Set suite variable &{topology_info} &{info}
Create suite topology variables @{actions}
Finalize Suite Setup Export
3.11.5.5. Additional Suite Setup Action For scapy¶
Additional Setup for suites which use scapy as the traffic generator.
Export TG Type And Version scapy 2.4.3
FOR ${dut} IN @{duts}
\ Set Suite Variable ${${dut}_vf1} ${${dut}_${int}1}
\ Set Suite Variable ${${dut}_vf2} ${${dut}_${int}2}
Set Interface State ${tg} ${TG_pf1}[0] up
Set Interface State ${tg} ${TG_pf2}[0] up
3.11.5.6. Additional Suite Setup Action For dpdk¶
Additional Setup for suites which use dpdk.
${version} = Get Dpdk Version ${nodes}[DUT1]
Export Dut Type And Version dpdk ${version}
FOR ${dut} IN @{duts}
\ Initialize DPDK Framework ${nodes['${dut}']} ${${dut}_${int}1}[0] ${${dut}_${int}2}[0] ${nic_driver}
3.11.5.7. Additional Suite Setup Action For performance vf¶
Additional Setup for suites which use performance measurement, for a single DUT (inner loop).

Arguments:
- dut - DUT node. Type: string

Example:

| Additional Suite Setup Action For performance vf | DUT1 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ ${_vf}= Run Keyword Init interface ${nodes['${dut}']} ${${dut}_pf${pf}}[0] driver=${nic_driver} numvfs=${nic_vfs} osi_layer=${osi_layer}
\ ${_mac}= Create List ${EMPTY}
\ ${_ip4_addr}= Create List ${EMPTY}
\ ${_ip4_prefix}= Create List ${EMPTY}
\ ${_pci}= Create List ${EMPTY}
\ ${_vlan}= Create List ${EMPTY}
\ Set Suite Variable ${${dut}_prevf${pf}} ${_vf}
\ Set Suite Variable ${${dut}_prevf${pf}_ip4_addr} ${_ip4_addr}
\ Set Suite Variable ${${dut}_prevf${pf}_ip4_prefix} ${_ip4_prefix}
\ Set Suite Variable ${${dut}_prevf${pf}_mac} ${_mac}
\ Set Suite Variable ${${dut}_prevf${pf}_pci} ${_pci}
\ Set Suite Variable ${${dut}_prevf${pf}_vlan} ${_vlan}
Set Suite Variable ${int} prevf
3.11.5.8. Additional Suite Setup Action For performance pf¶
Additional Setup for suites which use performance measurement, for a single DUT (inner loop).

Arguments:
- dut - DUT node. Type: string

Example:

| Additional Suite Setup Action For performance pf | DUT1 |
FOR ${pf} IN RANGE 1 ${nic_pfs} + 1
\ Run Keyword Init interface ${nodes['${dut}']} ${${dut}_pf${pf}}[0] driver=${nic_driver} numvfs=${0} osi_layer=${osi_layer}
3.11.5.9. Additional Suite Setup Action For performance¶
Additional Setup for suites which use performance measurement.
FOR ${dut} IN @{duts}
\ Run Keyword If ${nic_vfs} > 0 Additional Suite Setup Action For performance vf ${dut} ELSE Additional Suite Setup Action For performance pf ${dut}
${type} = Get TG Type ${nodes}[TG]
${version} = Get TG Version ${nodes}[TG]
Export TG Type And Version ${type} ${version}
Initialize traffic generator ${tg} ${TG_pf1}[0] ${TG_pf2}[0] ${dut1} ${DUT1_${int}1}[0] ${dut${duts_count}} ${DUT${duts_count}_${int}2}[0] ${osi_layer}
3.11.5.10. Additional Suite Setup Action For performance_tg_nic¶
Additional Setup for suites which use performance measurement for L1 cross connect tests.
${type} = Get TG Type ${nodes}[TG]
${version} = Get TG Version ${nodes}[TG]
Export Dut Type And Version ${type} ${version}
Export TG Type And Version ${type} ${version}
Initialize traffic generator ${tg} ${TG_pf1}[0] ${TG_pf2}[0] ${tg} ${TG_pf2}[0] ${tg} ${TG_pf1}[0] ${osi_layer}
3.11.5.11. Additional Suite Setup Action For iPerf3¶
Additional Setup for suites which use performance measurement over iPerf3.
${type} = Get iPerf Type ${nodes}[TG]
${version} = Get iPerf Version ${nodes}[TG]
Export TG Type And Version ${type} ${version}
3.11.5.12. Additional Suite Setup Action For ipsechw¶
Additional Setup for suites which use QAT HW.
${numvfs}= Set Variable If '${crypto_type}' == 'HW_DH895xcc' ${32} '${crypto_type}' == 'HW_C3xxx' ${16}
Configure crypto device on all DUTs ${crypto_type} numvfs=${numvfs} force_init=${True}
Configure kernel module on all DUTs vfio_pci force_load=${True}
3.11.5.13. Additional Suite Setup Action For nginx¶
Additional Setup for suites which use Nginx.
Install NGINX framework on all DUTs ${nodes} ${packages_dir} ${nginx_version}
3.11.5.14. Additional Suite Setup Action For vppecho¶
Additional Setup for suites which use performance measurement over VPP Echo.
Export DUT Type And Version ${DUT_TYPE} ${DUT_VERSION}
Export TG Type And Version ${DUT_TYPE} ${DUT_VERSION}
3.11.5.15. Additional Suite Setup Action For ab¶
Additional Setup for suites which use the ab TG.
Iface update numa node ${tg}
${running}= Is TRex running ${tg}
Run keyword if ${running}==${True} Teardown traffic generator ${tg}
${curr_driver}= Get PCI dev driver ${tg} ${tg['interfaces']['${tg_if1}']['pci_address']}
Run keyword if '${curr_driver}'!='${None}' PCI Driver Unbind ${tg} ${tg['interfaces']['${tg_if1}']['pci_address']}
${driver}= Get Variable Value ${tg['interfaces']['${tg_if1}']['driver']}
PCI Driver Bind ${tg} ${tg['interfaces']['${tg_if1}']['pci_address']} ${driver}
${intf_name}= Get Linux interface name ${tg} ${tg['interfaces']['${tg_if1}']['pci_address']}
FOR ${ip_addr} IN @{ab_ip_addrs}
\ ${ip_addr_on_intf}= Linux interface has IP ${tg} ${intf_name} ${ip_addr} ${ab_ip_prefix}
\ Run Keyword If ${ip_addr_on_intf}==${False} Set Linux interface IP ${tg} ${intf_name} ${ip_addr} ${ab_ip_prefix}
Set Linux interface up ${nodes}[TG] ${intf_name}
Check AB ${tg}
${type} = Get AB Type ${nodes}[TG]
${version} = Get AB Version ${nodes}[TG]
Export TG Type And Version ${type} ${version}
3.11.6. suite_teardown suite¶
3.11.6.1. Tear down suite¶
Common suite teardown for tests.

Arguments:
- ${actions} - Additional teardown action. Type: list
Start Suite Teardown Export
FOR ${action} IN @{actions}
\ Run Keyword Additional Suite Tear Down Action For ${action}
Remove All Added VIF Ports On All DUTs From Topology ${nodes}
Finalize Suite Teardown Export
3.11.6.2. Additional Suite Tear Down Action For ab¶
Additional teardown for suites which use ab.
${intf_name}= Get Linux interface name ${tg} ${tg['interfaces']['${tg_if1}']['pci_address']}
FOR ${ip_addr} IN @{ab_ip_addrs}
\ ${ip_addr_on_intf}= Linux Interface Has IP ${tg} ${intf_name} ${ip_addr} ${ab_ip_prefix}
\ Run Keyword If ${ip_addr_on_intf}==${True} Delete Linux Interface IP ${tg} ${intf_name} ${ip_addr} ${ab_ip_prefix}
3.11.6.3. Additional Suite Tear Down Action For performance¶
Additional teardown for suites which use performance measurement.
Run Keyword And Ignore Error Teardown traffic generator ${tg}
3.11.6.4. Additional Suite Tear Down Action For dpdk¶
Additional teardown for suites which use dpdk.
FOR ${dut} IN @{duts}
\ Cleanup DPDK Framework ${nodes['${dut}']} ${${dut}_${int}1}[0] ${${dut}_${int}2}[0]
3.11.6.5. Additional Suite Tear Down Action For hoststack¶
Additional teardown for suites which use hoststack test programs. Ensure all hoststack test programs are no longer running on all DUTs.
FOR ${dut} IN @{duts}
\ Kill Program ${nodes['${dut}']} iperf3
\ Kill Program ${nodes['${dut}']} vpp_echo
3.11.7. test_setup suite¶
3.11.7.1. Setup test¶
Common test setup for VPP tests.

Arguments:
- ${actions} - Additional setup action. Type: list
Start Test Export
Reset PAPI History On All DUTs ${nodes}
${int} = Set Variable If ${nic_vfs} > 0 prevf pf
Create base startup configuration of VPP on all DUTs
FOR ${action} IN @{actions}
\ Run Keyword Additional Test Setup Action For ${action}
3.11.7.2. Additional Test Setup Action For namespace¶
Additional Setup for tests which use namespaces.
FOR ${dut} IN @{duts}
\ Clean Up Namespaces ${nodes['${dut}']}
3.11.7.3. Additional Test Setup Action For performance¶
Additional Setup for tests which use performance measurement.
${trex_running}= Is Trex Running ${tg}
Run Keyword Unless ${trex_running} Startup Trex ${tg} ${osi_layer}
3.11.8. test_teardown suite¶
3.11.8.1. Tear down test¶
Common test teardown for VPP tests.

Arguments:
- ${actions} - Additional teardown action. Type: list
Remove All Added Ports On All DUTs From Topology ${nodes}
Show PAPI History On All DUTs ${nodes}
Run Keyword If Test Failed Show Log On All DUTs ${nodes}
Run Keyword If Test Failed Get Core Files on All Nodes ${nodes}
Run Keyword If Test Failed Verify VPP PID in Teardown
Run Keyword If Test Failed VPP Show Memory On All DUTs ${nodes}
FOR ${action} IN @{actions}
\ Run Keyword Additional Test Tear Down Action For ${action}
Clean Sockets On All Nodes ${nodes}
Finalize Test Export
3.11.8.2. Tear down test raw¶
Common test teardown for raw tests.

Arguments:
- ${actions} - Additional teardown action. Type: list
Remove All Added Ports On All DUTs From Topology ${nodes}
FOR ${action} IN @{actions}
\ Run Keyword Additional Test Tear Down Action For ${action}
Clean Sockets On All Nodes ${nodes}
Finalize Test Export
3.11.8.3. Additional Test Tear Down Action For acl¶
Additional teardown for tests which use the ACL feature.
Run Keyword If Test Failed Vpp Log Plugin Acl Settings ${dut1}
Run Keyword If Test Failed Vpp Log Plugin Acl Interface Assignment ${dut1}
3.11.8.4. Additional Test Tear Down Action For classify¶
Additional teardown for tests which use classify tables.
Run Keyword If Test Failed Show Classify Tables Verbose on all DUTs ${nodes}
3.11.8.5. Additional Test Tear Down Action For container¶
Additional teardown for tests which use containers.
FOR ${container_group} IN @{container_groups}
\ Destroy all '${container_group}' containers
3.11.8.6. Additional Test Tear Down Action For nginx¶
Additional teardown for tests which use nginx.
FOR ${dut} IN @{duts}
\ Kill Program ${nodes['${dut}']} nginx
3.11.8.7. Additional Test Tear Down Action For det44¶
Additional teardown for tests which use the DET44 feature.
FOR ${dut} IN @{duts}
\ Run Keyword If Test Failed Show DET44 verbose ${nodes['${dut}']}
3.11.8.8. Additional Test Tear Down Action For geneve4¶
Additional teardown for tests which use a GENEVE IPv4 tunnel.
FOR ${dut} IN @{duts}
\ Run Keyword If Test Failed Show Geneve Tunnel Data ${nodes['${dut}']}
3.11.8.9. Additional Test Tear Down Action For iPerf3¶
Additional teardown for tests which use an iPerf3 server.
Run Keyword And Ignore Error Teardown iPerf ${nodes['${iperf_server_node}']}
3.11.8.10. Additional Test Tear Down Action For ipsec_sa¶
Additional teardown for tests which use IPsec security associations.
FOR ${dut} IN @{duts}
\ Run Keyword If Test Failed Show Ipsec Security Association ${nodes['${dut}']}
3.11.8.11. Additional Test Tear Down Action For ipsec_all¶
Additional teardown for tests which use varied IPsec configuration databases.
FOR ${dut} IN @{duts}
\ Run Keyword If Test Failed Vpp Ipsec Show All ${nodes['${dut}']}
3.11.8.12. Additional Test Tear Down Action For linux_bridge¶
Additional teardown for tests which use linux_bridge.
FOR ${dut} IN @{duts}
\ Linux Del Bridge ${nodes['${dut}']} ${bid_TAP}
3.11.8.13. Additional Test Tear Down Action For macipacl¶
Additional teardown for tests which use the MACIP ACL feature.
Run Keyword If Test Failed Vpp Log Macip Acl Settings ${dut1}
Run Keyword If Test Failed Vpp Log Macip Acl Interface Assignment ${dut1}
3.11.8.14. Additional Test Tear Down Action For namespace¶
Additional teardown for tests which use namespaces.
FOR ${dut} IN @{duts}
\ Clean Up Namespaces ${nodes['${dut}']}
3.11.8.15. Additional Test Tear Down Action For nat-ed¶
Additional teardown for tests which use the NAT feature.
FOR ${dut} IN @{duts}
\ Show NAT44 Config ${nodes['${dut}']}
\ Show NAT44 Summary ${nodes['${dut}']}
\ Show NAT Base Data ${nodes['${dut}']}
\ Vpp Get Ip Table Summary ${nodes['${dut}']}
3.11.8.16. Additional Test Tear Down Action For packet_trace¶
Additional teardown for tests which use packet trace.
Show Packet Trace on All DUTs ${nodes}
3.11.8.17. Additional Test Tear Down Action For telemetry¶
Additional teardown for tests which use telemetry reads.
Run Telemetry On All DUTs ${nodes} profile=${telemetry_profile}.yaml
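The profile name is read from the ${telemetry_profile} test variable, so it must be set before this action runs. A minimal sketch (the profile name below is hypothetical):
| Set Test Variable | ${telemetry_profile} | vpp_test_teardown |
| Tear down test | telemetry |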
3.11.8.18. Additional Test Tear Down Action For performance¶
Additional teardown for tests which use performance measurement. Runs only if the test failed: optionally calls ${resetter} (if defined) to reset DUT state, then sends one short trial at the teardown rate with extended debug enabled.
Run Keyword If Test Passed Return From Keyword
${use_latency} = Get Use Latency
${rate_for_teardown} = Get Rate For Teardown
Call Resetter
Set Test Variable \${extended_debug} ${True}
Send traffic at specified rate trial_duration=${1.0} rate=${rate_for_teardown} trial_multiplicity=${1} use_latency=${use_latency} duration_limit=${1.0}
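The first line is a guard: on a passing test the keyword returns immediately, so the resetter call and the extra short trial run only after a failure. The same guard pattern can be reused in a custom action; a minimal sketch with a hypothetical keyword name:
| Additional Test Tear Down Action For example
| | Run Keyword If Test Passed | Return From Keyword
| | Log | Collecting extra diagnostics only after a failed test.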
3.11.8.19. Additional Test Tear Down Action For srv6¶
Additional teardown for tests which use SRv6.
Run Keyword If Test Failed Show SR Policies on all DUTs ${nodes}
Run Keyword If Test Failed Show SR Steering Policies on all DUTs ${nodes}
Run Keyword If Test Failed Show SR LocalSIDs on all DUTs ${nodes}
3.11.8.20. Additional Test Tear Down Action For vhost¶
Additional teardown for tests which use vhost(s) and VM(s).
Show VPP vhost on all DUTs ${nodes}
${vnf_status} ${value}= Run Keyword And Ignore Error Keyword Should Exist vnf_manager.Kill All VMs
Run Keyword If '${vnf_status}' == 'PASS' vnf_manager.Kill All VMs
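The probe-then-call pattern keeps this teardown safe when the vnf_manager library was never imported: Keyword Should Exist runs under Run Keyword And Ignore Error, and Kill All VMs is attempted only when the probe passed. The same pattern, as a standalone sketch:
| ${status} | ${value}= | Run Keyword And Ignore Error | Keyword Should Exist | vnf_manager.Kill All VMs
| Run Keyword If | '${status}' == 'PASS' | vnf_manager.Kill All VMs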
3.11.8.21. Additional Test Tear Down Action For vhost-pt¶
Additional teardown for tests which use PCI passthrough and VM(s).
${vnf_status} ${value}= Run Keyword And Ignore Error Keyword Should Exist vnf_manager.Kill All VMs
Run Keyword If '${vnf_status}' == 'PASS' vnf_manager.Kill All VMs
3.11.9. traffic suite¶
3.11.9.1. Send packet and verify headers¶
Sends packet from IP (with source MAC) to IP (with destination MAC). There have to be 4 MAC addresses when using 2-node + xconnect (one for each eth). Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node to execute scripts on (TG). Type: dictionary - src_ip - IP of source interface (TG-if1). Type: string - dst_ip - IP of destination interface (TG-if2). Type: string - tx_src_port - Interface of TG-if1. Type: string - tx_src_mac - MAC address of TG-if1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - rx_dst_port - Interface of TG-if2. Type: string - rx_src_mac - MAC address of DUT1-if2. Type: string - rx_dst_mac - MAC address of TG-if2. Type: string - encaps_tx - Expected encapsulation on TX side: Dot1q or Dot1ad (Optional). Type: string - vlan_tx - VLAN (inner) tag on TX side (Optional). Type: integer - vlan_outer_tx - .1AD VLAN (outer) tag on TX side (Optional). Type: integer - encaps_rx - Expected encapsulation on RX side: Dot1q or Dot1ad (Optional). Type: string - vlan_rx - VLAN (inner) tag on RX side (Optional). Type: integer - vlan_outer_rx - .1AD VLAN (outer) tag on RX side (Optional). Type: integer - traffic_script - Scapy traffic script used for validation. Type: string Return: - No value returned Example: | Send packet and verify headers | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 |
${tx_port_name}= Get interface name ${tg_node} ${tx_src_port}
${rx_port_name}= Get interface name ${tg_node} ${rx_dst_port}
${args}= Catenate --tg_src_mac ${tx_src_mac} --tg_dst_mac ${rx_dst_mac} --dut_if1_mac ${tx_dst_mac} --dut_if2_mac ${rx_src_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --tx_if ${tx_port_name} --rx_if ${rx_port_name}
${args}= Run Keyword If '${encaps_tx}' == '${EMPTY}' Set Variable ${args} ELSE Catenate ${args} --encaps_tx ${encaps_tx} --vlan_tx ${vlan_tx}
${args}= Run Keyword If '${encaps_rx}' == '${EMPTY}' Set Variable ${args} ELSE Catenate ${args} --encaps_rx ${encaps_rx} --vlan_rx ${vlan_rx}
${args}= Run Keyword If '${vlan_outer_tx}' == '${EMPTY}' Set Variable ${args} ELSE Catenate ${args} --vlan_outer_tx ${vlan_outer_tx}
${args}= Run Keyword If '${vlan_outer_rx}' == '${EMPTY}' Set Variable ${args} ELSE Catenate ${args} --vlan_outer_rx ${vlan_outer_rx}
Run Traffic Script On Node ${traffic_script}.py ${tg_node} ${args}
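The optional encapsulation arguments are appended to the traffic-script arguments only when set, so untagged and tagged cases share one keyword. A Dot1q sketch reusing the values from the example above (named-argument syntax assumed for the optional parameters):
| Send packet and verify headers | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 | encaps_tx=Dot1q | vlan_tx=100 |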
3.11.9.2. Packet transmission from port to port should fail¶
Sends packet from IP (with specified MAC) to IP (with destination MAC) using keyword Send packet And Check Headers and subsequently checks the return value. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node to execute scripts on (TG). Type: dictionary - src_ip - IP of source interface (TG-if1). Type: string - dst_ip - IP of destination interface (TG-if2). Type: string - tx_src_port - Interface of TG-if1. Type: string - tx_src_mac - MAC address of TG-if1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - rx_port - Interface of TG-if2. Type: string - rx_src_mac - MAC address of DUT1-if2. Type: string - rx_dst_mac - MAC address of TG-if2. Type: string Return: - No value returned Example: | Packet transmission from port to port should fail | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:ee:fd:b3 | 08:00:27:7d:fd:10 |
${tx_port_name}= Get interface name ${tg_node} ${tx_src_port}
${rx_port_name}= Get interface name ${tg_node} ${rx_port}
${args}= Catenate --tg_src_mac ${tx_src_mac} --tg_dst_mac ${rx_dst_mac} --dut_if1_mac ${tx_dst_mac} --dut_if2_mac ${rx_src_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --tx_if ${tx_port_name} --rx_if ${rx_port_name}
Run Keyword And Expect Error IP packet Rx timeout Run Traffic Script On Node send_ip_check_headers.py ${tg_node} ${args}
3.11.9.3. Send packet and verify marking¶
Send packet and verify DSCP of the received packet. Arguments: - node - TG node. Type: dictionary - tx_if - TG transmit interface. Type: string - rx_if - TG receive interface. Type: string - src_mac - Packet source MAC. Type: string - dst_mac - Packet destination MAC. Type: string - src_ip - Packet source IP address. Type: string - dst_ip - Packet destination IP address. Type: string - dscp - DSCP value to verify, read from the ${dscp} test variable. Type: enum Example: | Send packet and verify marking | ${nodes['TG']} | eth1 | eth2 | 08:00:27:87:4d:f7 | 52:54:00:d4:d8:22 | 192.168.122.2 | 192.168.122.1 |
${tx_if_name}= Get Interface Name ${node} ${tx_if}
${rx_if_name}= Get Interface Name ${node} ${rx_if}
${args}= Traffic Script Gen Arg ${rx_if_name} ${tx_if_name} ${src_mac} ${dst_mac} ${src_ip} ${dst_ip}
${dscp_num}= Get DSCP Num Value ${dscp}
${args}= Set Variable ${args} --dscp ${dscp_num}
Run Traffic Script On Node policer.py ${node} ${args}
3.11.9.4. Send VXLAN encapsulated packet and verify received packet¶
Send VXLAN encapsulated Ethernet frame and verify the received one. Arguments: - tg_node - Node where to run traffic script. Type: dictionary - tx_if - Interface from which the VXLAN packet is sent. Type: string - rx_if - Interface on which the VXLAN packet is received. Type: string - tx_src_mac - Source MAC address of sent packet. Type: string - tx_dst_mac - Destination MAC address of sent packet. Type: string - tx_src_ip - Source IP address of sent VXLAN packet. Type: string - tx_dst_ip - Destination IP address of sent VXLAN packet. Type: string - tx_vni - VNI of sent VXLAN packet. Type: string - rx_src_ip - Source IP address of received VXLAN packet. Type: string - rx_dst_ip - Destination IP address of received VXLAN packet. Type: string - rx_vni - VNI of received VXLAN packet. Type: string Return: - No value returned Example: | Send VXLAN encapsulated packet and verify received packet | ${tg_node} | port4 | port4 | fa:16:3e:6d:f9:c5 | fa:16:3e:e6:6d:9a | 192.168.0.1 | 192.168.0.2 | ${101} | 192.168.0.2 | 192.168.0.1 | ${102} |
${tx_if_name}= Get interface name ${tg_node} ${tx_if}
${rx_if_name}= Get interface name ${tg_node} ${rx_if}
${args}= Catenate --tx_if ${tx_if_name} --rx_if ${rx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --tx_src_ip ${tx_src_ip} --tx_dst_ip ${tx_dst_ip} --tx_vni ${tx_vni} --rx_src_ip ${rx_src_ip} --rx_dst_ip ${rx_dst_ip} --rx_vni ${rx_vni}
Run Traffic Script On Node send_vxlan_check_vxlan.py ${tg_node} ${args}
3.11.9.5. Send ICMP echo request and verify answer¶
Run traffic script that waits for an ICMP reply and ignores all other packets. Arguments: - tg_node - TG node where the traffic script runs. Type: dictionary - tg_interface - TG interface from which the ICMP echo request is sent. Type: string - dst_mac - Destination MAC address. Type: string - src_mac - Source MAC address. Type: string - dst_ip - Destination IP address. Type: string - src_ip - Source IP address. Type: string - timeout - Wait timeout in seconds (Default: 10). Type: integer Example: | Send ICMP echo request and verify answer | ${nodes['TG']} | eth2 | 08:00:27:46:2b:4c | 08:00:27:66:b8:57 | 192.168.23.10 | 192.168.23.1 | 10 |
${tg_interface_name}= Get interface name ${tg_node} ${tg_interface}
${args}= Catenate --rx_if ${tg_interface_name} --tx_if ${tg_interface_name} --dst_mac ${dst_mac} --src_mac ${src_mac} --dst_ip ${dst_ip} --src_ip ${src_ip} --timeout ${timeout}
Run Traffic Script On Node send_icmp_wait_for_reply.py ${tg_node} ${args}
3.11.9.6. Send IPsec Packet and verify ESP encapsulation in received packet¶
Send IPsec packet from TG to DUT. Receive IPsec packet from DUT on TG and verify ESP encapsulation. Arguments: - node - TG node. Type: dictionary - tx_interface - TG Interface 1. Type: string - rx_interface - TG Interface 2. Type: string - tx_dst_mac - Destination MAC for TX interface / DUT interface 1 MAC. Type: string - rx_src_mac - Source MAC for RX interface / DUT interface 2 MAC. Type: string - crypto_alg - Encryption algorithm. Type: enum - crypto_key - Encryption key. Type: string - integ_alg - Integrity algorithm. Type: enum - integ_key - Integrity key. Type: string - l_spi - Local SPI. Type: integer - r_spi - Remote SPI. Type: integer - l_ip - Local IP address. Type: string - r_ip - Remote IP address. Type: string - l_tunnel - Local tunnel IP address (optional). Type: string - r_tunnel - Remote tunnel IP address (optional). Type: string Example: | ${encr_alg}= | Crypto Alg AES CBC 128 | | ${auth_alg}= | Integ Alg SHA1 96 | | Send IPsec Packet and verify ESP encapsulation in received packet | ${nodes['TG']} | eth1 | eth2 | 52:54:00:d4:d8:22 | 52:54:00:d4:d8:3e | ${encr_alg} | sixteenbytes_key | ${auth_alg} | twentybytessecretkey | ${1001} | ${1000} | 192.168.3.3 | 192.168.4.4 | 192.168.100.2 | 192.168.100.3 |
${tx_src_mac}= Get Interface Mac ${node} ${tx_interface}
${tx_if_name}= Get Interface Name ${node} ${tx_interface}
${rx_dst_mac}= Get Interface Mac ${node} ${rx_interface}
${rx_if_name}= Get Interface Name ${node} ${rx_interface}
${args}= Catenate --rx_if ${rx_if_name} --tx_if ${tx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --rx_src_mac ${rx_src_mac} --rx_dst_mac ${rx_dst_mac} --src_ip ${l_ip} --dst_ip ${r_ip}
${crypto_alg_str}= Get Crypto Alg Scapy Name ${crypto_alg}
${integ_alg_str}= Get Integ Alg Scapy Name ${integ_alg}
${args}= Catenate ${args} --crypto_alg ${crypto_alg_str} --crypto_key ${crypto_key} --integ_alg ${integ_alg_str} --integ_key ${integ_key} --l_spi ${l_spi} --r_spi ${r_spi}
${args}= Set Variable If "${l_tunnel}" == "${None}" ${args} ${args} --src_tun ${l_tunnel}
${args}= Set Variable If "${r_tunnel}" == "${None}" ${args} ${args} --dst_tun ${r_tunnel}
Run Traffic Script On Node ipsec_policy.py ${node} ${args}
3.11.9.7. Send packet and verify LISP encap¶
Send ICMP packet to DUT out one interface and receive a LISP encapsulated packet on the other interface. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node to execute scripts on (TG). Type: dictionary - src_ip - IP of source interface (TG-if1). Type: string - dst_ip - IP of destination interface (TG-if2). Type: string - tx_src_port - Interface of TG-if1. Type: string - tx_src_mac - MAC address of TG-if1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - rx_port - Interface of TG-if2. Type: string - rx_src_mac - MAC address of DUT1-if2. Type: string - rx_dst_mac - MAC address of TG-if2. Type: string - src_rloc - Configured RLOC source address. Type: string - dst_rloc - Configured RLOC destination address. Type: string Return: - No value returned Example: | Send packet and verify LISP encap | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 | 10.0.1.1 | 10.0.1.2 |
${tx_port_name}= Get interface name ${tg_node} ${tx_src_port}
${rx_port_name}= Get interface name ${tg_node} ${rx_port}
${args}= Catenate --tg_src_mac ${tx_src_mac} --tg_dst_mac ${rx_dst_mac} --dut_if1_mac ${tx_dst_mac} --dut_if2_mac ${rx_src_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --tx_if ${tx_port_name} --rx_if ${rx_port_name} --src_rloc ${src_rloc} --dst_rloc ${dst_rloc}
Run Traffic Script On Node lisp/lisp_check.py ${tg_node} ${args}
3.11.9.8. Send IP Packet and verify ESP encapsulation in received packet¶
Send IP packet from TG to DUT. Receive IPsec packet from DUT on TG and verify ESP encapsulation. Send IPsec packet in opposite direction and verify received IP packet. Arguments: - node - TG node. Type: dictionary - tx_interface - TG Interface 1. Type: string - rx_interface - TG Interface 2. Type: string - tx_dst_mac - Destination MAC for TX interface / DUT interface 1 MAC. Type: string - rx_src_mac - Source MAC for RX interface / DUT interface 2 MAC. Type: string - crypto_alg - Encryption algorithm. Type: enum - crypto_key - Encryption key. Type: string - integ_alg - Integrity algorithm. Type: enum - integ_key - Integrity key. Type: string - l_spi - Local SPI. Type: integer - r_spi - Remote SPI. Type: integer - src_ip - Source IP address. Type: string - dst_ip - Destination IP address. Type: string - src_tun - Source tunnel IP address. Type: string - dst_tun - Destination tunnel IP address. Type: string Example: | ${encr_alg}= | Crypto Alg AES CBC 128 | | ${auth_alg}= | Integ Alg SHA1 96 | | Send IP Packet and verify ESP encapsulation in received packet | ${nodes['TG']} | eth1 | eth2 | 52:54:00:d4:d8:22 | 52:54:00:d4:d8:3e | ${encr_alg} | sixteenbytes_key | ${auth_alg} | twentybytessecretkey | ${1001} | ${1000} | 192.168.3.3 | 192.168.4.4 | 192.168.100.2 | 192.168.100.3 |
${tx_src_mac}= Get Interface Mac ${node} ${tx_interface}
${tx_if_name}= Get Interface Name ${node} ${tx_interface}
${rx_dst_mac}= Get Interface Mac ${node} ${rx_interface}
${rx_if_name}= Get Interface Name ${node} ${rx_interface}
${crypto_alg_str}= Get Crypto Alg Scapy Name ${crypto_alg}
${integ_alg_str}= Get Integ Alg Scapy Name ${integ_alg}
${args}= Catenate --rx_if ${rx_if_name} --tx_if ${tx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --rx_src_mac ${rx_src_mac} --rx_dst_mac ${rx_dst_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --crypto_alg ${crypto_alg_str} --crypto_key ${crypto_key} --integ_alg ${integ_alg_str} --integ_key ${integ_key} --l_spi ${l_spi} --r_spi ${r_spi} --src_tun ${src_tun} --dst_tun ${dst_tun}
Run Traffic Script On Node ipsec_interface.py ${node} ${args}
3.11.9.9. Send packet and verify LISP GPE encap¶
Send ICMP packet to DUT out one interface and receive a LISP-GPE encapsulated packet on the other interface. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node to execute scripts on (TG). Type: dictionary - src_ip - IP of source interface (TG-if1). Type: string - dst_ip - IP of destination interface (TG-if2). Type: string - tx_src_port - Interface of TG-if1. Type: string - tx_src_mac - MAC address of TG-if1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - rx_port - Interface of TG-if2. Type: string - rx_src_mac - MAC address of DUT1-if2. Type: string - rx_dst_mac - MAC address of TG-if2. Type: string - src_rloc - Configured RLOC source address. Type: string - dst_rloc - Configured RLOC destination address. Type: string Return: - No value returned Example: | Send packet and verify LISP GPE encap | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 | 10.0.1.1 | 10.0.1.2 |
${tx_port_name}= Get interface name ${tg_node} ${tx_src_port}
${rx_port_name}= Get interface name ${tg_node} ${rx_port}
${args}= Catenate --tg_src_mac ${tx_src_mac} --tg_dst_mac ${rx_dst_mac} --dut_if1_mac ${tx_dst_mac} --dut_if2_mac ${rx_src_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --tx_if ${tx_port_name} --rx_if ${rx_port_name} --src_rloc ${src_rloc} --dst_rloc ${dst_rloc}
Run Traffic Script On Node lisp/lispgpe_check.py ${tg_node} ${args}
3.11.9.10. Send packet and verify LISPoTunnel encap¶
Send ICMP packet to DUT out one interface and receive a LISP encapsulated packet on the other interface. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node to execute scripts on (TG). Type: dictionary - src_ip - IP of source interface (TG-if1). Type: string - dst_ip - IP of destination interface (TG-if2). Type: string - tx_src_port - Interface of TG-if1. Type: string - tx_src_mac - MAC address of TG-if1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - rx_port - Interface of TG-if2. Type: string - rx_src_mac - MAC address of DUT1-if2. Type: string - rx_dst_mac - MAC address of TG-if2. Type: string - src_rloc - Configured RLOC source address. Type: string - dst_rloc - Configured RLOC destination address. Type: string - ot_mode - Overlay tunnel mode. Type: string Return: - No value returned Example: | Send packet and verify LISPoTunnel encap | ${nodes['TG']} | 10.0.0.1 | 32.0.0.1 | eth2 | 08:00:27:ee:fd:b3 | 08:00:27:a2:52:5b | eth3 | 08:00:27:4d:ca:7a | 08:00:27:7d:fd:10 | 10.0.1.1 | 10.0.1.2 |
${tx_port_name}= Get interface name ${tg_node} ${tx_src_port}
${rx_port_name}= Get interface name ${tg_node} ${rx_port}
${args}= Catenate --tg_src_mac ${tx_src_mac} --tg_dst_mac ${rx_dst_mac} --dut_if1_mac ${tx_dst_mac} --dut_if2_mac ${rx_src_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --tx_if ${tx_port_name} --rx_if ${rx_port_name} --src_rloc ${src_rloc} --dst_rloc ${dst_rloc} --ot_mode ${ot_mode}
Run Traffic Script On Node lisp/lispgpe_check.py ${tg_node} ${args}
3.11.9.11. Send IPv6 Packet and verify SRv6 encapsulation in received packet¶
Send IP packet from TG to DUT. Receive IPv6 packet with SRv6 extension header from DUT on TG and verify SRv6 encapsulation. Send IPv6 packet with SRv6 extension header in opposite direction and verify received IP packet. Arguments: - node - TG node. Type: dictionary - tx_interface - TG Interface 1. Type: string - rx_interface - TG Interface 2. Type: string - tx_dst_mac - Destination MAC for TX interface / DUT interface 1 MAC. Type: string - rx_src_mac - Source MAC for RX interface / DUT interface 2 MAC. Type: string - src_ip - Source IP address. Type: string - dst_ip - Destination IP address. Type: string - dut_srcsid - Source SID on DUT (dir0). Type: string - dut_dstsid1 - The first destination SID on DUT (dir1). Type: string - tg_srcsid - Source SID on TG (dir1). Type: string - tg_dstsid1 - The first destination SID on TG (dir0). Type: string - dut_dstsid2 - The second destination SID on DUT (dir1). Type: string - tg_dstsid2 - The second destination SID on TG (dir0). Type: string - decap - True if decapsulation expected, false if encapsulated packet expected on receiving interface (Optional). Type: boolean - tg_dstsid3 - The third destination SID on TG (dir0) (Optional). Type: string - dut_dstsid3 - The third destination SID on DUT (dir1) (Optional). Type: string - static_proxy - Switch for SRv6 with endpoint to SR-unaware Service Function via static proxy (Optional). Type: boolean Example: | Send IPv6 Packet and verify SRv6 encapsulation in received packet | ${nodes['TG']} | eth1 | eth2 | 52:54:00:d4:d8:22 | 52:54:00:d4:d8:3e | 2002:1:: | 2003:2:: | 2003:1:: | 2002:2:: | decap=${False} | tg_dstsid3=2002:4:: | dut_dstsid3=2003:4:: | static_proxy=${True} |
${tx_src_mac}= Get Interface Mac ${node} ${tx_interface}
${tx_if_name}= Get Interface Name ${node} ${tx_interface}
${rx_dst_mac}= Get Interface Mac ${node} ${rx_interface}
${rx_if_name}= Get Interface Name ${node} ${rx_interface}
${args}= Catenate --rx_if ${rx_if_name} --tx_if ${tx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --rx_src_mac ${rx_src_mac} --rx_dst_mac ${rx_dst_mac} --src_ip ${src_ip} --dst_ip ${dst_ip} --dir0_srcsid ${dut_srcsid} --dir0_dstsid1 ${tg_dstsid1} --dir0_dstsid2 ${tg_dstsid2} --dir1_srcsid ${tg_srcsid} --dir1_dstsid1 ${dut_dstsid1} --dir1_dstsid2 ${dut_dstsid2} --decap ${decap} --dir0_dstsid3 ${tg_dstsid3} --dir1_dstsid3 ${dut_dstsid3} --static_proxy ${static_proxy}
Run Traffic Script On Node srv6_encap.py ${node} ${args}
3.11.9.12. Send TCP or UDP packet and verify network address translations¶
Send TCP or UDP packet from TG-if1 to TG-if2 and response in opposite direction via DUT with configured NAT. Check packet headers on both sides. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT(if2)->TG(if2) - tg_node - Node where to run traffic script. Type: dictionary - tx_interface - TG Interface 1. Type: string - rx_interface - TG Interface 2. Type: string - tx_dst_mac - Destination MAC for TX interface / DUT interface 1 MAC. Type: string - rx_src_mac - Source MAC for RX interface / DUT interface 2 MAC. Type: string - src_ip_in - Internal source IP address. Type: string - src_ip_out - External source IP address. Type: string - dst_ip - Destination IP address. Type: string - protocol - TCP or UDP protocol. Type: string - src_port_in - Internal source TCP/UDP port. Type: string or integer - src_port_out - External source TCP/UDP port; default value: unknown. Type: string or integer - dst_port - Destination TCP/UDP port. Type: string or integer Return: - No value returned Example: | Send TCP or UDP packet and verify network address translations | ${nodes['TG']} | port1 | port2 | 08:00:27:cc:4f:54 | 08:00:27:c9:6a:d5 | 192.168.0.0 | 68.142.68.0 | 20.0.0.0 | TCP | 1024 | 8080 |
${tx_src_mac}= Get Interface Mac ${tg_node} ${tx_interface}
${tx_if_name}= Get Interface Name ${tg_node} ${tx_interface}
${rx_dst_mac}= Get Interface Mac ${tg_node} ${rx_interface}
${rx_if_name}= Get Interface Name ${tg_node} ${rx_interface}
${args}= Catenate --rx_if ${rx_if_name} --tx_if ${tx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --rx_src_mac ${rx_src_mac} --rx_dst_mac ${rx_dst_mac} --src_ip_in ${src_ip_in} --src_ip_out ${src_ip_out} --dst_ip ${dst_ip} --protocol ${protocol} --src_port_in ${src_port_in} --src_port_out ${src_port_out} --dst_port ${dst_port}
Run Traffic Script On Node nat.py ${tg_node} ${args}
3.11.9.13. Send IP packet and verify GENEVE encapsulation in received packets¶
Send IP packet from TG to DUT. Receive GENEVE packet from DUT on TG and verify GENEVE encapsulation. Send GENEVE packet in opposite direction and verify received IP packet. Arguments: - node - TG node. Type: dictionary - tx_interface - TG Interface 1. Type: string - rx_interface - TG Interface 2. Type: string - tx_dst_mac - Destination MAC for TX interface / DUT interface 1 MAC. Type: string - rx_src_mac - Source MAC for RX interface / DUT interface 2 MAC. Type: string - tun_local_ip - GENEVE tunnel source IP address. Type: string - tun_remote_ip - GENEVE tunnel destination IP address. Type: string - tun_vni - GENEVE tunnel VNI. Type: integer - tun_src_ip - Source IP address of original IP packet / inner source IP address of GENEVE packet. Type: string - tun_dst_ip - Destination IP address of original IP packet / inner destination IP address of GENEVE packet. Type: string Example: | Send IP packet and verify GENEVE encapsulation in received packets | ${nodes['TG']} | eth1 | eth2 | 52:54:00:d4:d8:22 | 52:54:00:d4:d8:3e | 1.1.1.2 | 1.1.1.1 | 1 | 10.128.1.0 | 10.0.1.0 | 24 | 11.0.1.2 |
${tx_src_mac}= Get Interface Mac ${node} ${tx_interface}
${tx_if_name}= Get Interface Name ${node} ${tx_interface}
${rx_dst_mac}= Get Interface Mac ${node} ${rx_interface}
${rx_if_name}= Get Interface Name ${node} ${rx_interface}
${args}= Catenate --rx_if ${rx_if_name} --tx_if ${tx_if_name} --tx_src_mac ${tx_src_mac} --tx_dst_mac ${tx_dst_mac} --rx_src_mac ${rx_src_mac} --rx_dst_mac ${rx_dst_mac} --tun_local_ip ${tun_local_ip} --tun_remote_ip ${tun_remote_ip} --tun_vni ${tun_vni} --tun_src_ip ${tun_src_ip} --tun_dst_ip ${tun_dst_ip}
Run Traffic Script On Node geneve_tunnel.py ${node} ${args}
3.11.9.14. Send flow packet and verify action¶
Send packet and verify the correctness of flow action. Arguments: _NOTE:_ Arguments are based on topology: TG(if1)->(if1)DUT - tg_node - Node to execute scripts on (TG). Type: dictionary - tx_interface - TG Interface 1. Type: string - tx_dst_mac - MAC address of DUT-if1. Type: string - flow_type - Flow packet type. Type: string - proto - Flow packet protocol. Type: string - src_ip - Source IP address. Type: string - dst_ip - Destination IP address. Type: string - src_port - Source port. Type: integer - dst_port - Destination port. Type: integer - value - Additional packet value. Type: integer - traffic_script - Traffic script that sends the packet. Type: string - action - drop, mark or redirect-to-queue. Type: string - action_value - Action value. Type: integer Return: - No value returned Example: | Send flow packet and verify action | ${nodes['TG']} | eth2 | 08:00:27:a2:52:5b | IP4 | UDP | src_ip=1.1.1.1 | dst_ip=2.2.2.2 | src_port=${100} | dst_port=${200} | traffic_script=send_flow_packet | action=mark | action_value=${3} |
${tx_src_mac}= Get Interface Mac ${tg_node} ${tx_interface}
${tx_if_name}= Get interface name ${tg_node} ${tx_interface}
${args}= Catenate --tg_if1_mac ${tx_src_mac} --dut_if1_mac ${tx_dst_mac} --tx_if ${tx_if_name} --flow_type ${flow_type} --proto ${proto} --src_ip ${src_ip} --dst_ip ${dst_ip} --src_port ${src_port} --dst_port ${dst_port} --value ${value}
Run Traffic Script On Node ${traffic_script}.py ${tg_node} ${args}
Vpp Verify Flow action ${dut1} ${action} ${action_value} ${tx_src_mac} ${tx_dst_mac} ${src_ip} ${dst_ip}
3.11.10. vm suite¶
3.11.10.1. Configure chains of NFs connected via vhost-user¶
Start 1..N chains of 1..N QEMU guests (VNFs) with two vhost-user interfaces and interconnecting NF. Arguments: - nf_chains - Number of chains of NFs. Type: integer - nf_nodes - Number of NFs nodes per chain. Type: integer - jumbo - Jumbo frames are used (True) or are not used (False) in the test. Type: boolean - perf_qemu_qsz - Virtio Queue Size. Type: integer - use_tuned_cfs - Set True if CFS RR should be used for Qemu SMP. Type: boolean - auto_scale - Whether to use same amount of RXQs for memif interface in containers as vswitch, otherwise use single RXQ. Type: boolean - fixed_auto_scale - Enable fixed auto_scale (nf_dtc). Type: boolean - vnf - Network function as a payload. Type: string - pinning - Whether to pin QEMU VMs to specific cores. Type: boolean Example: | Configure chains of NFs connected via vhost-user | 1 | 1 | False | 1024 | False | False | vpp | True |
${enable_gso} = Get Variable Value ${enable_gso} ${False}
${enable_csum} = Get Variable Value ${enable_csum} ${False}
${virtio_feature_mask}= Create Virtio feature mask gso=${enable_gso} csum=${enable_csum}
Import Library resources.libraries.python.QemuManager ${nodes} WITH NAME vnf_manager
Run Keyword vnf_manager.Construct VMs on all nodes nf_chains=${nf_chains} nf_nodes=${nf_nodes} jumbo=${jumbo} perf_qemu_qsz=${perf_qemu_qsz} use_tuned_cfs=${use_tuned_cfs} auto_scale=${auto_scale} fixed_auto_scale=${fixed_auto_scale} vnf=${vnf} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_mac=${TG_pf2_mac}[0] vs_dtc=${cpu_count_int} nf_dtc=${nf_dtc} nf_dtcr=${nf_dtcr} rxq_count_int=${rxq_count_int} virtio_feature_mask=${virtio_feature_mask} page_size=${page_size}
${cpu_wt}= Run Keyword vnf_manager.Start All VMs pinning=${pinning}
${cpu_alloc_str}= Catenate SEPARATOR=, ${cpu_alloc_str} ${cpu_wt}
Set Test Variable ${cpu_alloc_str}
All VPP Interfaces Ready Wait ${nodes} retries=${300}
VPP round robin RX placement on all DUTs ${nodes} prefix=Virtual
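GSO and checksum offload are controlled by the optional ${enable_gso} and ${enable_csum} test variables, defaulting to ${False} when unset. A sketch enabling GSO before constructing the chains (named-argument syntax assumed):
| Set Test Variable | ${enable_gso} | ${True}
| Configure chains of NFs connected via vhost-user | nf_chains=${1} | nf_nodes=${1} | jumbo=${False} | perf_qemu_qsz=${1024} | vnf=vpp | pinning=${True} |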
3.11.10.2. Configure chains of NFs connected via vhost-user on single node¶
Start 1..N chains of 1..N QEMU guests (VNFs) with two vhost-user interfaces and interconnecting NF on a single DUT node. Arguments: - node - DUT node. Type: dictionary - nf_chains - Number of chains of NFs. Type: integer - nf_nodes - Number of NFs nodes per chain. Type: integer - jumbo - Jumbo frames are used (True) or are not used (False) in the test. Type: boolean - perf_qemu_qsz - Virtio Queue Size. Type: integer - use_tuned_cfs - Set True if CFS RR should be used for Qemu SMP. Type: boolean - auto_scale - Whether to use same amount of RXQs for memif interface in containers as vswitch, otherwise use single RXQ. Type: boolean - fixed_auto_scale - Enable fixed auto_scale (nf_dtc). Type: boolean - vnf - Network function as a payload. Type: string - pinning - Whether to pin QEMU VMs to specific cores. Type: boolean - validate - Validate interfaces are up. Type: boolean Example: | Configure chains of NFs connected via vhost-user on single node | DUT1 | 1 | 1 | False | 1024 | False | False | vpp | True |
${enable_gso}= Get Variable Value ${enable_gso} ${False}
${enable_csum}= Get Variable Value ${enable_csum} ${False}
${virtio_feature_mask}= Create Virtio feature mask gso=${enable_gso} csum=${enable_csum}
Import Library resources.libraries.python.QemuManager ${nodes} WITH NAME vnf_manager
Run Keyword vnf_manager.Initialize
Run Keyword vnf_manager.Construct VMs on node node=${node} nf_chains=${nf_chains} nf_nodes=${nf_nodes} jumbo=${jumbo} perf_qemu_qsz=${perf_qemu_qsz} use_tuned_cfs=${use_tuned_cfs} auto_scale=${auto_scale} fixed_auto_scale=${fixed_auto_scale} vnf=${vnf} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_mac=${TG_pf2_mac}[0] vs_dtc=${cpu_count_int} nf_dtc=${nf_dtc} nf_dtcr=${nf_dtcr} rxq_count_int=${rxq_count_int} virtio_feature_mask=${virtio_feature_mask} page_size=${page_size}
${cpu_wt}= Run Keyword vnf_manager.Start All VMs pinning=${pinning}
${cpu_alloc_str}= Catenate SEPARATOR=, ${cpu_alloc_str} ${cpu_wt}
Set Test Variable ${cpu_alloc_str}
Run Keyword If ${validate} All VPP Interfaces Ready Wait ${nodes} retries=${300}
VPP round robin RX placement on all DUTs ${nodes} prefix=Virtual
3.11.10.3. Configure chains of NFs connected via passtrough¶
Start 1..N chains of 1..N QEMU guests (VNFs) with two PCI passthrough interfaces and interconnecting NF. Arguments: - nf_chains - Number of chains of NFs. Type: integer - nf_nodes - Number of NFs nodes per chain. Type: integer - jumbo - Jumbo frames are used (True) or are not used (False) in the test. Type: boolean - perf_qemu_qsz - Virtio Queue Size. Type: integer - use_tuned_cfs - Set True if CFS RR should be used for Qemu SMP. Type: boolean - auto_scale - Whether to use same amount of RXQs for memif interface in containers as vswitch, otherwise use single RXQ. Type: boolean - fixed_auto_scale - Enable fixed auto_scale (nf_dtc). Type: boolean - vnf - Network function as a payload. Type: string - pinning - Whether to pin QEMU VMs to specific cores. Type: boolean Example: | Configure chains of NFs connected via passtrough | 1 | 1 | False | 1024 | False | False | vpp | True |
${enable_gso} = Get Variable Value ${enable_gso} ${False}
${enable_csum} = Get Variable Value ${enable_csum} ${False}
${virtio_feature_mask}= Create Virtio feature mask gso=${enable_gso} csum=${enable_csum}
Import Library resources.libraries.python.QemuManager ${nodes} WITH NAME vnf_manager
Run Keyword vnf_manager.Construct VMs on all nodes nf_chains=${nf_chains} nf_nodes=${nf_nodes} jumbo=${jumbo} perf_qemu_qsz=${perf_qemu_qsz} use_tuned_cfs=${use_tuned_cfs} auto_scale=${auto_scale} fixed_auto_scale=${fixed_auto_scale} vnf=${vnf} tg_pf1_mac=${TG_pf1_mac}[0] tg_pf2_mac=${TG_pf2_mac}[0] vs_dtc=${cpu_count_int} nf_dtc=${nf_dtc} nf_dtcr=${nf_dtcr} rxq_count_int=${rxq_count_int} virtio_feature_mask=${virtio_feature_mask} page_size=${page_size} if1=${DUT1_${int}1}[0] if2=${DUT1_${int}2}[0]
${cpu_wt}= Run Keyword vnf_manager.Start All VMs pinning=${pinning}
${cpu_alloc_str}= Catenate SEPARATOR=, ${cpu_alloc_str} ${cpu_wt}
Set Test Variable ${cpu_alloc_str}