
ovn-heater

Mega script to install/configure/run a simulated OVN cluster deployed with ovn-fake-multinode.

NOTE: This script is designed to be used on test machines only. It performs disruptive changes to the machines it is run on (e.g., it cleans up existing containers).

Prerequisites

Physical topology

The initial provisioning for all the nodes is performed by the do.sh install command. The simulated OVN chassis containers and central container are spawned by the test scripts in ovn-tester/.

NOTE: ovn-fake-multinode assumes that all nodes (OVN-CENTRAL, TESTER and OVN-WORKER-NODEs) have an additional Ethernet interface connected to a single L2 switch. This interface will be used for traffic to/from the Northbound and Southbound databases and for tunneled traffic.

NOTE: there's no restriction regarding physical machine roles, so for debugging purposes the ORCHESTRATOR, TESTER, OVN-CENTRAL and OVN-WORKER-NODEs can all be the same physical machine, in which case the secondary Ethernet interface doesn't need to exist.

Sample physical topology:

TESTER, OVN-CENTRAL and the OVN-WORKER-NODEs each have an Ethernet interface, eno1, connected as an untagged interface to a physical switch in a separate VLAN.

NOTE: The hostnames specified in the physical topology are used both by the ORCHESTRATOR and by the ovn-tester container running on the TESTER. Therefore, the values need to be resolvable by both of these entities and need to resolve to the same host; localhost will not work since it does not resolve to a unique host.
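One way to satisfy this is to add identical entries to /etc/hosts on the relevant machines (a sketch with hypothetical hostnames and addresses; adapt it to your network, and make sure the same resolution is also visible inside the ovn-tester container):

# Hypothetical /etc/hosts entries; replace with the real management
# addresses and hostnames of your nodes.
192.0.2.10 ovn-central-1
192.0.2.21 ovn-worker-1
192.0.2.22 ovn-worker-2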

Minimal requirements on the ORCHESTRATOR node (tested on Fedora 38 and Ubuntu 22.10)

Install required packages:

RPM-based

dnf install -y git ansible \
    ansible-collection-ansible-posix ansible-collection-ansible-utils

DEB-based

sudo apt -y install ansible

Installation

All the following installation steps are run on the ORCHESTRATOR.

Ensure all nodes can be accessed via passwordless SSH from the ORCHESTRATOR and the TESTER.

On Fedora 33, RSA keys are not considered secure enough; an alternative is:

ssh-keygen -t ed25519 -a 64 -N '' -f ~/.ssh/id_ed25519

Then append ~/.ssh/id_ed25519.pub to ~/.ssh/authorized_keys on all physical nodes.
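For example, with ssh-copy-id (the user and node hostnames below are hypothetical; adjust them to your deployment):

# Append the new public key to authorized_keys on every node.
for node in central-1 worker-1 worker-2; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub root@"$node"
done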

Get the code:

cd
git clone https://github.com/ovn-org/ovn-heater.git

Write the physical deployment description YAML file:

A sample file written for the deployment described above is available at physical-deployments/physical-deployment.yml.

The file should contain the following mandatory sections and fields:

Global optional fields:

In case some of the physical machines in the setup have different capabilities (e.g., they can host more containers or use a different Ethernet interface), the following per-node fields can be used to customize the deployment. Except for fake-nodes, which is valid only in the context of worker nodes, all of them are valid both for central-nodes and for worker-nodes:
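A hedged sketch of such a file is shown below; only central-nodes, worker-nodes and fake-nodes are names taken from this document, while every other field name, hostname and value is an illustrative assumption. The sample file in physical-deployments/ remains the authoritative reference.

# Illustrative sketch only; all field names except central-nodes,
# worker-nodes and fake-nodes are assumptions.
internal-iface: eno1        # assumed field for the secondary Ethernet interface
central-nodes:
  - name: central-1         # hypothetical hostname
worker-nodes:
  - name: worker-1          # hypothetical hostname
  - name: worker-2
    fake-nodes: 10          # per-worker override: number of simulated chassis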

Perform the installation step:

This must be run on the ORCHESTRATOR node. It generates a runtime directory and a runtime/hosts ansible inventory, and installs all test components on all other nodes.

cd ~/ovn-heater
./do.sh install

This step will:

To override the OVS, OVN or ovn-fake-multinode repos/branches use the following environment variables:

For example, installing components with custom OVS/OVN code:

cd ~/ovn-heater
OVS_REPO=https://github.com/dceara/ovs OVS_BRANCH=tmp-branch OVN_REPO=https://github.com/dceara/ovn OVN_BRANCH=tmp-branch-2 ./do.sh install
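The ovn-fake-multinode repo/branch can be overridden in the same way. A sketch, assuming the variable names OVN_FAKE_MULTINODE_REPO and OVN_FAKE_MULTINODE_BRANCH (check do.sh for the exact names; the fork URL and branch are hypothetical):

cd ~/ovn-heater
OVN_FAKE_MULTINODE_REPO=https://github.com/example/ovn-fake-multinode OVN_FAKE_MULTINODE_BRANCH=tmp-branch-3 ./do.sh install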

To override the base image of ovn-fake-multinode (by default fedora:latest), use the following environment variables:

For example, to use the latest Ubuntu image, run:

cd ~/ovn-heater
OS_BASE=ubuntu OS_IMAGE_OVERRIDE=ubuntu:rolling ./do.sh install

Perform a reinstallation (e.g., when new OVS/OVN versions are needed):

For OVS, OVN or ovn-fake-multinode code changes to be reflected, the ovn/ovn-multi-node container image must be rebuilt. The simplest way to achieve that is to remove the current runtime directory and reinstall:

cd ~/ovn-heater
rm -rf runtime
OVS_REPO=... OVS_BRANCH=... OVN_REPO=... OVN_BRANCH=... ./do.sh install

Perform a reinstallation (e.g., to install OVS/OVN from RPM packages):

cd ~/ovn-heater
rm -rf runtime

Run the installation with the RPM package parameters specified:

cd ~/ovn-heater
RPM_SELINUX=$rpm_url_openvswitch_selinux_extra_policy \
    RPM_OVS=$rpm_url_openvswitch \
    RPM_OVN_COMMON=$rpm_url_ovn \
    RPM_OVN_HOST=$rpm_url_ovn_host \
    RPM_OVN_CENTRAL=$rpm_url_ovn_central \
    ./do.sh install

Update Tester code

To update the code in the Tester container, run:

cd ~/ovn-heater
./do.sh refresh-tester

This is handy if you are just making changes to the code inside the ovn-tester package and don't need to rebuild the OVN/OVS packages or the fake-multinode image.

Regenerate the ansible inventory:

If the physical topology has changed, update physical-deployments/physical-deployment.yml to reflect the new physical deployment.

Then generate the new ansible inventory:

cd ~/ovn-heater
./do.sh generate

Running tests:

Testing steps are executed on the ORCHESTRATOR node.

Scenario definitions

Scenarios are defined in ovn-tester/ovn_tester.py and are configurable through YAML files. Sample scenario configurations are available in test-scenarios/*.yml.
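As a rough sketch, a scenario file is plain YAML with a cluster section; the two fields below also appear elsewhere in this README, and any other fields should be taken from the sample files in test-scenarios/:

cluster:
  clustered_db: true   # run the NB/SB databases in clustered (RAFT) mode, the default
  enable_ssl: false    # illustrative here; also used by the ovsdb-etcd scenario below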

Scenario execution

cd ~/ovn-heater
./do.sh run <scenario> <results-dir>

This executes <scenario> on the physical deployment (specifically, on the ovn-tester container on the TESTER). Current scenarios also clean up the environment, i.e., remove all containers from all physical nodes. NOTE: If the environment needs to be explicitly cleaned up, we can also execute the following before running the scenario:

cd ~/ovn-heater
./do.sh init

The results will be stored in test_results/<results-dir>. They consist of:

Example: run 20 nodes "density light"

cd ~/ovn-heater
./do.sh run test-scenarios/ocp-20-density-light.yml test-20-density-light

This test consists of two stages:

Results will be stored in ~/ovn-heater/test_results/test-20-density-light*/:

Example: run 20 nodes "density heavy"

cd ~/ovn-heater
./do.sh run test-scenarios/ocp-20-density-heavy.yml test-20-density-heavy

This test consists of two stages:

Results will be stored in ~/ovn-heater/test_results/test-20-density-heavy*/:

Example: run 20 nodes "cluster density"

cd ~/ovn-heater
./do.sh run test-scenarios/ocp-20-cluster-density.yml test-20-cluster-density

This test consists of two stages:

Results will be stored in ~/ovn-heater/test_results/test-20-cluster-density*/:

Scenario execution with DBs in standalone mode

By default, tests configure the NB/SB ovsdb-servers to run in clustered mode (RAFT). To run the tests in standalone mode instead, adapt the test scenario by setting clustered_db: false in the cluster section of the scenario YAML file.
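For example (a minimal snippet; all other fields in the scenario file stay as in the samples):

cluster:
  clustered_db: false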

Scenario execution with ovsdb-etcd in standalone mode

This test requires ovn-fake-multinode, etcd and ovsdb-etcd.

To build and run with etcd:

USE_OVSDB_ETCD=yes ./do.sh install

cd ~/ovn-heater
./do.sh run test-scenarios/ovn-etcd-low-scale.yml etcd-test-low-scale

The following fields are important for ovn-fake-multinode to detect and run ovsdb-etcd (judging by the indentation, they live under a parent section of the scenario YAML, likely the cluster section described above):

  enable_ssl: false
  use_ovsdb_etcd: true

Contributing to ovn-heater

Please check out our contributing guidelines for instructions on contributing patches to ovn-heater. Open GitHub issues to report potential bugs or to request new ovn-heater features.