Closed lixuna closed 5 years ago
Audience: person who wants to recreate the results themselves on Packet dual-Mellanox machines
High level requirements
Pre-req steps:
General steps:
git clone git@github.com:cncf/cnfs.git
cd cnfs/comparison/kubecon18-chained_nf_test
# create environment .env with Packet, SSH and test case information
# load .env
. .env
./deploy_k8s_test_case
./run_k8s_test_case # connects to the traffic generator, runs the tests, collects results, and shows a summary on the console
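The `.env` file referenced in these steps is not shown in the issue. A minimal sketch, reusing the `PACKET_*` variables that appear later in this issue — the `SSH_KEY_PATH` and `TEST_CASE` names are illustrative guesses, not the project's actual schema:

```shell
# Sketch of a .env file; only the PACKET_* names appear elsewhere in this
# issue - SSH_KEY_PATH and TEST_CASE are illustrative guesses.
cat > .env <<'EOF'
export PACKET_PROJECT_ID=YOUR_PACKET_PROJECT_ID
export PACKET_AUTH_TOKEN=YOUR_PACKET_API_KEY
export PACKET_FACILITY="sjc1"
export SSH_KEY_PATH="$HOME/.ssh/id_rsa"
export TEST_CASE="dual_mellanox"
EOF

# Load it the same way the steps above do
. ./.env
echo "$PACKET_FACILITY"   # prints sjc1
```

The single-quoted heredoc delimiter keeps `$HOME` literal in the file; it is expanded when the file is sourced.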
Audience: Internal dev
What we have right now
K8s cluster
# create environment .env with Packet, SSH and test case information
# load .env
. .env
./deploy_k8s_cluster # using cross-cloud - terraform, cloud-init
./setup_k8s_l2 # sets up l2 for Packet switch and worker nodes
./deploy_cnfs # using helm charts
Traffic generator
# create environment .env with Packet, SSH and test case information
# load .env
. .env
./deploy_traffic_generator # terraform + ansible from the docker container
Run k8s test with traffic generator
./run_k8s_testcase # Ansible runs nfvbench on the traffic generator
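The internal-dev scripts above could be chained in a small wrapper that stops at the first failure. A sketch — the script names come from this issue, but the `run_step` wrapper itself is an assumption; it skips any script not present so it can be dry-run:

```shell
#!/bin/sh
# Sketch: run the K8s steps in order, stopping on the first failure.
# Script names are from this issue; the wrapper is illustrative only.
set -u

run_step() {
    step="$1"
    if [ -x "./$step" ]; then
        echo "==> running $step"
        "./$step" || { echo "FAILED: $step" >&2; exit 1; }
    else
        echo "==> skipping $step (not found)"
    fi
}

# Load Packet/SSH/test case settings if present
[ -f ./.env ] && . ./.env

run_step deploy_k8s_cluster       # cross-cloud: terraform, cloud-init
run_step setup_k8s_l2             # L2 for Packet switch and worker nodes
run_step deploy_cnfs              # helm charts
run_step run_k8s_testcase         # nfvbench via the traffic generator
```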
Similar steps for OpenStack
Breaking down some of the steps
deploy_k8s_cluster
Steps for the Packet Generator
git clone https://github.com/cncf/cnfs.git
cd cnfs/comparison/cnf_edge_throughput/packet_generator
export PACKET_PROJECT_ID=YOUR_PACKET_PROJECT_ID
export PACKET_AUTH_TOKEN=YOUR_PACKET_API_KEY
export PACKET_FACILITY="sjc1"
./deploy_packet_generator.sh dual_mellanox
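Before calling `deploy_packet_generator.sh`, it may help to verify that the three exports above are actually set. A sketch — the variable names are from the steps above, but the `check_packet_env` helper is an assumption, not part of the repo:

```shell
# Sketch of a pre-flight check for the Packet variables exported above.
# The check_packet_env helper is illustrative, not part of the repo.
check_packet_env() {
    missing=""
    for var in PACKET_PROJECT_ID PACKET_AUTH_TOKEN PACKET_FACILITY; do
        eval "val=\${$var:-}"
        if [ -z "$val" ]; then
            missing="$missing $var"
        fi
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "ok"
}

# Demo with all three set (values are placeholders)
PACKET_PROJECT_ID=YOUR_PACKET_PROJECT_ID
PACKET_AUTH_TOKEN=YOUR_PACKET_API_KEY
PACKET_FACILITY="sjc1"
check_packet_env   # prints ok
```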
The full README is here: https://github.com/cncf/cnfs/tree/master/comparison/kubecon18-chained_nf_test/packet_generator
We can create additional steps for the quad Intel X710 machines.
Long term, maybe not for Kubecon
We could also add steps for performance tuning that require reserved instances. This would include prerequisites for setting up the reserved instances and the BIOS configuration.
An alternative would be to add support for pre-created machines and not delete them; K8s would then have to provision the existing instances.
Add a prominent notice to the README that the end user is responsible for all Packet charges incurred in the chosen project (e.g. instances deleted or left running).
Images showing the configuration from an end-user perspective (no mention of internal configuration, e.g. MACs or L3)
Packet systems specs are in https://github.com/cncf/cnfs/issues/117#issuecomment-436271074
Adding network overview of different setups here too:
Updated the README, adding details and leaving large markers for what is still unknown (or just not known by me).
Closing, will resume updates to README in #166
Create README for KubeCon keynote comparison
[x] Create https://github.com/cncf/cnfs/tree/master/comparison/kubecon18-chained_nf_test/README.md
[x] Short description (one line, e.g. "Comparing performance of NFs on OpenStack and K8s")
[ ] Add an aspirational message for the test bed (i.e. ideally you'll only need an API key from Packet/bare metal to deploy and replicate the results; currently some customization of the machines, BIOS configuration, etc. is needed)
[ ] What is this test comparison? (Add a description of test comparison)
[ ] Where are we right now? (compared to aspiration)
[x] Document the requirements for the workstation that does the deployment and runs the tests
[ ] Describe the K8s and OpenStack testbed
[ ] Describe the test cases (e.g. snake and pipeline)
[ ] Test results
Steps to recreate test results
Add steps to recreate test results, including manual steps for configuring reserved instances and BIOS tasks
Common
Reserved instances
TBD
On-Demand instances
TBD