Closed by taylor 5 years ago
The Kubernetes and OpenStack clusters will each have 2-3 machines that run the network functions.
The system hardware configuration is based on the Packet m2.xlarge.x86.
The default dual-port Mellanox ConnectX-4 NIC has been replaced with a quad-port Intel X710 NIC. The NIC ports are connected to 10GbE ports on the top-of-rack switches.
Specs at a glance:
TBD
Same as worker node, i.e. an m2.xlarge.x86-based machine with a quad-port Intel X710 NIC
Additional NIC information: https://github.com/cncf/cnfs/issues/94
@michaelspedersen, @pmikus, @mackonstan this ticket will contain the specs for the systems being used at Packet for the Kubecon comparison test case.
Xeon Gold 5120 = 14 cores per socket, so 28 cores in total across the 2 sockets. How are you planning to map these cores?
Keep 1 core/socket for the host/kernel; that leaves you 2x13 cores to play with, split between VPP and NFs.
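For reference, a split like that maps onto the `cpu` stanza of VPP's startup.conf. The core lists below are purely illustrative (they assume sequential per-socket numbering and 4 VPP workers per socket); actual values would need to match the machine's topology:

```
cpu {
  # pin VPP's main thread to a non-reserved core on socket 0
  main-core 1
  # worker threads on both sockets, leaving core 0 (and core 14 on
  # socket 1) free for the host/kernel and the rest for the NFs
  corelist-workers 2-5,16-19
}
```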
This has also been my go-to approach for configuration so far. We've been trying out a few different configurations, scaling both VPP and the VNFs/CNFs.
In general, it would be reasonable to find the best-performing system given the same amount of resources, preferably under conditions that are realistic and close to production.
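The 2x13 split discussed above can be sketched quickly in Python. The numbering scheme is an assumption (sequential physical cores per socket, hyperthread siblings ignored) and should be checked against `lscpu -e` on the actual m2.xlarge.x86 machines:

```python
# Sketch of the core split for a 2-socket Xeon Gold 5120
# (14 physical cores per socket). Assumes cores 0-13 on socket 0
# and 14-27 on socket 1; verify with `lscpu -e` before pinning.
CORES_PER_SOCKET = 14
SOCKETS = 2

def partition_cores(vpp_per_socket):
    """Reserve core 0 of each socket for the host/kernel, then split
    the remaining cores between VPP workers and NF workloads."""
    host, vpp, nfs = [], [], []
    for s in range(SOCKETS):
        base = s * CORES_PER_SOCKET
        cores = list(range(base, base + CORES_PER_SOCKET))
        host.append(cores[0])                    # 1 core/socket for host
        vpp.extend(cores[1:1 + vpp_per_socket])  # VPP worker cores
        nfs.extend(cores[1 + vpp_per_socket:])   # remainder for NFs
    return host, vpp, nfs

host, vpp, nfs = partition_cores(vpp_per_socket=4)
print(host)                 # [0, 14]
print(len(vpp) + len(nfs))  # 26, i.e. the 2x13 cores left to play with
```

Varying `vpp_per_socket` is one way to enumerate the scaling configurations being tried out.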
BIOS details are in #129
This provides details on the machines used in the K8s and OpenStack clusters as well as those used for generating traffic.