Physical server details (from lscpu and the Packet.net system overview):
Packet instance id 2f11ef1a-7e5f-4a01-a89c-1f612eccdeaf
Ubuntu kernel version: 4.15.0-20-generic
root@cnfdev06:~/cnfs/comparison/box-by-box-kvm-docker/vDNS/build# uname -a
Linux cnfdev06.cncf.ci 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
root@cnfdev06:~/cnfs/comparison/box-by-box-kvm-docker/vDNS/build# cat /etc/os-release
NAME="Ubuntu"
VERSION="18.04 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
libvirt version: 4.0.0-1ubuntu8.3
vagrant version: 2.1.1
root@cnfdev07:~/cnfs# vagrant --version
Vagrant 2.1.1
root@cnfdev06:~/cnfs/comparison/box-by-box-kvm-docker/vDNS/build# dpkg -l |grep virt
ii libsys-virt-perl 4.0.0-1 amd64 Perl module providing an extension for the libvirt library
ii libvirt-bin 4.0.0-1ubuntu8.3 amd64 programs for the libvirt library
ii libvirt-clients 4.0.0-1ubuntu8.3 amd64 Programs for the libvirt library
ii libvirt-daemon 4.0.0-1ubuntu8.3 amd64 Virtualization daemon
ii libvirt-daemon-driver-storage-rbd 4.0.0-1ubuntu8.3 amd64 Virtualization daemon RBD storage driver
ii libvirt-daemon-system 4.0.0-1ubuntu8.3 amd64 Libvirt daemon configuration files
ii libvirt-dev:amd64 4.0.0-1ubuntu8.3 amd64 development files for the libvirt library
ii libvirt0:amd64 4.0.0-1ubuntu8.3 amd64 library for interfacing with different virtualization systems
Some additional info on libvirt and virtualization:
root@cnfdev07:~# virsh version
Compiled against library: libvirt 4.0.0
Using library: libvirt 4.0.0
Using API: QEMU 4.0.0
Running hypervisor: QEMU 2.11.1
CPU info:
root@cnfdev07:~/cnfs# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 10
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2640 v4 @ 2.40GHz
Stepping: 1
CPU MHz: 1201.651
CPU max MHz: 3400.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.17
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 25600K
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
Container environment:
--cpus 4 --cpuset-cpus 5-8
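For context, a minimal sketch of how these flags might be applied when launching the vDNS container (the image, container, and network names are taken from the cleanup commands below; running detached with no extra arguments is an assumption):

```bash
# Sketch only: launch the vDNS container pinned to 4 host CPUs.
# CPUs 5-8 all sit on NUMA node0 (0-9,20-29) per the lscpu output above.
docker run -d --name vDNS \
  --cpus 4 --cpuset-cpus 5-8 \
  --network dns-net \
  vdns
```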
VM environment:
Container test:
From the comparison/box-by-box-kvm-docker directory:
./vDNS_container_test.sh <packets per second, per iteration>
To clean up:
docker rm [-f] <vDNS|vDNSgen>
docker image rm <vdns|vdnsgen>
docker network rm dns-net
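Taken together, one full container test pass might look like the following sketch (the 10000 pps rate is an assumed example, and we assume the working directory starts at the repo root):

```bash
#!/usr/bin/env bash
# Sketch of one full container test pass.
set -e
cd comparison/box-by-box-kvm-docker

# 10000 packets per second, per iteration -- an assumed example rate
./vDNS_container_test.sh 10000

# Tear down: containers, images, and the test network
docker rm -f vDNS vDNSgen
docker image rm vdns vdnsgen
docker network rm dns-net
```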
VM test:
From the comparison/box-by-box-kvm-docker directory:
./vDNS_vm_test.sh <packets per second, per iteration>
To build the VM, from comparison/box-by-box-kvm-docker/vDNS/build:
./build_vm.sh
To clean up, from comparison/box-by-box-kvm-docker/<vDNS|vDNSgen>:
vagrant destroy -f
vagrant box remove vDNS
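The equivalent VM pass, again as a sketch with an assumed example rate, and assuming the box is built before the test run:

```bash
#!/usr/bin/env bash
# Sketch of one full VM test pass, starting from the repo root.
set -e
cd comparison/box-by-box-kvm-docker

# Build the vDNS VM image first
( cd vDNS/build && ./build_vm.sh )

# 10000 packets per second, per iteration -- an assumed example rate
./vDNS_vm_test.sh 10000

# Tear down both VMs and remove the vDNS box
for d in vDNS vDNSgen; do ( cd "$d" && vagrant destroy -f ); done
vagrant box remove vDNS
```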
Output (using 4-core vDNS):
Notes:
cat /sys/fs/cgroup/memory/docker/<vDNS container ID>/memory.stat | grep rss (value in bytes)
virsh dommemstat <vDNS VM ID> | grep rss (value in kibibytes)
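Because the two commands report in different units, here is a small sketch that normalizes both figures to MiB for a direct comparison (the container ID and VM domain name are placeholders to fill in):

```bash
# Hypothetical helper: compare container vs. VM RSS in MiB.
CID=<vDNS container ID>
DOM=<vDNS VM ID>

# cgroup memory.stat reports rss in bytes
c_rss_bytes=$(awk '$1=="rss" {print $2}' /sys/fs/cgroup/memory/docker/"$CID"/memory.stat)
# virsh dommemstat reports rss in kibibytes
v_rss_kib=$(virsh dommemstat "$DOM" | awk '$1=="rss" {print $2}')

echo "container RSS: $((c_rss_bytes / 1024 / 1024)) MiB"
echo "VM RSS:        $((v_rss_kib / 1024)) MiB"
```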
The packet generator VM uses hugepages, which the DPDK portion of VPP requires. We allocate 2048 kB (2 MB) pages:
echo 2048 > /sys/devices/system/node/${i}/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs -o pagesize=2M none /dev/hugepages
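As a sketch, the full allocation for this two-node machine (node0 and node1, per the lscpu output above) plus the mount might look like:

```bash
#!/usr/bin/env bash
set -e

# Reserve 2048 x 2 MB hugepages on each NUMA node (node0 and node1 per lscpu)
for i in node0 node1; do
  echo 2048 > /sys/devices/system/node/${i}/hugepages/hugepages-2048kB/nr_hugepages
done

# Expose them to the guest/DPDK via a hugetlbfs mount, if not already mounted
mountpoint -q /dev/hugepages || mount -t hugetlbfs -o pagesize=2M none /dev/hugepages
```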
CPU core isolation
[Charts: 2-core vDNS and 1-core vDNS results]
Performance (throughput) between the VNF and the CNF using 1-2 cores is very similar for the vDNS NF. The biggest difference is in RSS (memory usage), where the CNF uses far less memory (welcome news at current RAM prices).
Also, somewhat as expected, there is a slight drop-off in per-core performance when adding more cores, i.e. scaling is sub-linear.
To draw some sort of conclusion:
If your primary and most expensive resource is CPU cores, it might be better to spawn additional VNFs/CNFs with a lower number of cores, e.g. 2.
If, on the other hand, you are more worried about memory, you can scale up the number of cores used by a single VNF/CNF for a very reasonable performance increase, without having to worry about additional memory use.