Closed: hoaivan closed this issue 6 years ago.
@hoaivan VPP and NFF-Go are not direct competitors, though they both can be used for accomplishing similar tasks. Here is my view on this:
Pros:
If you want to try and will encounter issues, let us know and we will try to support.
Thank you @aregm for your input. I tried to set up the NAT example (https://github.com/intel-go/nff-go/wiki/NAT-example) in 3 different scenarios:
Scenario 1, through the NFF-Go NAT (192.168.16.2):

```
ab -c 2 -n 1000 http://192.168.16.2/10k.bin

Time taken for tests:   19.929 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      10506000 bytes
HTML transferred:       10240000 bytes
Requests per second:    50.18 [#/sec] (mean)
Time per request:       39.857 [ms] (mean)
Time per request:       19.929 [ms] (mean, across all concurrent requests)
Transfer rate:          514.82 [Kbytes/sec] received
```
Scenario 2, through Linux iptables (192.168.26.2):

```
ab -c 2 -n 1000 http://192.168.26.2/10k.bin

Time taken for tests:   1.100 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      10506000 bytes
HTML transferred:       10240000 bytes
Requests per second:    909.41 [#/sec] (mean)
Time per request:       2.199 [ms] (mean)
Time per request:       1.100 [ms] (mean, across all concurrent requests)
Transfer rate:          9330.35 [Kbytes/sec] received
```
Scenario 3: same configuration and environment as the nff-go and iptables runs; I just put VPP in the middle as a NAT44.
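For reference, a minimal sketch of the kind of VPP NAT44 setup described here. The interface names are placeholders for whatever VPP assigned to the two NICs in this VM, so treat this as an assumption about the configuration, not the exact commands used:

```shell
# Mark the inside (private) and outside (public) interfaces for NAT44 translation.
sudo vppctl set interface nat44 in GigabitEthernet0/8/0 out GigabitEthernet0/9/0
# Use the outside interface's address as the NAT44 pool address.
sudo vppctl nat44 add interface address GigabitEthernet0/9/0
```

With this in place, traffic entering the inside interface is source-NATed to the outside interface address, matching the role iptables MASQUERADE played in scenario 2.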
```
ab -c 2 -n 1000 http://192.168.26.2/10k.bin

Time taken for tests:   0.755 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      10506000 bytes
HTML transferred:       10240000 bytes
Requests per second:    1323.69 [#/sec] (mean)
Time per request:       1.511 [ms] (mean)
Time per request:       0.755 [ms] (mean, across all concurrent requests)
Transfer rate:          13580.73 [Kbytes/sec] received
```
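To put the three runs side by side: the mean requests-per-second ab reports is essentially total requests divided by total test time, so the gap is easy to derive from the raw numbers (a small sketch; ab's own figures differ slightly because it uses unrounded timings internally):

```shell
# Derive mean req/s for each scenario from the raw ab numbers (1000 requests each).
for run in "NFF-Go NAT:19.929" "iptables:1.100" "VPP nat44:0.755"; do
  name=${run%:*}; secs=${run##*:}
  awk -v n="$name" -v t="$secs" 'BEGIN { printf "%-11s %8.2f req/s\n", n, 1000 / t }'
done
```

The roughly 18x difference between the NFF-Go run and the iptables run is what the rest of the thread tries to explain.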
Could you advise how to get better performance out of nff-go? I personally prefer nff-go because of its elegance, and what I'm trying to achieve is a software firewall that supports a large number of rules.
We usually see better performance than Linux iptables. Can you provide details about your platform configuration, network connectivity, and the NFF-Go code version you used?
I'm on the latest nff-go master, VirtualBox 5.2.6. Updated Vagrantfile:
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  unless Vagrant.has_plugin?("vagrant-reload")
    raise 'Plugin vagrant-reload is not installed!'
  end

  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http = ENV.fetch('http_proxy', false)
    config.proxy.https = ENV.fetch('https_proxy', false)
  end

  vm_name = ENV.fetch('VM_NAME', "nff-go")
  vm_group_size = ENV.fetch('VM_GROUP_SIZE', 3).to_i
  vm_total_number = ENV.fetch("VM_TOTAL_NUMBER", 3).to_i
  vm_links_number = ENV.fetch("VM_LINKS_NUMBER", 2).to_i

  # config.vm.box = "ubuntu/xenial64"
  config.vm.box = "generic/fedora27"

  # Docker server port
  config.vm.network "forwarded_port", guest: 2375, host: 2375, auto_correct: true

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  config.vm.box_check_update = false

  config.vm.provider "virtualbox" do |vb|
    vb.gui = false
    vb.memory = "512"
    vb.cpus = 1
    (1..vm_links_number * 2).each do |j|
      vb.customize ["modifyvm", :id, "--nicpromisc#{j + 1}", "allow-all"]
    end
  end

  $provision_fedora = <<SHELL
echo Installing system packages
sudo dnf update
sudo dnf install -y redhat-lsb-core net-tools numactl-devel libpcap-devel elfutils-libelf-devel
SHELL

  $provision_ubuntu = <<SHELL
echo Installing system packages
sudo apt-get update
sudo apt-get install -y python make gcc git libnuma-dev libpcap0.8-dev libelf-dev network-manager
sudo systemctl enable network-manager
sudo systemctl start network-manager
SHELL

  $provision_common = <<SHELL
echo Unpacking Go language into /opt
(cd /opt; sudo sh -c 'curl -L -s https://dl.google.com/go/go1.9.4.linux-amd64.tar.gz | tar zx')
mkdir go
chmod +x ~/scripts.sh
. ~/scripts.sh
echo . ~/scripts.sh >> .bashrc
# why need docker
# setupdocker
echo Downloading and building NFF-GO
go get -d -v github.com/intel-go/nff-go
(cd \"$GOPATH\"/src/github.com/intel-go/nff-go; git checkout develop; ./scripts/get-depends.sh; make)
echo Setting up 512 huge pages
sudo sh -c 'echo 512 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages'
sudo sh -c 'echo vm.nr_hugepages=512 >> /etc/sysctl.conf'
echo IMPORTANT MESSAGE:
echo If kernel was updated during provisioning, it is highly recommended to reboot this VM before using it!!!
echo Use functions from scripts.sh to further setup NFF-GO environment.
SHELL

  config.vm.provision "file", source: "scripts.sh", destination: "scripts.sh"

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision "shell", privileged: false, inline: $provision_fedora + $provision_common

  # Optional Ubuntu provisioning, use if you want to work in Ubuntu
  # environment.
  config.vm.provision "shell", privileged: false, run: "never", inline: $provision_ubuntu + $provision_common

  # Reboot VM after provisioning
  config.vm.provision :reload

  config.vm.define "nff-0" do |node|
    node.vm.hostname = "nff-0"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.14.2", virtualbox__intnet: "private"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.24.2", virtualbox__intnet: "private"
    node.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "256"
      vb.cpus = 1
    end
  end

  config.vm.define "nff-1" do |node|
    node.vm.hostname = "nff-1"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.14.1", virtualbox__intnet: "private"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.16.1", virtualbox__intnet: "public"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.24.1", virtualbox__intnet: "private"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.26.1", virtualbox__intnet: "public"
    node.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "1824"
      vb.cpus = 8
    end
  end

  config.vm.define "nff-2" do |node|
    node.vm.hostname = "nff-2"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.16.2", virtualbox__intnet: "public"
    node.vm.network "private_network", auto_config: false,
                    ip: "192.168.26.2", virtualbox__intnet: "public"
    node.vm.provider "virtualbox" do |vb|
      vb.gui = false
      vb.memory = "256"
      vb.cpus = 1
    end
  end
end
```
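The provisioning script above reserves 512 huge pages. A quick way to confirm the reservation from inside a VM, using standard procfs paths (nothing NFF-Go specific):

```shell
# Total 2 MiB huge pages currently reserved system-wide.
grep HugePages_Total /proc/meminfo
# The same figure via the sysctl backing file that /etc/sysctl.conf sets.
cat /proc/sys/vm/nr_hugepages
```

If HugePages_Total is 0 after provisioning, DPDK (and therefore NFF-Go) will fail to initialize its memory pools.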
Output of `lshw` on the host:

```
lshw
description: Notebook
product: HP ProBook 450 G4 (Z6T21PA#UUF)
vendor: HP
serial: 5CD65178KF
width: 64 bits
capabilities: smbios-3.0 dmi-3.0 vsyscall32
*-core
    description: Motherboard
    product: 8231
    vendor: HP
    physical id: 0
    version: KBC Version 42.6D
    serial: PGDZT038J5723P
  *-memory
      description: System Memory
      physical id: 0
      slot: System board or motherboard
      size: 8GiB
    *-bank:0
        description: SODIMM Synchronous 2133 MHz (0,5 ns)
        product: M471A5244BB0-CRC
        vendor: Samsung
        physical id: 0
        serial: 170C72BE
        slot: Bottom-Slot 1(top)
        size: 4GiB
        width: 64 bits
        clock: 2133MHz (0.5ns)
    *-bank:1
        description: SODIMM Synchronous 2133 MHz (0,5 ns)
        product: M471A5244BB0-CRC
        vendor: Samsung
        physical id: 1
        serial: 1624033A
        slot: Bottom-Slot 2(under)
        size: 4GiB
        width: 64 bits
        clock: 2133MHz (0.5ns)
  *-firmware
      description: BIOS
      vendor: HP
      physical id: 4
      version: P85 Ver. 01.14
      date: 01/22/2018
      size: 64KiB
      capacity: 15MiB
      capabilities: pci pcmcia upgrade shadowing cdboot bootselect edd int5printscreen int9keyboard int14serial int17printer acpi usb smartbattery biosbootspecification netboot uefi
  *-cache:0
      description: L1 cache
      physical id: a
      slot: L1 Cache
      size: 128KiB
      capacity: 128KiB
      capabilities: synchronous internal write-back unified
      configuration: level=1
  *-cache:1
      description: L2 cache
      physical id: b
      slot: L2 Cache
      size: 512KiB
      capacity: 512KiB
      capabilities: synchronous internal write-back unified
      configuration: level=2
  *-cache:2
      description: L3 cache
      physical id: c
      slot: L3 Cache
      size: 3MiB
      capacity: 3MiB
      capabilities: synchronous internal write-back unified
      configuration: level=3
  *-cpu
      description: CPU
      product: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
      vendor: Intel Corp.
      physical id: d
      bus info: cpu@0
      version: Intel(R) Core(TM) i5-7200U CPU @ 2.50GHz
      serial: To Be Filled By O.E.M.
      slot: U3E1
      size: 2836MHz
      capacity: 3100MHz
      width: 64 bits
      clock: 100MHz
      capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti intel_pt spec_ctrl tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp cpufreq
      configuration: cores=2 enabledcores=2 threads=4
  *-pci
      description: Host bridge
      product: Intel Corporation
      vendor: Intel Corporation
      physical id: 100
      bus info: pci@0000:00:00.0
      version: 02
      width: 32 bits
      clock: 33MHz
    *-memory
        description: Memory controller
        product: Intel Corporation
        vendor: Intel Corporation
        physical id: 1f.2
        bus info: pci@0000:00:1f.2
        version: 21
        width: 32 bits
        clock: 33MHz (30.3ns)
        configuration: driver=intel_pmc_core latency=0
        resources: irq:0 memory:f0310000-f0313fff
*-network DISABLED
    description: Ethernet interface
    physical id: 2
    logical name: virbr0-nic
    serial: 52:54:00:84:c7:bd
    size: 10Mbit/s
    capabilities: ethernet physical
    configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full link=no multicast=yes port=twisted pair speed=10Mbit/s
```
You changed network connection identifiers and I don't know how VirtualBox works when there are several private and several public internal networks. Can you specify them in pairs, e.g. private1 <-> private1 and private2 <-> private2 so that there are no repetitions?
Here they are:

```
Interfaces for nff-go:   192.168.14.2 <--private--> [ 192.168.14.1 NFF-GO NAT 192.168.16.1 ] <--public--> 192.168.16.2
Interfaces for iptables: 192.168.24.2 <--private--> [ 192.168.24.1 IPTABLES   192.168.26.1 ] <--public--> 192.168.26.2
```
Yes, what I mean is that before your change to the Vagrantfile, network identifiers were assigned in pairs for each connection end. Now you have four "private" network connection ends and four "public" ones.
How many CPU cores do you have on this laptop? My only guess is that NFF-Go requires too many CPU cores (NAT uses 7 in the basic configuration), which makes it very inefficient when the VMs are constrained to a smaller number. We currently have scheduler changes in development that will make CPU core allocation more efficient.
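Since the basic NAT configuration wants around 7 cores, it is worth checking what the guest and host actually expose before sizing the VM. A quick check with plain coreutils (nothing NFF-Go specific):

```shell
# Logical CPUs visible to the OS; NFF-Go's scheduler allocates cores from these.
nproc
# Physical topology, to spot oversubscription (e.g. 8 vCPUs on a 2-core/4-thread host).
lscpu | grep -E '^(CPU\(s\)|Thread\(s\)|Core\(s\))'
```

When `vb.cpus` exceeds the host's logical CPU count, the busy-polling NFF-Go cores time-slice against each other, which matches the slowdown seen above.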
You're correct. My laptop has 4 cores and I had to assign 8 virtual cores for nff-go. During the test those cores go to 100% immediately after starting nat.go, even before any requests arrive.
That's how DPDK poll mode drivers work. They constantly loop polling for new incoming packets.
The three-VM environment is a reference environment intended for developers who don't have access to big-iron hardware, so that they can write working code. Unfortunately it isn't suitable for performance testing. Even on a 48-core server where every virtual core can be pinned to a physical CPU core, performance results in VMs aren't great, although they are usually better than the results for iptables. We have never tried to run VPP yet.
Thank you very much for your support @gshimansky @aregm
@aregm
Is there any reference for the mentioned CBN Component, or is this just a design concept for how to build NFF-GO apps and connect them to the outside world?
Many thanks
@hwinkel Here is the example: https://github.com/intel-go/nff-go/tree/gregory/ngic/examples/ngic There is a gap in how the rules are applied, but it is fully functional.
@aregm thanks, however is this the reference for the Cloud Boundary Node?
I'm evaluating nff-go and VPP (https://wiki.fd.io/view/VPP). Please tell me the advantages and disadvantages of each.