infinitydon closed this issue 2 years ago
I discovered that the SMF had QoS values hard-coded in its configuration.
What was confusing is that vpp-upf does not currently implement QoS enforcement, so I could not pinpoint where the throttling was taking place; apparently it is being done at the gNB.
I increased the QoS values in the SMF and am now getting more realistic throughput:
```
ubuntu@ip-10-0-2-51:~/UERANSIM/build$ ./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -i 1 -t 5
Connecting to host 10.0.7.167, port 5201
[  5] local 12.1.1.2 port 43895 connected to 10.0.7.167 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   231 MBytes  1.94 Gbits/sec    0   3.02 MBytes
[  5]   1.00-2.00   sec   225 MBytes  1.89 Gbits/sec  293    781 KBytes
[  5]   2.00-3.00   sec   232 MBytes  1.95 Gbits/sec    0    968 KBytes
[  5]   3.00-4.00   sec   211 MBytes  1.77 Gbits/sec    2    799 KBytes
[  5]   4.00-5.00   sec   230 MBytes  1.93 Gbits/sec    0    983 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  1.10 GBytes  1.90 Gbits/sec  295  sender
[  5]   0.00-5.00   sec  1.10 GBytes  1.89 Gbits/sec       receiver

iperf Done.
```
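For repeated runs it can be easier to parse iperf3's machine-readable output (`iperf3 -J`) than to eyeball the table. A minimal sketch, using a hard-coded sample standing in for the real JSON (the numbers mirror the run above):

```python
import json

# Sample stands in for the output of: ./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -t 5 -J
sample = '''{"end": {"sum_sent": {"bits_per_second": 1.90e9, "retransmits": 295},
                     "sum_received": {"bits_per_second": 1.89e9}}}'''

result = json.loads(sample)
sent = result["end"]["sum_sent"]["bits_per_second"] / 1e9
recv = result["end"]["sum_received"]["bits_per_second"] / 1e9
retr = result["end"]["sum_sent"]["retransmits"]
print(f"sender: {sent:.2f} Gbits/sec, receiver: {recv:.2f} Gbits/sec, retransmits: {retr}")
```

In a real test you would pipe `iperf3 -J` into the script rather than hard-coding the sample.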
Hello, could you please share the helm chart you used to deploy the UPF with DPDK?
@Salhi-K this is available as our (Travelping's) commercial offer.
Hi,
I am currently trying to run the UPF with DPDK, using Kubernetes for orchestration, but the iperf results I am getting do not seem realistic.
I have enabled both HugePages and the Guaranteed CPU QoS class, so no other process shares the CPUs with the vpp-upf pod.
My understanding is that upf-vpp is independent of the DPDK logic, but I have not been able to work out why the throughput is so low; I have played around with the MTU settings with no improvement.
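For reference, a pod spec that gets the Guaranteed QoS class with HugePages (requests equal to limits on every resource) might look like the sketch below; the pod name, image, and sizing are illustrative assumptions, not taken from the actual deployment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: vpp-upf            # name assumed for illustration
spec:
  containers:
  - name: upf
    image: vpp-upf:latest  # placeholder image
    resources:
      # requests == limits on cpu and memory => Guaranteed QoS class;
      # hugepage requests must equal limits in any case
      requests:
        cpu: "4"
        memory: "8Gi"
        hugepages-2Mi: "2Gi"
      limits:
        cpu: "4"
        memory: "8Gi"
        hugepages-2Mi: "2Gi"
    volumeMounts:
    - name: hugepages
      mountPath: /dev/hugepages
  volumes:
  - name: hugepages
    emptyDir:
      medium: HugePages
```

Integer CPU requests also let the static CPU manager policy pin the VPP workers to dedicated cores, which matters for DPDK poll-mode drivers.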