aligungr / UERANSIM

Open source 5G UE and RAN (gNodeB) implementation.
GNU General Public License v3.0

TCP throughput over UERANSIM vs VM #438

kwondaejang opened this issue 2 years ago (status: Open)

kwondaejang commented 2 years ago

Hi, I am measuring TCP throughput using free5GC + UERANSIM.

UERANSIM: VM1 (Virtual machine 1)
UPF: VM2
MEC: VM3

With the above setup, I am measuring the download speed from the MEC to UERANSIM using iperf TCP, and it is only 140 Mbps.

But if I use the host interface directly, it is around 2 Gbps. Do you know what the root cause of this issue is?

aligungr commented 2 years ago

Hi @kwondaejang

I'll try to investigate it.

Could you send your iperf commands for testing?

kwondaejang commented 2 years ago

Hi, @aligungr

Thank you so much for your reply. In fact, the same issue is discussed at the link below: https://github.com/aligungr/UERANSIM/discussions/443

I used the two commands below for testing:

iperf3 -c 172.16.6.2 -i 1 -t 10
iperf3 -c 172.16.6.2 -i 1 -t 10 -R

172.16.6.2 is another VM acting as the MEC, connected to the UPF via an internal interface.

Thanks:)

infinitydon commented 2 years ago

@aligungr - Sorry to jump into this discussion, but I am having a similar issue: when testing throughput, I am getting very low results.

ubuntu@ip-10-0-2-51:~/UERANSIM/build$ ./nr-binder 12.1.1.8 iperf3 -c 10.0.7.167 -i 1 -t 20
Connecting to host 10.0.7.167, port 5201
[  5] local 12.1.1.8 port 59369 connected to 10.0.7.167 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  7.73 MBytes  64.8 Mbits/sec  498   13.2 KBytes
[  5]   1.00-2.00   sec  2.50 MBytes  21.0 Mbits/sec  253   13.2 KBytes
[  5]   2.00-3.00   sec  2.50 MBytes  21.0 Mbits/sec  366   15.8 KBytes
[  5]   3.00-4.00   sec  1.25 MBytes  10.5 Mbits/sec  252   3.95 KBytes
[  5]   4.00-5.00   sec  2.50 MBytes  21.0 Mbits/sec  263   17.1 KBytes
[  5]   5.00-6.00   sec  2.50 MBytes  21.0 Mbits/sec  255   31.6 KBytes
[  5]   6.00-7.00   sec  2.50 MBytes  21.0 Mbits/sec  259   17.1 KBytes
[  5]   7.00-8.00   sec  2.50 MBytes  21.0 Mbits/sec  283   15.8 KBytes
[  5]   8.00-9.00   sec  2.50 MBytes  21.0 Mbits/sec  218   18.4 KBytes
[  5]   9.00-10.00  sec  1.25 MBytes  10.5 Mbits/sec  229   18.4 KBytes
[  5]  10.00-11.00  sec  2.50 MBytes  21.0 Mbits/sec  279   11.8 KBytes
[  5]  11.00-12.00  sec  2.50 MBytes  21.0 Mbits/sec  234   9.21 KBytes
[  5]  12.00-13.00  sec  2.50 MBytes  21.0 Mbits/sec  298   9.21 KBytes
[  5]  13.00-14.00  sec  2.50 MBytes  21.0 Mbits/sec  244   23.7 KBytes
[  5]  14.00-15.00  sec  1.25 MBytes  10.5 Mbits/sec  262   23.7 KBytes
[  5]  15.00-16.00  sec  3.75 MBytes  31.5 Mbits/sec  295   17.1 KBytes
[  5]  16.00-17.00  sec  1.25 MBytes  10.5 Mbits/sec  198   25.0 KBytes
[  5]  17.00-18.00  sec  2.50 MBytes  21.0 Mbits/sec  309   31.6 KBytes
[  5]  18.00-19.00  sec  2.50 MBytes  21.0 Mbits/sec  265   2.63 KBytes
[  5]  19.00-20.00  sec  2.50 MBytes  21.0 Mbits/sec  225   14.5 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-20.00  sec  51.5 MBytes  21.6 Mbits/sec  5485             sender
[  5]   0.00-20.00  sec  47.9 MBytes  20.1 Mbits/sec                  receiver

iperf Done.

I am using the OpenAirInterface VPP-UPF. I thought maybe it was the CPU, but I have changed the UERANSIM EC2 instance to 16 CPU cores and 32 GB of RAM and the speed remains low.

What do you advise I check next? The VPP-UPF is also running with dedicated CPUs and RAM (it is running with DPDK as well, which gives very low latency and high throughput).

infinitydon commented 2 years ago

@aligungr - I think I found where the issue is: the OAI SMF has some hard-coded QoS values in the Docker container:

(screenshot: hard-coded QoS values in the OAI SMF configuration; image not preserved)

So I changed the values to 2000 Mbps, and now I am able to get something close to that:

ubuntu@ip-10-0-2-51:~/UERANSIM/build$ ./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -i 1 -t 5
Connecting to host 10.0.7.167, port 5201
[  5] local 12.1.1.2 port 43895 connected to 10.0.7.167 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   231 MBytes  1.94 Gbits/sec    0   3.02 MBytes
[  5]   1.00-2.00   sec   225 MBytes  1.89 Gbits/sec  293    781 KBytes
[  5]   2.00-3.00   sec   232 MBytes  1.95 Gbits/sec    0    968 KBytes
[  5]   3.00-4.00   sec   211 MBytes  1.77 Gbits/sec    2    799 KBytes
[  5]   4.00-5.00   sec   230 MBytes  1.93 Gbits/sec    0    983 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-5.00   sec  1.10 GBytes  1.90 Gbits/sec  295             sender
[  5]   0.00-5.00   sec  1.10 GBytes  1.89 Gbits/sec                  receiver

iperf Done.

I would like to ask: does the UERANSIM gNB perform QoS handling if it receives QoS parameters from the AMF?

kwondaejang commented 2 years ago

> @aligungr - I think I found where the issue is, the OAI SMF has some hard-coded QoS values in the Docker container: [...] So I changed the values to 2000Mbps, now am able to get something close to that: [iperf output quoted above]
>
> I will like to ask if ueransim gNB performs QoS handling if it received it from the AMF?

Hi, congratulations.

I am still struggling with this low throughput issue, so I would like to ask you a few questions:

  1. I am not using Docker; do you think the issue could be related to Docker?
  2. Are you using a virtual machine or a physical machine?
  3. Your throughput is great. Did you verify that the TCP traffic actually goes through the TUN interface?

Thank you

infinitydon commented 2 years ago

@kwondaejang - Docker is not the issue. I am actually running the VPP-UPF in Kubernetes and using DPDK for user-plane acceleration; DPDK speeds up packet switching inside the UPF, which is likely why I was able to get high throughput.

I took a trace on the UPF and can confirm that the iperf traffic is coming through the TUN interface:

Packet 39

00:07:29:825782: dpdk-input
  VirtualFunctionEthernet0/8/0 rx queue 0
  buffer 0xfec4cf: current data 0, length 67, buffer-pool 0, ref-count 1, totlen-nifb 0, trace handle 0x26
                   ext-hdr-valid
                   l4-cksum-computed l4-cksum-correct
  PKT MBUF: port 2, nb_segs 1, pkt_len 67
    buf_len 2176, data_len 67, ol_flags 0x180, data_off 128, phys_addr 0xbfb13440
    packet_type 0x110 l2_len 0 l3_len 0 outer_l2_len 0 outer_l3_len 0
    rss 0xaafaaafa fdir.hi 0x0 fdir.lo 0xaafaaafa
    Packet Offload Flags
      PKT_RX_IP_CKSUM_GOOD (0x0080) IP cksum of RX pkt. is valid
      PKT_RX_L4_CKSUM_GOOD (0x0100) L4 cksum of RX pkt. is valid
    Packet Types
      RTE_PTYPE_L3_IPV4 (0x0010) IPv4 packet without extension headers
      RTE_PTYPE_L4_TCP (0x0100) TCP packet
  IP4: 02:5b:de:c1:7c:ac -> 02:20:55:04:c1:76
  TCP: 10.0.7.167 -> 12.1.1.2
    tos 0x00, ttl 64, length 53, checksum 0xb4ec dscp CS0 ecn NON_ECN
    fragment id 0x672d, flags DONT_FRAGMENT
  TCP: 5201 -> 51017
    seq. 0x4d836e7f ack 0x05af016c
    flags 0x18 PSH ACK, tcp header: 32 bytes
    window 490, checksum 0x38ee
00:07:29:825783: ethernet-input
  frame: flags 0x3, hw-if-index 3, sw-if-index 3
  IP4: 02:5b:de:c1:7c:ac -> 02:20:55:04:c1:76
00:07:29:825783: ip4-input-no-checksum
  TCP: 10.0.7.167 -> 12.1.1.2
    tos 0x00, ttl 64, length 53, checksum 0xb4ec dscp CS0 ecn NON_ECN
    fragment id 0x672d, flags DONT_FRAGMENT
  TCP: 5201 -> 51017
    seq. 0x4d836e7f ack 0x05af016c
    flags 0x18 PSH ACK, tcp header: 32 bytes
    window 490, checksum 0x38ee
00:07:29:825783: ip4-lookup
  fib 2 dpo-idx 0 flow hash: 0x00000000
  TCP: 10.0.7.167 -> 12.1.1.2
    tos 0x00, ttl 64, length 53, checksum 0xb4ec dscp CS0 ecn NON_ECN
    fragment id 0x672d, flags DONT_FRAGMENT
  TCP: 5201 -> 51017
    seq. 0x4d836e7f ack 0x05af016c
    flags 0x18 PSH ACK, tcp header: 32 bytes
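
On the UE side, one way to double-check the same thing is to capture on the tunnel interface itself (a sketch assuming the default UERANSIM interface name uesimtun0 and the default iperf3 port 5201):

```shell
# Capture a few iperf3 packets on the UE's tunnel interface to confirm
# the test traffic actually traverses the TUN device:
sudo tcpdump -i uesimtun0 -n tcp port 5201 -c 5
```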

I am using AWS EC2 VMs.

I suggest you check whether the 5G core (UPF) and UERANSIM have enough system resources (CPU/RAM).

You can also run iperf without going through the UERANSIM TUN interface and see whether you get good results that way.
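
As a sketch of that comparison, using the addresses from the logs above (adapt them to your setup), run the same iperf3 test once through nr-binder and once directly over the host network:

```shell
# Through the UERANSIM TUN interface (UE data path, via gNB and UPF):
./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -i 1 -t 10

# Directly over the host network, bypassing the TUN interface:
iperf3 -c 10.0.7.167 -i 1 -t 10
```

If the second run is fast and the first is slow, the bottleneck is somewhere on the UE/gNB/UPF path rather than in the underlying network.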

kwondaejang commented 2 years ago

Hi, @infinitydon

I appreciate your advice, but I don't fully understand. Could you explain once more which other interface I can use instead of the TUN interface?

Thank you very much.

noormohammedli commented 2 years ago

Hello @infinitydon @aligungr,

I am using Free5Gc and UERANSIM.

When I run the UE, it connects to the RAN and then to the 5G core, and a virtual tunnel is created with an IP address. My question is: how can I find out the link capacity of the virtual TUN interface (uesimtun0, 60.61.0.1) between the UE and the UPF?
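
For example, would binding iperf3 to the tunnel address and testing against a server beyond the UPF measure this capacity? (A sketch; SERVER_IP is a placeholder for an iperf3 server on the data-network side.)

```shell
# Bind the iperf3 client to the UE's TUN address so traffic goes via the tunnel.
# SERVER_IP is a placeholder for an iperf3 server reachable through the UPF.
iperf3 -c SERVER_IP -B 60.61.0.1 -i 1 -t 10
```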

Thanks Noor

linhanphan commented 1 year ago

Hi, @infinitydon

I am sorry for reviving an old issue, but could you share the hardware specifications of your UERANSIM server? I understand that you are using DPDK in the 5G core to get high throughput, but I wonder how much CPU/RAM we need for UERANSIM to get that performance (in your test). Thank you

infinitydon commented 1 year ago

@linhanphan - I was testing on AWS. It's been a while since I did the test, but I think I used a c5.4xlarge instance for UERANSIM, which has 16 vCPUs and 32 GB of memory.

linhanphan commented 1 year ago

Hi, @infinitydon Thank you so much for the information.

ghost commented 1 year ago

Hi team, hope you're doing well!

Can you please tell me how to start throughput testing, and which commands to use? Mentions: @infinitydon @aligungr @linhanphan @noormohammedli @kwondaejang

Thanks in advance!
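
For reference, the basic pattern used earlier in this thread is to run an iperf3 server on the data-network side and bind the client to the UE's tunnel via nr-binder (the addresses below come from the examples above and must be adapted to your deployment):

```shell
# On the server side (MEC / data-network host):
iperf3 -s

# On the UE host, route the client through the UERANSIM tunnel.
# 12.1.1.2 is the UE's PDU session address (uesimtun0), 10.0.7.167 the server:
./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -i 1 -t 10

# Add -R to measure the downlink (server-to-UE) direction:
./nr-binder 12.1.1.2 iperf3 -c 10.0.7.167 -i 1 -t 10 -R
```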