Gradiant / 5g-charts

Helm charts for 5G Technologies
Apache License 2.0

Free5GC charts - Basic ping to Internet fails #194

Open pantmal opened 1 week ago

pantmal commented 1 week ago

Hello again. I have temporarily put aside my attempts for the Open5GS Helm chart, as you may have seen in my previous issue. I will come back to the Open5GS setup, but in the meantime I want to try a basic 5G setup using the Free5GC Helm charts.

First of all, I suggest creating a tutorial section like the ones provided for Open5GS. Free5GC is growing rapidly, so it would be nice to have a detailed guide on how to set up its charts as well. Personally, I encountered a few issues while setting up the Free5GC Helm charts, but I'll get to them later. For now, I want to understand why a basic ping to the Internet fails.

I have two VMs. VM#1 runs the Free5GC core and VM#2 runs a native UERANSIM. I am able to register a UE with the Free5GC core, and the uesimtun0 interface is successfully created. But when I try to ping, e.g., 8.8.8.8, I get no response. I am attaching the relevant logs.
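
For reference, the gNB on VM#2 is pointed at VM#1's AMF NodePort. A quick way to double-check the relevant fields (a sketch; the expected values below are taken from my logs):

grep -nE 'mcc|mnc|tac|address|port' config/free5gc-gnb.yaml
# Expected, roughly:
#   mcc: '208'                   # PLMN from the UE logs below
#   mnc: '93'
#   tac: 1
#   amfConfigs:
#     - address: 192.168.5.147   # IP of VM#1
#       port: 30412              # NodePort mapped to the AMF NGAP port 38412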

First, I execute sudo build/nr-gnb -c config/free5gc-gnb.yaml on VM#2:

UERANSIM v3.2.6
[2024-10-22 16:21:37.499] [sctp] [info] Trying to establish SCTP connection... (192.168.5.147:30412)
[2024-10-22 16:21:37.502] [sctp] [info] SCTP connection established (192.168.5.147:30412)
[2024-10-22 16:21:37.502] [sctp] [debug] SCTP association setup ascId[18]
[2024-10-22 16:21:37.502] [ngap] [debug] Sending NG Setup Request
[2024-10-22 16:21:37.503] [ngap] [debug] NG Setup Response received
[2024-10-22 16:21:37.503] [ngap] [info] NG Setup procedure is successful
[2024-10-22 16:21:41.127] [rrc] [debug] UE[1] new signal detected
[2024-10-22 16:21:41.128] [rrc] [info] RRC Setup for UE[1]
[2024-10-22 16:21:41.128] [ngap] [debug] Initial NAS message received from UE[1]
[2024-10-22 16:21:41.150] [ngap] [debug] Initial Context Setup Request received
[2024-10-22 16:21:41.474] [ngap] [info] PDU session resource(s) setup for UE[1] count[1]

And the sudo build/nr-ue -c config/free5gc-ue.yaml command shows:

UERANSIM v3.2.6
[2024-10-22 16:21:41.127] [nas] [info] UE switches to state [MM-DEREGISTERED/PLMN-SEARCH]
[2024-10-22 16:21:41.127] [rrc] [debug] New signal detected for cell[1], total [1] cells in coverage
[2024-10-22 16:21:41.128] [nas] [info] Selected plmn[208/93]
[2024-10-22 16:21:41.128] [rrc] [info] Selected cell plmn[208/93] tac[1] category[SUITABLE]
[2024-10-22 16:21:41.128] [nas] [info] UE switches to state [MM-DEREGISTERED/PS]
[2024-10-22 16:21:41.128] [nas] [info] UE switches to state [MM-DEREGISTERED/NORMAL-SERVICE]
[2024-10-22 16:21:41.128] [nas] [debug] Initial registration required due to [MM-DEREG-NORMAL-SERVICE]
[2024-10-22 16:21:41.128] [nas] [debug] UAC access attempt is allowed for identity[0], category[MO_sig]
[2024-10-22 16:21:41.128] [nas] [debug] Sending Initial Registration
[2024-10-22 16:21:41.128] [nas] [info] UE switches to state [MM-REGISTER-INITIATED]
[2024-10-22 16:21:41.128] [rrc] [debug] Sending RRC Setup Request
[2024-10-22 16:21:41.128] [rrc] [info] RRC connection established
[2024-10-22 16:21:41.128] [rrc] [info] UE switches to state [RRC-CONNECTED]
[2024-10-22 16:21:41.128] [nas] [info] UE switches to state [CM-CONNECTED]
[2024-10-22 16:21:41.140] [nas] [debug] Authentication Request received
[2024-10-22 16:21:41.140] [nas] [debug] Received SQN [00000000002E]
[2024-10-22 16:21:41.140] [nas] [debug] SQN-MS [000000000000]
[2024-10-22 16:21:41.144] [nas] [debug] Security Mode Command received
[2024-10-22 16:21:41.144] [nas] [debug] Selected integrity[2] ciphering[0]
[2024-10-22 16:21:41.151] [nas] [debug] Registration accept received
[2024-10-22 16:21:41.151] [nas] [info] UE switches to state [MM-REGISTERED/NORMAL-SERVICE]
[2024-10-22 16:21:41.151] [nas] [debug] Sending Registration Complete
[2024-10-22 16:21:41.151] [nas] [info] Initial Registration is successful
[2024-10-22 16:21:41.151] [nas] [debug] Sending PDU Session Establishment Request
[2024-10-22 16:21:41.151] [nas] [debug] UAC access attempt is allowed for identity[0], category[MO_sig]
[2024-10-22 16:21:41.365] [nas] [debug] Configuration Update Command received
[2024-10-22 16:21:41.474] [nas] [debug] PDU Session Establishment Accept received
[2024-10-22 16:21:41.474] [nas] [info] PDU Session establishment is successful PSI[1]
[2024-10-22 16:21:41.480] [app] [info] Connection setup for PDU session[1] is successful, TUN interface[uesimtun0, 10.60.0.2] is up.

Going back to VM#1, the AMF logs show:

...
2024-10-22T13:21:41.364754062Z [INFO][AMF][Gmm] Handle event[Gmm Message], transition from [Registered] to [Registered]
2024-10-22T13:21:41.364767077Z [INFO][AMF][Gmm][amf_ue_ngap_id:RU:1,AU:12(3GPP)][supi:SUPI:imsi-208930000000001] Handle UL NAS Transport
2024-10-22T13:21:41.364773910Z [INFO][AMF][Gmm][amf_ue_ngap_id:RU:1,AU:12(3GPP)][supi:SUPI:imsi-208930000000001] Transport 5GSM Message to SMF
2024-10-22T13:21:41.364789950Z [INFO][AMF][Gmm][amf_ue_ngap_id:RU:1,AU:12(3GPP)][supi:SUPI:imsi-208930000000001] Select SMF [snssai: {Sst:1 Sd:ffffff}, dnn: internet]
2024-10-22T13:21:41.365905225Z [WARN][AMF][Gmm][amf_ue_ngap_id:RU:1,AU:12(3GPP)][supi:SUPI:imsi-208930000000001] nsiInformation is still nil, use default NRF[http://free5gc-nrf-sbi:8000]
2024-10-22T13:21:41.386574295Z [INFO][AMF][Gmm][amf_ue_ngap_id:RU:1,AU:12(3GPP)][supi:SUPI:imsi-208930000000001] create smContext[pduSessionID: 1] Success
2024-10-22T13:21:41.457055745Z [INFO][AMF][Producer] Handle N1N2 Message Transfer Request
2024-10-22T13:21:41.457091472Z [INFO][AMF][Ngap][amf_ue_ngap_id:RU:1,AU:12(3GPP)][ran_addr:192.168.5.147:52730] Send PDU Session Resource Setup Request
2024-10-22T13:21:41.457533180Z [INFO][AMF][GIN] | 200 |   10.10.225.105 | POST    | /namf-comm/v1/ue-contexts/imsi-208930000000001/n1-n2-messages |
2024-10-22T13:21:41.473820502Z [INFO][AMF][Ngap][ran_addr:192.168.5.147:52730] Handle PDUSessionResourceSetupResponse
2024-10-22T13:21:41.473835080Z [INFO][AMF][Ngap][amf_ue_ngap_id:RU:1,AU:12(3GPP)][ran_addr:192.168.5.147:52730] Handle PDUSessionResourceSetupResponse (RAN UE NGAP ID: 1)

And the UPF logs give us:

...
2024-10-22T12:36:50.673781980Z [INFO][UPF][Main] UPF started
2024-10-22T12:37:23.902158879Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleAssociationSetupRequest
2024-10-22T12:37:23.902205628Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805][CPNodeID:free5gc-smf-pfcp] New node
2024-10-22T12:49:32.155365031Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleSessionEstablishmentRequest
2024-10-22T12:49:32.155408504Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805][CPNodeID:free5gc-smf-pfcp][CPSEID:0x1][UPSEID:0x1] New session
2024-10-22T12:49:32.246251273Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleSessionModificationRequest
2024-10-22T13:21:41.361574344Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleSessionDeletionRequest
2024-10-22T13:21:41.361628807Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805][CPNodeID:free5gc-smf-pfcp][CPSEID:0x1][UPSEID:0x1] sess deleted
2024-10-22T13:21:41.388078037Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleSessionEstablishmentRequest
2024-10-22T13:21:41.388103114Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805][CPNodeID:free5gc-smf-pfcp][CPSEID:0x2][UPSEID:0x1] New session
2024-10-22T13:21:41.475581120Z [INFO][UPF][PFCP][LAddr:0.0.0.0:8805] handleSessionModificationRequest

Now, after all of this, I run ping 8.8.8.8 -I uesimtun0 on VM#2:

PING 8.8.8.8 (8.8.8.8) from 10.60.0.2 uesimtun0: 56(84) bytes of data.

Running a tcpdump on the uesimtun0 interface:

tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on uesimtun0, link-type RAW (Raw IP), snapshot length 262144 bytes
16:29:43.666042 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 29, length 64
16:29:44.686049 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 30, length 64
16:29:45.710030 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 31, length 64
16:29:46.734018 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 32, length 64
16:29:47.758043 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 33, length 64
16:29:48.782020 IP karmada-2 > dns.google: ICMP echo request, id 30, seq 34, length 64
...

And finally I run a tcpdump on the upfgtp interface from inside the UPF pod:

13:31:08.653200 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 112, length 64
13:31:09.677195 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 113, length 64
13:31:10.701162 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 114, length 64
13:31:11.725132 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 115, length 64
13:31:12.749121 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 116, length 64
13:31:13.773061 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 117, length 64
13:31:14.797085 IP 10.60.0.2 > dns.google: ICMP echo request, id 30, seq 118, length 64
...

So I understand that the connection between the Free5GC core and the gNB/UE seems to be correct, and the packets from uesimtun0 do reach the UPF pod. However, the UPF pod appears to be unable to reach the Internet, since the ICMP requests get no response.
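
For what it's worth, here is the kind of check I run inside the UPF pod to see where the packets stop (a sketch; the deployment name is from my release, and tools such as tcpdump and iptables must be available in the image):

# Open a shell in the UPF pod:
kubectl exec -it deploy/free5gc-upf -- sh

# Is the kernel forwarding packets between upfgtp and eth0?
sysctl net.ipv4.ip_forward          # expect: net.ipv4.ip_forward = 1

# Do the decapsulated echo requests leave the pod via eth0?
tcpdump -ni eth0 icmp

# Is the UE pool (10.60.0.0/24 in the logs above) NATed on the way out?
iptables -t nat -vnL POSTROUTING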

I would appreciate any help and guidance on what I may have missed. I believe I'm stuck somewhere on the very last step, since all the other steps appear to have worked correctly.

Best regards, pantmal

avrodriguezgrad commented 1 week ago

Hi @pantmal

With the logs and the setup you have described, it's difficult for me to pinpoint where the error might be, because I believe it is not related to the Helm chart.

I have some questions:

BR, Álvaro

pantmal commented 1 week ago

Hi @avrodriguezgrad

I also believe that the issue most likely lies in the forwarding part. The Helm chart components don't seem to have any errors.

Regarding your questions:

I am attaching the logs, in case there is anything interesting.

Assume the ping to 8.8.8.8 is running on VM#2. Also, 192-168-5-147 in the dumps below is the reverse-DNS name for 192.168.5.147, the IP of VM#1.
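
For context, the iptables rules I refer to below are NAT/forwarding rules along these lines, added inside the UPF pod (a sketch; the UE pool and egress interface are taken from the logs):

# Masquerade UE traffic (10.60.0.0/24) as it leaves the pod via eth0:
iptables -t nat -A POSTROUTING -s 10.60.0.0/24 -o eth0 -j MASQUERADE
# Allow forwarding between the GTP-U tunnel interface and eth0:
iptables -A FORWARD -i upfgtp -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o upfgtp -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT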

Running a tcpdump on the eth0 interface, from inside the UPF pod (before adding the iptables rules):

09:15:03.675335 IP 192-168-5-147.calico-typha.calico-system.svc.cluster.local.44898 > free5gc-upf-699697b8c6-rtrwg.2152: UDP, length 100

Running a tcpdump on the upfgtp interface, from inside the UPF pod (before adding the iptables rules):

09:15:47.069057 IP (tos 0x0, ttl 64, id 24241, offset 0, flags [DF], proto ICMP (1), length 84)
    10.60.0.1 > dns.google: ICMP echo request, id 39, seq 1, length 64

Running a tcpdump on the eth0 interface, from inside the UPF pod (after adding the iptables rules):

09:16:43.572433 IP (tos 0x0, ttl 63, id 19647, offset 0, flags [DF], proto UDP (17), length 128)
    192-168-5-147.calico-typha.calico-system.svc.cluster.local.57448 > free5gc-upf-699697b8c6-rtrwg.2152: UDP, length 100
09:16:43.616421 IP (tos 0x0, ttl 64, id 25179, offset 0, flags [DF], proto UDP (17), length 72)
    free5gc-upf-699697b8c6-rtrwg.49660 > kube-dns.kube-system.svc.cluster.local.domain: 64176+ PTR? 147.5.168.192.in-addr.arpa. (44)
09:16:43.616707 IP (tos 0x0, ttl 63, id 34080, offset 0, flags [DF], proto UDP (17), length 260)
    kube-dns.kube-system.svc.cluster.local.domain > free5gc-upf-699697b8c6-rtrwg.49660: 64176*- 2/0/0 147.5.168.192.in-addr.arpa. PTR 192-168-5-147.calico-typha.calico-system.svc.cluster.local., 147.5.168.192.in-addr.arpa. PTR 192-168-5-147.kubernetes.default.svc.cluster.local. (232)
09:16:43.720080 IP (tos 0x0, ttl 64, id 9371, offset 0, flags [DF], proto UDP (17), length 69)
    free5gc-upf-699697b8c6-rtrwg.49558 > kube-dns.kube-system.svc.cluster.local.domain: 38732+ PTR? 10.0.96.10.in-addr.arpa. (41)
09:16:43.720341 IP (tos 0x0, ttl 63, id 34094, offset 0, flags [DF], proto UDP (17), length 144)
    kube-dns.kube-system.svc.cluster.local.domain > free5gc-upf-699697b8c6-rtrwg.49558: 38732*- 1/0/0 10.0.96.10.in-addr.arpa. PTR kube-dns.kube-system.svc.cluster.local. (116)
09:16:48.627960 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has 169.254.1.1 tell free5gc-upf-699697b8c6-rtrwg, length 28
09:16:48.627981 ARP, Ethernet (len 6), IPv4 (len 4), Reply 169.254.1.1 is-at ee:ee:ee:ee:ee:ee (oui Unknown), length 28
09:16:48.627971 ARP, Ethernet (len 6), IPv4 (len 4), Request who-has free5gc-upf-699697b8c6-rtrwg tell 192-168-5-147.calico-typha.calico-system.svc.cluster.local, length 28
09:16:48.628033 ARP, Ethernet (len 6), IPv4 (len 4), Reply free5gc-upf-699697b8c6-rtrwg is-at 6a:be:15:69:23:4a (oui Unknown), length 28
09:16:48.728117 IP (tos 0x0, ttl 64, id 28616, offset 0, flags [DF], proto UDP (17), length 70)
    free5gc-upf-699697b8c6-rtrwg.44408 > kube-dns.kube-system.svc.cluster.local.domain: 28092+ PTR? 1.1.254.169.in-addr.arpa. (42)
09:16:48.743484 IP (tos 0x0, ttl 63, id 43918, offset 0, flags [DF], proto UDP (17), length 70)
    kube-dns.kube-system.svc.cluster.local.domain > free5gc-upf-699697b8c6-rtrwg.44408: 28092 NXDomain 0/0/0 (42)

Running a tcpdump on the upfgtp interface, from inside the UPF pod (after adding the iptables rules):

09:17:50.589407 IP (tos 0x0, ttl 64, id 39622, offset 0, flags [DF], proto ICMP (1), length 84)
    10.60.0.1 > dns.google: ICMP echo request, id 41, seq 1, length 64

Finally, here is the route -n output on the UPF pod:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         169.254.1.1     0.0.0.0         UG    0      0        0 eth0
10.60.0.0       0.0.0.0         255.255.255.0   U     0      0        0 upfgtp
169.254.1.1     0.0.0.0         255.255.255.255 UH    0      0        0 eth0

I would appreciate any assistance. I would also like to know if you happen to have a working Free5GC + UERANSIM setup, even if it's different from what I'm trying.

Best regards, pantmal

avrodriguezgrad commented 1 week ago

Hi @pantmal

I have a working setup in the same cluster and same namespace. I am attaching a screenshot.

I didn't do anything special: I just deployed the Helm chart and waited until everything was running. As for the CNI, the only difference from your setup is that we use Cilium with SCTP enabled rather than Calico.

Can you try to deploy everything in the same VM and see if it works? It could be a good test to check whether the problem is the environment.
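
For reference, deploying everything in one cluster should be as simple as something like this (a sketch with default values; chart names as published in this repo):

helm repo add gradiant https://gradiant.github.io/5g-charts
helm install free5gc gradiant/free5gc
helm install ueransim gradiant/ueransim-gnb   # gNB plus simulated UEs, in-cluster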

BR, Álvaro

pantmal commented 6 days ago

Hello @avrodriguezgrad

I am now using a different VM that has Flannel enabled. I was able to get a successful setup; however, I had to make quite a few changes to the default values for the setup to work.

Specifically, I noticed that the default mcc is '999' and the default mnc is '70' in the values.yaml files. However, when I open the Free5GC WebUI and create a subscriber, the default options are 208 and 93 (as well as different keys, OP codes, etc.). So naturally there are two options: either configure the subscriber to use the default chart values, or change the chart values to use the default subscriber options from the UI.

I took the approach of using the default subscriber options (208, 93) in all of the related files. Even then, however, I had to make one more change. The default S-NSSAI SD in the UI is 010203, but this value cannot be assigned in the related section of the SMF values.yaml file: when the pod deploys, it throws an 'invalid hex value' error. I believe this is a bug. In any case, I set ffffff as the SD value in the subscriber form.
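
For illustration, the kind of override I mean looks like this (the key paths here are hypothetical, just to show the idea; the real paths are in each chart's values.yaml):

# my-values.yaml -- align the chart with the WebUI subscriber defaults
# (key names below are hypothetical; check the chart's values.yaml)
cat > my-values.yaml <<'EOF'
global:
  mcc: "208"
  mnc: "93"
  sst: 1
  sd: "ffffff"   # 010203 is rejected by the SMF with 'invalid hex value'
EOF
helm upgrade --install free5gc gradiant/free5gc -f my-values.yaml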

Now, this setup works. I am able to ping 8.8.8.8 (and other addresses) from the UE. And I can see packets being relayed correctly through the UPF pod.

However, I still have trouble connecting UERANSIM from an external machine. I have tried passing the same values.yaml I have, using the IP of the Free5GC VM and the AMF NodePort. Again, I get a successful registration and the uesimtun0 interface does come up, but I am unable to ping the Internet. I am not even sure how to properly debug it. The tcpdump on uesimtun0 doesn't reveal anything other than the ICMP requests to 8.8.8.8, and I am not getting any helpful output from a tcpdump in the gNB pod either. Lastly, the requests don't seem to reach the UPF pod, even though the pod logs do show that a new session has been established. So while the registration is successful, something seems to be missing in the connection between the gNB and the UPF. I have tried all of this using the UERANSIM charts provided by this repo.
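
For the record, this is how I try to check whether the gNB's GTP-U traffic even reaches the core VM (a sketch, run on the Free5GC VM while pinging from the UE):

# Does anything arrive on the standard GTP-U port or on the NodePort?
sudo tcpdump -ni any udp port 2152 or udp port 32152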

Could you point me in the right direction for the setup I need? Actually, the connection with UERANSIM isn't even a hard requirement in my case. I just need to isolate the free5gc chart and test whether a UE can successfully connect to it. UERANSIM seems to be the most popular tool for this, but I am open to testing other alternatives.

Let me know if you need me to provide any values.yaml files I have edited, logs, etc.

Best regards, pantmal

avrodriguezgrad commented 6 days ago

Hi @pantmal

I think the different configurations of the values are separate from the networking problem. We use mcc 999, mnc 70, sst 1 and sd 0xffffff because these are "test" values and also the ones we use in Open5GS. If you configure all NFs with the correct values, and the same values everywhere, you will never have any problem.

On the other hand, I believe I know what your problem is, and I think it is related to the other issue you have opened. How do you deploy the UPF for "opening" the GTP-U port? Do you give the UPF a secondary IP that is reachable from the external machine?

It is a bit difficult to point you in the right direction from here, but I'll try.

pantmal commented 5 days ago

Hi @avrodriguezgrad

First, I agree that the difference in values is most likely not the problem. After all, everything works in a single-node setup now, so I guess I will have to focus on the connection between the UEs and the UPF.

In order to expose the UPF and its GTP-U port, I have to use a NodePort. With the cluster I have, I am not able to set up a LoadBalancer or some other IP for the UPF.

Here are my services and pods from the Free5GC VM:

NAME                                   READY   STATUS    RESTARTS   AGE
pod/free5gc-amf-5454c9f8d9-ps4tp       1/1     Running   0          10m
pod/free5gc-ausf-774d6bf74d-wxgrz      1/1     Running   0          10m
pod/free5gc-chf-5ffbcb4c7d-bmgzn       1/1     Running   0          10m
pod/free5gc-mongodb-5956cbcc8c-l75ng   1/1     Running   0          10m
pod/free5gc-nrf-5df786859d-6jhmp       1/1     Running   0          10m
pod/free5gc-nssf-77b79c4878-tdxf5      1/1     Running   0          10m
pod/free5gc-pcf-6bb96df7b6-bdt58       1/1     Running   0          10m
pod/free5gc-smf-c877cbdff-rnbjz        1/1     Running   0          10m
pod/free5gc-udm-56fd4f4466-67tr6       1/1     Running   0          10m
pod/free5gc-udr-5d8458c758-dsz6l       1/1     Running   0          10m
pod/free5gc-upf-65c5785bcd-zqvcd       1/1     Running   0          10m
pod/free5gc-webui-5f7f998848-lz57j     1/1     Running   0          10m

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)            AGE
service/free5gc-amf-ngap   NodePort    10.110.245.228   <none>        38412:30412/SCTP   10m
service/free5gc-amf-sbi    ClusterIP   10.110.166.63    <none>        8000/TCP           10m
service/free5gc-ausf-sbi   ClusterIP   10.110.128.176   <none>        8000/TCP           10m
service/free5gc-chf-sbi    ClusterIP   10.110.25.166    <none>        8000/TCP           10m
service/free5gc-mongodb    ClusterIP   10.110.245.130   <none>        27017/TCP          10m
service/free5gc-nrf-sbi    ClusterIP   10.110.20.167    <none>        8000/TCP           10m
service/free5gc-nssf-sbi   ClusterIP   10.110.248.110   <none>        8000/TCP           10m
service/free5gc-pcf-sbi    ClusterIP   10.110.25.93     <none>        8000/TCP           10m
service/free5gc-smf-pfcp   ClusterIP   10.110.242.162   <none>        8805/UDP           10m
service/free5gc-smf-sbi    ClusterIP   10.110.58.103    <none>        8000/TCP           10m
service/free5gc-udm-sbi    ClusterIP   10.110.103.149   <none>        8000/TCP           10m
service/free5gc-udr-sbi    ClusterIP   10.110.176.230   <none>        8000/TCP           10m
service/free5gc-upf-gtpu   NodePort    10.110.208.53    <none>        2152:32152/UDP     10m
service/free5gc-upf-pfcp   ClusterIP   10.110.189.120   <none>        8805/UDP           10m
service/free5gc-webui      NodePort    10.110.121.125   <none>        5000:30500/TCP     10m
service/kubernetes         ClusterIP   10.110.0.1       <none>        443/TCP            27h

And here are the same resources from the gNB VM:

NAME                                    READY   STATUS    RESTARTS   AGE
pod/ueransim-gnb-847d9ffd85-5r85j       1/1     Running   0          5m52s
pod/ueransim-gnb-ues-58ffd746cc-5j8pj   1/1     Running   0          5m52s

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
service/ueransim-gnb   NodePort   10.99.147.246   <none>        4997:32036/UDP,2152:32152/UDP   5m52s

Now, regarding the exposure of the UPF pod, I have only set the NodePort. For the IP, I want to use 192.168.5.225. However, I have noticed that in resources/upfcfg.yaml the following configuration exists:

gtpu:
  forwarder: gtp5g
  # The IP list of the N3/N9 interfaces on this UPF
  # If there are multiple connection, set addr to 0.0.0.0 or list all the addresses
  ifList:
    - addr: 0.0.0.0
      type: N3
      name: {{ default (printf "%s-upf-gtpu.%s.svc.cluster.local" $free5gcName $free5gcNamespace) .Values.config.gtpu.ifList.name }}
      # ifname: gtpif
      # mtu: 1400

I suppose that instead of 0.0.0.0 I have to use 192.168.5.225 (which is the IP of my VM). But I am unable to set this IP, as upfcfg assumes 2152 as the GTP-U port, while in my case I would have to use 32152 somehow. I believe my problem most likely lies in this part.

Is it possible to use a single IP in my setup and use the NodePort for the UPF communication?

Before trying out this Helm chart, I was able to set up Free5GC + UERANSIM in separate VMs, but they ran as Linux processes, i.e., outside Kubernetes. I used the following guide: https://free5gc.org/guide/5-install-ueransim/#7-testing-ueransim-against-free5gc

In this guide, a single IP is used for the Free5GC VM and for the UERANSIM VM. However, port 2152 is assumed for the GTP-U connection. Can I have a similar setup using the Helm charts, but with port 32152 instead? Or do I have to use separate IPs because of the Kubernetes setup?
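
One workaround I am considering, though I have not tested it (a sketch): redirecting the standard GTP-U port on the core VM to the NodePort, so that the gNB side can keep sending to 2152:

# On the Free5GC VM (192.168.5.225): remap incoming GTP-U traffic to the NodePort
sudo iptables -t nat -A PREROUTING -p udp -d 192.168.5.225 --dport 2152 -j REDIRECT --to-ports 32152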

I really appreciate your effort and your time. Let me know if you need me to share any values, logs etc.

Best regards, pantmal

avrodriguezgrad commented 1 day ago

Hi @pantmal

Is it possible to use a single IP in my setup and use the NodePort for the UPF communication?

This is a problem related to Free5GC and its config file. I don't know exactly whether the GTP-U port can be changed (we don't use Free5GC a lot yet).

Can I have a similar setup using the Helm charts, but with port 32152 instead? Or do I have to use separate IPs because of the Kubernetes setup?

The answer is the same as above. I don't know why you cannot use a LoadBalancer in your cluster and/or environment, but I believe that would be the easiest solution.
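
If the cluster is bare-metal, one common way to get LoadBalancer services is MetalLB. A minimal sketch (the address range is only an example from your subnet):

helm repo add metallb https://metallb.github.io/metallb
helm install metallb metallb/metallb -n metallb-system --create-namespace
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: upf-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.5.230-192.168.5.240
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: upf-l2
  namespace: metallb-system
EOF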

BR, Álvaro