cncf / cnf-testbed

ARCHIVED: 🧪🛏️ Cloud-native Network Function (CNF) Testbed. See the LFN Cloud Native Telecom Initiative: https://wiki.lfnetworking.org/pages/viewpage.action?pageId=113213592
Apache License 2.0

How to control the behavior of enb? #370

Open JC1O24 opened 3 years ago

JC1O24 commented 3 years ago

For example, I do not want enb to start when the container starts. What can I do? I tried removing this line: https://github.com/cncf/cnf-testbed/blob/master/examples/use_case/gogtp-k8s/k8s_bridged_networks/gogtp/templates/deployment.yaml#L19, but no luck: the container in the enb pod starts and then fails. Any ideas?

michaelspedersen commented 3 years ago

Hi @JC1O24. Instead of removing the line, try replacing it with tail -f /dev/null or another "simple" task that will keep the container occupied.
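
A minimal sketch of that idea (the container spec below is illustrative and not copied from the actual deployment.yaml; the image name is made up):

containers:
  - name: enb
    image: gogtp-enb               # illustrative image name
    command: ["/bin/sh", "-c"]
    # keep the container alive without starting enb; start it manually later
    args: ["tail -f /dev/null"]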

DAIGOR1024 commented 3 years ago

> Hi @JC1O24. Instead of removing the line, try replacing it with tail -f /dev/null or another "simple" task that will keep the container occupied.

Tried. It works. Thank you so much!

JC1O24 commented 3 years ago

Hi, @michaelspedersen. I manually start the enb process with /usr/local/bin/enb -config /root/enb.yml, then kill it by PID with kill -9 {pid_of enb}. After around three start/stop cycles, an error occurs:

[eNB] 2020/12/01 09:46:42 Established S1-MME connection with 172.21.1.12:36412
[eNB] 2020/12/01 09:46:42 FATAL: failed to add device: gtp-enb: file exists

Then I cannot start the enb process again. Any ideas? Thanks!

The process list after starting enb:

PID   USER     TIME  COMMAND
    1 root      0:00 {enb_setup.sh} /bin/sh ./root/enb_setup.sh
    7 root      0:00 tail -f /dev/null
   14 root      0:00 /bin/sh
   33 root      0:00 /usr/local/bin/enb -config /root/enb.yml
   43 root      0:00 ps

The process list after stopping:

PID   USER     TIME  COMMAND
    1 root      0:00 {enb_setup.sh} /bin/sh ./root/enb_setup.sh
    7 root      0:00 tail -f /dev/null
   14 root      0:00 /bin/sh
   75 root      0:00 ps -A

michaelspedersen commented 3 years ago

@JC1O24 If you are running in a container, I would try redeploying it. It looks like the issue is eNB trying to set up a GTP endpoint and failing to do so. Alternatively, you will need to mess around with netlink, which can be a bit difficult directly through the terminal.
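
One way to clean up the leftover device from the terminal (a sketch using standard iproute2 commands, which talk to the kernel over netlink; it assumes the device name matches the one in the FATAL log):

# check whether the stale GTP device from the previous enb run still exists
ip link show gtp-enb
# if it does, delete it so enb can recreate it on the next start
ip link del gtp-enb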

DAIGOR1024 commented 3 years ago

> @JC1O24 If you are running in a container, I would try redeploying it. It looks like the issue is eNB trying to set up a GTP endpoint and failing to do so. Alternatively, you will need to mess around with netlink, which can be a bit difficult directly through the terminal.

It is running in a k8s pod.

michaelspedersen commented 3 years ago

Yeah, then try redeploying the pod. If that doesn't help and you don't want to mess around with netlink, you can try rebooting the node.

DAIGOR1024 commented 3 years ago

> Yeah, then try redeploying the pod. If that doesn't help and you don't want to mess around with netlink, you can try rebooting the node.

Tried, but this is not what I want. This may need further study.

michaelspedersen commented 3 years ago

> > Yeah, then try redeploying the pod. If that doesn't help and you don't want to mess around with netlink, you can try rebooting the node.
>
> Tried, but this is not what I want. This may need further study.

Otherwise, going forward (while not being entirely sure it is possible), it might be worth considering a different signal than SIGKILL, so the app has a chance to terminate gracefully :)
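
For example (a sketch; it assumes enb cleans up its gtp-enb device on SIGTERM, and that pidof is available in the container):

# send SIGTERM instead of SIGKILL so the process can clean up before exiting
pid=$(pidof enb)
kill -TERM "$pid"
# give it a moment, then check whether it has exited
sleep 2
kill -0 "$pid" 2>/dev/null && echo "enb still running" || echo "enb stopped"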

JC1O24 commented 3 years ago

Hi @michaelspedersen, when running the pgw, I found this error:

[P-GW] 2020/12/04 03:53:32 Started serving S5-C on 172.25.1.14:2123
[P-GW] 2020/12/04 03:53:32 FATAL: failed to add device: gtp-pgw: operation not supported
[P-GW] 2020/12/04 03:53:32 error reading from Conn 172.25.1.14:2123: read udp 172.25.1.14:2123: use of closed network connection

Also for sgw:

[S-GW] 2020/12/04 03:56:24 Started serving S11 on 172.22.0.13:2123
[S-GW] 2020/12/04 03:56:24 Started serving S5-C on 172.25.1.13:2123
[S-GW] 2020/12/04 03:56:24 FATAL: failed to add device: gtp-sgw-s1: operation not supported
[S-GW] 2020/12/04 03:56:24 error reading from Conn 172.22.0.13:2123: read udp 172.22.0.13:2123: use of closed network connection
[S-GW] 2020/12/04 03:56:24 error reading from Conn 172.25.1.13:2123: read udp 172.25.1.13:2123: use of closed network connection

Also for enb:

[eNB] 2020/12/04 07:03:39 Established S1-MME connection with 172.21.1.12:36412
[eNB] 2020/12/04 07:03:39 FATAL: failed to add device: gtp-enb: operation not supported

Any ideas?

michaelspedersen commented 3 years ago

@JC1O24 Are the bridge networks being created and available inside the containers? It's not an error I recall ever seeing, and I'm wondering if it is due to the interfaces not being added. Regardless, I'll set up a cluster and try it out myself.

wmnsk commented 3 years ago

FYI, I've seen it when the kernel GTP-U module is not loaded. So checking the kernel version (4.12+) and the list of loaded modules with lsmod to see if gtp and udp_tunnel are there might help.
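
For example, on the node:

# the GTP-U driver requires a reasonably recent kernel (4.12+)
uname -r
# check whether the gtp and udp_tunnel modules are loaded
lsmod | grep -E 'gtp|udp_tunnel'
# load the gtp module if it is missing (udp_tunnel is pulled in as a dependency)
sudo modprobe gtp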

DAIGOR1024 commented 3 years ago

> FYI, I've seen it when the kernel GTP-U module is not loaded. So checking the kernel version (4.12+) and the list of loaded modules with lsmod to see if gtp and udp_tunnel are there might help.

Does it only work with kernel version 4.12+?

electrocucaracha commented 3 years ago

> Hi @michaelspedersen, when running the pgw, I found this error:
>
> [P-GW] 2020/12/04 03:53:32 Started serving S5-C on 172.25.1.14:2123
> [P-GW] 2020/12/04 03:53:32 FATAL: failed to add device: gtp-pgw: operation not supported
> [P-GW] 2020/12/04 03:53:32 error reading from Conn 172.25.1.14:2123: read udp 172.25.1.14:2123: use of closed network connection
>
> Also for sgw:
>
> [S-GW] 2020/12/04 03:56:24 Started serving S11 on 172.22.0.13:2123
> [S-GW] 2020/12/04 03:56:24 Started serving S5-C on 172.25.1.13:2123
> [S-GW] 2020/12/04 03:56:24 FATAL: failed to add device: gtp-sgw-s1: operation not supported
> [S-GW] 2020/12/04 03:56:24 error reading from Conn 172.22.0.13:2123: read udp 172.22.0.13:2123: use of closed network connection
> [S-GW] 2020/12/04 03:56:24 error reading from Conn 172.25.1.13:2123: read udp 172.25.1.13:2123: use of closed network connection
>
> Also for enb:
>
> [eNB] 2020/12/04 07:03:39 Established S1-MME connection with 172.21.1.12:36412
> [eNB] 2020/12/04 07:03:39 FATAL: failed to add device: gtp-enb: operation not supported
>
> Any ideas?

@JoeLeeWu In my case, I have a script that waits until the other services are available. Using this approach, I can start the pods in any order.
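
A minimal sketch of such a wait script (illustrative: the MME address matches the logs above, and the check below only tests ICMP reachability, not the actual S1-MME service):

#!/bin/sh
# block until the MME address is reachable, then start enb
until ping -c 1 -W 1 172.21.1.12 >/dev/null 2>&1; do
  echo "waiting for MME at 172.21.1.12 ..."
  sleep 2
done
exec /usr/local/bin/enb -config /root/enb.yml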

wmnsk commented 3 years ago

@JoeLeeWu

> Does it only work with kernel version 4.12+?

Yes, because it depends on the Linux kernel GTP-U driver, which was introduced around that version.

JC1O24 commented 3 years ago

> > Hi @michaelspedersen, when running the pgw, I found this error:
> >
> > [P-GW] 2020/12/04 03:53:32 Started serving S5-C on 172.25.1.14:2123
> > [P-GW] 2020/12/04 03:53:32 FATAL: failed to add device: gtp-pgw: operation not supported
> > [P-GW] 2020/12/04 03:53:32 error reading from Conn 172.25.1.14:2123: read udp 172.25.1.14:2123: use of closed network connection
> >
> > Also for sgw:
> >
> > [S-GW] 2020/12/04 03:56:24 Started serving S11 on 172.22.0.13:2123
> > [S-GW] 2020/12/04 03:56:24 Started serving S5-C on 172.25.1.13:2123
> > [S-GW] 2020/12/04 03:56:24 FATAL: failed to add device: gtp-sgw-s1: operation not supported
> > [S-GW] 2020/12/04 03:56:24 error reading from Conn 172.22.0.13:2123: read udp 172.22.0.13:2123: use of closed network connection
> > [S-GW] 2020/12/04 03:56:24 error reading from Conn 172.25.1.13:2123: read udp 172.25.1.13:2123: use of closed network connection
> >
> > Also for enb:
> >
> > [eNB] 2020/12/04 07:03:39 Established S1-MME connection with 172.21.1.12:36412
> > [eNB] 2020/12/04 07:03:39 FATAL: failed to add device: gtp-enb: operation not supported
> >
> > Any ideas?
>
> @JoeLeeWu In my case, I have a script that waits until the other services are available. Using this approach, I can start the pods in any order.

After testing with a 4.12+ Linux kernel, the issue is gone. Thanks. But I am still looking for a graceful way to start and stop enb: https://github.com/cncf/cnf-testbed/issues/370#issuecomment-739446345

JC1O24 commented 3 years ago

> @JoeLeeWu
>
> > Does it only work with kernel version 4.12+?
>
> Yes, because it depends on the Linux kernel GTP-U driver, which was introduced around that version.

Thanks. Any ideas for gracefully starting and stopping enb, as I mentioned here: https://github.com/cncf/cnf-testbed/issues/370#issuecomment-736350730? This also happens in a Docker environment when I kill enb in the enb container.

michaelspedersen commented 3 years ago

Thanks for the pointer @wmnsk, greatly appreciated 👍

My local testbed is running kernel 5.4, and no issues seen there. Good that this also solved it for @JC1O24.

JC1O24 commented 3 years ago

@michaelspedersen Hi, I tried combining enb, mme, and sgw into one container called ems within one pod. I attached the networks that enb, mme, and sgw need, and added the IP addresses they require. When I start enb, the tunnels for the subscribers are created successfully, but no user-plane traffic can be sent from enb to ext. The error looks like this:

[eNB] 2020/12/17 08:00:42 WARN: failed to GET http://10.0.1.201/: Get "http://10.0.1.201/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

Any ideas?

Details:

[eNB] 2020/12/17 08:03:19 Established S1-MME connection with 172.21.1.12:36412
[eNB] 2020/12/17 08:03:19 Started serving S1-U on 172.21.0.11:2152
[eNB] 2020/12/17 08:03:19 Started serving Prometheus on 172.17.0.11:58080
[eNB] 2020/12/17 08:03:19 listen tcp 172.17.0.11:58080: bind: cannot assign requested address
[MME] 2020/12/17 08:03:19 Sent Create Session Request for 001010000000001
[S-GW] 2020/12/17 08:03:19 Received Create Session Request from 172.22.0.12:2123
[S-GW] 2020/12/17 08:03:19 Sent Create Session Request to 172.25.1.14:2123 for 001010000000001
[S-GW] 2020/12/17 08:03:19 Received Create Session Response from 172.25.1.14:2123
[S-GW] 2020/12/17 08:03:19 Session created with MME and P-GW for Subscriber: 001010000000001;
        S11 MME:  172.22.0.12:2123, TEID->: 0xe6e569f8, TEID<-: 0x3fea1284
        S5C P-GW: 172.25.1.14:2123, TEID->: 0xe9725cd0, TEID<-: 0x6a305447
[MME] 2020/12/17 08:03:19 Received Create Session Response from 172.22.0.13:2123
[MME] 2020/12/17 08:03:19 Session created with S-GW for Subscriber: 001010000000001;
        S11 S-GW: 172.22.0.13:2123, TEID->: 0x3fea1284, TEID<-: 0xe6e569f8
[MME] 2020/12/17 08:03:19 Sent Modify Bearer Request for 001010000000001
[S-GW] 2020/12/17 08:03:19 Received Modify Bearer Request from 172.22.0.12:2123
[S-GW] 2020/12/17 08:03:19 Started listening on U-Plane for Subscriber: 001010000000001;
        S1-U: 172.21.0.13:2152
        S5-U: 172.25.0.13:2152
[MME] 2020/12/17 08:03:19 Received Modify Bearer Response from 172.22.0.13:2123
[MME] 2020/12/17 08:03:19 Bearer modified with S-GW for Subscriber: 001010000000001
[eNB] 2020/12/17 08:03:19 Successfully established tunnel for 001010000000001
[MME] 2020/12/17 08:03:19 Sent Create Session Request for 001010000000002
[S-GW] 2020/12/17 08:03:19 Received Create Session Request from 172.22.0.12:2123
[S-GW] 2020/12/17 08:03:19 Sent Create Session Request to 172.25.1.14:2123 for 001010000000002
[S-GW] 2020/12/17 08:03:19 Received Create Session Response from 172.25.1.14:2123
[S-GW] 2020/12/17 08:03:19 Session created with MME and P-GW for Subscriber: 001010000000002;
        S11 MME:  172.22.0.12:2123, TEID->: 0x9af57dbd, TEID<-: 0xbb565cb2
        S5C P-GW: 172.25.1.14:2123, TEID->: 0x71665c30, TEID<-: 0xe6cda782
[MME] 2020/12/17 08:03:19 Received Create Session Response from 172.22.0.13:2123
[MME] 2020/12/17 08:03:19 Session created with S-GW for Subscriber: 001010000000002;
        S11 S-GW: 172.22.0.13:2123, TEID->: 0xbb565cb2, TEID<-: 0x9af57dbd
[MME] 2020/12/17 08:03:19 Sent Modify Bearer Request for 001010000000002
[S-GW] 2020/12/17 08:03:19 Received Modify Bearer Request from 172.22.0.12:2123
[S-GW] 2020/12/17 08:03:19 Started listening on U-Plane for Subscriber: 001010000000002;
        S1-U: 172.21.0.13:2152
        S5-U: 172.25.0.13:2152
[MME] 2020/12/17 08:03:19 Received Modify Bearer Response from 172.22.0.13:2123
[MME] 2020/12/17 08:03:19 Bearer modified with S-GW for Subscriber: 001010000000002
[eNB] 2020/12/17 08:03:19 Successfully established tunnel for 001010000000002
[eNB] 2020/12/17 08:03:27 WARN: failed to GET http://10.0.1.201/: Get "http://10.0.1.201/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[eNB] 2020/12/17 08:03:35 WARN: failed to GET http://10.0.1.201/: Get "http://10.0.1.201/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[eNB] 2020/12/17 08:03:43 WARN: failed to GET http://10.0.1.201/: Get "http://10.0.1.201/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)

ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
6: net1@if326: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether d6:d6:cc:b4:6a:e9 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.22.0.13/24 brd 172.22.0.255 scope global net1
       valid_lft forever preferred_lft forever
8: net2@if327: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 46:c3:5a:78:8b:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.13/24 brd 172.21.0.255 scope global net2
       valid_lft forever preferred_lft forever
10: net3@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether a6:88:0a:73:67:e5 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.25.0.13/24 brd 172.25.0.255 scope global net3
       valid_lft forever preferred_lft forever
12: net4@if328: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 8e:0a:ba:65:f7:3e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.25.1.13/24 brd 172.25.1.255 scope global net4
       valid_lft forever preferred_lft forever
14: net5@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 1a:69:4c:ff:88:a4 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.22.0.12/24 brd 172.22.0.255 scope global net5
       valid_lft forever preferred_lft forever
16: net6@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 66:fa:4e:13:91:40 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.1.12/24 brd 172.21.1.255 scope global net6
       valid_lft forever preferred_lft forever
18: net7@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 52:82:52:e7:e2:6c brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.0.254/24 brd 10.0.0.255 scope global net7
       valid_lft forever preferred_lft forever
20: net8@if329: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 5a:c3:19:43:22:c0 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.0.11/24 brd 172.21.0.255 scope global net8
       valid_lft forever preferred_lft forever
22: net9@if330: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 96:e9:47:f3:e9:ed brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.21.1.11/24 brd 172.21.1.255 scope global net9
       valid_lft forever preferred_lft forever
43: gtp-sgw-s1: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/none
44: gtp-sgw-s5: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/none
52: gtp-enb: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default
    link/none

ip r

10.0.0.0/24 dev net7 proto kernel scope link src 10.0.0.254
172.21.0.0/24 dev net2 proto kernel scope link src 172.21.0.13
172.21.0.0/24 dev net8 proto kernel scope link src 172.21.0.11
172.21.1.0/24 dev net6 proto kernel scope link src 172.21.1.12
172.21.1.0/24 dev net9 proto kernel scope link src 172.21.1.11
172.22.0.0/24 dev net1 proto kernel scope link src 172.22.0.13
172.22.0.0/24 dev net5 proto kernel scope link src 172.22.0.12
172.25.0.0/24 dev net3 proto kernel scope link src 172.25.0.13
172.25.1.0/24 dev net4 proto kernel scope link src 172.25.1.13

tcpdump -i gtp-enb

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gtp-enb, link-type RAW (Raw IP), capture size 262144 bytes
08:13:52.060816 IP 10.0.0.202.52855 > 10.0.1.201.80: Flags [S], seq 307211265, win 64240, options [mss 1460,sackOK,TS val 3140634887 ecr 0,nop,wscale 7], length 0
08:13:53.096738 IP 10.0.0.202.52855 > 10.0.1.201.80: Flags [S], seq 307211265, win 64240, options [mss 1460,sackOK,TS val 3140635923 ecr 0,nop,wscale 7], length 0
08:13:55.144728 IP 10.0.0.202.52855 > 10.0.1.201.80: Flags [S], seq 307211265, win 64240, options [mss 1460,sackOK,TS val 3140637971 ecr 0,nop,wscale 7], length 0
08:13:59.176763 IP 10.0.0.202.52855 > 10.0.1.201.80: Flags [S], seq 307211265, win 64240, options [mss 1460,sackOK,TS val 3140642003 ecr 0,nop,wscale 7], length 0
08:14:00.061196 IP 10.0.0.202.57901 > 10.0.1.201.80: Flags [S], seq 1328629072, win 65459, options [mss 65459,sackOK,TS val 3140642887 ecr 0,nop,wscale 7], length 0
08:14:01.096664 IP 10.0.0.202.57901 > 10.0.1.201.80: Flags [S], seq 1328629072, win 65459, options [mss 65459,sackOK,TS val 3140643922 ecr 0,nop,wscale 7], length 0

tcpdump -i gtp-sgw-s1

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on gtp-sgw-s1, link-type RAW (Raw IP), capture size 262144 bytes
08:15:35.176721 IP 10.0.0.202.38629 > 10.0.1.201.80: Flags [S], seq 372501654, win 65459, options [mss 65459,sackOK,TS val 3140738002 ecr 0,nop,wscale 7], length 0
08:15:35.432782 IP 10.0.0.202.56629 > 10.0.1.201.80: Flags [S], seq 1744372131, win 65459, options [mss 65459,sackOK,TS val 3140738259 ecr 0,nop,wscale 7], length 0
08:15:35.944777 IP 10.0.0.202.43077 > 10.0.1.201.80: Flags [S], seq 2112806489, win 65459, options [mss 65459,sackOK,TS val 3140738771 ecr 0,nop,wscale 7], length 0
08:15:35.944790 IP 10.0.0.202.44653 > 10.0.1.201.80: Flags [S], seq 3185598703, win 65459, options [mss 65459,sackOK,TS val 3140738771 ecr 0,nop,wscale 7], length 0
08:15:36.067571 IP 10.0.0.202.55845 > 10.0.1.201.80: Flags [S], seq 3241891658, win 65459, options [mss 65459,sackOK,TS val 3140738893 ecr 0,nop,wscale 7], length 0
08:15:37.096691 IP 10.0.0.202.55845 > 10.0.1.201.80: Flags [S], seq 3241891658, win 65459, options [mss 65459,sackOK,TS val 3140739922 ecr 0,nop,wscale 7], length 0

No packets are sent via gtp-sgw-s5. It seems to be trying to communicate with 10.0.1.201, but no luck.

tcpdump -i net2 in pgw (where net2's IP is 172.25.1.14), to check whether the tunnel is established:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on net2, link-type EN10MB (Ethernet), capture size 262144 bytes
08:19:28.458322 IP 172.25.1.13.2123 > 172.25.1.14.2123: UDP, length 246
08:19:28.458801 IP 172.25.1.14.2123 > 172.25.1.13.2123: UDP, length 92
08:19:28.473046 IP 172.25.1.13.2123 > 172.25.1.14.2123: UDP, length 246
08:19:28.473389 IP 172.25.1.14.2123 > 172.25.1.13.2123: UDP, length 92

tcpdump -i net4 in ems (where net4's IP is 172.25.1.13, used by sgw), to check whether the tunnel is established:

tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on net4, link-type EN10MB (Ethernet), capture size 262144 bytes
08:24:13.259310 IP 172.25.1.13.2123 > 172.25.1.14.2123: UDP, length 246
08:24:13.260111 IP 172.25.1.14.2123 > 172.25.1.13.2123: UDP, length 92
08:24:13.369249 IP 172.25.1.13.2123 > 172.25.1.14.2123: UDP, length 246
08:24:13.369691 IP 172.25.1.14.2123 > 172.25.1.13.2123: UDP, length 92
08:24:18.696698 ARP, Request who-has 172.25.1.13 tell 172.25.1.14, length 28
08:24:18.696727 ARP, Reply 172.25.1.13 is-at 8e:0a:ba:65:f7:3e (oui Unknown), length 28
08:24:18.702698 ARP, Request who-has 172.25.1.14 tell 172.25.1.13, length 28
08:24:18.702807 ARP, Reply 172.25.1.14 is-at f2:6d:e3:fc:9a:72 (oui Unknown), length 28

michaelspedersen commented 3 years ago

@JC1O24 From looking over this, it seems that the communication goes all the way to the pgw. Do you have any logs available for that, and have you made sure that the EXT HTTP server (which has the 10.0.1.201 endpoint) is running?
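
One quick way to verify that endpoint (illustrative; assumes curl is available wherever the check is run from):

# check that the EXT HTTP server answers at the expected address
curl -m 3 -v http://10.0.1.201/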

DAIGOR1024 commented 3 years ago

> @JC1O24 From looking over this, it seems that the communication goes all the way to the pgw. Do you have any logs available for that, and have you made sure that the EXT HTTP server (which has the 10.0.1.201 endpoint) is running?

EXT is running. From the tcpdump of gtp-enb above, it tries to connect to EXT. If I split enb, mme, and sgw into 3 pods, they work well, and EXT can reply to eNB. I want to use network namespaces to isolate their communication, to simulate the situation of having 3 pods, but I do not know the best way to do that.

DAIGOR1024 commented 3 years ago

I tried combining mme and sgw into one pod and leaving enb in another pod; then they work well. But this is not what I want.

michaelspedersen commented 3 years ago

> I tried combining mme and sgw into one pod and leaving enb in another pod; then they work well. But this is not what I want.

For namespaces, you can try looking at the example here: https://github.com/cncf/cnf-testbed/blob/master/examples/use_case/nsmcon-ext-pf/README.md#testing-the-nsmcon-external-packet-filtering-example

Off the top of my head I can't see how this could be done, but there is probably a way.
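
As an illustration of the namespace idea (a sketch using standard iproute2 commands; the names, addresses, and veth wiring are purely illustrative and not taken from the example above):

# create a namespace for enb and move a veth endpoint into it
ip netns add enb-ns
ip link add veth-enb type veth peer name veth-enb-host
ip link set veth-enb netns enb-ns
ip netns exec enb-ns ip addr add 172.21.1.11/24 dev veth-enb
ip netns exec enb-ns ip link set veth-enb up
ip netns exec enb-ns ip link set lo up
# run enb inside the namespace so its GTP device and routes stay isolated
ip netns exec enb-ns /usr/local/bin/enb -config /root/enb.yml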

DAIGOR1024 commented 3 years ago

> > I tried combining mme and sgw into one pod and leaving enb in another pod; then they work well. But this is not what I want.
>
> For namespaces, you can try looking at the example here: https://github.com/cncf/cnf-testbed/blob/master/examples/use_case/nsmcon-ext-pf/README.md#testing-the-nsmcon-external-packet-filtering-example
>
> Off the top of my head I can't see how this could be done, but there is probably a way.

If I split enb, mme, and sgw into 3 namespaces, how will the program know how to find the specific IPs of enb, mme, and sgw? Would the topology change completely?