cilium / cilium

eBPF-based Networking, Security, and Observability
https://cilium.io
Apache License 2.0

After removing Cilium and installing Flannel, pods can't resolve domains (how to clean up Cilium BPF rules) #17292

Closed vsxen closed 3 years ago

vsxen commented 3 years ago

Bug report

General Information

root@ubuntu-focal:/home/vagrant# uname -a
Linux ubuntu-focal 5.4.0-81-generic #91-Ubuntu SMP Thu Jul 15 19:09:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

root@ubuntu-focal:/home/vagrant# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:18:45Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:12:29Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}

root@ubuntu-focal:/home/vagrant# ./cilium version
Client: 1.9.7 f993696 2021-05-12T18:21:30-07:00 go version go1.15.12 linux/amd64

How to reproduce the issue

# install cilium, then tear the cluster down
kubeadm init  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address 192.168.2.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/24 --kubernetes-version 1.21.1
kubectl create -f https://raw.githubusercontent.com/cilium/cilium/v1.9/install/kubernetes/quick-install.yaml
kubeadm reset -f

# install flannel
kubeadm init  --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address 192.168.2.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/24 --kubernetes-version 1.21.1
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl  create deployment --image=nginx:alpine --port 80 nginx

root@ubuntu-focal:/home/vagrant# kubectl  get po -owide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
nginx-7fb7fd49b4-xx7qt   1/1     Running   1          65m   10.244.0.7   ubuntu-focal   <none>           <none>

root@ubuntu-focal:/home/vagrant# kubectl exec -it nginx-7fb7fd49b4-xx7qt sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping d.cn
ping: bad address 'd.cn'

root@ubuntu-focal:/home/vagrant# tcpdump -i cni0 udp -vvv -nn
tcpdump: listening on cni0, link-type EN10MB (Ethernet), capture size 262144 bytes

13:44:53.035297 IP (tos 0x0, ttl 64, id 25778, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.96.0.10.53: [bad udp cksum 0x15ae -> 0x014b!] 7938+ A? z.cn.default.svc.cluster.local. (48)
13:44:53.035347 IP (tos 0x0, ttl 63, id 25778, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.244.0.5.53: [bad udp cksum 0x163d -> 0x00bc!] 7938+ A? z.cn.default.svc.cluster.local. (48)
13:44:53.035851 IP (tos 0x0, ttl 64, id 29110, offset 0, flags [DF], proto UDP (17), length 169)
    10.244.0.5.53 > 10.244.0.7.53471: [bad udp cksum 0x169a -> 0xd1ea!] 7938 NXDomain*- q: A? z.cn.default.svc.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1630586845 7200 1800 86400 30 (141)
13:44:53.036072 IP (tos 0x0, ttl 64, id 25779, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.96.0.10.53: [bad udp cksum 0x15ae -> 0x001a!] 8216+ AAAA? z.cn.default.svc.cluster.local. (48)
13:44:53.036102 IP (tos 0x0, ttl 63, id 25779, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.244.0.5.53: [bad udp cksum 0x163d -> 0xff8a!] 8216+ AAAA? z.cn.default.svc.cluster.local. (48)
13:44:53.039233 IP (tos 0x0, ttl 64, id 29111, offset 0, flags [DF], proto UDP (17), length 169)
    10.244.0.5.53 > 10.244.0.7.53471: [bad udp cksum 0x169a -> 0xd0b9!] 8216 NXDomain*- q: AAAA? z.cn.default.svc.cluster.local. 0/1/0 ns: cluster.local. [30s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1630586845 7200 1800 86400 30 (141)
13:44:55.537732 IP (tos 0x0, ttl 64, id 26071, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.96.0.10.53: [bad udp cksum 0x15ae -> 0x014b!] 7938+ A? z.cn.default.svc.cluster.local. (48)
13:44:55.537795 IP (tos 0x0, ttl 63, id 26071, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.244.0.5.53: [bad udp cksum 0x163d -> 0x00bc!] 7938+ A? z.cn.default.svc.cluster.local. (48)
13:44:55.537839 IP (tos 0x0, ttl 64, id 26072, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.96.0.10.53: [bad udp cksum 0x15ae -> 0x001a!] 8216+ AAAA? z.cn.default.svc.cluster.local. (48)
13:44:55.537854 IP (tos 0x0, ttl 63, id 26072, offset 0, flags [DF], proto UDP (17), length 76)
    10.244.0.7.53471 > 10.244.0.5.53: [bad udp cksum 0x163d -> 0xff8a!] 8216+ AAAA? z.cn.default.svc.cluster.local. (48)
13:44:55.538200 IP (tos 0x0, ttl 64, id 29653, offset 0, flags [DF], proto UDP (17), length 169)
    10.244.0.5.53 > 10.244.0.7.53471: [bad udp cksum 0x169a -> 0xd2b9!] 8216 NXDomain*- q: AAAA? z.cn.default.svc.cluster.local. 0/1/0 ns: cluster.local. [28s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1630586845 7200 1800 86400 30 (141)
13:44:55.538309 IP (tos 0x0, ttl 64, id 29654, offset 0, flags [DF], proto UDP (17), length 169)
    10.244.0.5.53 > 10.244.0.7.53471: [bad udp cksum 0x169a -> 0xd3ea!] 7938 NXDomain*- q: A? z.cn.default.svc.cluster.local. 0/1/0 ns: cluster.local. [28s] SOA ns.dns.cluster.local. hostmaster.cluster.local. 1630586845 7200 1800 86400 30 (141)

root@ubuntu-focal:/home/vagrant# kubectl  -n kube-system get svc kube-dns
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   79m

root@ubuntu-focal:/home/vagrant# kubectl  -n kube-system get po -owide
NAME                                   READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
coredns-57d4cbf879-5jcxs               1/1     Running   1          79m   10.244.0.5   ubuntu-focal   <none>           <none>
coredns-57d4cbf879-tm7fh               1/1     Running   1          79m   10.244.0.6   ubuntu-focal   <none>           <none>
etcd-ubuntu-focal                      1/1     Running   1          79m   10.0.2.15    ubuntu-focal   <none>           <none>
kube-apiserver-ubuntu-focal            1/1     Running   1          79m   10.0.2.15    ubuntu-focal   <none>           <none>
kube-controller-manager-ubuntu-focal   1/1     Running   1          79m   10.0.2.15    ubuntu-focal   <none>           <none>
kube-flannel-ds-hcxc6                  1/1     Running   1          76m   10.0.2.15    ubuntu-focal   <none>           <none>
kube-proxy-q76cz                       1/1     Running   1          79m   10.0.2.15    ubuntu-focal   <none>           <none>
kube-scheduler-ubuntu-focal            1/1     Running   1          79m   10.0.2.15    ubuntu-focal   <none>           <none>

dig d.cn @10.96.0.10 times out.

dig d.cn @10.244.0.5 works fine.

Using cilium cleanup also does not work:

root@ubuntu-focal:/home/vagrant# ./cilium cleanup -f
Warning: Destructive operation. You are about to remove:
- mounted cgroupv2 at /var/run/cilium/cgroupv2
- library code in /var/lib/cilium
- endpoint state in /var/run/cilium
- CNI configuration at /etc/cni/net.d/10-cilium-cni.conf, /etc/cni/net.d/00-cilium-cni.conf, /etc/cni/net.d/05-cilium-cni.conf
- all BPF maps in /sys/fs/bpf/tc/globals containing 'cilium_' and 'cilium_tunnel_map'
- mounted bpffs at /sys/fs/bpf
pchaigno commented 3 years ago

Could you share the outputs of bpftool net, bpftool cgroup tree, ip a, and ip r show table all?

vsxen commented 3 years ago
root@ubuntu-focal:/home/vagrant# bpftool net
xdp:

tc:

flow_dissector:

root@ubuntu-focal:/home/vagrant# bpftool cgroup tree
CgroupPath
ID       AttachType      AttachFlags     Name
/sys/fs/cgroup/unified/system.slice/systemd-udevd.service
    6        ingress
    5        egress
/sys/fs/cgroup/unified/system.slice/systemd-journald.service
    4        ingress
    3        egress
/sys/fs/cgroup/unified/system.slice/systemd-logind.service
    8        ingress
    7        egress
root@ubuntu-focal:/home/vagrant# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 02:b7:1d:9c:e0:75 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3
       valid_lft 80097sec preferred_lft 80097sec
    inet6 fe80::b7:1dff:fe9c:e075/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:ee:5b:d1 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.2/24 brd 192.168.2.255 scope global enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:feee:5bd1/64 scope link
       valid_lft forever preferred_lft forever
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 5e:28:7c:3a:39:51 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.0/32 brd 10.244.0.0 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::5c28:7cff:fe3a:3951/64 scope link
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether a2:f1:c7:0d:ce:1b brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.1/24 brd 10.244.0.255 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::a0f1:c7ff:fe0d:ce1b/64 scope link
       valid_lft forever preferred_lft forever
6: veth2c6e4fca@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 5a:96:52:d0:0a:2a brd ff:ff:ff:ff:ff:ff link-netns cni-55a94570-b30e-1159-bade-0a0ad13b3aa2
    inet6 fe80::5896:52ff:fed0:a2a/64 scope link
       valid_lft forever preferred_lft forever
7: vethe1870759@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether a2:31:c4:17:0e:22 brd ff:ff:ff:ff:ff:ff link-netns cni-fd802ac6-7a70-d297-a431-b6d5aa1e2aaa
    inet6 fe80::a031:c4ff:fe17:e22/64 scope link
       valid_lft forever preferred_lft forever
8: veth7efb195d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master cni0 state UP group default
    link/ether 56:66:ef:b6:21:40 brd ff:ff:ff:ff:ff:ff link-netns cni-904782d5-75fc-4978-56a7-8aee0575c5e9
    inet6 fe80::5466:efff:feb6:2140/64 scope link
       valid_lft forever preferred_lft forever
root@ubuntu-focal:/home/vagrant# ip r show table all
default via 10.0.2.2 dev enp0s3 proto dhcp src 10.0.2.15 metric 100
10.0.2.0/24 dev enp0s3 proto kernel scope link src 10.0.2.15
10.0.2.2 dev enp0s3 proto dhcp scope link src 10.0.2.15 metric 100
10.244.0.0/24 dev cni0 proto kernel scope link src 10.244.0.1
192.168.2.0/24 dev enp0s8 proto kernel scope link src 192.168.2.2
broadcast 10.0.2.0 dev enp0s3 table local proto kernel scope link src 10.0.2.15
local 10.0.2.15 dev enp0s3 table local proto kernel scope host src 10.0.2.15
broadcast 10.0.2.255 dev enp0s3 table local proto kernel scope link src 10.0.2.15
local 10.244.0.0 dev flannel.1 table local proto kernel scope host src 10.244.0.0
broadcast 10.244.0.0 dev flannel.1 table local proto kernel scope link src 10.244.0.0
broadcast 10.244.0.0 dev cni0 table local proto kernel scope link src 10.244.0.1
local 10.244.0.1 dev cni0 table local proto kernel scope host src 10.244.0.1
broadcast 10.244.0.255 dev cni0 table local proto kernel scope link src 10.244.0.1
broadcast 127.0.0.0 dev lo table local proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo table local proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo table local proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo table local proto kernel scope link src 127.0.0.1
broadcast 192.168.2.0 dev enp0s8 table local proto kernel scope link src 192.168.2.2
local 192.168.2.2 dev enp0s8 table local proto kernel scope host src 192.168.2.2
broadcast 192.168.2.255 dev enp0s8 table local proto kernel scope link src 192.168.2.2
::1 dev lo proto kernel metric 256 pref medium
fe80::/64 dev enp0s8 proto kernel metric 256 pref medium
fe80::/64 dev enp0s3 proto kernel metric 256 pref medium
fe80::/64 dev flannel.1 proto kernel metric 256 pref medium
fe80::/64 dev cni0 proto kernel metric 256 pref medium
fe80::/64 dev veth2c6e4fca proto kernel metric 256 pref medium
fe80::/64 dev vethe1870759 proto kernel metric 256 pref medium
fe80::/64 dev veth7efb195d proto kernel metric 256 pref medium
local ::1 dev lo table local proto kernel metric 0 pref medium
local fe80::b7:1dff:fe9c:e075 dev enp0s3 table local proto kernel metric 0 pref medium
local fe80::a00:27ff:feee:5bd1 dev enp0s8 table local proto kernel metric 0 pref medium
local fe80::5466:efff:feb6:2140 dev veth7efb195d table local proto kernel metric 0 pref medium
local fe80::5896:52ff:fed0:a2a dev veth2c6e4fca table local proto kernel metric 0 pref medium
local fe80::5c28:7cff:fe3a:3951 dev flannel.1 table local proto kernel metric 0 pref medium
local fe80::a031:c4ff:fe17:e22 dev vethe1870759 table local proto kernel metric 0 pref medium
local fe80::a0f1:c7ff:fe0d:ce1b dev cni0 table local proto kernel metric 0 pref medium
multicast ff00::/8 dev enp0s8 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev enp0s3 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev flannel.1 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev cni0 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth2c6e4fca table local proto kernel metric 256 pref medium
multicast ff00::/8 dev vethe1870759 table local proto kernel metric 256 pref medium
multicast ff00::/8 dev veth7efb195d table local proto kernel metric 256 pref medium
pchaigno commented 3 years ago

I don't see any traces of Cilium anywhere. There don't seem to be any BPF programs left from Cilium, nor any Cilium-specific interfaces.

Maybe you need to recreate the CoreDNS pods so that Flannel picks them up correctly? We sometimes need to do that with Cilium.
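
For example, a minimal sketch, assuming the default kubeadm labels and deployment name in kube-system:

# delete the CoreDNS pods so their replacements get Flannel-managed networking
kubectl -n kube-system delete pod -l k8s-app=kube-dns
# or restart the whole deployment
kubectl -n kube-system rollout restart deployment coredns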

vsxen commented 3 years ago

I have already recreated the CoreDNS pods for Flannel.

I also tried to clean up iptables, but it didn't work.

It seems the pod can't reach the CoreDNS Service IP.

Could it be cilium-dns-egress?

root@ubuntu-focal:/home/vagrant# kubectl  exec -it nginx-7fb7fd49b4-25cpt sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # nslookup d.cn 10.96.0.10
;; connection timed out; no servers could be reached

/ # nslookup d.cn 10.244.0.5
Server:     10.244.0.5
Address:    10.244.0.5:53

Non-authoritative answer:
d.cn    canonical name = d.cn.wsssec.com

Non-authoritative answer:
d.cn    canonical name = d.cn.wsssec.com
Name:   d.cn.wsssec.com
Address: 211.95.52.224

root@ubuntu-focal:/home/vagrant# iptables-save |grep cilium
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_raw" -j CILIUM_PRE_raw
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_raw" -j CILIUM_OUTPUT_raw
-A CILIUM_OUTPUT_raw -o lxc+ -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j NOTRACK
-A CILIUM_OUTPUT_raw -o cilium_host -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: NOTRACK for proxy return traffic" -j NOTRACK
-A CILIUM_PRE_raw -m mark --mark 0x200/0xf00 -m comment --comment "cilium: NOTRACK for proxy traffic" -j NOTRACK
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_mangle" -j CILIUM_PRE_mangle
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_mangle" -j CILIUM_POST_mangle
-A CILIUM_PRE_mangle -m socket --transparent -m comment --comment "cilium: any->pod redirect proxied traffic to host proxy" -j MARK --set-xmark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p tcp -m mark --mark 0xa7a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42663 --on-ip 0.0.0.0 --tproxy-mark 0x200/0xffffffff
-A CILIUM_PRE_mangle -p udp -m mark --mark 0xa7a60200 -m comment --comment "cilium: TPROXY to host cilium-dns-egress proxy" -j TPROXY --on-port 42663 --on-ip 0.0.0.0 --tproxy-mark 0x200/0xffffffff
-A INPUT -m comment --comment "cilium-feeder: CILIUM_INPUT" -j CILIUM_INPUT
-A FORWARD -m comment --comment "cilium-feeder: CILIUM_FORWARD" -j CILIUM_FORWARD
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT" -j CILIUM_OUTPUT
-A CILIUM_FORWARD -o cilium_host -m comment --comment "cilium: any->cluster on cilium_host forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_host -m comment --comment "cilium: cluster->any on cilium_host forward accept (nodeport)" -j ACCEPT
-A CILIUM_FORWARD -i lxc+ -m comment --comment "cilium: cluster->any on lxc+ forward accept" -j ACCEPT
-A CILIUM_FORWARD -i cilium_net -m comment --comment "cilium: cluster->any on cilium_net forward accept (nodeport)" -j ACCEPT
-A CILIUM_INPUT -m mark --mark 0x200/0xf00 -m comment --comment "cilium: ACCEPT for proxy traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark --mark 0xa00/0xfffffeff -m comment --comment "cilium: ACCEPT for proxy return traffic" -j ACCEPT
-A CILIUM_OUTPUT -m mark ! --mark 0xe00/0xf00 -m mark ! --mark 0xd00/0xf00 -m mark ! --mark 0xa00/0xe00 -m comment --comment "cilium: host->any mark as from host" -j MARK --set-xmark 0xc00/0xf00
-A PREROUTING -m comment --comment "cilium-feeder: CILIUM_PRE_nat" -j CILIUM_PRE_nat
-A OUTPUT -m comment --comment "cilium-feeder: CILIUM_OUTPUT_nat" -j CILIUM_OUTPUT_nat
-A POSTROUTING -m comment --comment "cilium-feeder: CILIUM_POST_nat" -j CILIUM_POST_nat
pchaigno commented 3 years ago

Ah right, I forgot about the iptables rules. You can try to remove all references to the CILIUM_* chains.
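
For example, a rough sketch of doing that by hand, assuming every leftover chain is named CILIUM_* (as in the iptables-save output above) and lives in the raw, mangle, nat, or filter table:

# remove the cilium-feeder jump rules from the built-in chains, then flush and
# delete the now-unreferenced CILIUM_* chains in each table
for table in raw mangle nat filter; do
    iptables-save -t "$table" | grep -e '-j CILIUM_' | \
        grep -E '^-A (PREROUTING|INPUT|FORWARD|OUTPUT|POSTROUTING)' | \
        sed "s/^-A /iptables -t $table -D /" | sh
    for chain in $(iptables -t "$table" -S | awk '/^-N CILIUM_/ {print $2}'); do
        iptables -t "$table" -F "$chain"
        iptables -t "$table" -X "$chain"
    done
done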

vsxen commented 3 years ago

I also tried to clean iptables, but it didn't work.

pchaigno commented 3 years ago

Is there anything else left over by Cilium that makes you think this is a Cilium issue?

vsxen commented 3 years ago
root@ubuntu-focal:/home/vagrant# bpftool map
56: lpm_trie  flags 0x1
    key 24B  value 12B  max_entries 512000  memlock 36868096B
57: percpu_hash  flags 0x1
    key 8B  value 16B  max_entries 1024  memlock 114688B
59: lru_hash  flags 0x0
    key 16B  value 8B  max_entries 65536  memlock 5771264B
60: hash  flags 0x1
    key 12B  value 12B  max_entries 65536  memlock 6295552B
61: hash  flags 0x1
    key 2B  value 8B  max_entries 65536  memlock 5246976B
71: hash  flags 0x1
    key 8B  value 1B  max_entries 65536  memlock 5246976B
72: lru_hash  flags 0x0
    key 16B  value 16B  max_entries 65536  memlock 6295552B

root@ubuntu-focal:/home/vagrant# bpftool prog
16: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
17: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
18: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
19: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
20: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
21: cgroup_skb  tag 6deef7357e7b4530  gpl
    loaded_at 2021-09-03T14:57:39+0000  uid 0
    xlated 64B  jited 61B  memlock 4096B
338: cgroup_sock_addr  tag 15a8c6f9177be7b9  gpl
    loaded_at 2021-09-03T14:59:15+0000  uid 0
    xlated 4640B  jited 2674B  memlock 8192B  map_ids 60,56,72,71,61,57,59
339: cgroup_sock_addr  tag 7f383f31771b3b0d  gpl
    loaded_at 2021-09-03T14:59:16+0000  uid 0
    xlated 4568B  jited 2622B  memlock 8192B  map_ids 60,56,72,71,61,57,59
340: cgroup_sock_addr  tag f776a817138e2b71  gpl
    loaded_at 2021-09-03T14:59:18+0000  uid 0
    xlated 2136B  jited 1229B  memlock 4096B  map_ids 59,60,56,57
341: cgroup_sock_addr  tag a9d709bb5dacdcd3  gpl
    loaded_at 2021-09-03T14:59:19+0000  uid 0
    xlated 4376B  jited 2544B  memlock 8192B  map_ids 60,56,72,71,61,57,59
342: cgroup_sock_addr  tag b377656c1fe66f82  gpl
    loaded_at 2021-09-03T14:59:20+0000  uid 0
    xlated 4304B  jited 2492B  memlock 8192B  map_ids 60,56,72,71,61,57,59
343: cgroup_sock_addr  tag 50e331df2a3f7662  gpl
    loaded_at 2021-09-03T14:59:21+0000  uid 0
    xlated 1856B  jited 1092B  memlock 4096B  map_ids 59,60,56,57
pchaigno commented 3 years ago

Were any of these BPF programs installed by Cilium?

vsxen commented 3 years ago

Maybe. After rebooting the VM, the BPF map list is empty.

aditighag commented 3 years ago

What's the output of sudo bpftool cgroup show /var/run/cilium/cgroupv2?

aditighag commented 3 years ago

The Cilium agent should clean up its attached BPF programs using the bpf_clear_cgroup function: https://github.com/cilium/cilium/blob/master/bpf/init.sh#L306-L306.
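
If anything is still attached, a manual sketch along these lines can remove it; the cgroup path below is the one Cilium mounts by default, and the attach types and program IDs are placeholders that should be read from the bpftool output on your host:

# list programs attached to the cgroup hierarchy Cilium used
bpftool cgroup show /var/run/cilium/cgroupv2
# or walk every cgroup on the host
bpftool cgroup tree

# detach each leftover program by attach type and id, e.g.
bpftool cgroup detach /var/run/cilium/cgroupv2 connect4 id 338
bpftool cgroup detach /var/run/cilium/cgroupv2 sendmsg4 id 341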

vsxen commented 3 years ago

It is empty. The cilium cleanup command deletes /var/run/cilium and /var/lib/cilium.