canonical / microk8s

MicroK8s is a small, fast, single-package Kubernetes for datacenters and the edge.

Calico does not start if network interfaces are named ibmveth* (ppc64le) #3458

Closed: ittner closed this issue 1 year ago

ittner commented 2 years ago

Summary

Calico, as deployed by MicroK8s v1.25.0 (from the snap "edge" channel), does not work out of the box with network interfaces named ibmveth* (ibmvetha, ibmvethb, etc.). These are the default names for virtualized network interfaces in Power9 LPARs.

The problem goes away once the interface is renamed to any other recognized name (eth*, enp*, etc.).

What Should Happen Instead?

Networking should start automatically, as it does on other Power9 deployments (bare metal, KVM, etc.).

Reproduction Steps

Full log of the failure

ubuntu@p9g-lpar07:~$ microk8s start
ubuntu@p9g-lpar07:~$ microk8s status --wait-ready
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    dns                  # (core) CoreDNS
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    storage              # (core) Alias to hostpath-storage add-on, deprecated
ubuntu@p9g-lpar07:~$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS      AGE
kube-system   calico-kube-controllers-749975d956-nmkl4   0/1     ContainerCreating   0             17m
kube-system   calico-node-qvb47                          0/1     CrashLoopBackOff    8 (54s ago)   17m
ubuntu@p9g-lpar07:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ibmvetha: mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether fa:39:12:da:21:0a brd ff:ff:ff:ff:ff:ff
    inet 10.245.71.133/21 brd 10.245.71.255 scope global ibmvetha
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:12ff:feda:210a/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$ microk8s inspect
Inspecting system
Inspecting Certificates
Inspecting services
  Service snap.microk8s.daemon-cluster-agent is running
  Service snap.microk8s.daemon-containerd is running
  Service snap.microk8s.daemon-kubelite is running
  Service snap.microk8s.daemon-k8s-dqlite is running
  Service snap.microk8s.daemon-apiserver-kicker is running
  Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
  Copy processes list to the final report tarball
  Copy disk usage information to the final report tarball
  Copy memory usage information to the final report tarball
  Copy server uptime to the final report tarball
  Copy openSSL information to the final report tarball
  Copy snap list to the final report tarball
  Copy VM name (or none) to the final report tarball
  Copy current linux distribution to the final report tarball
  Copy network configuration to the final report tarball
Inspecting kubernetes cluster
  Inspect kubernetes cluster
Inspecting dqlite
  Inspect dqlite
WARNING: The memory cgroup is not enabled.
The cluster may not be functioning properly. Please ensure cgroups are enabled
See for example: https://microk8s.io/docs/install-alternatives#heading--arm
Building the report tarball
  Report tarball is at /var/snap/microk8s/3915/inspection-report-20220922_135402.tar.gz
ubuntu@p9g-lpar07:~$ sudo journalctl
(...)
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:52:47.416722198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:47.417454 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:52:47.494248966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:47.494539 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:47.494622 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:47.494697 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:52:47 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:47.494821 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:52:47 p9g-lpar07 systemd[1]: run-netns-cni\x2d80527bd2\x2d527e\x2de4a0\x2dbcf2\x2dfb6bfec7b144.mount: Deactivated successfully.
Sep 22 13:52:58 p9g-lpar07 systemd[1592]: Started snap.microk8s.microk8s.f3603e87-e45b-4760-92fc-3b01f1be18d2.scope.
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:52:59.416306580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:52:59.489350172Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:59.489633 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:59.489714 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:59.489777 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:52:59 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:52:59.489899 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:52:59 p9g-lpar07 systemd[1]: run-netns-cni\x2de867f02d\x2d34ce\x2dcb91\x2d7261\x2d24e71da883f6.mount: Deactivated successfully.
Sep 22 13:53:00 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:53:00.416722 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:53:00 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:00.417370 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:53:11 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:53:11.415925 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:53:11 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:11.416982 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:53:11 p9g-lpar07 systemd[1592]: Started snap.microk8s.microk8s.38887563-649a-42dc-bed5-c358a53f420a.scope.
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:13.416825653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:13.497068564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:13.497440 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:13.497536 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:13.497592 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:53:13 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:13.497699 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:53:13 p9g-lpar07 systemd[1]: run-netns-cni\x2d92d7d8aa\x2d592a\x2dfdd1\x2dd205\x2d059d08d4e37a.mount: Deactivated successfully.
Sep 22 13:53:26 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:53:26.416041 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:53:26 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:26.416899 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:27.416526702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:27.490413622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:27.490732 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:27.490820 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:27.490875 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:53:27 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:27.490991 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:53:27 p9g-lpar07 systemd[1]: run-netns-cni\x2d454e36d1\x2dc50c\x2d96fc\x2dae53\x2d400ccb56c7a9.mount: Deactivated successfully.
Sep 22 13:53:38 p9g-lpar07 systemd[1592]: Started snap.microk8s.microk8s.72f5d558-be3f-4f8b-bfec-b8e8d01ec991.scope.
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:38.416617627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:38.505157075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:38.505476 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:38.505559 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:38.505618 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:53:38 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:38.505747 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:53:38 p9g-lpar07 systemd[1]: run-netns-cni\x2d9a93be1f\x2d6972\x2d1c76\x2d35e3\x2d6bfc4ac3ce88.mount: Deactivated successfully.
Sep 22 13:53:40 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:53:40.416605 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:53:40 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:40.417666 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:50.416236387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:53:50.491215968Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:50.491532 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:50.491630 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:50.491703 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:53:50 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:50.491831 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:53:50 p9g-lpar07 systemd[1]: run-netns-cni\x2d2e5c7144\x2dc855\x2d47f7\x2de63e\x2d10853f374881.mount: Deactivated successfully.
Sep 22 13:53:55 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:53:55.416030 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:53:55 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:53:55.416941 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:53:57 p9g-lpar07 systemd[1592]: Started snap.microk8s.microk8s.201a1642-4113-441e-bd06-442d2878f1d6.scope.
Sep 22 13:53:58 p9g-lpar07 systemd[1592]: Started snap.microk8s.microk8s.54b9c336-269f-4dce-9968-06458cd891c8.scope.
Sep 22 13:53:58 p9g-lpar07 sudo[17681]: ubuntu : TTY=pts/0 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/microk8s/3915/inspect.sh
Sep 22 13:53:58 p9g-lpar07 sudo[17681]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=1000)
Sep 22 13:53:59 p9g-lpar07 sudo[17830]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl version
Sep 22 13:53:59 p9g-lpar07 sudo[17831]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/version
Sep 22 13:53:59 p9g-lpar07 sudo[17830]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:53:59 p9g-lpar07 sudo[17831]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:53:59 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.cc0be123-c577-4de0-ab32-492d85e26fe2.scope.
Sep 22 13:53:59 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.cc0be123-c577-4de0-ab32-492d85e26fe2.scope: Deactivated successfully.
Sep 22 13:53:59 p9g-lpar07 sudo[17830]: pam_unix(sudo:session): session closed for user root
Sep 22 13:53:59 p9g-lpar07 sudo[17831]: pam_unix(sudo:session): session closed for user root
Sep 22 13:53:59 p9g-lpar07 sudo[17880]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/cluster-info
Sep 22 13:53:59 p9g-lpar07 sudo[17879]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl cluster-info
Sep 22 13:53:59 p9g-lpar07 sudo[17879]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:53:59 p9g-lpar07 sudo[17880]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:00 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.27efd83f-c1e0-45b0-abdd-47113d17b337.scope.
Sep 22 13:54:00 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.27efd83f-c1e0-45b0-abdd-47113d17b337.scope: Deactivated successfully.
Sep 22 13:54:00 p9g-lpar07 sudo[17879]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:00 p9g-lpar07 sudo[17880]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:00 p9g-lpar07 sudo[17930]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl cluster-info dump -A
Sep 22 13:54:00 p9g-lpar07 sudo[17930]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:00 p9g-lpar07 sudo[17931]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/cluster-info-dump
Sep 22 13:54:00 p9g-lpar07 sudo[17931]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:00 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.0bc416a3-e4d7-48bd-ac6c-0fc6a242128f.scope.
Sep 22 13:54:00 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:00.525384 2702 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"ht>
Sep 22 13:54:00 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:00.692123 2702 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"ht>
Sep 22 13:54:00 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:00.864107 2702 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"ht>
Sep 22 13:54:01 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:01.033935 2702 status.go:71] apiserver received an error that is not an metav1.Status: &url.Error{Op:"Get", URL:"ht>
Sep 22 13:54:01 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.0bc416a3-e4d7-48bd-ac6c-0fc6a242128f.scope: Deactivated successfully.
Sep 22 13:54:01 p9g-lpar07 sudo[17930]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:01 p9g-lpar07 sudo[17931]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:01 p9g-lpar07 sudo[17981]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl get all --all-namespaces -o wide
Sep 22 13:54:01 p9g-lpar07 sudo[17981]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:01 p9g-lpar07 sudo[17982]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/get-all
Sep 22 13:54:01 p9g-lpar07 sudo[17982]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.8683dc7e-ca53-421c-9e69-b34b17333afb.scope.
Sep 22 13:54:02 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.8683dc7e-ca53-421c-9e69-b34b17333afb.scope: Deactivated successfully.
Sep 22 13:54:02 p9g-lpar07 sudo[17981]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[17982]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18031]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl get pv
Sep 22 13:54:02 p9g-lpar07 sudo[18031]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18032]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/get-pv
Sep 22 13:54:02 p9g-lpar07 sudo[18032]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.da7b1cd6-b9b0-4af2-8dfb-d19c050484bd.scope.
Sep 22 13:54:02 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.da7b1cd6-b9b0-4af2-8dfb-d19c050484bd.scope: Deactivated successfully.
Sep 22 13:54:02 p9g-lpar07 sudo[18031]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18032]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18080]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/snap/bin/microk8s kubectl get pvc --all-namespaces
Sep 22 13:54:02 p9g-lpar07 sudo[18081]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/tee /var/snap/microk8s/3915/inspection-report/k8s/get-pvc
Sep 22 13:54:02 p9g-lpar07 sudo[18080]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18081]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:54:02.416992231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:54:02.525552612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:02.525877 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:02.525991 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:02.526053 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:54:02 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:02.526182 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:54:02 p9g-lpar07 systemd[1]: Started snap.microk8s.microk8s.822ab22a-4aae-46f2-915c-b49720f9f312.scope.
Sep 22 13:54:02 p9g-lpar07 systemd[1]: run-netns-cni\x2d76533bbb\x2d0b3f\x2d1b62\x2d47af\x2d7f3d570e86be.mount: Deactivated successfully.
Sep 22 13:54:02 p9g-lpar07 systemd[1]: snap.microk8s.microk8s.822ab22a-4aae-46f2-915c-b49720f9f312.scope: Deactivated successfully.
Sep 22 13:54:02 p9g-lpar07 sudo[18080]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18081]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18196]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/cp /var/snap/microk8s/3915/var/kubernetes/backend/cluster.yaml /var/snap/micro>
Sep 22 13:54:02 p9g-lpar07 sudo[18196]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18196]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18203]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/cp /var/snap/microk8s/3915/var/kubernetes/backend/localnode.yaml /var/snap/mic>
Sep 22 13:54:02 p9g-lpar07 sudo[18203]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18203]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18210]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/cp /var/snap/microk8s/3915/var/kubernetes/backend/info.yaml /var/snap/microk8s>
Sep 22 13:54:02 p9g-lpar07 sudo[18210]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18210]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[18217]: root : TTY=pts/1 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/ls -lh /var/snap/microk8s/3915/var/kubernetes/backend/
Sep 22 13:54:02 p9g-lpar07 sudo[18217]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=0)
Sep 22 13:54:02 p9g-lpar07 sudo[18217]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 sudo[17681]: pam_unix(sudo:session): session closed for user root
Sep 22 13:54:02 p9g-lpar07 systemd[1592]: snap.microk8s.microk8s.54b9c336-269f-4dce-9968-06458cd891c8.scope: Consumed 1.581s CPU time.
Sep 22 13:54:08 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:54:08.415888 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:54:08 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:08.416660 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:54:16.416499740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975d>
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-containerd[2723]: time="2022-09-22T13:54:16.493517815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-749975>
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:16.493804 2702 remote_runtime.go:222] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown de>
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:16.493882 2702 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = >
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:16.493941 2702 kuberuntime_manager.go:772] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = >
Sep 22 13:54:16 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:16.494058 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-k>
Sep 22 13:54:16 p9g-lpar07 systemd[1]: run-netns-cni\x2d5c6f35b7\x2d5e72\x2d54ab\x2dbde1\x2d6a22c6251993.mount: Deactivated successfully.
Sep 22 13:54:19 p9g-lpar07 microk8s.daemon-kubelite[2702]: I0922 13:54:19.416583 2702 scope.go:115] "RemoveContainer" containerID="79b3a2aaab4fa7fc7f9e9ad9045e82006225fe400ca41cefd9509fa>
Sep 22 13:54:19 p9g-lpar07 microk8s.daemon-kubelite[2702]: E0922 13:54:19.417419 2702 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-nod>
Sep 22 13:54:28 p9g-lpar07 sudo[18513]: ubuntu : TTY=pts/0 ; PWD=/home/ubuntu ; USER=root ; COMMAND=/usr/bin/journalctl -n 1000
Sep 22 13:54:28 p9g-lpar07 sudo[18513]: pam_unix(sudo:session): session opened for user root(uid=0) by ubuntu(uid=1000)
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$

Workaround by renaming the interface

ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$
ubuntu@p9g-lpar07:~$ sudo vim /etc/netplan/50-cloud-init.yaml
ubuntu@p9g-lpar07:~$ cat /etc/netplan/50-cloud-init.yaml
# This file is generated from information provided by the datasource.  Changes
# to it will not persist across an instance reboot.  To disable cloud-init's
# network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        ibmvetha:
            addresses:
            - 10.245.71.133/21
            gateway4: 10.245.64.1
            match:
                macaddress: fa:39:12:da:21:0a
            mtu: 1500
            nameservers:
                addresses:
                - 10.245.71.3
                - 10.245.64.1
                search:
                - maas
            set-name: eth0
    version: 2
ubuntu@p9g-lpar07:~$ sudo netplan apply

** (generate:23340): WARNING **: 14:01:15.405: `gateway4` has been deprecated, use default routes instead. See the 'Default routes' section of the documentation for more details.

** (process:23338): WARNING **: 14:01:15.840: `gateway4` has been deprecated, use default routes instead. See the 'Default routes' section of the documentation for more details.

ubuntu@p9g-lpar07:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether fa:39:12:da:21:0a brd ff:ff:ff:ff:ff:ff
    inet 10.245.71.133/21 brd 10.245.71.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:12ff:feda:210a/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@p9g-lpar07:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether fa:39:12:da:21:0a brd ff:ff:ff:ff:ff:ff
    inet 10.245.71.133/21 brd 10.245.71.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:12ff:feda:210a/64 scope link
       valid_lft forever preferred_lft forever
5: vxlan.calico: mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 66:ae:db:f3:49:68 brd ff:ff:ff:ff:ff:ff
    inet 10.1.187.128/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64ae:dbff:fef3:4968/64 scope link
       valid_lft forever preferred_lft forever
6: calif5d4c944715@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-18ed3b4f-24b0-cf0c-ccf4-cf16ed7d3de6
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@p9g-lpar07:~$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS         AGE
kube-system   calico-node-qvb47                          1/1     Running   10 (5m50s ago)   27m
kube-system   calico-kube-controllers-749975d956-nmkl4   1/1     Running   0                27m
ubuntu@p9g-lpar07:~$ microk8s enable dns storage dashboard
Infer repository core for addon dns
Infer repository core for addon storage
Infer repository core for addon dashboard
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
DNS is enabled
DEPRECIATION WARNING: 'storage' is deprecated and will soon be removed. Please use 'hostpath-storage' instead.
Infer repository core for addon hostpath-storage
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon.
Enabling Kubernetes Dashboard
Infer repository core for addon metrics-server
Enabling Metrics-Server
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
Metrics-Server is enabled
Applying manifest
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
secret/microk8s-dashboard-token created

If RBAC is not enabled access the dashboard using the token retrieved with:

microk8s kubectl describe secret -n kube-system microk8s-dashboard-token

Use this token in the https login UI of the kubernetes-dashboard service.
In an RBAC enabled setup (microk8s enable RBAC) you need to create a user with restricted permissions as shown in:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md

ubuntu@p9g-lpar07:~$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS              RESTARTS         AGE
kube-system   coredns-d489fb88-6jmxv                       1/1     Running             0                109s
kube-system   calico-node-qvb47                            1/1     Running             10 (9m25s ago)   30m
kube-system   calico-kube-controllers-749975d956-nmkl4     1/1     Running             0                30m
kube-system   metrics-server-fcf58d7c5-8zl56               0/1     ContainerCreating   0                6s
kube-system   kubernetes-dashboard-8c67656cd-wqxb7         0/1     ContainerCreating   0                6s
kube-system   dashboard-metrics-scraper-64bcc67c9c-vq659   0/1     ContainerCreating   0                6s
kube-system   hostpath-provisioner-85ccc46f96-prxtk        0/1     ContainerCreating   0                6s
ubuntu@p9g-lpar07:~$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS              RESTARTS       AGE
kube-system   coredns-d489fb88-6jmxv                       1/1     Running             0              2m41s
kube-system   calico-node-qvb47                            1/1     Running             10 (10m ago)   31m
kube-system   calico-kube-controllers-749975d956-nmkl4     1/1     Running             0              31m
kube-system   hostpath-provisioner-85ccc46f96-prxtk        0/1     ContainerCreating   0              58s
kube-system   kubernetes-dashboard-8c67656cd-wqxb7         1/1     Running             0              58s
kube-system   metrics-server-fcf58d7c5-8zl56               1/1     Running             0              58s
kube-system   dashboard-metrics-scraper-64bcc67c9c-vq659   1/1     Running             0              58s
ubuntu@p9g-lpar07:~$ microk8s kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS       AGE
kube-system   coredns-d489fb88-6jmxv                       1/1     Running   0              3m39s
kube-system   calico-node-qvb47                            1/1     Running   10 (11m ago)   32m
kube-system   calico-kube-controllers-749975d956-nmkl4     1/1     Running   0              32m
kube-system   kubernetes-dashboard-8c67656cd-wqxb7         1/1     Running   0              116s
kube-system   metrics-server-fcf58d7c5-8zl56               1/1     Running   0              116s
kube-system   dashboard-metrics-scraper-64bcc67c9c-vq659   1/1     Running   0              116s
kube-system   hostpath-provisioner-85ccc46f96-prxtk        1/1     Running   0              116s
ubuntu@p9g-lpar07:~$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether fa:39:12:da:21:0a brd ff:ff:ff:ff:ff:ff
    inet 10.245.71.133/21 brd 10.245.71.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f839:12ff:feda:210a/64 scope link
       valid_lft forever preferred_lft forever
5: vxlan.calico: mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 66:ae:db:f3:49:68 brd ff:ff:ff:ff:ff:ff
    inet 10.1.187.128/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64ae:dbff:fef3:4968/64 scope link
       valid_lft forever preferred_lft forever
6: calif5d4c944715@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-18ed3b4f-24b0-cf0c-ccf4-cf16ed7d3de6
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
7: cali508e88f88d9@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-7b4b2bcd-3380-4775-4343-d69707b88c7a
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
8: calif297bc8485c@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-401918b6-5ee9-82fb-6a20-7fc24d4bd500
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
9: cali29bbec80733@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-4fd51772-5957-687a-0ebd-2568d8b334e2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
10: cali43ac0346b45@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-5477f9d6-f2ea-2e6e-e3f6-f3efe65710a2
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
11: cali2442da7625b@if3: mtu 1500 qdisc noqueue state UP group default
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netns cni-1d3b3e13-c7c8-6d0d-7585-6aaf87f75379
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link
       valid_lft forever preferred_lft forever
ubuntu@p9g-lpar07:~$
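
Side note on the `gateway4` deprecation warnings printed by netplan apply above: on recent netplan versions the same default gateway is expressed as a route. A minimal sketch of the relevant keys only (illustrative, not tested on this machine):

network:
    version: 2
    ethernets:
        ibmvetha:
            match:
                macaddress: fa:39:12:da:21:0a
            set-name: eth0
            addresses:
            - 10.245.71.133/21
            routes:
            - to: default
              via: 10.245.64.1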
ittner commented 2 years ago

inspection-report-20220922_135402.tar.gz

balchua commented 2 years ago

I think you can tweak the file /var/snap/microk8s/current/args/cni-network/cni.yaml and try changing the IP autodetection method; the default is first-found.

Example:

IP_AUTODETECTION_METHOD=interface=ibmveth.*
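
In cni.yaml this ends up as an environment variable on the calico-node container; a minimal excerpt would look roughly like this (illustrative, not the full manifest):

            # calico-node container env in the calico-node DaemonSet:
            # replace the default "first-found" with an interface regex
            - name: IP_AUTODETECTION_METHOD
              value: "interface=ibmveth.*"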
neoaggelos commented 2 years ago

Hi @ittner

This looks like a bug in Calico itself: the autodetection method first-found (the default in MicroK8s) ignores any interface whose name contains veth. I have created a PR in Calico that should fix the issue.
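
To illustrate the behavior (a paraphrase of the matching, not Calico's actual code): the exclusion check is effectively an unanchored veth regex, so it matches anywhere in the interface name, including ibmvetha:

$ printf '%s\n' eth0 veth1234 ibmvetha | grep -E 'veth'
veth1234
ibmvetha

An anchored pattern such as ^veth would still skip real veth devices without catching ibmveth* interfaces.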

In the meantime, you can try the suggestion from balchua, or try a dev build image with the fix included:

sudo microk8s kubectl set image ds/calico-node -n kube-system calico-node=cdkbot/calico-node:v3.23.3-dev1-ppc64le
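
After swapping the image, you can watch the DaemonSet roll out with something like:

sudo microk8s kubectl rollout status ds/calico-node -n kube-system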

Thanks for reporting this! For more context, I was able to reproduce the issue on amd64 as well, so it is definitely not architecture-related.

neoaggelos commented 2 years ago

This has now been fixed upstream with the release of Calico v3.24.3: https://github.com/projectcalico/calico/blob/v3.24.3/calico/_includes/release-notes/v3.24.3-release-notes.md

neoaggelos commented 1 year ago

MicroK8s now comes with Calico v3.24.5, which includes the required fix. Closing the issue; thanks for reporting.