k3s-io / k3s

Lightweight Kubernetes
https://k3s.io
Apache License 2.0

Latest v1.24.x and v1.23.x releases break IPv6-only setup #5621

Closed martwz closed 2 years ago

martwz commented 2 years ago

Environmental Info: K3s Version:

k3s version v1.24.1-rc1+k3s1 (a5a0e8fd)
go version go1.18.1

Cluster Configuration: Single-node ipv6-only server

Describe the bug: I'm running k3s using the following command:

k3s server --flannel-ipv6-masq --disable-network-policy --cluster-cidr=2001:cafe:42:0::/56 --service-cidr=2001:cafe:42:1::/112 --disable=traefik

While that works for k3s version v1.23.5+k3s1 (313aaca5), it doesn't work for the latest v1.24.x and v1.23.x releases.
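As a side note, the two CIDR flags in the command above can be sanity-checked with a small script (this is only an illustration, not part of k3s; it uses Python's stdlib `ipaddress` module to confirm both prefixes parse as valid IPv6 networks):

```shell
# Not part of k3s: just confirms the two prefixes from the command above
# parse as valid IPv6 networks, and prints their normalized forms.
python3 - <<'EOF'
import ipaddress
cluster = ipaddress.ip_network("2001:cafe:42:0::/56")
service = ipaddress.ip_network("2001:cafe:42:1::/112")
print("cluster-cidr:", cluster)
print("service-cidr:", service)
EOF
```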

Logs, working example using k3s version v1.23.5+k3s1 (313aaca5): Full logs

Logs, NOT working example using k3s version v1.24.1-rc1+k3s1 (a5a0e8fd):

INFO[0011] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error
E0529 17:42:52.149525 1339371 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://[2001:cafe:42::11]:4443/apis/metrics.k8s.io/v1beta1: Get "https://[2001:cafe:42::11]:4443/apis/metrics.k8s.io/v1beta1": proxy error from [::1]:6443 while dialing [2001:cafe:42::11]:4443, code 500: 500 Internal Server Error
INFO[0015] Tunnel server egress proxy waiting for runtime core to become available
INFO[0016] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error
INFO[0020] Tunnel server egress proxy waiting for runtime core to become available
INFO[0021] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error
E0529 17:43:02.391893 1339371 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://[2001:cafe:42::11]:4443/apis/metrics.k8s.io/v1beta1: Get "https://[2001:cafe:42::11]:4443/apis/metrics.k8s.io/v1beta1": proxy error from [::1]:6443 while dialing [2001:cafe:42::11]:4443, code 500: 500 Internal Server Error
^CI0529 17:43:04.030385 1339371 watch.go:39] context canceled, close receiver chan
I0529 17:43:04.030399 1339371 vxlan_network.go:75] evts chan closed
W0529 17:43:04.030454 1339371 reflector.go:442] k8s.io/client-go@v1.24.1-k3s1/tools/cache/reflector.go:167: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding

Full logs
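The repeated 500s in the log come from the k3s supervisor port on `[::1]:6443`. A diagnostic sketch (not from the original report) for watching that endpoint directly while the server starts; `curl` prints the HTTP status code, or `000` if the connection itself fails:

```shell
# Diagnostic sketch (not from the original report): poll the supervisor
# readyz endpoint the log complains about. -k skips TLS verification;
# curl prints the HTTP status, or 000 if the connection itself fails.
curl -ks -o /dev/null -w '%{http_code}\n' --connect-timeout 2 \
  'https://[::1]:6443/v1-k3s/readyz'
```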

manuelbuil commented 2 years ago

What OS are you using?

manuelbuil commented 2 years ago

Can you please confirm that v1.23.6+k3s1 fails to start too?

ShylajaDevadiga commented 2 years ago

@martinxxd I am trying to reproduce with the config below on a fresh node. I am not seeing issues bringing up the cluster on an Ubuntu 20.04 IPv6-only setup using either v1.23.7.rc1+k3s1 or v1.24.1.rc1+k3s1. Any input that would help reproduce this?

$ cat config.yaml 
write-kubeconfig-mode: 644
token: REDACTED
cluster-cidr: 2001:cafe:42:0::/56
service-cidr: 2001:cafe:42:1::/112
disable-network-policy: true
flannel-ipv6-masq: true
disable: traefik

ubuntu@i-02f672ec94681e415:~$ kubectl get nodes
NAME                  STATUS   ROLES                  AGE   VERSION
i-02f672ec94681e415   Ready    control-plane,master   48s   v1.24.1-rc1+k3s1
ubuntu@i-02f672ec94681e415:~$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   coredns-b96499967-ldjm6                   1/1     Running   0          35s
kube-system   local-path-provisioner-7b7dc8d6f5-zg7r5   1/1     Running   0          35s
kube-system   metrics-server-668d979685-tpqkc           1/1     Running   0          35s
ubuntu@i-02f672ec94681e415:~$ 
martwz commented 2 years ago

@manuelbuil, @ShylajaDevadiga, thanks for getting back so promptly!

I'll gather a bit more information regarding my OS and try out v1.23.6+k3s1. On a high-level, I'm running Rocky Linux 8.

brandond commented 2 years ago

Do you actually have a globally routable IPv6 address on an interface, and an IPv6 default route?

martwz commented 2 years ago

Yes, that's an IPv6-only server; there's no (routable) IPv4 address assigned.

OS info:

[root@rocky ~]# cat /etc/os-release
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.6 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8"

[root@rocky ~]# hostnamectl
   Static hostname: rocky
         Icon name: computer-desktop
           Chassis: desktop
        Machine ID: CENSORED
           Boot ID: CENSORED
  Operating System: Rocky Linux 8.6 (Green Obsidian)
       CPE OS Name: cpe:/o:rocky:rocky:8:GA
            Kernel: Linux 4.18.0-348.7.1.el8_5.x86_64
      Architecture: x86-64

[root@rocky ~]# cat /etc/*release
Rocky Linux release 8.6 (Green Obsidian)
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
ID="rocky"
ID_LIKE="rhel centos fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Rocky Linux 8.6 (Green Obsidian)"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:rocky:rocky:8:GA"
HOME_URL="https://rockylinux.org/"
BUG_REPORT_URL="https://bugs.rockylinux.org/"
ROCKY_SUPPORT_PRODUCT="Rocky Linux"
ROCKY_SUPPORT_PRODUCT_VERSION="8"
REDHAT_SUPPORT_PRODUCT="Rocky Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8"
Rocky Linux release 8.6 (Green Obsidian)

[root@rocky ~]# uname -srm
Linux 4.18.0-348.7.1.el8_5.x86_64 x86_64

Network info:

[root@rocky ~]# nmcli connection show
NAME           UUID                                  TYPE      DEVICE
System enp6s0  b43fa2aa-5a85-7b0a-9a20-469067dba6d6  ethernet  enp6s0
tailscale0     8d9ddd75-443e-445c-b140-c4af5a80a41b  tun       tailscale0
cni0           8450821d-9831-4b92-a102-dc6402065dee  bridge    cni0
flannel-v6.1   1d0b918f-3f61-46c1-b8fb-bed18953d616  vxlan     flannel-v6.1

[root@rocky ~]# nmcli device show enp6s0
GENERAL.DEVICE:                         enp6s0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         7C:10:C9:21:FC:C7
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System enp6s0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER:               on
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         CENSORED::2/64
IP6.ADDRESS[2]:                         fe80::7e10:c9ff:fe21:fcc7/64
IP6.GATEWAY:                            fe80::1
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[2]:                           dst = CENSORED::/64, nh = ::, mt = 100
IP6.ROUTE[3]:                           dst = ::/0, nh = fe80::1, mt = 100

[root@rocky ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp6s0
DEVICE=enp6s0
ONBOOT=yes
BOOTPROTO=none
DEFROUTE=yes

IPV6INIT=yes
IPV6ADDR=CENSORED::2/64
IPV6_DEFAULTGW=fe80::1
IPV6_DEFROUTE=yes
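(The same information can be pulled with standard iproute2 commands; nothing k3s-specific here, just a quick IPv6 sanity check of the interface config above:)

```shell
# Standard iproute2 checks (nothing k3s-specific): a usable IPv6-only node
# should show at least one global-scope address and a default route.
ip -6 addr show scope global
ip -6 route show default
```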
martwz commented 2 years ago

Getting the same error using v1.23.6+k3s1 (418c3fa8):

Logs:

```shell
[root@rocky ~]# ./k3s -version
k3s version v1.23.6+k3s1 (418c3fa8)
go version go1.17.5
[root@rocky ~]# ./k3s server --flannel-ipv6-masq --disable-network-policy --cluster-cidr=2001:cafe:42:0::/56 --service-cidr=2001:cafe:42:1::/112 --disable=traefik
INFO[0000] Starting k3s v1.23.6+k3s1 (418c3fa8)
INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s
INFO[0000] Configuring database table schema and indexes, this may take a moment...
INFO[0000] Database tables and indexes are up to date
INFO[0000] Kine available at unix://kine.sock
INFO[0000] Reconciling bootstrap data between datastore and disk
INFO[0000] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=::1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=2001:cafe:42:1::/112 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=::1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259
INFO[0000] Waiting for API server to become available
INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=::1 --cluster-cidr=2001:cafe:42::/56 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true
I0601 05:35:40.297953 1412177 server.go:593] external host was not specified, using CENSORED::2
I0601 05:35:40.298107 1412177 server.go:186] Version: v1.23.6+k3s1
INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=::1 --cloud-provider=k3s --cluster-cidr=2001:cafe:42::/56 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false
I0601 05:35:40.299863 1412177 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0601 05:35:40.300420 1412177 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0601 05:35:40.300434 1412177 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0601 05:35:40.300945 1412177 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0601 05:35:40.300953 1412177 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
INFO[0000] Node token is available at /var/lib/rancher/k3s/server/token
INFO[0000] To join node to cluster: k3s agent -s https://[CENSORED::2]:6443 -t ${NODE_TOKEN}
INFO[0000] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml
INFO[0000] Run: k3s kubectl
W0601 05:35:40.313378 1412177 genericapiserver.go:538] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
I0601 05:35:40.314029 1412177 instance.go:274] Using reconciler: lease
INFO[0000] certificate CN=rocky-85-amd64-base signed by CN=k3s-server-ca@1650080547: notBefore=2022-04-16 03:42:27 +0000 UTC notAfter=2023-06-01 03:35:40 +0000 UTC
INFO[0000] certificate CN=system:node:rocky-85-amd64-base,O=system:nodes signed by CN=k3s-client-ca@1650080547: notBefore=2022-04-16 03:42:27 +0000 UTC notAfter=2023-06-01 03:35:40 +0000 UTC
INFO[0000] Module overlay was already loaded
INFO[0000] Module nf_conntrack was already loaded
INFO[0000] Module br_netfilter was already loaded
INFO[0000] Module iptable_nat was already loaded
INFO[0000] Module ip6table_nat was already loaded
INFO[0000] Set sysctl 'net/core/devconf_inherit_init_net' to 1
ERRO[0000] Failed to set sysctl: open /proc/sys/net/core/devconf_inherit_init_net: no such file or directory
WARN[0000] SELinux is enabled on this host, but k3s has not been started with --selinux - containerd SELinux support is disabled
INFO[0000] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log
INFO[0000] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
W0601 05:35:40.506001 1412177 genericapiserver.go:538] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.507870 1412177 genericapiserver.go:538] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.517091 1412177 genericapiserver.go:538] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.518270 1412177 genericapiserver.go:538] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.522082 1412177 genericapiserver.go:538] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.524776 1412177 genericapiserver.go:538] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0601 05:35:40.528613 1412177 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.528622 1412177 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0601 05:35:40.529488 1412177 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0601 05:35:40.529495 1412177 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0601 05:35:40.531876 1412177 genericapiserver.go:538] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0601 05:35:40.534363 1412177 genericapiserver.go:538] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0601 05:35:40.536795 1412177 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
W0601 05:35:40.536802 1412177 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
W0601 05:35:40.537905 1412177 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0601 05:35:40.540186 1412177 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0601 05:35:40.540194 1412177 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0601 05:35:40.546670 1412177 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0601 05:35:41.159743 1412177 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0601 05:35:41.159794 1412177 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0601 05:35:41.159876 1412177 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0601 05:35:41.159891 1412177 secure_serving.go:266] Serving securely on [::1]:6444
I0601 05:35:41.159909 1412177 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0601 05:35:41.159965 1412177 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0601 05:35:41.160007 1412177 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0601 05:35:41.160015 1412177 customresource_discovery_controller.go:209] Starting DiscoveryController
I0601 05:35:41.160022 1412177 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0601 05:35:41.160239 1412177 autoregister_controller.go:141] Starting autoregister controller
I0601 05:35:41.160249 1412177 cache.go:32] Waiting for caches to sync for autoregister controller
I0601 05:35:41.160267 1412177 crdregistration_controller.go:111] Starting crd-autoregister controller
I0601 05:35:41.160271 1412177 controller.go:85] Starting OpenAPI controller
I0601 05:35:41.160272 1412177 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0601 05:35:41.160283 1412177 naming_controller.go:291] Starting NamingConditionController
I0601 05:35:41.160293 1412177 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0601 05:35:41.160302 1412177 available_controller.go:491] Starting AvailableConditionController
I0601 05:35:41.160305 1412177 establishing_controller.go:76] Starting EstablishingController
I0601 05:35:41.160305 1412177 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0601 05:35:41.160319 1412177 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0601 05:35:41.160294 1412177 crd_finalizer.go:266] Starting CRDFinalizer
I0601 05:35:41.160400 1412177 apf_controller.go:317] Starting API Priority and Fairness config controller
I0601 05:35:41.160520 1412177 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0601 05:35:41.160527 1412177 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0601 05:35:41.160548 1412177 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0601 05:35:41.162142 1412177 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0601 05:35:41.162282 1412177 controller.go:83] Starting OpenAPI AggregationController
I0601 05:35:41.199905 1412177 shared_informer.go:247] Caches are synced for node_authorizer
E0601 05:35:41.205240 1412177 controller.go:161] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
I0601 05:35:41.260176 1412177 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0601 05:35:41.260333 1412177 cache.go:39] Caches are synced for AvailableConditionController controller
I0601 05:35:41.260397 1412177 shared_informer.go:247] Caches are synced for crd-autoregister
I0601 05:35:41.260402 1412177 cache.go:39] Caches are synced for autoregister controller
I0601 05:35:41.260447 1412177 apf_controller.go:322] Running API Priority and Fairness config worker
I0601 05:35:41.260570 1412177 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
INFO[0001] Containerd is now running
INFO[0001] Connecting to proxy url="wss://[::1]:6443/v1-k3s/connect"
INFO[0001] Handling backend connection request [rocky-85-amd64-base]
INFO[0001] Running kubelet --address=:: --allowed-unsafe-sysctls=net.ipv6.conf.all.forwarding --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=2001:cafe:42:1::a --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/8c2b0191f6e36ec6f3cb68e2302fcc4be850c6db31ec5f8a74e4b3be403101d8/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=::1 --hostname-override=rocky-85-amd64-base --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key
Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
INFO[0001] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error
I0601 05:35:41.442121 1412177 server.go:442] "Kubelet version" kubeletVersion="v1.23.6+k3s1"
I0601 05:35:41.442815 1412177 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
I0601 05:35:42.159889 1412177 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0601 05:35:42.162205 1412177 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
I0601 05:35:42.206231 1412177 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
INFO[0002] labels have been set successfully on node: rocky-85-amd64-base
INFO[0002] Starting flannel with backend vxlan
INFO[0002] Flannel found PodCIDR assigned for node rocky-85-amd64-base
INFO[0002] The interface enp6s0 with ipv6 address CENSORED::2 will be used by flannel
I0601 05:35:42.450395 1412177 kube.go:120] Waiting 10m0s for node controller to sync
I0601 05:35:42.450406 1412177 kube.go:378] Starting kube subnet manager
I0601 05:35:42.451327 1412177 kube.go:406] Creating the node lease for IPv6. This is the n.Spec.PodCIDRs: [2001:cafe:42::/64]
I0601 05:35:43.451116 1412177 kube.go:127] Node controller sync successful
I0601 05:35:43.451160 1412177 vxlan.go:138] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
I0601 05:35:43.457930 1412177 kube.go:345] Skip setting NodeNetworkUnavailable
INFO[0003] Wrote flannel subnet file to /run/flannel/subnet.env
INFO[0003] Running flannel backend.
I0601 05:35:43.459748 1412177 vxlan_network.go:61] watching for new subnet leases
INFO[0006] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error
I0601 05:35:46.463707 1412177 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
I0601 05:35:46.463816 1412177 container_manager_linux.go:281] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0601 05:35:46.463855 1412177 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0601 05:35:46.463877 1412177 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0601 05:35:46.463882 1412177 container_manager_linux.go:321] "Creating device plugin manager" devicePluginEnabled=true
I0601 05:35:46.463901 1412177 state_mem.go:36] "Initialized new in-memory state store"
I0601 05:35:46.464877 1412177 kubelet.go:416] "Attempting to sync node with API server"
I0601 05:35:46.464891 1412177 kubelet.go:278] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0601 05:35:46.464909 1412177 kubelet.go:289] "Adding apiserver pod source"
I0601 05:35:46.464919 1412177 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0601 05:35:46.465540 1412177 kuberuntime_manager.go:249] "Container runtime initialized" containerRuntime="containerd" version="v1.5.11-k3s2" apiVersion="v1alpha2"
I0601 05:35:46.465801 1412177 server.go:1231] "Started kubelet"
I0601 05:35:46.465857 1412177 server.go:150] "Starting to listen" address="::" port=10250
E0601 05:35:46.466340 1412177 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0601 05:35:46.466367 1412177 kubelet.go:1351] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0601 05:35:46.466672 1412177 server.go:410] "Adding debug handlers to kubelet server"
I0601 05:35:46.466867 1412177 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0601 05:35:46.466908 1412177 volume_manager.go:291] "Starting Kubelet Volume Manager"
I0601 05:35:46.466969 1412177 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
I0601 05:35:46.482435 1412177 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I0601 05:35:46.490472 1412177 kubelet_network_linux.go:57] "Initialized protocol iptables rules." protocol=IPv4
I0601 05:35:46.505982 1412177 kubelet_network_linux.go:57] "Initialized protocol iptables rules." protocol=IPv6
I0601 05:35:46.505998 1412177 status_manager.go:161] "Starting to sync pod status with apiserver"
I0601 05:35:46.506011 1412177 kubelet.go:2016] "Starting kubelet main sync loop"
E0601 05:35:46.506042 1412177 kubelet.go:2040] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
I0601 05:35:46.567736 1412177 kuberuntime_manager.go:1105] "Updating runtime config through cri with podcidr" CIDR="2001:cafe:42::/64"
I0601 05:35:46.568196 1412177 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="2001:cafe:42::/64"
I0601 05:35:46.570353 1412177 cpu_manager.go:213] "Starting CPU manager" policy="none"
I0601 05:35:46.570362 1412177 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
I0601 05:35:46.570373 1412177 state_mem.go:36] "Initialized new in-memory state store"
I0601 05:35:46.570478 1412177 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0601 05:35:46.570486 1412177 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I0601 05:35:46.570491 1412177 policy_none.go:49] "None policy: Start"
I0601 05:35:46.570839 1412177 memory_manager.go:168] "Starting memorymanager" policy="None"
I0601 05:35:46.570853 1412177 state_mem.go:35] "Initializing new in-memory state store"
I0601 05:35:46.570949 1412177 state_mem.go:75] "Updated machine memory state"
I0601 05:35:46.571717 1412177 manager.go:610] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
I0601 05:35:46.572136 1412177 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
I0601 05:35:46.578905 1412177 container_manager_linux.go:676] "Failed to ensure state" containerName="/k3s" err="failed to find container of PID 1412177: cpu and memory cgroup hierarchy not unified. cpu: /user.slice, memory: /user.slice/user-0.slice/session-102.scope"
I0601 05:35:46.606672 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="649f2d3e7d97473a4fd816bf210ce982b8f4c5371d06c7c79d1494068de47dc9"
I0601 05:35:46.606689 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="42008242bf2ffe578d874ed2b5afec8a2297ce17201632cf18fb72049b8009f6"
I0601 05:35:46.606701 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="7185f17791cafa9ab08d48c78047a01c7001d51c16f95c5e2d8615db79f720b1"
I0601 05:35:46.606713 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="e562733c15c438639c2118834adeaa169b5a8b8af648706bcdb65d043b7f974b"
I0601 05:35:46.606726 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="81c926bdab2b11ab0f1af80f7bc5da79b163653dabc43a4b54201f949ed6c1d3"
I0601 05:35:46.606737 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="b046d1893e1dd8c4a59a49e459ef8a154c533ea99fcd86d5fecac2237468f349"
I0601 05:35:46.606746 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="0060022aa1d862ce28b2a31ad25add77156e72713ba355f07a76fc4d3b59ebd9"
I0601 05:35:46.606766 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="77607c6e2249de16d4bbf0a132dc218ef294add3a6b8873770d8a8abcedc3616"
I0601 05:35:46.606775 1412177 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="5a5b6f59d05a83bb091e37fbea43c38a2429b785d70edc3ca6ac8551638a8edb"
I0601 05:35:46.609339 1412177 kubelet_node_status.go:70] "Attempting to register node" node="rocky-85-amd64-base"
I0601 05:35:46.613446 1412177 kubelet_node_status.go:108] "Node was previously registered" node="rocky-85-amd64-base"
I0601 05:35:46.613504 1412177 kubelet_node_status.go:73] "Successfully registered node" node="rocky-85-amd64-base"
INFO[0007] Stopped tunnel to [::1]:6443
INFO[0007] Proxy done err="context canceled" url="wss://[::1]:6443/v1-k3s/connect"
INFO[0007] Connecting to proxy url="wss://[CENSORED::2]:6443/v1-k3s/connect"
INFO[0007] error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF
INFO[0007] Handling backend connection request [rocky-85-amd64-base]
I0601 05:35:47.465197 1412177 apiserver.go:52] "Watching apiserver"
I0601 05:35:47.467714 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467759 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467785 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467818 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467846 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467871 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.467952 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.468083 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.468136 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.468313 1412177 topology_manager.go:200] "Topology Admit Handler"
I0601 05:35:47.573725 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfqzn\" (UniqueName: \"kubernetes.io/projected/50c6c3f2-7d77-4575-8dbb-a9f655e5bff2-kube-api-access-hfqzn\") pod \"local-path-provisioner-6c79684f77-bx2b7\" (UID: \"50c6c3f2-7d77-4575-8dbb-a9f655e5bff2\") " pod="kube-system/local-path-provisioner-6c79684f77-bx2b7"
I0601 05:35:47.573752 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/50c6c3f2-7d77-4575-8dbb-a9f655e5bff2-config-volume\") pod \"local-path-provisioner-6c79684f77-bx2b7\" (UID: \"50c6c3f2-7d77-4575-8dbb-a9f655e5bff2\") " pod="kube-system/local-path-provisioner-6c79684f77-bx2b7"
I0601 05:35:48.665585 1412177 request.go:665] Waited for 1.19493861s due to client-side throttling, not priority and fairness, request: PATCH:https://[::1]:6443/api/v1/namespaces/kube-system/pods/metrics-server-7cd5fcb6b7-fmdzb/status
I0601 05:35:49.486910 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b6318365-5119-488d-9c02-e6559bafe7ff\" (UniqueName: \"kubernetes.io/host-path/3ec3c902-8600-4344-b85e-2f5af19d3199-pvc-b6318365-5119-488d-9c02-e6559bafe7ff\") pod \"redis-5cbd7686d5-zq2rn\" (UID: \"3ec3c902-8600-4344-b85e-2f5af19d3199\") " pod="default/redis-5cbd7686d5-zq2rn"
I0601 05:35:49.486942 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcb6m\" (UniqueName: \"kubernetes.io/projected/3ec3c902-8600-4344-b85e-2f5af19d3199-kube-api-access-tcb6m\") pod \"redis-5cbd7686d5-zq2rn\" (UID: \"3ec3c902-8600-4344-b85e-2f5af19d3199\") " pod="default/redis-5cbd7686d5-zq2rn"
I0601 05:35:50.493777 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e288a59b-f402-4410-a3e0-47b51cad15f8\" (UniqueName: \"kubernetes.io/host-path/904524a8-e6cb-4366-85a4-c85bde1a9c90-pvc-e288a59b-f402-4410-a3e0-47b51cad15f8\") pod \"clickhouse-7766d8f755-c5x8c\" (UID: \"904524a8-e6cb-4366-85a4-c85bde1a9c90\") " pod="default/clickhouse-7766d8f755-c5x8c"
I0601 05:35:50.493807 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwmx\" (UniqueName: \"kubernetes.io/projected/904524a8-e6cb-4366-85a4-c85bde1a9c90-kube-api-access-pvwmx\") pod \"clickhouse-7766d8f755-c5x8c\" (UID: \"904524a8-e6cb-4366-85a4-c85bde1a9c90\") " pod="default/clickhouse-7766d8f755-c5x8c"
INFO[0011] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz:
```
500 Internal Server Error I0601 05:35:51.702524 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e-custom-config-volume\") pod \"coredns-d76bd69b-dq6mq\" (UID: \"bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e\") " pod="kube-system/coredns-d76bd69b-dq6mq" I0601 05:35:51.702549 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vector-config\" (UniqueName: \"kubernetes.io/configmap/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-vector-config\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702569 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sysfs\" (UniqueName: \"kubernetes.io/host-path/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-sysfs\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702585 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e-config-volume\") pod \"coredns-d76bd69b-dq6mq\" (UID: \"bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e\") " pod="kube-system/coredns-d76bd69b-dq6mq" I0601 05:35:51.702603 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxrw8\" (UniqueName: \"kubernetes.io/projected/eaca86cc-7076-4bca-95ee-a6b8c1cc7d1e-kube-api-access-gxrw8\") pod \"static-site-67bcb6859d-xf478\" (UID: \"eaca86cc-7076-4bca-95ee-a6b8c1cc7d1e\") " pod="default/static-site-67bcb6859d-xf478" I0601 05:35:51.702622 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j44kh\" (UniqueName: 
\"kubernetes.io/projected/39b5cbf6-a738-4959-b292-bd98899d0c0f-kube-api-access-j44kh\") pod \"grafana-6944f767f6-gltzp\" (UID: \"39b5cbf6-a738-4959-b292-bd98899d0c0f\") " pod="monitoring/grafana-6944f767f6-gltzp" I0601 05:35:51.702639 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-var-log\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702654 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/914136fd-1dd6-478a-a572-9da9a9fa6731-tmp-dir\") pod \"metrics-server-7cd5fcb6b7-fmdzb\" (UID: \"914136fd-1dd6-478a-a572-9da9a9fa6731\") " pod="kube-system/metrics-server-7cd5fcb6b7-fmdzb" I0601 05:35:51.702672 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbrs\" (UniqueName: \"kubernetes.io/projected/914136fd-1dd6-478a-a572-9da9a9fa6731-kube-api-access-8tbrs\") pod \"metrics-server-7cd5fcb6b7-fmdzb\" (UID: \"914136fd-1dd6-478a-a572-9da9a9fa6731\") " pod="kube-system/metrics-server-7cd5fcb6b7-fmdzb" I0601 05:35:51.702687 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-var-lib\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702703 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"procfs\" (UniqueName: \"kubernetes.io/host-path/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-procfs\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702719 1412177 
reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml6tr\" (UniqueName: \"kubernetes.io/projected/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-kube-api-access-ml6tr\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702738 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c5ad9827-1a74-4e9c-9bf9-5e2903a5b82e\" (UniqueName: \"kubernetes.io/host-path/39b5cbf6-a738-4959-b292-bd98899d0c0f-pvc-c5ad9827-1a74-4e9c-9bf9-5e2903a5b82e\") pod \"grafana-6944f767f6-gltzp\" (UID: \"39b5cbf6-a738-4959-b292-bd98899d0c0f\") " pod="monitoring/grafana-6944f767f6-gltzp" I0601 05:35:51.702754 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d7ea9d08-0c2c-49e1-8977-37e51f178dc1-data\") pod \"vector-9d49985b4-pnskq\" (UID: \"d7ea9d08-0c2c-49e1-8977-37e51f178dc1\") " pod="monitoring/vector-9d49985b4-pnskq" I0601 05:35:51.702774 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h2n6\" (UniqueName: \"kubernetes.io/projected/bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e-kube-api-access-6h2n6\") pod \"coredns-d76bd69b-dq6mq\" (UID: \"bcd43cc9-ee9b-4d77-8497-9baefb6cfb5e\") " pod="kube-system/coredns-d76bd69b-dq6mq" I0601 05:35:53.716063 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90956120-5329-4afa-a7ea-f9beb9ac48b4\" (UniqueName: \"kubernetes.io/host-path/a43f958e-bee8-4508-84af-26b083742db5-pvc-90956120-5329-4afa-a7ea-f9beb9ac48b4\") pod \"postgres-79cb55f775-2rvq8\" (UID: \"a43f958e-bee8-4508-84af-26b083742db5\") " pod="default/postgres-79cb55f775-2rvq8" I0601 05:35:53.716091 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-registry-config\" 
(UniqueName: \"kubernetes.io/configmap/2ef930bc-55ab-48b7-94fd-3bf71149de8c-kube-registry-config\") pod \"kube-registry-848c7577cf-wlm54\" (UID: \"2ef930bc-55ab-48b7-94fd-3bf71149de8c\") " pod="default/kube-registry-848c7577cf-wlm54" I0601 05:35:53.716115 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx4nr\" (UniqueName: \"kubernetes.io/projected/2ef930bc-55ab-48b7-94fd-3bf71149de8c-kube-api-access-xx4nr\") pod \"kube-registry-848c7577cf-wlm54\" (UID: \"2ef930bc-55ab-48b7-94fd-3bf71149de8c\") " pod="default/kube-registry-848c7577cf-wlm54" I0601 05:35:53.716138 1412177 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tcs\" (UniqueName: \"kubernetes.io/projected/a43f958e-bee8-4508-84af-26b083742db5-kube-api-access-w6tcs\") pod \"postgres-79cb55f775-2rvq8\" (UID: \"a43f958e-bee8-4508-84af-26b083742db5\") " pod="default/postgres-79cb55f775-2rvq8" I0601 05:35:53.716148 1412177 reconciler.go:157] "Reconciler: start to sync state" INFO[0016] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error INFO[0021] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error INFO[0026] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error INFO[0030] Waiting for API server to become available INFO[0030] Waiting for API server to become available INFO[0031] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error INFO[0036] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error INFO[0041] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server 
Error INFO[0046] Waiting to retrieve kube-proxy configuration; server is not ready: https://[::1]:6443/v1-k3s/readyz: 500 Internal Server Error ^CI0601 05:36:29.851753 1412177 watch.go:39] context canceled, close receiver chan I0601 05:36:29.851771 1412177 vxlan_network.go:75] evts chan closed ```
brandond commented 2 years ago

Does localhost in /etc/hosts point at an IPv4 or an IPv6 address?
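(As an aside, one quick way to check this: getent consults the same resolver sources as most applications, so it shows which address family localhost maps to first. A minimal sketch:)

```shell
# Print every address that "localhost" resolves to, in resolution order.
# If 127.0.0.1 appears first, IPv4 wins; if ::1 appears first, IPv6 wins.
getent ahosts localhost
```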

rbrtbnfgl commented 2 years ago

I think it could be a certificate issue. Are you starting the newer version of K3s on an existing setup, or is it a fresh installation?

martwz commented 2 years ago

Oh, localhost points to an IPv4 address. And I'm starting the newer version of k3s on an existing setup.

```
[root@rocky ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
REDACTED::2 rocky

[root@rocky ~]# ping localhost
PING localhost.localdomain (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=2 ttl=64 time=0.041 ms
64 bytes from localhost.localdomain (127.0.0.1): icmp_seq=3 ttl=64 time=0.022 ms
^C
--- localhost.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 9227ms
rtt min/avg/max/mdev = 0.022/0.028/0.041/0.006 ms
```

rbrtbnfgl commented 2 years ago

The issue is related to the generated certificate. On v1.23.5 IPv6 wasn't fully supported, and the internally generated URLs had issues, using 127.0.0.1 instead of ::1; this was fixed in v1.23.6. On your setup the certificate generated by v1.23.5 is valid only for 127.0.0.1, so contacting the service on ::1 fails because the certificate doesn't cover that address. I have to check how to force the certificate regeneration.
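(For reference, the addresses a certificate is valid for can be inspected with openssl. The path below is an assumption based on a default k3s install, where the apiserver serving certificate typically lives under /var/lib/rancher/k3s/server/tls/; adjust it if --data-dir was set.)

```shell
# List the Subject Alternative Names of the apiserver serving certificate.
# On a cert generated by v1.23.5 you would expect to see IP:127.0.0.1 but
# no entry for ::1 (printed by openssl as 0:0:0:0:0:0:0:1).
# Path assumes a default k3s data dir (an assumption, not from the thread).
openssl x509 -noout -text \
  -in /var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt \
  | grep -A1 'Subject Alternative Name'
```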

rbrtbnfgl commented 2 years ago

If you remove the certificates in /var/lib/rancher/k3s/server/tls/ before updating, they will be regenerated by the newer version and the issue should be fixed.
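(A cautious sketch of that procedure is below. The backup step is my addition, not part of the suggested fix, and the systemd unit name assumes k3s was installed with the standard install script.)

```shell
# Stop the service, back up the TLS directory, remove it, then start the
# upgraded binary, which regenerates the certificates on boot.
sudo systemctl stop k3s                          # unit name assumed from the standard installer
sudo tar czf /root/k3s-tls-backup.tar.gz \
  -C /var/lib/rancher/k3s/server tls             # backup: an extra precaution, my assumption
sudo rm -rf /var/lib/rancher/k3s/server/tls      # certs are recreated on next start
sudo systemctl start k3s
```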

martwz commented 2 years ago

Thanks rbrtbnfgl and brandond! Removing the certificates from /var/lib/rancher/k3s/server/tls/ fixed the issue; I'm now able to run the latest k3s on an IPv6-only setup 🎉