loft-sh / vcluster

vCluster - Create fully functional virtual Kubernetes clusters - Each vcluster runs inside a namespace of the underlying k8s cluster. It's cheaper than creating separate full-blown clusters and it offers better multi-tenancy and isolation than regular namespaces.
https://www.vcluster.com
Apache License 2.0
6.26k stars 398 forks source link

vcluster on k3s on WSL2 #162

Closed pmualaba closed 2 years ago

pmualaba commented 2 years ago

Hello, trying out vcluster. Any idea why my attempt to create a simple vcluster is failing here? I succesfully installed k3s here in WSL2 and now i'm trying to create my first vcluster inside it...

k3s version v1.22.2+k3s2 (3f5774b4) go version go1.16.8

image

This is the k3s log on WSL2:

╰─ I1023 18:55:27.145605 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:27.145634 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:27.145866 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:55:40.145573 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:40.145604 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:40.145915 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 W1023 18:55:46.540702 31070 sysinfo.go:203] Nodes topology is not available, providing CPU topology I1023 18:55:54.145219 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:55:54.145249 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:55:54.145498 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 E1023 18:55:57.670079 31070 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.4 (legacy): Couldn't load matchlimit':No such file or directory

Error occurred at line: 83 Try iptables-restore -h' or 'iptables-restore --help' for more information. ) *filter :INPUT ACCEPT [40874:12238794] - [0:0] :FORWARD DROP [0:0] - [0:0] :OUTPUT ACCEPT [41155:11345096] - [0:0] :DOCKER - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-1 - [0:0] - [0:0] :DOCKER-ISOLATION-STAGE-2 - [0:0] - [0:0] :DOCKER-USER - [0:0] - [0:0] :KUBE-EXTERNAL-SERVICES - [0:0] - [0:0] :KUBE-FIREWALL - [0:0] - [0:0] :KUBE-FORWARD - [0:0] - [0:0] :KUBE-KUBELET-CANARY - [0:0] - [0:0] :KUBE-NODEPORTS - [0:0] - [0:0] :KUBE-NWPLCY-DEFAULT - [0:0] - [0:0] :KUBE-PROXY-CANARY - [0:0] - [0:0] :KUBE-ROUTER-FORWARD - [0:0] - [0:0] :KUBE-ROUTER-INPUT - [0:0] - [0:0] :KUBE-ROUTER-OUTPUT - [0:0] - [0:0] :KUBE-SERVICES - [0:0] - [0:0] :KUBE-POD-FW-YTHDYMA2CBWLR2PW - [0:0] :KUBE-POD-FW-CEOFHLPKKYLD56IO - [0:0] :KUBE-POD-FW-NAOEZKKUB5NO4KBI - [0:0] :KUBE-POD-FW-4M52UXR2EWFBQ6QH - [0:0] :KUBE-POD-FW-K2TVNHK5E5ZQHCLK - [0:0] :KUBE-POD-FW-C7NCCGNSUR3CKZKN - [0:0] -A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT -A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS -A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A INPUT -j KUBE-FIREWALL -A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD -A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES -A FORWARD -j DOCKER-USER -A FORWARD -j DOCKER-ISOLATION-STAGE-1 -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o docker0 -j DOCKER -A FORWARD -i docker0 ! -o docker0 -j ACCEPT -A FORWARD -i docker0 -o docker0 -j ACCEPT -A FORWARD -o br-ba82458eff39 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A FORWARD -o br-ba82458eff39 -j DOCKER -A FORWARD -i br-ba82458eff39 ! -o br-ba82458eff39 -j ACCEPT -A FORWARD -i br-ba82458eff39 -o br-ba82458eff39 -j ACCEPT -A FORWARD -s 10.42.0.0/16 -j ACCEPT -A FORWARD -d 10.42.0.0/16 -j ACCEPT -A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT -A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES -A OUTPUT -j KUBE-FIREWALL -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -i br-ba82458eff39 ! -o br-ba82458eff39 -j DOCKER-ISOLATION-STAGE-2 -A DOCKER-ISOLATION-STAGE-1 -j RETURN -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP -A DOCKER-ISOLATION-STAGE-2 -o br-ba82458eff39 -j DROP -A DOCKER-ISOLATION-STAGE-2 -j RETURN -A DOCKER-USER -j RETURN -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP -A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP -A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP -A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000 -A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to cluster IP - M66LPN4N3KB5HTJR" -j RETURN -A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN -A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT -A KUBE-SERVICES -d 10.43.44.208/32 -p udp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns has no endpoints" -m udp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:dns-tcp has no endpoints" -m tcp --dport 53 -j REJECT --reject-with icmp-port-unreachable -A KUBE-SERVICES -d 10.43.44.208/32 -p tcp -m comment --comment "host-namespace-1/kube-dns-x-kube-system-x-vcluster-1:metrics has no endpoints" -m tcp --dport 9153 -j REJECT --reject-with icmp-port-unreachable -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -d 10.42.0.7 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -s 10.42.0.7 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.7 -j ACCEPT -I KUBE-POD-FW-YTHDYMA2CBWLR2PW 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -d 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:svclb-traefik-k4whv namespace: kube-system to chain KUBE-POD-FW-YTHDYMA2CBWLR2PW" -s 10.42.0.7 -j KUBE-POD-FW-YTHDYMA2CBWLR2PW -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to log dropped traffic POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "rule to REJECT traffic destined for POD name:svclb-traefik-k4whv namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-YTHDYMA2CBWLR2PW -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT -I KUBE-POD-FW-CEOFHLPKKYLD56IO 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -d 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:vcluster-1-0 namespace: host-namespace-1 to chain KUBE-POD-FW-CEOFHLPKKYLD56IO" -s 10.42.0.11 -j KUBE-POD-FW-CEOFHLPKKYLD56IO -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to log dropped traffic POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "rule to REJECT traffic destined for POD name:vcluster-1-0 namespace: host-namespace-1" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-CEOFHLPKKYLD56IO -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-CEOFHLPKKYLD56IO -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -d 10.42.0.8 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -s 10.42.0.8 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.8 -j ACCEPT -I KUBE-POD-FW-NAOEZKKUB5NO4KBI 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -d 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:traefik-74dd4975f9-4fpmk namespace: kube-system to chain KUBE-POD-FW-NAOEZKKUB5NO4KBI" -s 10.42.0.8 -j KUBE-POD-FW-NAOEZKKUB5NO4KBI -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to log dropped traffic POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "rule to REJECT traffic destined for POD name:traefik-74dd4975f9-4fpmk namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-NAOEZKKUB5NO4KBI -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -d 10.42.0.2 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -s 10.42.0.2 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.2 -j ACCEPT -I KUBE-POD-FW-4M52UXR2EWFBQ6QH 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -d 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system to chain KUBE-POD-FW-4M52UXR2EWFBQ6QH" -s 10.42.0.2 -j KUBE-POD-FW-4M52UXR2EWFBQ6QH -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-64ffb68fd-mv8j4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-4M52UXR2EWFBQ6QH -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -d 10.42.0.3 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -s 10.42.0.3 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.3 -j ACCEPT -I KUBE-POD-FW-K2TVNHK5E5ZQHCLK 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -d 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system to chain KUBE-POD-FW-K2TVNHK5E5ZQHCLK" -s 10.42.0.3 -j KUBE-POD-FW-K2TVNHK5E5ZQHCLK -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to log dropped traffic POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-9cf544f65-xdbg6 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-K2TVNHK5E5ZQHCLK -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -d 10.42.0.5 -m comment --comment "run through default ingress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -s 10.42.0.5 -m comment --comment "run through default egress network policy chain" -j KUBE-NWPLCY-DEFAULT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.5 -j ACCEPT -I KUBE-POD-FW-C7NCCGNSUR3CKZKN 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -d 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-85cb69466-sfmfm namespace: kube-system to chain KUBE-POD-FW-C7NCCGNSUR3CKZKN" -s 10.42.0.5 -j KUBE-POD-FW-C7NCCGNSUR3CKZKN -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to log dropped traffic POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "rule to REJECT traffic destined for POD name:coredns-85cb69466-sfmfm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -j MARK --set-mark 0/0x10000 -A KUBE-POD-FW-C7NCCGNSUR3CKZKN -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 COMMIT I1023 18:56:09.144877 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:09.144909 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" ╭─    ~ ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 1 ✘    at 18:54:23   ╰─ I1023 18:56:20.145216 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:20.145252 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:20.145522 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:31.145003 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:31.145035 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:31.145307 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 I1023 18:56:42.145112 31070 scope.go:110] "RemoveContainer" containerID="4709e866a99bcdbe858feaa24263302ca41811aeb1d9989a2096063ce3021402" I1023 18:56:42.145144 31070 scope.go:110] "RemoveContainer" containerID="e5ecccd55e58988e150565eddaec0a34cbd4519e50f27b287988198e5f8bc449" E1023 18:56:42.145447 31070 pod_workers.go:765] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"vcluster\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=vcluster pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\", failed to \"StartContainer\" for \"syncer\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=syncer pod=vcluster-1-0_host-namespace-1(c076d934-6820-46cd-b886-c6b35111ece2)\"]" pod="host-namespace-1/vcluster-1-0" podUID=c076d934-6820-46cd-b886-c6b35111ece2 Pr

FabianKramm commented 2 years ago

@pmualaba thanks for creating this issue! Could you post the syncer and k3s logs here as well? (kubectl logs vcluster-1-0 -n host-namespace-1 -c syncer && kubectl logs vcluster-1-0 -n host-namespace-1 -c k3s)

pmualaba commented 2 years ago

Thanks for your reply! These are the requested logs... Note: I use pure WSL2 (Ubuntu-20.04), so no Docker Desktop installed, only k3s directly on WSL2 :-)

`╰─ kubectl logs vcluster-1-0 -n host-namespace-1 -c syncer && kubectl logs vcluster-1-0 -n host-namespace-1 -c k3s I1025 08:34:35.496032 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:36.492089 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:37.492310 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:38.492410 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:39.491530 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:40.491626 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:41.491944 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:42.491906 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:43.492067 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:44.490301 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:45.492276 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:46.492333 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:47.492298 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:48.491486 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:49.490726 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:50.491884 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:51.492853 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:52.491696 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:53.491662 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:54.490968 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:55.491407 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:56.492570 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:57.491715 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:58.491880 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:34:59.491120 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:00.492287 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:01.491879 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:02.491856 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:03.492087 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:04.491142 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:05.491404 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:06.491621 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:07.491751 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:08.491875 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:09.490062 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:10.492230 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:11.491967 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:12.491940 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:13.492064 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:14.490339 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:15.492399 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:16.491768 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:17.492895 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:18.490108 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:19.490358 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:20.492497 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:21.491653 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:22.491740 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:23.490042 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:24.490948 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:25.492122 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:26.490700 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:27.490295 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:28.491913 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:29.490845 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:30.492722 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:31.491791 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:32.492113 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:33.492293 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:34.490444 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:35.492463 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:36.491581 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:37.491692 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:38.491957 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:39.490030 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:40.492225 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:41.492084 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:42.492801 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:43.491250 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:44.491168 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:45.492111 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:46.490671 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:47.492001 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:48.490399 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:49.490627 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:50.491785 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds I1025 08:35:51.491972 1 main.go:179] couldn't retrieve virtual cluster version (Get "https://127.0.0.1:6444/version?timeout=32s": dial tcp 127.0.0.1:6444: connect: connection refused), will retry in 1 seconds error: container k3s is not valid for pod vcluster-1-0

`

pmualaba commented 2 years ago

If it can help, these are the boot logs when starting k3s on WSL2:

[sudo] password for pmualaba: INFO[0000] Starting k3s v1.22.2+k3s2 (3f5774b4) INFO[0000] Cluster bootstrap already complete INFO[0000] Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s INFO[0000] Configuring database table schema and indexes, this may take a moment... INFO[0000] Database tables and indexes are up to date INFO[0000] Kine available at unix://kine.sock INFO[0000] Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/li b/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requesthea der-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/ser ving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I1025 11:21:40.238639 27100 server.go:581] external host was not specified, using 172.29.15.142 I1025 11:21:40.238794 27100 server.go:175] Version: v1.22.2+k3s2 I1025 11:21:40.240637 27100 shared_informer.go:240] Waiting for caches to sync for node_authorizer I1025 11:21:40.241283 27100 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 11:21:40.241301 27100 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1025 11:21:40.241862 27100 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 11:21:40.241878 27100 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1025 11:21:40.250785 27100 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. I1025 11:21:40.251382 27100 instance.go:278] Using reconciler: lease I1025 11:21:40.331677 27100 rest.go:130] the default service ipfamily for this cluster is: IPv4 W1025 11:21:40.644235 27100 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.644930 27100 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.651742 27100 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.652405 27100 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.656194 27100 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.657512 27100 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1025 11:21:40.660724 27100 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.660742 27100 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1025 11:21:40.661615 27100 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. W1025 11:21:40.661633 27100 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1025 11:21:40.663594 27100 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1025 11:21:40.664468 27100 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1025 11:21:40.671529 27100 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources. W1025 11:21:40.671556 27100 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources. W1025 11:21:40.672789 27100 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I1025 11:21:40.674911 27100 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 11:21:40.674928 27100 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1025 11:21:40.677612 27100 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. INFO[0000] Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259 INFO[0000] Waiting for API server to become available INFO[0000] Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/va r/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --contr ollers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true INFO[0000] Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false INFO[0000] Node token is available at /var/lib/rancher/k3s/server/token INFO[0000] To join node to cluster: k3s agent -s https://172.29.15.142:6443 -t ${NODE_TOKEN} INFO[0000] Wrote kubeconfig /etc/rancher/k3s/k3s.yaml INFO[0000] Run: k3s kubectl INFO[0000] Cluster-Http-Server 2021/10/25 11:21:40 http: TLS handshake error from 127.0.0.1:49508: remote error: tls: bad certificate INFO[0000] Cluster-Http-Server 2021/10/25 11:21:40 http: TLS handshake error from 127.0.0.1:49514: remote error: tls: bad certificate INFO[0000] certificate CN=windows signed by CN=k3s-server-ca@1634938539: notBefore=2021-10-22 21:35:39 +0000 UTC notAfter=2022-10-25 09:21:40 +0000 UTC INFO[0000] certificate CN=system:node:windows,O=system:nodes signed by CN=k3s-client-ca@1634938539: notBefore=2021-10-22 21:35:39 +0000 UTC notAfter=2022-10-25 09:21:40 +0000 UTC INFO[0000] Module overlay was already loaded INFO[0000] Module nf_conntrack was already loaded WARN[0000] Failed to load kernel module br_netfilter with modprobe WARN[0000] Failed to load kernel module iptable_nat with modprobe W1025 11:21:40.716100 27100 sysinfo.go:203] Nodes topology is not available, providing CPU topology INFO[0000] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log INFO[0000] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd I1025 11:21:41.626312 27100 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt" I1025 11:21:41.626381 27100 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I1025 11:21:41.626470 27100 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key" I1025 11:21:41.626604 27100 secure_serving.go:266] Serving securely on 127.0.0.1:6444 I1025 11:21:41.626708 27100 tlsconfig.go:240] "Starting DynamicServingCertificateController" I1025 11:21:41.626747 27100 customresource_discovery_controller.go:209] Starting DiscoveryController I1025 11:21:41.626761 27100 controller.go:85] Starting OpenAPI controller I1025 11:21:41.626775 27100 apiservice_controller.go:97] Starting APIServiceRegistrationController I1025 11:21:41.626799 27100 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1025 11:21:41.626800 27100 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1025 11:21:41.626810 27100 autoregister_controller.go:141] Starting autoregister controller I1025 11:21:41.626829 27100 apf_controller.go:299] Starting API Priority and Fairness config controller I1025 11:21:41.626835 27100 cache.go:32] Waiting for caches to sync for autoregister controller I1025 11:21:41.626858 27100 crdregistration_controller.go:111] Starting crd-autoregister controller I1025 11:21:41.626867 27100 controller.go:83] Starting OpenAPI AggregationController I1025 11:21:41.626875 27100 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I1025 11:21:41.626817 27100 available_controller.go:491] Starting AvailableConditionController I1025 11:21:41.626899 27100 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key" I1025 11:21:41.626946 27100 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I1025 11:21:41.626978 27100 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt" I1025 11:21:41.626900 27100 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1025 11:21:41.626809 27100 crd_finalizer.go:266] Starting CRDFinalizer I1025 11:21:41.626889 27100 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1025 11:21:41.627118 27100 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1025 11:21:41.626786 27100 establishing_controller.go:76] Starting EstablishingController I1025 11:21:41.626778 27100 naming_controller.go:291] Starting NamingConditionController I1025 11:21:41.626792 27100 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I1025 11:21:41.640731 27100 shared_informer.go:247] Caches are synced for node_authorizer E1025 11:21:41.641244 27100 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service INFO[0001] Containerd is now running I1025 11:21:41.726891 27100 apf_controller.go:304] Running API Priority and Fairness config worker I1025 11:21:41.726914 27100 shared_informer.go:247] Caches are synced for crd-autoregister I1025 11:21:41.726946 27100 cache.go:39] Caches are synced for autoregister controller I1025 11:21:41.727057 27100 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1025 11:21:41.727068 27100 cache.go:39] Caches are synced for AvailableConditionController controller I1025 11:21:41.727159 27100 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller INFO[0001] Connecting to proxy url="wss://172.29.15.142:6443/v1-k3s/connect" INFO[0001] Handling backend connection request [windows] INFO[0001] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/279165f078654a6f634bbe61f4583cb123d890f0773d8bef982f7cdd095b7633/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/contai nerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=windows --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file =/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet. Flag --cni-bin-dir has been deprecated, will be removed along with dockershim. Flag --cni-conf-dir has been deprecated, will be removed along with dockershim. Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed. INFO[0001] Cluster-Http-Server 2021/10/25 11:21:41 http: TLS handshake error from 127.0.0.1:49608: remote error: tls: bad certificate I1025 11:21:41.738950 27100 server.go:436] "Kubelet version" kubeletVersion="v1.22.2+k3s2" INFO[0001] Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error I1025 11:21:41.747864 27100 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt" I1025 11:21:42.626639 27100 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1025 11:21:42.629372 27100 storage_scheduling.go:148] all system priority classes are created successfully or already exist. I1025 11:21:42.998804 27100 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). INFO[0003] Kube API server is now running INFO[0003] k3s is up and running INFO[0003] Waiting for cloud-controller-manager privileges to become available INFO[0003] Applying CRD addons.k3s.cattle.io INFO[0003] Applying CRD helmcharts.helm.cattle.io INFO[0003] Applying CRD helmchartconfigs.helm.cattle.io INFO[0003] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.3.0.tgz INFO[0003] Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.3.0.tgz INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml INFO[0003] Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml INFO[0003] Node CIDR assigned for: windows I1025 11:21:43.745951 27100 flannel.go:93] Determining IP address of default interface I1025 11:21:43.747152 27100 kube.go:120] Waiting 10m0s for node controller to sync I1025 11:21:43.747180 27100 kube.go:378] Starting kube subnet manager INFO[0003] labels have already set on node: windows I1025 11:21:43.791464 27100 serving.go:354] Generated self-signed cert in-memory INFO[0003] Starting k3s.cattle.io/v1, Kind=Addon controller INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"b0ace6e1-cb46-4291-9634-7fd5239f199e", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"280", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml" I1025 11:21:43.802523 27100 serving.go:354] Generated self-signed cert in-memory INFO[0003] Cluster dns configmap already exists I1025 11:21:43.828385 27100 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"ccm", UID:"b0ace6e1-cb46-4291-9634-7fd5239f199e", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"280", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/ccm.yaml" I1025 11:21:43.830599 27100 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"16bd717e-b629-4cf5-9395-4e8decb997ed", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"296", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml" I1025 11:21:43.847308 27100 serving.go:354] Generated self-signed cert in-memory I1025 11:21:43.852886 27100 network_policy_controller.go:144] Starting network policy controller I1025 11:21:43.855905 27100 controller.go:611] quota admission added evaluator for: deployments.apps INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"coredns", UID:"16bd717e-b629-4cf5-9395-4e8decb997ed", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"296", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/coredns.yaml" INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"3d72d8a9-30a0-48b0-ad20-e66183ce6431", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml" I1025 11:21:43.864760 27100 network_policy_controller.go:154] Starting network policy controller full sync goroutine INFO[0003] Event(v1.ObjectReference{Kind:"Addon", Namespace:"kube-system", Name:"local-storage", UID:"3d72d8a9-30a0-48b0-ad20-e66183ce6431", APIVersion:"k3s.cattle.io/v1", ResourceVersion:"308", FieldPath:""}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at "/var/lib/rancher/k3s/server/manifests/local-storage.yaml" E1025 11:21:43.879320 27100 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.4 (legacy): Couldn't load matchlimit':No such file or directory

Error occurred at line: 83 Try `iptables-restore -h' or 'iptables-restore --help' for more information. ) *filter

`

pmualaba commented 2 years ago

I got one step further by editing ~/.kube/k3s.yaml

from

apiVersion: v1 clusters:

to

apiVersion: v1 clusters:

So the k3s cluster on WSL2 is not accessible at 127.0.0.1, but only at the interface IP address. Is there a way that we could "autodetect" that ip address somehow, so that after reboot we do not need to change it again?

Now for 30 seconds it works (2/2 RUNNING) but then it crashes again:

╰─ kubectl logs vcluster-1-0 -n host-namespace-1 -c syncer && kubectl logs vcluster-1-0 -n host-namespace-1 -c k3s I1025 13:01:32.527425 1 main.go:234] Using physical cluster at https://10.43.0.1:443 I1025 13:01:32.543140 1 main.go:265] Can connect to virtual cluster with version v1.22.1-rc1+k3s1 I1025 13:01:32.645961 1 plugins.go:158] Loaded 1 mutating admission controller(s) successfully in the following order: MutatingAdmissionWebhook. I1025 13:01:32.645975 1 leaderelection.go:243] attempting to acquire leader lease host-namespace-1/vcluster-vcluster-1-controller... I1025 13:01:32.645981 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. I1025 13:01:32.650141 1 leaderelection.go:253] successfully acquired lease host-namespace-1/vcluster-vcluster-1-controller I1025 13:01:32.650273 1 leaderelection.go:68] Acquired leadership and run vcluster in leader mode I1025 13:01:32.650314 1 leaderelection.go:31] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cd510f2d-4e7a-4832-8440-b7de2b718204", APIVersion:"v1", ResourceVersion:"54975", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' vcluster-1-0-external-vcluster-controller became leader E1025 13:01:32.651189 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"vcluster-vcluster-1-controller.16b147a018e3f198", GenerateName:"", Namespace:"host-namespace-1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]s tring(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cd510f2d-4e7a-4832-8440-b7de2b718204", APIVersion:"v1", ResourceVersion:"54975", FieldPath:""}, Reason:"LeaderElection", Message:"vcluster-1-0-external-vcluster-controller became leader", Source:v1.EventSource{Component:"vcluster", Host:""}, FirstTimestamp:v 1.Time{Time:time.Time{wall:0xc055c88b26c03998, ext:2845965683, loc:(time.Location)(0x3221280)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc055c88b26c03998, ext:2845965683, loc:(time.Location)(0x3221280)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1 " cannot create resource "events" in API group "" in the namespace "host-namespace-1"' (will not retry!) I1025 13:01:32.662137 1 loghelper.go:53] Start nodes sync controller I1025 13:01:32.662194 1 loghelper.go:53] Start priorityclasses sync controller I1025 13:01:32.662206 1 loghelper.go:53] Start services sync controller I1025 13:01:32.662253 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &source.Kind{Type:(v1.Node)(0xc00037a900), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.662276 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc001382c20), run:(generic.fakeSyncer)(0xc000b00d80), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.662281 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting Controller I1025 13:01:32.662768 1 loghelper.go:53] Start endpoints sync controller I1025 13:01:32.662877 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &source.Kind{Type:(v1.Service)(0xc0006b5400), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.662896 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &source.Kind{Type:(v1.Service)(0xc0006b5680), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.662899 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000ad2140), run:(generic.forwardController)(0xc000a7b740), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.662904 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting Controller I1025 13:01:32.662907 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000ad21f0), run:(generic.backwardController)(0xc000a7b860), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.662909 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting Controller I1025 13:01:32.662970 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting workers worker count 1 I1025 13:01:32.663311 1 loghelper.go:53] Start ingresses sync controller I1025 13:01:32.663439 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &source.Kind{Type:(v1.Endpoints)(0xc0006b3180), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.663464 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000ad36d0), run:(generic.backwardController)(0xc00057d6e0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.663468 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting Controller I1025 13:01:32.663442 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &source.Kind{Type:(v1.Endpoints)(0xc0006b3040), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.663558 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000ad3630), run:(generic.forwardController)(0xc00057d5c0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.663574 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting Controller I1025 13:01:32.663627 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting workers worker count 1 I1025 13:01:32.664325 1 loghelper.go:53] Start events sync controller I1025 13:01:32.664358 1 loghelper.go:53] Start persistentvolumeclaims sync controller I1025 13:01:32.664420 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc000983080), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.664438 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc000983200), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.664451 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc001487d30), run:(generic.backwardController)(0xc000f506c0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.664452 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc001487c80), run:(generic.forwardController)(0xc000f505a0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.664453 1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting Controller I1025 13:01:32.664456 1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting Controller I1025 13:01:32.664458 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(v1.Event)(0xc0001bcc80), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.664473 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting Controller I1025 13:01:32.664493 1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting workers worker count 1 I1025 13:01:32.664842 1 loghelper.go:53] Start persistentvolumes sync controller I1025 13:01:32.664868 1 loghelper.go:53] Start storageclasses sync controller I1025 13:01:32.664870 1 loghelper.go:53] Start configmaps sync controller I1025 13:01:32.664985 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &source.Kind{Type:(v1.PersistentVolumeClaim)(0xc000010e00), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.664996 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &source.Kind{Type:(v1.PersistentVolumeClaim)(0xc000010fc0), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.665001 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0011a9160), run:(generic.forwardController)(0xc00099d8c0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.665005 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting Controller I1025 13:01:32.665009 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0011a9200), run:(generic.backwardController)(0xc00099d9e0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.665016 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting Controller I1025 13:01:32.665020 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &source.Kind{Type:(v1.PersistentVolume)(0xc0001bd400), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.665029 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0011a92c0), run:(generic.fakeSyncer)(0xc0009e8c00), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.665033 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting Controller I1025 13:01:32.665038 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting workers worker count 1 I1025 13:01:32.665403 1 loghelper.go:53] Start secrets sync controller I1025 13:01:32.665486 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(v1.ConfigMap)(0xc0011ca140), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.665507 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &source.Kind{Type:(v1.ConfigMap)(0xc0011ca280), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.665517 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000a8a6f0), run:(generic.forwardController)(0xc00025c600), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.665520 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000a8a820), run:(generic.backwardController)(0xc00025c780), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.665527 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000580800), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.665525 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting Controller I1025 13:01:32.665530 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting Controller I1025 13:01:32.666311 1 loghelper.go:53] Start pods sync controller I1025 13:01:32.666440 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Secret)(0xc0013b0b40), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.666468 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000f17e10), run:(generic.forwardController)(0xc0002ec780), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.666473 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc00138d500), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.666482 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000c56c00), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.666488 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting Controller I1025 13:01:32.666486 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &source.Kind{Type:(v1.Secret)(0xc0013b0dc0), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.666507 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000f17f10), run:(generic.backwardController)(0xc0002ec8a0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.666509 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting Controller I1025 13:01:32.666553 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting workers worker count 1 I1025 13:01:32.668080 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000101000), cache:(cache.informerCache)(0xc00011a128), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.668118 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000d8a7b0), run:(generic.backwardController)(0xc00082aba0), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.668124 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting Controller I1025 13:01:32.668131 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000100c00), cache:(cache.informerCache)(0xc000306270), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:01:32.668160 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000d8a710), run:(generic.forwardController)(0xc00082a960), stopChan:(<-chan struct {})(0xc0006a6000)} I1025 13:01:32.668168 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting Controller I1025 13:01:32.668395 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting workers worker count 1 I1025 13:01:32.762543 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting workers worker count 1 I1025 13:01:32.763657 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting workers worker count 1 I1025 13:01:32.763702 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting workers worker count 1 I1025 13:01:32.764855 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting workers worker count 1 I1025 13:01:32.764881 1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting workers worker count 1 I1025 13:01:32.765147 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting workers worker count 1 I1025 13:01:32.765185 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting workers worker count 1 I1025 13:01:32.765984 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting workers worker count 1 I1025 13:01:32.766026 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting workers worker count 1 I1025 13:01:32.767145 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting workers worker count 1 I1025 13:01:32.769336 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting workers worker count 1 I1025 13:01:32.978449 1 server.go:172] Starting tls proxy server at 0.0.0.0:8443 I1025 13:01:32.978732 1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt I1025 13:01:32.978763 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt I1025 13:01:32.979104 1 syncer.go:170] Generating serving cert for service ips: [10.43.30.240] I1025 13:01:32.979719 1 secure_serving.go:197] Serving securely on [::]:8443 I1025 13:01:32.979759 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1025 13:01:34.542985 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:34.548382 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:34.558813 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:34.579133 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:34.619848 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:34.700858 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:36.112708 1 handler.go:48] Error while proxying request: context canceled E1025 13:01:36.281312 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again I1025 13:01:38.366804 1 syncer.go:94] service-forward: create physical service host-namespace-1/kube-dns-x-kube-system-x-vcluster-1 I1025 13:01:38.384339 1 syncer.go:260] service-backward: recreating virtual service kube-system/kube-dns, because cluster ip differs 10.43.86.16 != 10.43.0.10 W1025 13:01:38.561635 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Ingress ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561700 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.Namespace ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561850 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.PersistentVolumeClaim ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561852 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.MutatingWebhookConfiguration ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561851 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Service ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561850 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: watch of v1.PersistentVolume ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561858 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.ConfigMap ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561861 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Pod ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:38.561863 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Endpoints ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received E1025 13:01:38.751014 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Secret: failed to list v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:38.905608 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.Node: failed to list v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.113926 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.536937 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.542356 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.552742 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.573278 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.613725 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.694688 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:39.855583 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.175941 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.186236 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Namespace: failed to list v1.Namespace: Get "https://127.0.0.1:6444/api/v1/namespaces?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.267645 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.316927 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.394459 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: Get "https://127.0.0.1:6444/api/v1/persistentvolumeclaims?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.418720 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: Get "https://127.0.0.1:6444/api/v1/persistentvolumes?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:40.817130 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.025955 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.113143 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.230976 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Endpoints: failed to list v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=218": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.291295 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Secret: failed to list v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.631268 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.693282 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.ConfigMap: failed to list v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=226": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:41.868938 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.Node: failed to list v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:42.097706 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:43.113525 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.218615 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: Get "https://127.0.0.1:6444/api/v1/persistentvolumes?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.414417 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.658126 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.754201 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: Get "https://127.0.0.1:6444/api/v1/persistentvolumeclaims?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.885923 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:44.892165 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Endpoints: failed to list v1.Endpoints: Get "https://127.0.0.1:6444/api/v1/endpoints?resourceVersion=218": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:45.113969 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:45.352328 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:45.395609 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Secret: failed to list v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:45.655431 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Namespace: failed to list v1.Namespace: Get "https://127.0.0.1:6444/api/v1/namespaces?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:45.837650 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.Node: failed to list v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?resourceVersion=3": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:47.114124 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:47.136348 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:47.581049 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.ConfigMap: failed to list v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=226": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:49.113797 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:49.779467 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:51.022306 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:51.114308 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:51.387178 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:52.105108 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: Get "https://127.0.0.1:6444/api/v1/persistentvolumeclaims?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:52.587708 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:53.113939 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused W1025 13:01:56.787952 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.MutatingWebhookConfiguration ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.787975 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.ConfigMap ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.787964 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: watch of v1.Node ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.788186 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.788192 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: watch of v1.PersistentVolume ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.788196 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.Namespace ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 13:01:56.788192 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Secret ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received E1025 13:01:57.114009 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.565651 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.571135 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.581525 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.601989 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.642374 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.723501 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:57.884429 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:58.204738 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:58.845683 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:59.114332 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:01:59.725187 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=241": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:00.126648 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:01.113899 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:02.687141 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:03.114214 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:04.894954 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=241": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:05.113944 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:07.114079 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:07.550197 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.PersistentVolumeClaim: failed to list v1.PersistentVolumeClaim: Get "https://127.0.0.1:6444/api/v1/persistentvolumeclaims?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:07.808357 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:09.113747 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:10.302188 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:11.109961 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.PersistentVolume: failed to list v1.PersistentVolume: Get "https://127.0.0.1:6444/api/v1/persistentvolumes?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:11.113719 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:11.805623 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=241": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:12.319510 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Service: failed to list v1.Service: Get "https://127.0.0.1:6444/api/v1/services?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:13.113413 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:14.397685 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:15.113715 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:16.008184 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:241: Failed to watch v1.Node: failed to list v1.Node: Get "https://127.0.0.1:6444/api/v1/nodes?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:16.060582 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.ConfigMap: failed to list v1.ConfigMap: Get "https://127.0.0.1:6444/api/v1/configmaps?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:16.834173 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=212": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:17.114118 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:18.048970 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:18.827652 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.Namespace: failed to list v1.Namespace: Get "https://127.0.0.1:6444/api/v1/namespaces?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:19.114433 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:21.114009 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:21.400738 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Secret: failed to list v1.Secret: Get "https://127.0.0.1:6444/api/v1/secrets?resourceVersion=240": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:23.113513 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:25.114260 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.113987 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.114026 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.610157 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.615494 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.625888 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.646299 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.687227 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.767710 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:29.928524 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:30.248979 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:30.890043 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:31.113499 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:31.113558 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:32.170953 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:33.114176 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:33.114352 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:02:33.115692 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused error: container k3s is not valid for pod vcluster-1-0

FabianKramm commented 2 years ago

@pmualaba thanks for the logs! Sorry I made a mistake in the logs printing, you'll need to use kubectl logs vcluster-1-0 -n host-namespace-1 -c syncer && kubectl logs vcluster-1-0 -n host-namespace-1 -c vcluster. Seems like the syncer is starting up correctly, but the vcluster sidecar pod cannot start correctly or has a problem afterwards

pmualaba commented 2 years ago

Thank you! It would be really really great to get it working in WSL2 without dependency on Docker Desktop! :-) These are the requested logs:

` ─ kubectl logs vcluster-1-0 -n host-namespace-1 -c syncer && kubectl logs vcluster-1-0 -n host-namespace-1 -c vcluster I1025 13:59:41.742955 1 main.go:234] Using physical cluster at https://10.43.0.1:443 I1025 13:59:41.761306 1 main.go:265] Can connect to virtual cluster with version v1.22.1-rc1+k3s1 I1025 13:59:41.864242 1 leaderelection.go:243] attempting to acquire leader lease host-namespace-1/vcluster-vcluster-1-controller... I1025 13:59:41.864334 1 plugins.go:158] Loaded 1 mutating admission controller(s) successfully in the following order: MutatingAdmissionWebhook. I1025 13:59:41.864353 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. I1025 13:59:41.868517 1 leaderelection.go:253] successfully acquired lease host-namespace-1/vcluster-vcluster-1-controller I1025 13:59:41.868563 1 leaderelection.go:31] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cd510f2d-4e7a-4832-8440-b7de2b718204", APIVersion:"v1", ResourceVersion:"55218", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' vcluster-1-0-external-vcluster-controller became leader I1025 13:59:41.868595 1 leaderelection.go:68] Acquired leadership and run vcluster in leader mode E1025 13:59:41.869187 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"vcluster-vcluster-1-controller.16b14acc7e833850", GenerateName:"", Namespace:"host-namespace-1", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string(nil), Annotations:map[string]s tring(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"host-namespace-1", Name:"vcluster-vcluster-1-controller", UID:"cd510f2d-4e7a-4832-8440-b7de2b718204", APIVersion:"v1", ResourceVersion:"55218", FieldPath:""}, Reason:"LeaderElection", Message:"vcluster-1-0-external-vcluster-controller became leader", Source:v1.EventSource{Component:"vcluster", Host:""}, FirstTimestamp:v 1.Time{Time:time.Time{wall:0xc055cbf373c47650, ext:2550402743, loc:(time.Location)(0x3221280)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc055cbf373c47650, ext:2550402743, loc:(time.Location)(0x3221280)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, Series:(v1.EventSeries)(nil), Action:"", Related:(v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1 " cannot create resource "events" in API group "" in the namespace "host-namespace-1"' (will not retry!) I1025 13:59:41.880722 1 loghelper.go:53] Start endpoints sync controller I1025 13:59:41.881268 1 loghelper.go:53] Start nodes sync controller I1025 13:59:41.881295 1 loghelper.go:53] Start persistentvolumes sync controller I1025 13:59:41.881305 1 loghelper.go:53] Start storageclasses sync controller I1025 13:59:41.881307 1 loghelper.go:53] Start priorityclasses sync controller I1025 13:59:41.881309 1 loghelper.go:53] Start configmaps sync controller I1025 13:59:41.881380 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &source.Kind{Type:(v1.Endpoints)(0xc0006a0280), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881408 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000c0f8b0), run:(generic.forwardController)(0xc0013b8c00), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881407 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &source.Kind{Type:(v1.Endpoints)(0xc0006a0640), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881413 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting Controller I1025 13:59:41.881408 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &source.Kind{Type:(v1.Node)(0xc00113e600), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881427 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000c0fa00), run:(generic.backwardController)(0xc0013b8d20), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881429 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000c0fec0), run:(generic.fakeSyncer)(0xc0008cc4b0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881432 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting Controller I1025 13:59:41.881434 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting Controller I1025 13:59:41.881433 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &source.Kind{Type:(v1.PersistentVolume)(0xc0002e1680), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881444 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000c0ffb0), run:(generic.fakeSyncer)(0xc0008cc750), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881460 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting Controller I1025 13:59:41.881463 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-forward: Starting workers worker count 1 I1025 13:59:41.881813 1 loghelper.go:53] Start secrets sync controller I1025 13:59:41.881898 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(v1.ConfigMap)(0xc0006a0780), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881915 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &source.Kind{Type:(v1.ConfigMap)(0xc0006a08c0), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881922 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0011a7480), run:(generic.forwardController)(0xc0011bbb60), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881925 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0011a7550), run:(generic.backwardController)(0xc0011bbc80), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.881929 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting Controller I1025 13:59:41.881931 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000f8a400), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.881934 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting Controller I1025 13:59:41.882771 1 loghelper.go:53] Start pods sync controller I1025 13:59:41.882844 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Secret)(0xc0006a0c80), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.882869 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0009050a0), run:(generic.forwardController)(0xc001137c20), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.882877 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc000a4ec00), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.882875 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &source.Kind{Type:(v1.Secret)(0xc0006a0dc0), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.882883 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000f8a800), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.882886 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting Controller I1025 13:59:41.882893 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0009051d0), run:(generic.backwardController)(0xc001137d40), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.882918 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting Controller I1025 13:59:41.882959 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-forward: Starting workers worker count 1 I1025 13:59:41.884819 1 loghelper.go:53] Start events sync controller I1025 13:59:41.884845 1 loghelper.go:53] Start persistentvolumeclaims sync controller I1025 13:59:41.884937 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000680c00), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.884980 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc00047cb30), run:(generic.forwardController)(0xc000d3c3c0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.884989 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting Controller I1025 13:59:41.885125 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-forward: Starting workers worker count 1 I1025 13:59:41.885566 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting EventSource source &source.Kind{Type:(v1.Event)(0xc0008aaa00), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.885840 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting Controller I1025 13:59:41.885852 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &source.Kind{Type:(v1.Pod)(0xc000681000), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.885866 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc00047cf60), run:(generic.backwardController)(0xc000d3c6c0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.885884 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting Controller I1025 13:59:41.886583 1 loghelper.go:53] Start ingresses sync controller I1025 13:59:41.886666 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &source.Kind{Type:(v1.PersistentVolumeClaim)(0xc0008ac540), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.886695 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000f28f10), run:(generic.forwardController)(0xc00103eba0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.886701 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting Controller I1025 13:59:41.886746 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-forward: Starting workers worker count 1 I1025 13:59:41.886787 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &source.Kind{Type:(v1.PersistentVolumeClaim)(0xc0008ac700), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.886804 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc000f28fb0), run:(generic.backwardController)(0xc00103ecc0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.886808 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting Controller I1025 13:59:41.887854 1 loghelper.go:53] Start services sync controller I1025 13:59:41.887968 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc000bbaf00), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.888002 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc00110f670), run:(generic.backwardController)(0xc000c27620), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.888008 1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting Controller I1025 13:59:41.887971 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &source.Kind{Type:(v1.Ingress)(0xc000bbac00), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.888020 1 controller.go:165] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc00110f5c0), run:(generic.forwardController)(0xc000c27500), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.888034 1 controller.go:173] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting Controller I1025 13:59:41.888097 1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-forward: Starting workers worker count 1 I1025 13:59:41.888665 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &source.Kind{Type:(v1.Service)(0xc0006ac280), cache:(cache.informerCache)(0xc000d9e008), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.888686 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0001f4c30), run:(generic.forwardController)(0xc000d839e0), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.888691 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting Controller I1025 13:59:41.888749 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &source.Kind{Type:(v1.Service)(0xc0006ac500), cache:(cache.informerCache)(0xc00029a298), started:(chan error)(nil), startCancel:(func())(nil)} I1025 13:59:41.888776 1 controller.go:165] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting EventSource source &garbagecollect.Source{Period:30000000000, log:(loghelper.logger)(0xc0001f4cd0), run:(generic.backwardController)(0xc000d83b00), stopChan:(<-chan struct {})(0xc001012000)} I1025 13:59:41.888783 1 controller.go:173] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting Controller I1025 13:59:41.888795 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-forward: Starting workers worker count 1 I1025 13:59:41.888885 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Service: controller: service-backward: Starting workers worker count 1 I1025 13:59:41.982350 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: Starting workers worker count 1 I1025 13:59:41.982434 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Node: controller: fake-node-syncer: Starting workers worker count 1 I1025 13:59:41.982471 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-backward: Starting workers worker count 1 I1025 13:59:41.982482 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind ConfigMap: controller: configmap-forward: Starting workers worker count 1 I1025 13:59:41.982472 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolume: controller: fake-persistent-volumes-syncer: Starting workers worker count 1 I1025 13:59:41.983517 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Secret: controller: secret-backward: Starting workers worker count 1 I1025 13:59:41.986072 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Event: controller: event-backward: Starting workers worker count 1 I1025 13:59:41.986100 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind Pod: controller: pod-backward: Starting workers worker count 1 E1025 13:59:41.986421 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again I1025 13:59:41.987479 1 controller.go:207] controller-runtime: manager: reconciler group reconciler kind PersistentVolumeClaim: controller: persistentvolumeclaim-backward: Starting workers worker count 1 I1025 13:59:41.988582 1 controller.go:207] controller-runtime: manager: reconciler group networking.k8s.io reconciler kind Ingress: controller: ingress-backward: Starting workers worker count 1 I1025 13:59:42.192421 1 server.go:172] Starting tls proxy server at 0.0.0.0:8443 I1025 13:59:42.192677 1 dynamic_cafile_content.go:167] Starting request-header::/data/server/tls/request-header-ca.crt I1025 13:59:42.192688 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/data/server/tls/client-ca.crt I1025 13:59:42.193009 1 syncer.go:170] Generating serving cert for service ips: [10.43.30.240] I1025 13:59:42.193589 1 secure_serving.go:197] Serving securely on [::]:8443 I1025 13:59:42.193665 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1025 13:59:44.003788 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.236984 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.242540 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.252981 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.273421 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.313874 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.394710 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:44.555384 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Put "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:46.003782 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:48.003033 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:50.003652 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:52.003432 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:54.003138 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:56.003969 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 13:59:58.362151 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.369315 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.381106 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.403509 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.445425 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.527767 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:58.689985 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:59.012474 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again E1025 13:59:59.655407 1 controller.go:302] controller-runtime: manager: reconciler group reconciler kind Endpoints: controller: endpoints-backward: name vcluster-1 namespace host-namespace-1: Reconciler error Operation cannot be fulfilled on endpoints "kubernetes": the object has been modified; please apply your changes to the latest version and try again W1025 14:00:00.604823 1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of v1.MutatingWebhookConfiguration ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received W1025 14:00:00.604848 1 reflector.go:436] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: watch of v1.Pod ended with: very short watch: sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Unexpected watch close - watch lasted less than a second and no items received E1025 14:00:01.340648 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=298": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:02.003384 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:02.696612 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:03.075351 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:03.491745 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=298": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:04.003981 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:06.003271 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:07.261196 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=298": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:07.718948 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:08.003738 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:08.873661 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:10.003387 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:12.003857 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:13.908571 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Ingress: failed to list v1.Ingress: Get "https://127.0.0.1:6444/apis/networking.k8s.io/v1/ingresses?resourceVersion=298": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:14.003957 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:16.003390 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:17.200140 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch v1.MutatingWebhookConfiguration: failed to list v1.MutatingWebhookConfiguration: Get "https://127.0.0.1:6444/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:18.003762 1 handler.go:48] Error while proxying request: dial tcp 127.0.0.1:6444: connect: connection refused E1025 14:00:18.026512 1 reflector.go:138] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:142: Failed to watch v1.Pod: failed to list v1.Pod: Get "https://127.0.0.1:6444/api/v1/pods?resourceVersion=309": dial tcp 127.0.0.1:6444: connect: connection refused time="2021-10-25T13:59:56Z" level=info msg="Starting k3s v1.22.1-rc1+k3s1 (58315fe1)" time="2021-10-25T13:59:56Z" level=info msg="Cluster bootstrap already complete" time="2021-10-25T13:59:56Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s" time="2021-10-25T13:59:56Z" level=info msg="Configuring database table schema and indexes, this may take a moment..." time="2021-10-25T13:59:56Z" level=info msg="Database tables and indexes are up to date" time="2021-10-25T13:59:56Z" level=info msg="Kine listening on unix://kine.sock" time="2021-10-25T13:59:56Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/data/server/tls/temporary-certs --client-ca-file=/data/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/data/server/tls/server-ca.crt --kubelet-client-cer tificate=/data/server/tls/client-kube-apiserver.crt --kubelet-client-key=/data/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/data/server/tls/client-auth-proxy.crt --proxy-client-key-file=/data/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/data/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User - -secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/data/server/tls/service.key --service-account-signing-key-file=/data/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/data/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/data/server/tls/serving-kube-apiserver.key" Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24. I1025 13:59:56.868696 1 server.go:581] external host was not specified, using 10.42.0.23 I1025 13:59:56.868810 1 server.go:175] Version: v1.22.1-rc1+k3s1 I1025 13:59:56.870540 1 shared_informer.go:240] Waiting for caches to sync for node_authorizer I1025 13:59:56.871077 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 13:59:56.871098 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1025 13:59:56.872962 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 13:59:56.872992 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1025 13:59:56.879990 1 genericapiserver.go:455] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources. I1025 13:59:56.880575 1 instance.go:278] Using reconciler: lease I1025 13:59:56.970906 1 rest.go:130] the default service ipfamily for this cluster is: IPv4 W1025 13:59:57.274039 1 genericapiserver.go:455] Skipping API authentication.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.275137 1 genericapiserver.go:455] Skipping API authorization.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.282709 1 genericapiserver.go:455] Skipping API certificates.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.283579 1 genericapiserver.go:455] Skipping API coordination.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.287622 1 genericapiserver.go:455] Skipping API networking.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.289632 1 genericapiserver.go:455] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1025 13:59:57.293236 1 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.293254 1 genericapiserver.go:455] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1025 13:59:57.293934 1 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1beta1 because it has no resources. W1025 13:59:57.293955 1 genericapiserver.go:455] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1025 13:59:57.296076 1 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1025 13:59:57.297259 1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources. W1025 13:59:57.301972 1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources. W1025 13:59:57.301987 1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources. W1025 13:59:57.304121 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources. I1025 13:59:57.306342 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1025 13:59:57.306355 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. W1025 13:59:57.310031 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources. time="2021-10-25T13:59:57Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/data/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/data/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/data/server/tls/ server-ca.crt --cluster-signing-kubelet-serving-key-file=/data/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/data/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/data/server/tls/client-ca.key --controllers=,-nodeipam,-nodelifecycle,-persistentvolume-binder,-attachdetach,-persistentvolume-expander,-cloud-node-lifecycle --kubeconfig=/data/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/data/server/tls/server-ca.crt --secure-port=10257 --s ervice-account-private-key-file=/data/server/tls/service.key --use-service-account-credentials=true" time="2021-10-25T13:59:57Z" level=info msg="Waiting for API server to become available" time="2021-10-25T13:59:57Z" level=info msg="Node token is available at /data/server/token" time="2021-10-25T13:59:57Z" level=info msg="To join node to cluster: k3s agent -s https://10.42.0.23:6443 -t ${NODE_TOKEN}" time="2021-10-25T13:59:57Z" level=info msg="Wrote kubeconfig /k3s-config/kube-config.yaml" time="2021-10-25T13:59:57Z" level=info msg="Run: k3s kubectl" I1025 13:59:58.255007 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt" I1025 13:59:58.255075 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt" I1025 13:59:58.255117 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/data/server/tls/serving-kube-apiserver.crt::/data/server/tls/serving-kube-apiserver.key" I1025 13:59:58.255190 1 secure_serving.go:266] Serving securely on 127.0.0.1:6444 I1025 13:59:58.255323 1 tlsconfig.go:240] "Starting DynamicServingCertificateController" I1025 13:59:58.255342 1 available_controller.go:491] Starting AvailableConditionController I1025 13:59:58.255352 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1025 13:59:58.255386 1 controller.go:83] Starting OpenAPI AggregationController I1025 13:59:58.255404 1 apf_controller.go:299] Starting API Priority and Fairness config controller I1025 13:59:58.255391 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I1025 13:59:58.255413 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1025 13:59:58.255433 1 customresource_discovery_controller.go:209] Starting DiscoveryController I1025 13:59:58.255485 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/data/server/tls/client-auth-proxy.crt::/data/server/tls/client-auth-proxy.key" I1025 13:59:58.255517 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1025 13:59:58.255533 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1025 13:59:58.255554 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/data/server/tls/client-ca.crt" I1025 13:59:58.255571 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/data/server/tls/request-header-ca.crt" I1025 13:59:58.255838 1 autoregister_controller.go:141] Starting autoregister controller I1025 13:59:58.255857 1 cache.go:32] Waiting for caches to sync for autoregister controller I1025 13:59:58.255962 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1025 13:59:58.255975 1 crd_finalizer.go:266] Starting CRDFinalizer I1025 13:59:58.255999 1 crdregistration_controller.go:111] Starting crd-autoregister controller I1025 13:59:58.256002 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I1025 13:59:58.256042 1 controller.go:85] Starting OpenAPI controller I1025 13:59:58.256072 1 naming_controller.go:291] Starting NamingConditionController I1025 13:59:58.256081 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController I1025 13:59:58.256094 1 establishing_controller.go:76] Starting EstablishingController W1025 13:59:58.262366 1 controller.go:292] Resetting master service "kubernetes" to &v1.Service{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kubernetes", GenerateName:"", Namespace:"default", SelfLink:"", UID:"a6bf6b77-8b59-462e-9e41-f3f757dc5c25", ResourceVersion:"304", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63770763692, loc:(time.Location)(0x7fbe9e0)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]str ing{"component":"apiserver", "provider":"kubernetes"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"vcluster", Operation:"Update", APIVersion:"v1", Time:(v1.Time)(0xc00feab800), FieldsType:"FieldsV1", FieldsV1:(v1.FieldsV1)(0xc00feab818), Subresource:""}}}, Spec:v1.ServiceSpec{Ports:[]v1.ServicePort{v1.ServicePort{Name:"https", Protocol:"TCP", AppProtocol:(string)(nil), Port: 443, TargetPort:intstr.IntOrString{Type:0, IntVal:6443, StrVal:""}, NodePort:0}}, Selector:map[string]string(nil), ClusterIP:"10.43.30.240", ClusterIPs:[]string{"10.43.30.240"}, Type:"ClusterIP", ExternalIPs:[]string(nil), SessionAffinity:"None", LoadBalancerIP:"", LoadBalancerSourceRanges:[]string(nil), ExternalName:"", ExternalTrafficPolicy:"", HealthCheckNodePort:0, PublishNotReadyAddresses:false, SessionAffinityConfig:(v1.SessionAffinityConfig)(nil), IPFamilies:[]v1.IPFamily{"IPv4"}, IPFamilyPolicy:(v1.IPFamilyPol icyType)(0xc00f8d3760), AllocateLoadBalancerNodePorts:(bool)(nil), LoadBalancerClass:(string)(nil), InternalTrafficPolicy:(v1.ServiceInternalTrafficPolicyType)(0xc00f8d3790)}, Status:v1.ServiceStatus{LoadBalancer:v1.LoadBalancerStatus{Ingress:[]v1.LoadBalancerIngress(nil)}, Conditions:[]v1.Condition(nil)}} W1025 13:59:58.268698 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [10.42.0.23] I1025 13:59:58.269187 1 controller.go:611] quota admission added evaluator for: endpoints I1025 13:59:58.270600 1 shared_informer.go:247] Caches are synced for node_authorizer I1025 13:59:58.270985 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io E1025 13:59:58.274795 1 controller.go:156] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service I1025 13:59:58.356242 1 shared_informer.go:247] Caches are synced for crd-autoregister I1025 13:59:58.356270 1 apf_controller.go:304] Running API Priority and Fairness config worker I1025 13:59:58.356270 1 cache.go:39] Caches are synced for autoregister controller I1025 13:59:58.356323 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I1025 13:59:58.356282 1 cache.go:39] Caches are synced for AvailableConditionController controller I1025 13:59:58.356457 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1025 13:59:59.255296 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1025 13:59:59.255322 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1025 13:59:59.258993 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist. W1025 13:59:59.389583 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [10.42.0.23] time="2021-10-25T14:00:00Z" level=info msg="Kube API server is now running" time="2021-10-25T14:00:00Z" level=info msg="k3s is up and running" time="2021-10-25T14:00:00Z" level=warning msg="Deploy controller node name is empty or too long, and will not be tracked via server side apply field management" time="2021-10-25T14:00:00Z" level=info msg="Applying CRD addons.k3s.cattle.io" time="2021-10-25T14:00:00Z" level=info msg="Applying CRD helmcharts.helm.cattle.io" time="2021-10-25T14:00:00Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io" time="2021-10-25T14:00:00Z" level=info msg="Writing static file: /data/server/static/charts/traefik-10.3.0.tgz" time="2021-10-25T14:00:00Z" level=info msg="Writing static file: /data/server/static/charts/traefik-crd-10.3.0.tgz" time="2021-10-25T14:00:00Z" level=info msg="Writing manifest: /data/server/manifests/rolebindings.yaml" time="2021-10-25T14:00:00Z" level=info msg="Writing manifest: /data/server/manifests/coredns.yaml" time="2021-10-25T14:00:00Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller" time="2021-10-25T14:00:00Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"689c3442-8374-457c-a58d-703c27e007ca\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"220\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/data/server/manifests/coredns.yaml\"" time="2021-10-25T14:00:00Z" level=info msg="Cluster dns configmap already exists" I1025 14:00:00.429993 1 controller.go:611] quota admission added evaluator for: deployments.apps time="2021-10-25T14:00:00Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"689c3442-8374-457c-a58d-703c27e007ca\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"220\", FieldPath:\"\"}): type: 'Warning' reason: 'ApplyManifestFailed' Applying manifest at \"/data/server/manifests/coredns.yaml\" failed: failed to update kube-system/kube-dns /v1, Kind=Service for kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIPs[0]: Invalid value: []string {\"10.43.0.10\"}: may not change once set" time="2021-10-25T14:00:00Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"62a3732e-c5cf-4a77-804b-671b69be8cc6\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"231\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/data/server/manifests/rolebindings.yaml\"" time="2021-10-25T14:00:00Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"62a3732e-c5cf-4a77-804b-671b69be8cc6\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"231\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/data/server/manifests/rolebindings.yaml\"" I1025 14:00:00.443876 1 serving.go:354] Generated self-signed cert in-memory I1025 14:00:00.444090 1 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io time="2021-10-25T14:00:00Z" level=error msg="Failed to process config: failed to process /data/server/manifests/coredns.yaml: failed to update kube-system/kube-dns /v1, Kind=Service for kube-system/coredns: Service \"kube-dns\" is invalid: spec.clusterIPs[0]: Invalid value: []string{\"10.43.0.10\"}: may not change once set" time="2021-10-25T14:00:00Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller" time="2021-10-25T14:00:00Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller" time="2021-10-25T14:00:00Z" level=info msg="Starting apps/v1, Kind=Deployment controller" time="2021-10-25T14:00:00Z" level=info msg="Starting apps/v1, Kind=DaemonSet controller" time="2021-10-25T14:00:00Z" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller" time="2021-10-25T14:00:00Z" level=info msg="Starting batch/v1, Kind=Job controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=ServiceAccount controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=Pod controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=Service controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=Endpoints controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=ConfigMap controller" time="2021-10-25T14:00:00Z" level=info msg="Starting /v1, Kind=Node controller" W1025 14:00:00.600247 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:host-namespace-1:vc-vcluster-1" cannot get resource "configmaps" in API group "" in the namespace "kube-system"

`

FabianKramm commented 2 years ago

@pmualaba looks like you are using the rc candidate of k3s which has problems, probably a better idea to use a newer version with:

vcluster create --k3s-image rancher/k3s:v1.22.2-k3s2 my-vcluster -n my-vcluster
pmualaba commented 2 years ago

nice! that seems to work. Big thanks for your support!

image

olljanat commented 2 years ago

Afaiu this one started to working so maybe it should be closed?

FabianKramm commented 2 years ago

@olljanat thanks for the notification! This can be closed indeed!