gitpod-io / gitpod


Epic: Run k3s natively in Gitpod #4889

Open · fntlnz opened 3 years ago

fntlnz commented 3 years ago

Right now, the only viable option to run k3s in Gitpod is to use emulation to create a VM, as shown here: https://github.com/gitpod-io/template-k3s

I did some analysis of why k3s still does not work even though we are able to run Docker.

There are a couple of errors to solve:

Error 1: Snapshotter (:heavy_check_mark:)

ERRO[2021-07-21T08:57:21.159608731Z] Failed to configure agent: "overlayfs" snapshotter cannot be enabled for "/var/lib/rancher/k3s/agent/containerd", try using "fuse-overlayfs" or "native": failed to mount overlay: invalid argument

Solution:

This is already solved: we can use --snapshotter=fuse-overlayfs, since we fixed FUSE support in https://github.com/gitpod-io/gitpod/pull/4594 and https://github.com/gitpod-io/gitpod/pull/4762.

Error 2: Privileges (:negative_squared_cross_mark:)

E0721 09:00:38.387724   20938 server.go:288] "Failed to run kubelet" err="failed to run Kubelet: failed to create kubelet: open /dev/kmsg: no such file or directory"

The kubelet can run in rootless mode, which spares us from dealing with privileged devices, like we do for Docker.

However, running in that mode fails:

sudo k3s server --snapshotter=fuse-overlayfs --rootless
WARN[2021-07-21T09:01:57.964650205Z] cannot set cgroup2 evacuation, make sure to run k3s as a systemd unit 
WARN[2021-07-21T09:01:57.965501805Z] Running RootlessKit as the root user is unsupported. 
WARN[2021-07-21T09:01:57.965778310Z] The host root filesystem is mounted as "". Setting child propagation to "" is not supported. 
FATA[2021-07-21T09:01:57.969073870Z] failed to setup UID/GID map: failed to compute uid/gid map: No subuid ranges found for user 0 ("root") 

Amazing: we can create the UID and GID maps for that user by editing /etc/subuid and /etc/subgid.
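
For illustration, entries like the following would grant root the 200000..265535 range seen in the failing newuidmap call below (appending blindly assumes no overlapping range exists yet):

echo "root:200000:65536" | sudo tee -a /etc/subuid
echo "root:200000:65536" | sudo tee -a /etc/subgid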

However, after doing that:

sudo k3s server --snapshotter=fuse-overlayfs --rootless
WARN[2021-07-21T09:09:54.117789745Z] cannot set cgroup2 evacuation, make sure to run k3s as a systemd unit 
WARN[2021-07-21T09:09:54.119755834Z] Running RootlessKit as the root user is unsupported. 
WARN[2021-07-21T09:09:54.120024215Z] The host root filesystem is mounted as "". Setting child propagation to "" is not supported. 
FATA[2021-07-21T09:09:54.125484467Z] failed to setup UID/GID map: newuidmap 24262 [0 0 1 1 200000 65536] failed: newuidmap: write to uid_map failed: Operation not permitted
: exit status 1 

Looks like the problem is that the current workspace root user cannot write to the uid_map of any process.

24548 openat(AT_FDCWD, "/proc/24262/", O_RDONLY|O_DIRECTORY) = 3
24548 openat(3, "uid_map", O_WRONLY)    = -1 EACCES (Permission denied)
24548 write(2, "newuidmap: open of uid_map faile"..., 53) = 53
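
To reproduce that trace, something along these lines should work; the PID is a placeholder, and the map values are the ones from the failing call above:

# trace newuidmap's attempt to open and write the target process's uid_map
sudo strace -f -e trace=openat,write newuidmap <pid> 0 0 1 1 200000 65536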


csweichel commented 3 years ago

/schedule

fntlnz commented 3 years ago

I think we have two roads for this.

@leodido suggested that we can make the process dumpable (better than making it dumpable system-wide via echo 1 > /proc/sys/fs/suid_dumpable):

prctl(PR_SET_DUMPABLE, 1, 0, 0, 0)

Not sure about the security implications of this just yet.

OR

Maybe it's a crazy and wrong idea, but what if we use our seccomp notify mechanism to pass an unprivileged file handle and return that instead of the actual one in the proc filesystem?

csweichel commented 3 years ago

> Maybe it's a crazy and wrong idea, but what if we use our seccomp notify mechanism to pass an unprivileged file handle and return that instead of the actual one in the proc filesystem?

Probably a stupid question, but would that require SECCOMP_NOTIFY_IOCTL_ADDFD?

leodido commented 3 years ago

> I think we have two roads for this.
>
> @leodido suggested that we can make the process dumpable (better than making it dumpable system-wide via echo 1 > /proc/sys/fs/suid_dumpable):
>
> prctl(PR_SET_DUMPABLE, 1, 0, 0, 0)
>
> Not sure about the security implications of this just yet.
>
> OR
>
> Maybe it's a crazy and wrong idea, but what if we use our seccomp notify mechanism to pass an unprivileged file handle and return that instead of the actual one in the proc filesystem?

Yup! This code in the kernel could be the reason why that write fails.

A first attempt would be to play with the sysctl knob (/proc/sys/fs/suid_dumpable is currently set to 2).
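
A quick sketch of inspecting (and, if we accept the system-wide implications, flipping) that knob:

cat /proc/sys/fs/suid_dumpable          # currently prints 2
# system-wide alternative to the per-process prctl() above
echo 1 | sudo tee /proc/sys/fs/suid_dumpable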

[screenshot of the relevant kernel code]

leodido commented 3 years ago

A quick update on the progress made by me and @fntlnz (also thanks to @csweichel 🤗 ).

[screenshot: the /etc/subuid and /etc/subgid settings we used]

Then inspecting with strace:

[pid 16099] openat(AT_FDCWD, "/dev/kmsg", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
[pid 16099] openat(AT_FDCWD, "/etc/localtime", O_RDONLY) = 7
[pid 16099] write(2, "\33[31mFATA\33[0m[2021-07-21T16:00:0"..., 92FATA[2021-07-21T16:00:01.195076930Z] no such file or directory                    
) = 92

Creating /dev/kmsg (on the host) let us progress further.

The next error is about permissions on /root/.rancher/.... Specifying a directory (or playing with symlinks) in /workspace removes that error.
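
A sketch of that workaround; the exact paths below are illustrative, not necessarily the ones we used:

# keep k3s state under /workspace and point /root/.rancher at it
mkdir -p /workspace/rancher
sudo ln -s /workspace/rancher /root/.rancher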

So, we got k3s to start in rootless mode, but it's not functioning properly yet.

For example:

gitpod /workspace/gitpod $ sudo k3s kubectl get pods --all-namespaces
Unable to connect to the server: x509: certificate signed by unknown authority

csweichel commented 3 years ago

/assign @fntlnz

csweichel commented 3 years ago

/assign @leodido

csweichel commented 3 years ago

We (mainly @leodido) managed to get a bit further on this journey. The next issue is the recent change in runc; see https://github.com/gitpod-io/gitpod/issues/5124.

Complete list of steps (a consolidated sketch follows the list):

  1. Download and install k3s
    curl -sfL https://get.k3s.io | sh -
  2. Create a /dev/kmsg file in the workspace. This must be done as root from the node; the file can be empty, so a simple touch is enough.
  3. Make sure newuidmap and mount.fuse3 are present
    sudo apt-get update
    sudo apt-get install fuse3 uidmap
  4. Ensure the UID/GID ranges in /etc/sub*id fall within /proc/self/uid_map and /proc/self/gid_map respectively, e.g. by changing /etc/subuid and /etc/subgid to
    gitpod:1000:1000
  5. Create a cgroupv2 namespace (maybe optional, dunno yet)
    sudo unshare -C bash
    su gitpod
  6. Bind-mount the hardcoded kubelet directory to /workspace (the shiftfs mount interferes with the kubelet); both sides of the mount must exist
    mkdir -p /workspace/kubelet /home/gitpod/.rancher/k3s/agent/kubelet
    sudo mount --rbind /workspace/kubelet /home/gitpod/.rancher/k3s/agent/kubelet
  7. Set XDG_RUNTIME_DIR to something sensible
    export XDG_RUNTIME_DIR=/workspace/k3s/config
    mkdir -p -m 700 $XDG_RUNTIME_DIR
  8. Run k3s server
    k3s server --rootless --snapshotter=fuse-overlayfs --debug
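
Putting steps 3 through 8 together, a minimal copy-pasteable sketch of the in-workspace part. It assumes /dev/kmsg already exists (step 2) and leaves out step 5, since unshare drops you into an interactive shell:

# steps 3-8 in one go, run inside the workspace
sudo apt-get update && sudo apt-get install -y fuse3 uidmap
# subordinate ranges must fall within /proc/self/uid_map and gid_map
echo "gitpod:1000:1000" | sudo tee /etc/subuid /etc/subgid
# create both sides of the bind mount before mounting (shiftfs interferes otherwise)
mkdir -p /workspace/kubelet /home/gitpod/.rancher/k3s/agent/kubelet
sudo mount --rbind /workspace/kubelet /home/gitpod/.rancher/k3s/agent/kubelet
export XDG_RUNTIME_DIR=/workspace/k3s/config
mkdir -p -m 700 "$XDG_RUNTIME_DIR"
k3s server --rootless --snapshotter=fuse-overlayfs --debug
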
leodido commented 3 years ago

To keep the k3s installation from erroring out, run it like so:

curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_ENABLE=true INSTALL_K3S_SKIP_START=true INSTALL_K3S_SYMLINK=skip sh -

To get flooded (😄) with debug logs, use:

k3s server --rootless --snapshotter=fuse-overlayfs --debug -v 10

csweichel commented 3 years ago

We've tested the steps above on a branch with #5139 (the runc proc mount fix) and encountered the following error:

{"args":["/app/nsinsider","move-mount","--target","/run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/8af8c2bf9a4d742b5110810bd549dd8731936d25df6d25baf715fec19fc651ca/rootfs/proc","--pipe-fd","3"],"instanceId":"48fb29e2-fdb3-434a-9133-86c8c9d3e250","level":"fatal","message":"permission denied","serviceContext":{"service":"nsinsider","version":""},"severity":"CRITICAL","time":"2021-08-10T18:46:00Z"}

It's really odd that the move_mount syscall would fail with permission denied considering it's run as root. According to the move_mount man page, this happens when:

To select a mount object, no permissions are required on the object referred to by the path, but execute (search) permission is required on all of the directories in pathname that lead to the object.

respectively

Search permission is denied for one of the directories in the path prefix of pathname.

Considering there's nothing inherently special about root, it's conceivable that the target path does not grant execute permissions to world, hence the EPERM. This is hard to debug because the paths are very short-lived.
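
If the path survived long enough to inspect, util-linux's namei would show the search (execute) bits on every component of the prefix; the container ID below is a placeholder:

# list owner/permissions of every directory leading to the target
namei -l /run/k3s/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>/rootfs/proc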

JohannesLandgraf commented 3 years ago

Any progress/update on this issue? cc @leodido @csweichel

JanKoehnlein commented 3 years ago

I am not sure who's working on this ATM. Maybe remove the "groundwork: in progress" label?

tokt commented 2 years ago

@csweichel Sorry to bother, I've been trying to follow the steps from https://github.com/gitpod-io/gitpod/issues/4889#issuecomment-895385399 and I got stuck at step 2: how do I "sudo touch /dev/kmsg" from the node level? Do I need to set up some magic before that?

csweichel commented 2 years ago

No bother at all. You'd need to run this experiment on a Gitpod installation where you have access to the cluster Gitpod is running on. Once your workspace is running, you can

  1. kubectl exec into ws-daemon on the machine your workspace is running on (kubectl get pod -o yaml and kubectl describe node come in very handy here)
  2. identify your workspace. I usually do this by running sleep 1234 in the workspace and then looking at the process table from within ws-daemon
  3. Once you've found the sleep process/workspace, you can create /dev/kmsg by running: nsenter -t <sleepPID or workspace supervisor PID> -m touch /dev/kmsg

You cannot do this from within the workspace because /dev there is a bind mount of the workspace's Kubernetes pod's /dev, so you don't have permission for this operation.
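
Putting that together, a sketch of the node-side session (pod name and PID are placeholders):

# from a machine with cluster access: shell into ws-daemon on the workspace's node
kubectl exec -it <ws-daemon-pod> -- sh
# inside ws-daemon: locate the workspace via its marker process
ps aux | grep 'sleep 1234'
# create /dev/kmsg in the workspace's mount namespace
nsenter -t <sleep-pid> -m touch /dev/kmsg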

aledbf commented 2 years ago

@csweichel with the new k3s clusters I now get:

$ sudo k3s server --rootless --snapshotter=fuse-overlayfs --debug -v 10
INFO[0000] Acquiring lock file /var/lib/rancher/k3s/data/.lock 
INFO[0000] Preparing data dir /var/lib/rancher/k3s/data/9d8f9670e1bff08a901bc7bc270202323f7c2c716a89a73d776c363ac1971018 
DEBU[0001] Verified hash aux/ebtablesd is correct       
[… several hundred similar "Verified hash …" / "Verified link …" lines omitted …]
DEBU[0001] Asset dir /var/lib/rancher/k3s/data/9d8f9670e1bff08a901bc7bc270202323f7c2c716a89a73d776c363ac1971018 
DEBU[0001] Running /var/lib/rancher/k3s/data/9d8f9670e1bff08a901bc7bc270202323f7c2c716a89a73d776c363ac1971018/bin/k3s-server [k3s server --rootless --snapshotter=fuse-overlayfs --debug -v 10] 
DEBU[2021-12-01T23:41:49.474793550Z] Running rootless parent                      
FATA[2021-12-01T23:41:49.475061846Z] expected sysctl value "net.ipv4.ip_forward" to be "1", got "0"; try adding "net.ipv4.ip_forward=1" to /etc/sysctl.conf and running `sudo sysctl --system` 
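
For completeness, the remediation the error message itself suggests (a sketch; whether this sysctl can be set from inside a workspace is exactly what the experimentalNetwork discussion below is about):

echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl --system
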
csweichel commented 2 years ago

I ran the same command in a workspace with experimentalNetwork: true and got past that point. There's heaps of debug output:

here an excerpt:

I1202 10:45:48.633156 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes 200 OK in 3 milliseconds
I1202 10:45:48.634479 2235 lease.go:82] Current master IPs listed in storage are [10.0.2.100]
I1202 10:45:49.407469 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc 404 Not Found in 3 milliseconds
INFO[0014] Waiting for control-plane node ws-c233b1e8-2756-4c8d-b052-34e2b38056cc startup: nodes "ws-c233b1e8-2756-4c8d-b052-34e2b38056cc" not found
^CINFO[0015] Shutting down helm.cattle.io/v1, Kind=HelmChartConfig workers
[… remainder of the apiserver debug/shutdown log omitted …]
Watch close - *networking.Ingress total 0 items received I1202 10:45:49.908986 2235 reflector.go:535] storage/cacher.go:/prioritylevelconfigurations: Watch close - *flowcontrol.PriorityLevelConfiguration total 8 items received I1202 10:45:49.909026 2235 reflector.go:535] storage/cacher.go:/apiregistration.k8s.io/apiservices: Watch close - *apiregistration.APIService total 32 items received I1202 10:45:49.909038 2235 reflector.go:535] storage/cacher.go:/cronjobs: Watch close - *batch.CronJob total 0 items received I1202 10:45:49.909043 2235 reflector.go:535] storage/cacher.go:/validatingwebhookconfigurations: Watch close - *admissionregistration.ValidatingWebhookConfiguration total 0 items received I1202 10:45:49.909053 2235 reflector.go:535] storage/cacher.go:/podsecuritypolicy: Watch close - *policy.PodSecurityPolicy total 0 items received I1202 10:45:49.909084 2235 reflector.go:535] storage/cacher.go:/deployments: Watch close - *apps.Deployment total 3 items received I1202 10:45:49.909100 2235 reflector.go:535] storage/cacher.go:/limitranges: Watch close - *core.LimitRange total 0 items received I1202 10:45:49.909105 2235 reflector.go:535] storage/cacher.go:/statefulsets: Watch close - *apps.StatefulSet total 0 items received I1202 10:45:49.909124 2235 reflector.go:535] storage/cacher.go:/roles: Watch close - *rbac.Role total 7 items received I1202 10:45:49.909147 2235 reflector.go:535] storage/cacher.go:/persistentvolumeclaims: Watch close - *core.PersistentVolumeClaim total 0 items received I1202 10:45:49.909157 2235 reflector.go:535] storage/cacher.go:/certificatesigningrequests: Watch close - *certificates.CertificateSigningRequest total 0 items received I1202 10:45:49.909161 2235 reflector.go:535] storage/cacher.go:/configmaps: Watch close - *core.ConfigMap total 8 items received I1202 10:45:49.909175 2235 reflector.go:225] Stopping reflector *v1.ServiceAccount (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909184 2235 reflector.go:535] storage/cacher.go:/storageclasses: Watch close - *storage.StorageClass total 1 items received I1202 10:45:49.909204 2235 reflector.go:535] storage/cacher.go:/namespaces: Watch close - *core.Namespace total 4 items received I1202 10:45:49.909224 2235 reflector.go:225] Stopping reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/controlplane/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444 I1202 10:45:49.909217 2235 reflector.go:535] storage/cacher.go:/csinodes: Watch close - *storage.CSINode total 0 items received I1202 10:45:49.909237 2235 reflector.go:535] storage/cacher.go:/csidrivers: Watch close - *storage.CSIDriver total 0 items received I1202 10:45:49.909244 2235 reflector.go:535] storage/cacher.go:/horizontalpodautoscalers: Watch close - *autoscaling.HorizontalPodAutoscaler total 0 items received I1202 10:45:49.909258 2235 reflector.go:225] Stopping reflector *v1.Secret (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909278 2235 reflector.go:535] storage/cacher.go:/clusterroles: Watch close - *rbac.ClusterRole total 68 items received I1202 10:45:49.909303 2235 reflector.go:535] storage/cacher.go:/priorityclasses: Watch close - *scheduling.PriorityClass total 2 items received I1202 10:45:49.909309 2235 reflector.go:225] Stopping reflector *v1.ClusterRoleBinding (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909320 2235 reflector.go:535] storage/cacher.go:/csistoragecapacities: Watch close - *storage.CSIStorageCapacity total 0 items 
received I1202 10:45:49.909360 2235 reflector.go:535] storage/cacher.go:/minions: Watch close - *core.Node total 0 items received I1202 10:45:49.909362 2235 reflector.go:225] Stopping reflector *v1.ClusterRole (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909370 2235 reflector.go:535] storage/cacher.go:/runtimeclasses: Watch close - *node.RuntimeClass total 0 items received I1202 10:45:49.909373 2235 reflector.go:535] storage/cacher.go:/controllers: Watch close - *core.ReplicationController total 0 items received I1202 10:45:49.909381 2235 reflector.go:535] storage/cacher.go:/certificatesigningrequests: Watch close - *certificates.CertificateSigningRequest total 0 items received I1202 10:45:49.909403 2235 reflector.go:225] Stopping reflector *v1.APIService (30s) from k8s.io/kube-aggregator/pkg/client/informers/externalversions/factory.go:117 I1202 10:45:49.909416 2235 reflector.go:535] storage/cacher.go:/endpointslices: Watch close - *discovery.EndpointSlice total 1 items received I1202 10:45:49.909425 2235 reflector.go:535] storage/cacher.go:/leases: Watch close - *coordination.Lease total 0 items received I1202 10:45:49.909432 2235 reflector.go:225] Stopping reflector *v1.Service (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909434 2235 reflector.go:535] storage/cacher.go:/ingressclasses: Watch close - *networking.IngressClass total 0 items received I1202 10:45:49.909445 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=3&timeout=9m39s&timeoutSeconds=579&watch=true" latency="12.931925854s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="4d2594c1-da9a-4757-885b-b69f11fec90d" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.909459 2235 reflector.go:225] Stopping reflector *v1.RoleBinding (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909459 2235 reflector.go:535] storage/cacher.go:/helm.cattle.io/helmcharts: Watch close - helm.cattle.io/v1, Kind=HelmChart total 6 items received I1202 10:45:49.909460 2235 reflector.go:535] storage/cacher.go:/controllerrevisions: Watch close - *apps.ControllerRevision total 0 items received I1202 10:45:49.909459 2235 reflector.go:535] storage/cacher.go:/pods: Watch close - *core.Pod total 0 items received I1202 10:45:49.909462 2235 reflector.go:535] storage/cacher.go:/services/specs: Watch close - *core.Service total 3 items received I1202 10:45:49.909487 2235 reflector.go:225] Stopping reflector *v1.StorageClass (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909502 2235 reflector.go:535] storage/cacher.go:/ingressclasses: Watch close - *networking.IngressClass total 0 items received I1202 10:45:49.909502 2235 reflector.go:535] storage/cacher.go:/ingress: Watch close - *networking.Ingress total 0 items received I1202 10:45:49.909514 2235 reflector.go:225] Stopping reflector *v1.CustomResourceDefinition (5m0s) from k8s.io/apiextensions-apiserver/pkg/client/informers/externalversions/factory.go:117 I1202 10:45:49.909533 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=3&timeout=9m27s&timeoutSeconds=567&watch=true" latency="12.954389417s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="f04c4f7b-414a-4ad9-9ae6-30d55396e897" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.909544 2235 reflector.go:535] storage/cacher.go:/helm.cattle.io/helmchartconfigs: Watch close - helm.cattle.io/v1, 
Kind=HelmChartConfig total 0 items received I1202 10:45:49.909548 2235 reflector.go:225] Stopping reflector *v1.Endpoints (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909550 2235 reflector.go:535] storage/cacher.go:/replicasets: Watch close - *apps.ReplicaSet total 0 items received I1202 10:45:49.909577 2235 reflector.go:225] Stopping reflector *v1.Role (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909581 2235 reflector.go:535] storage/cacher.go:/ingress: Watch close - *networking.Ingress total 0 items received I1202 10:45:49.909606 2235 reflector.go:225] Stopping reflector *v1.PriorityClass (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909606 2235 reflector.go:535] storage/cacher.go:/clusterrolebindings: Watch close - *rbac.ClusterRoleBinding total 53 items received I1202 10:45:49.909614 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=3&timeout=6m38s&timeoutSeconds=398&watch=true" latency="12.938329572s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="b287f9f1-0c6f-4c97-98ef-b585bc760dc6" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.909639 2235 reflector.go:225] Stopping reflector *v1beta1.FlowSchema (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909651 2235 reflector.go:535] storage/cacher.go:/volumeattachments: Watch close - *storage.VolumeAttachment total 0 items received I1202 10:45:49.909658 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=217&timeout=6m47s&timeoutSeconds=407&watch=true" latency="10.163389006s" userAgent="deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" audit-ID="917a775d-e748-4781-8980-e6daccf9a6b1" srcIP="127.0.0.1:33692" resp=0 I1202 10:45:49.909662 2235 storage_flowcontrol.go:141] APF bootstrap ensurer is exiting I1202 10:45:49.909681 2235 reflector.go:225] Stopping reflector *v1beta1.PriorityLevelConfiguration (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909702 2235 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller I1202 10:45:49.909713 2235 crdregistration_controller.go:142] Shutting down crd-autoregister controller I1202 10:45:49.909723 2235 reflector.go:535] storage/cacher.go:/resourcequotas: Watch close - *core.ResourceQuota total 0 items received I1202 10:45:49.909727 2235 apf_controller.go:321] Shutting down API Priority and Fairness config worker I1202 10:45:49.909730 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/helm.cattle.io/v1/helmchartconfigs?allowWatchBookmarks=true&resourceVersion=218&timeout=9m47s&timeoutSeconds=587&watch=true" latency="10.135526753s" userAgent="deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" audit-ID="232f24c2-88fa-43e7-bc92-c9adbe3a2483" srcIP="127.0.0.1:33692" resp=0 I1202 10:45:49.909738 2235 crd_finalizer.go:278] Shutting down CRDFinalizer I1202 10:45:49.909747 2235 autoregister_controller.go:165] Shutting down autoregister controller I1202 10:45:49.909754 2235 customresource_discovery_controller.go:245] Shutting down DiscoveryController I1202 10:45:49.909761 2235 apiservice_controller.go:131] Shutting down APIServiceRegistrationController I1202 10:45:49.909763 2235 reflector.go:535] storage/cacher.go:/csinodes: Watch close - *storage.CSINode total 0 items received I1202 10:45:49.909770 2235 controller.go:122] Shutting down OpenAPI 
controller I1202 10:45:49.909776 2235 available_controller.go:503] Shutting down AvailableConditionController I1202 10:45:49.909782 2235 establishing_controller.go:87] Shutting down EstablishingController I1202 10:45:49.909788 2235 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController I1202 10:45:49.909803 2235 naming_controller.go:302] Shutting down NamingConditionController I1202 10:45:49.909802 2235 reflector.go:535] storage/cacher.go:/clusterrolebindings: Watch close - *rbac.ClusterRoleBinding total 53 items received I1202 10:45:49.909804 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=249&timeout=8m7s&timeoutSeconds=487&watch=true" latency="9.733000934s" userAgent="deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" audit-ID="c007cf89-6237-497e-a4bd-4f2ed26b5c67" srcIP="127.0.0.1:33692" resp=0 I1202 10:45:49.909809 2235 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController I1202 10:45:49.909810 2235 reflector.go:535] storage/cacher.go:/roles: Watch close - *rbac.Role total 7 items received I1202 10:45:49.909840 2235 reflector.go:225] Stopping reflector *v1.Namespace (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909866 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/helm.cattle.io/v1/helmcharts?allowWatchBookmarks=true&resourceVersion=218&timeout=5m36s&timeoutSeconds=336&watch=true" latency="10.144584934s" userAgent="deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" audit-ID="8604104f-179b-49c4-bcf4-fa09c27bc9ca" srcIP="127.0.0.1:33692" resp=0 I1202 10:45:49.909876 2235 reflector.go:535] storage/cacher.go:/validatingwebhookconfigurations: Watch close - *admissionregistration.ValidatingWebhookConfiguration total 0 items received I1202 10:45:49.909886 2235 reflector.go:225] Stopping reflector *v1.IngressClass (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909888 2235 reflector.go:535] storage/cacher.go:/horizontalpodautoscalers: Watch close - *autoscaling.HorizontalPodAutoscaler total 0 items received I1202 10:45:49.909889 2235 reflector.go:535] storage/cacher.go:/runtimeclasses: Watch close - *node.RuntimeClass total 0 items received I1202 10:45:49.909920 2235 reflector.go:225] Stopping reflector *v1.PersistentVolume (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909944 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=217&timeout=7m49s&timeoutSeconds=469&watch=true" latency="10.14543844s" userAgent="deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" audit-ID="c7fc414e-1ea9-4ef5-add8-460681adec76" srcIP="127.0.0.1:33692" resp=0 I1202 10:45:49.909947 2235 reflector.go:535] storage/cacher.go:/endpointslices: Watch close - *discovery.EndpointSlice total 1 items received I1202 10:45:49.909965 2235 reflector.go:225] Stopping reflector *v1.Pod (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.909995 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=3&timeout=8m33s&timeoutSeconds=513&watch=true" latency="12.950171975s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="b33111e5-5a72-48aa-ad17-d6bcb94da149" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910008 2235 reflector.go:225] 
Stopping reflector *v1.MutatingWebhookConfiguration (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.910012 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=3&timeout=8m2s&timeoutSeconds=482&watch=true" latency="12.940277193s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="a0ed7a24-fc03-4b0c-8d99-3fe86ad6858e" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910038 2235 reflector.go:225] Stopping reflector *v1.VolumeAttachment (10m0s) from k8s.io/client-go/informers/factory.go:134 I1202 10:45:49.910052 2235 dynamic_serving_content.go:144] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key" I1202 10:45:49.910062 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=3&timeout=8m14s&timeoutSeconds=494&watch=true" latency="12.95314133s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="7dcb68eb-e558-4c42-af81-c1c75ac83be3" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910116 2235 dynamic_cafile_content.go:170] "Shutting down controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt" I1202 10:45:49.910121 2235 reflector.go:535] storage/cacher.go:/k3s.cattle.io/addons: Watch close - k3s.cattle.io/v1, Kind=Addon total 24 items received I1202 10:45:49.910135 2235 reflector.go:535] storage/cacher.go:/cronjobs: Watch close - *batch.CronJob total 0 items received I1202 10:45:49.910143 2235 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I1202 10:45:49.910263 2235 secure_serving.go:311] Stopped listening on 127.0.0.1:6444 I1202 10:45:49.910278 2235 reflector.go:535] storage/cacher.go:/networkpolicies: Watch close - *networking.NetworkPolicy total 0 items received I1202 10:45:49.910310 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=3&timeout=7m33s&timeoutSeconds=453&watch=true" latency="12.948957811s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="6f215176-bdf5-441d-acc2-89d16ad2917c" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910339 2235 reflector.go:535] storage/cacher.go:/apiextensions.k8s.io/customresourcedefinitions: Watch close - *apiextensions.CustomResourceDefinition total 9 items received I1202 10:45:49.910342 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=3&timeout=7m23s&timeoutSeconds=443&watch=true" latency="12.945737141s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="eed620a0-0ac6-4834-86ea-9ffda4f0ab67" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910352 2235 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt" I1202 10:45:49.910388 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=3&timeout=6m8s&timeoutSeconds=368&watch=true" latency="12.954538051s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="b372fd75-6bc2-42c9-9477-72087f42ad5a" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910392 2235 reflector.go:535] storage/cacher.go:/clusterroles: Watch close - *rbac.ClusterRole total 
68 items received I1202 10:45:49.910426 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/api/v1/pods?allowWatchBookmarks=true&resourceVersion=3&timeout=6m8s&timeoutSeconds=368&watch=true" latency="12.946937839s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="b28de9ef-9a3c-4194-a76f-25f291a14665" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910615 2235 reflector.go:535] storage/cacher.go:/mutatingwebhookconfigurations: Watch close - *admissionregistration.MutatingWebhookConfiguration total 0 items received I1202 10:45:49.910672 2235 reflector.go:535] storage/cacher.go:/flowschemas: Watch close - *flowcontrol.FlowSchema total 35 items received I1202 10:45:49.910874 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=3&timeout=6m31s&timeoutSeconds=391&watch=true" latency="12.94708337s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="82626a3f-a313-4c8b-ab32-a706587d4643" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.910987 2235 httplog.go:109] "HTTP" verb="WATCH" URI="/apis/flowcontrol.apiserver.k8s.io/v1beta1/flowschemas?allowWatchBookmarks=true&resourceVersion=3&timeout=5m53s&timeoutSeconds=353&watch=true" latency="12.957061885s" userAgent="k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" audit-ID="1f782ce1-7b81-4b56-aa64-9f352947719b" srcIP="127.0.0.1:33694" resp=0 I1202 10:45:49.912388 2235 genericapiserver.go:363] "[graceful-termination] shutdown event" name="HTTPServerStoppedListening" INFO[0015] Waiting to retrieve agent configuration; server is not ready: failed to get CA certs: Get "https://127.0.0.1:6443/cacerts": dial tcp 127.0.0.1:6443: connect: connection refused I1202 10:45:50.066270 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b" -H "Authorization: Bearer " 'https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes' I1202 10:45:50.066633 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes in 0 milliseconds I1202 10:45:50.066668 2235 round_trippers.go:460] Response Headers: E1202 10:45:50.066760 2235 controller.go:188] Get "https://127.0.0.1:6444/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:50.066814 2235 genericapiserver.go:386] [graceful-termination] RunPreShutdownHooks has completed I1202 10:45:50.124723 2235 secure_serving.go:352] http: TLS handshake error from 127.0.0.1:46156: EOF I1202 10:45:50.124783 2235 secure_serving.go:352] http: TLS handshake error from 127.0.0.1:57930: EOF I1202 10:45:50.408182 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" 'https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc' I1202 10:45:50.408438 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc in 0 milliseconds I1202 10:45:50.408462 2235 round_trippers.go:460] Response Headers: INFO[0015] Waiting for control-plane node ws-c233b1e8-2756-4c8d-b052-34e2b38056cc startup: Get "https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:50.898413 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H 
"User-Agent: k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b/system:serviceaccount:kube-system:node-controller" 'https://127.0.0.1:6444/api/v1/nodes' I1202 10:45:50.898494 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b/kube-controller-manager" 'https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller' I1202 10:45:50.898688 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller in 0 milliseconds I1202 10:45:50.898708 2235 round_trippers.go:460] Response Headers: W1202 10:45:50.898778 2235 client_builder_dynamic.go:197] get or create service account failed: Get "https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:51.399640 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b/kube-controller-manager" 'https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller' I1202 10:45:51.399890 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller in 0 milliseconds I1202 10:45:51.399907 2235 round_trippers.go:460] Response Headers: W1202 10:45:51.399972 2235 client_builder_dynamic.go:197] get or create service account failed: Get "https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:51.409086 2235 round_trippers.go:435] curl -v -XGET -H "User-Agent: deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) k3s/bec170bc" -H "Accept: application/json, */*" 'https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc' I1202 10:45:51.409366 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc in 0 milliseconds I1202 10:45:51.409388 2235 round_trippers.go:460] Response Headers: INFO[0016] Waiting for control-plane node ws-c233b1e8-2756-4c8d-b052-34e2b38056cc startup: Get "https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:52.124265 2235 secure_serving.go:352] http: TLS handshake error from 127.0.0.1:46162: EOF I1202 10:45:52.124342 2235 secure_serving.go:352] http: TLS handshake error from 127.0.0.1:57932: EOF I1202 10:45:52.400363 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/vnd.kubernetes.protobuf, */*" -H "User-Agent: k3s/v1.22.4+k3s1 (linux/amd64) kubernetes/bec170b/kube-controller-manager" 'https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller' I1202 10:45:52.400544 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller in 0 milliseconds I1202 10:45:52.400559 2235 round_trippers.go:460] Response Headers: W1202 10:45:52.400614 2235 client_builder_dynamic.go:197] get or create service account failed: Get "https://127.0.0.1:6444/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 127.0.0.1:6444: connect: connection refused I1202 10:45:52.410061 2235 round_trippers.go:435] curl -v -XGET -H "Accept: application/json, */*" -H "User-Agent: deploy@ws-c233b1e8-2756-4c8d-b052-34e2b38056cc/v1.22.4+k3s1 (linux/amd64) 
k3s/bec170bc" 'https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc' I1202 10:45:52.410336 2235 round_trippers.go:454] GET https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc in 0 milliseconds I1202 10:45:52.410371 2235 round_trippers.go:460] Response Headers: INFO[0017] Waiting for control-plane node ws-c233b1e8-2756-4c8d-b052-34e2b38056cc startup: Get "https://127.0.0.1:6444/api/v1/nodes/ws-c233b1e8-2756-4c8d-b052-34e2b38056cc": dial tcp 127.0.0.1:6444: connect: connection refused ```
pathcl commented 2 years ago

After some small changes I was able to get k3s up and running, though I'm not sure whether it runs natively. Available here

csweichel commented 2 years ago

With the recent cgroup v2 fixes, I figured I'd give this another try. On a machine with cgroup v2 enabled, I ran the k3s server without the agent using the latest release, and started the agent separately using a custom build from https://github.com/k3s-io/k3s/commit/13728058a4e997d8e6168f473299918394f446ef that includes the cgroup changes.
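For reference, the server side of that setup looks roughly like the following; the binary path and data directory are assumptions, not the exact invocation used:

```
# latest-release k3s server without the built-in agent; the agent joins separately
./k3s server --disable-agent -d /workspace/k3s
```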

This got me a step closer, but it's still not quite working:

```
./gitpod/k3s agent -d /workspace/k3s_agent --token-file /workspace/k3s/server/token -s https://10.0.2.100:6443 --lb-server-port 6445 --node-ip 10.0.2.100 --with-node-id
INFO[0000] Starting k3s agent dev (HEAD)                
INFO[0000] Running load balancer 127.0.0.1:6445 -> [10.0.2.100:6443] 
INFO[0000] Module overlay was already loaded            
INFO[0000] Module nf_conntrack was already loaded       
INFO[0000] Module br_netfilter was already loaded       
INFO[0000] Module iptable_nat was already loaded        
INFO[0000] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 
ERRO[0000] Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established: read-only file system 
INFO[0000] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 
ERRO[0000] Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_close_wait: read-only file system 
WARN[0000] cgroup v2 controllers are not delegated for rootless. Disabling cgroup. 
INFO[0000] Logging containerd to /workspace/k3s_agent/agent/containerd/containerd.log 
INFO[0000] Running containerd -c /workspace/k3s_agent/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /workspace/k3s_agent/agent/containerd 
W0209 14:56:08.106424   26302 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {/run/k3s/containerd/containerd.sock /run/k3s/containerd/containerd.sock <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory". Reconnecting...
INFO[0001] Containerd is now running                    
INFO[0001] Connecting to proxy                           url="wss://10.0.2.100:6443/v1-k3s/connect"
INFO[0001] Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/workspace/k3s_agent/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/workspace/k3s_original/data/current/bin --cni-conf-dir=/workspace/k3s_agent/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=DevicePlugins=false --healthz-bind-address=127.0.0.1 --hostname-override=gitpodio-gitpod-zgryzv745nv-0c1b976c --kubeconfig=/workspace/k3s_agent/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/workspace/k3s_agent/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --serialize-image-pulls=false --tls-cert-file=/workspace/k3s_agent/agent/serving-kubelet.crt --tls-private-key-file=/workspace/k3s_agent/agent/serving-kubelet.key 
Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0209 14:56:09.132082   26302 server.go:442] "Kubelet version" kubeletVersion="v1.23.3-k3s1"
W0209 14:56:09.133468   26302 manager.go:159] Cannot detect current cgroup on cgroup v2
I0209 14:56:09.133598   26302 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/workspace/k3s_agent/agent/client-ca.crt"
INFO[0001] Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=gitpodio-gitpod-zgryzv745nv-0c1b976c --kubeconfig=/workspace/k3s_agent/agent/kubeproxy.kubeconfig --proxy-mode=iptables 
I0209 14:56:09.137456   26302 server.go:225] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
E0209 14:56:09.138079   26302 proxier.go:647] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.13.0-1013-gcp/modules.builtin: no such file or directory" filePath="/lib/modules/5.13.0-1013-gcp/modules.builtin"
I0209 14:56:09.138444   26302 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0209 14:56:09.138686   26302 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0209 14:56:09.138928   26302 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0209 14:56:09.139164   26302 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0209 14:56:09.139414   26302 proxier.go:657] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
WARN[0001] Running modprobe ip_vs failed with message: ``, error: exec: "modprobe": executable file not found in $PATH 
E0209 14:56:09.149361   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
I0209 14:56:09.214120   26302 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0209 14:56:09.214343   26302 container_manager_linux.go:281] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0209 14:56:09.214415   26302 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0209 14:56:09.214495   26302 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0209 14:56:09.214514   26302 container_manager_linux.go:321] "Creating device plugin manager" devicePluginEnabled=false
I0209 14:56:09.214536   26302 state_mem.go:36] "Initialized new in-memory state store"
I0209 14:56:09.415249   26302 server.go:799] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
I0209 14:56:09.418599   26302 kubelet.go:416] "Attempting to sync node with API server"
I0209 14:56:09.418655   26302 kubelet.go:278] "Adding static pod path" path="/workspace/k3s_agent/agent/pod-manifests"
I0209 14:56:09.418687   26302 kubelet.go:289] "Adding apiserver pod source"
I0209 14:56:09.418730   26302 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0209 14:56:09.419770   26302 kuberuntime_manager.go:248] "Container runtime initialized" containerRuntime="containerd" version="v1.5.9-k3s1" apiVersion="v1alpha2"
E0209 14:56:10.264664   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
I0209 14:56:10.419015   26302 apiserver.go:52] "Watching apiserver"
E0209 14:56:12.503021   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
E0209 14:56:17.072636   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
E0209 14:56:25.649130   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
E0209 14:56:44.739768   26302 node.go:152] Failed to retrieve node info: nodes "gitpodio-gitpod-zgryzv745nv-0c1b976c" not found
I0209 14:56:44.739803   26302 server.go:843] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
I0209 14:56:44.739814   26302 server_others.go:138] "Detected node IP" address="127.0.0.1"
I0209 14:56:44.746675   26302 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
I0209 14:56:44.746701   26302 server_others.go:206] "Using iptables Proxier"
I0209 14:56:44.747061   26302 server.go:656] "Version info" version="v1.23.3-k3s1"
I0209 14:56:44.948918   26302 config.go:317] "Starting service config controller"
I0209 14:56:44.949046   26302 shared_informer.go:240] Waiting for caches to sync for service config
I0209 14:56:44.948919   26302 config.go:226] "Starting endpoint slice config controller"
I0209 14:56:44.949227   26302 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0209 14:56:45.049227   26302 shared_informer.go:247] Caches are synced for service config 
I0209 14:56:45.049302   26302 shared_informer.go:247] Caches are synced for endpoint slice config 
```

There were some etcd issues, which may be related to the custom k3s build. To sidestep them, I started a Postgres instance using Docker and pointed k3s at it with:

```
export K3S_DATASTORE_ENDPOINT='postgres://postgres:mysecretpassword@10.0.2.100:5432/k3s?sslmode=disable'
```
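For reference, a Postgres instance matching that endpoint can be started like this; the container name and image tag are arbitrary choices, and the credentials mirror the connection string above:

```
# disposable Postgres for the k3s datastore; credentials match the endpoint above
docker run -d --name k3s-datastore \
  -e POSTGRES_PASSWORD=mysecretpassword \
  -e POSTGRES_DB=k3s \
  -p 5432:5432 \
  postgres:14
```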

Also, I ran into https://github.com/k3s-io/k3s/issues/346, which I worked around by setting PATH to include ${DATA_DIR}/data/current/bin.
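A minimal sketch of that workaround, where ${DATA_DIR} stands for whatever directory was passed to k3s via -d:

```
# expose the binaries k3s extracts into its data dir (workaround for k3s-io/k3s#346)
export DATA_DIR=/workspace/k3s_agent   # hypothetical; use the directory passed to -d
export PATH="${DATA_DIR}/data/current/bin:${PATH}"
```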

For good measure I tried to modify /etc/hosts to make the workspace hostname resolve to 10.0.2.100 (workspace tap0 IP) rather than 127.0.0.1.
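A sketch of that /etc/hosts change, assuming the workspace hostname comes from hostname and 10.0.2.100 is the tap0 IP:

```
# drop any existing mapping for this hostname, then pin it to the tap0 address
sudo sed -i "/[[:space:]]$(hostname)$/d" /etc/hosts
echo "10.0.2.100 $(hostname)" | sudo tee -a /etc/hosts
```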

Once everything has settled, kubectl get node remains empty :(

Update: using the userspace proxier and setting the bind address of the kube-proxy made the error messages go away:

```
k3s agent -d /workspace/k3s_p --token-file /workspace/k3s_o/server/node-token -s https://localhost:6443 --lb-server-port 6445 --kube-proxy-arg proxy-mode=userspace --kube-proxy-arg bind-address=10.0.2.100 --debug
```
csweichel commented 2 years ago

We're now in a state where the kubelet comes up and the node registers correctly. However, containers don't start yet, most likely because the kubelet reports InvalidDiskCapacity.

[screenshot: kubelet logs]
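One way to confirm that from the API side is to look at the node's conditions and events; that the node name equals the workspace hostname is an assumption of this setup:

```
# inspect capacity-related node state reported by the kubelet
kubectl describe node "$(hostname)"
kubectl get events -A --field-selector reason=InvalidDiskCapacity
```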

To make this happen, we had to:

Note: the "Failed to ensure state" containerName="/k3s" err="failed to apply oom score -999 to PID 21789: write /proc/21789/oom_score_adj: permission denied" messages seem inconsequential. When we write the corresponding oom_score_adj file, the messages stop but scheduling does not improve.
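The corresponding write, for completeness (PID taken from the message above; it silences the error but, as noted, does not improve scheduling):

```
# apply the oom score the kubelet tried to set; requires root in the workspace
echo -999 | sudo tee /proc/21789/oom_score_adj
```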

Notes for future implementation:

csweichel commented 2 years ago

We have a running pod, on Kubernetes, inside a Gitpod workspace 🎉

BEWARE: this is pretty (very, extremely) far from production-ready. It's entirely unclear which features work, which don't, and what the caveats are. It's an important first step, but it's just that: a first step.

[screenshot: running pod]

In addition to the things above, we had to:

We ran k3s using:

```
# server
./k3s-feb6feeaeccc857a5744ef10efd82b18e8790e78 server --disable-agent -d /workspace/data/server

# agent
sudo ./k3s-feb6feeaeccc857a5744ef10efd82b18e8790e78 agent -d /workspace/k3s_p1 --token-file /workspace/data/server/server/node-token -s https://localhost:6443 --lb-server-port 6445
```

The pod spec we started was:

```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: foo
  name: foo
  namespace: default
  resourceVersion: "801"
spec:
  securityContext:
    runAsUser: 33333
    runAsGroup: 33333
    fsGroup: 33333
  containers:
    - image: docker.io/alpine:latest
      imagePullPolicy: Always
      name: foo
      command: ["/bin/sh", "-c", "--"]
      args: ["while true; do sleep 30; done;"]
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  nodeName: gitpodio-templatetypescr-lkag6us7i8b
  tolerations:
    - effect: NoExecute
      key: node.kubernetes.io/not-ready
      operator: Exists
      tolerationSeconds: 300
    - effect: NoExecute
      key: node.kubernetes.io/unreachable
      operator: Exists
      tolerationSeconds: 300
```
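To apply it, something like the following should work; the kubeconfig path is the k3s default and may differ with a custom data dir:

```
# k3s writes an admin kubeconfig to /etc/rancher/k3s/k3s.yaml by default
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl apply -f foo-pod.yaml   # the spec above, saved locally
kubectl get pod foo -o wide
```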
chuck-confluent commented 2 years ago

I created a little test to see whether k3d can run in a Gitpod workspace:

As of today, this doesn't work yet.
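For anyone reproducing, the test boils down to something like this (cluster name arbitrary); k3d runs k3s inside Docker containers, so it assumes a working Docker daemon in the workspace:

```
# create a single-node k3s cluster in Docker and check that it came up
k3d cluster create test
kubectl get nodes
```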

I included a log file for the Kubernetes API server from when it tried to start. The errors I think are most relevant are below, but you can also look at the whole log file:

```
time="2022-03-18T19:03:50Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2022-03-18T19:03:50Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_close_wait: read-only file system"
time="2022-03-18T19:03:50Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2022-03-18T19:03:50Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_established: read-only file system"
```
stale[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

bangzhuzhu commented 2 years ago

@csweichel Is it possible to start a k3s server in Docker without --privileged? I get a lot of error messages like the ones above when I try.

sb2nov commented 2 years ago

@csweichel any updates on the issue?

raphaeltm commented 1 year ago

I was trying to use Kind but couldn't get it up and running. Then I found this issue referenced elsewhere, suggesting that some people had managed to get k3s working, but it looks like that isn't really the case. I tried to get k3s and Kind running in rootless mode but neither worked :(

Getting some flavor of k8s up and running would be extremely helpful.

HadesArchitect commented 1 year ago

Any updates? :)

esigo commented 1 year ago

@raphaeltm @HadesArchitect, have you tried https://github.com/gitpod-io/template-k3s?

HadesArchitect commented 1 year ago

Thanks @esigo, that must cover a couple of scenarios! Not all of them, but still a good place to start.