rancher-sandbox / rancher-desktop

Container Management and Kubernetes on the Desktop
https://rancherdesktop.io

Rancher Desktop fails to start: no active cluster #1470

Closed: patware closed this issue 2 years ago

patware commented 2 years ago

Rancher Desktop Version

1.0.1

Rancher Desktop K8s Version

1.22.6

Which container runtime are you using?

containerd (nerdctl)

What operating system are you using?

Windows

Operating System / Build Version

Windows 10 Enterprise 21H1 (19043.1466)

What CPU architecture are you using?

x64

Linux only: what package format did you use to install Rancher Desktop?

No response

Windows User Only

VPN: DirectAccess (tried both connected and disconnected)
Proxy: No
Special firewall rules: probably (this is a corporate PC), McAfee
Security software: McAfee
Note: I had a working Docker Desktop setup, but it is now uninstalled.

Actual Behavior

When launching Rancher Desktop, after choosing the Kubernetes version and container runtime, the Rancher Desktop window appears and shows a progress bar at the bottom left; then at one point:

Error Starting Kubernetes
Error: No active cluster!
Context: Waiting for services

Some recent logfile lines:

2022-02-03T02:26:35.900Z: Running: wsl.exe --distribution rancher-desktop --exec /usr/local/bin/wsl-service k3s start
2022-02-03T02:26:36.083Z: Running command wsl --distribution rancher-desktop --exec cat /proc/net/route...
2022-02-03T02:26:36.083Z: Capturing output: wsl.exe --distribution rancher-desktop --exec cat /proc/net/route
2022-02-03T02:26:36.290Z: Running command wsl --distribution rancher-desktop --exec cat /proc/net/fib_trie...
2022-02-03T02:26:36.290Z: Capturing output: wsl.exe --distribution rancher-desktop --exec cat /proc/net/fib_trie
2022-02-03T02:26:38.567Z: Running command wsl --distribution rancher-desktop --exec wslpath -a -u C:\Users\calvep\AppData\Local\Programs\Rancher Desktop\resources\resources\linux\wsl-helper...
2022-02-03T02:26:38.567Z: Capturing output: wsl.exe --distribution rancher-desktop --exec wslpath -a -u C:\Users\calvep\AppData\Local\Programs\Rancher Desktop\resources\resources\linux\wsl-helper
2022-02-03T02:26:38.809Z: Running command wsl --distribution rancher-desktop --exec /mnt/c/Users/calvep/AppData/Local/Programs/Rancher Desktop/resources/resources/linux/wsl-helper k3s kubeconfig...
2022-02-03T02:26:38.809Z: Capturing output: wsl.exe --distribution rancher-desktop --exec /mnt/c/Users/calvep/AppData/Local/Programs/Rancher Desktop/resources/resources/linux/wsl-helper k3s kubeconfig

Steps to Reproduce

1. Install Rancher Desktop
2. Launch Rancher Desktop

Result

A popup displays the "Error Starting Kubernetes" error.

Expected Behavior

That it works, obviously. But a little help in troubleshooting would be appreciated. I don't mind dirtying my hands a bit, but I'm running blind.

Additional Information

I tried many things:

Tried a full cleanup:

ericpromislow commented 2 years ago

Could you look through the log files and see if there are any error messages? Sometimes some log files are empty and aren't populated until the app is shut down due to flushing issues.
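
The log files live under the per-user AppData directory (the paths quoted later in this thread show the exact location). A minimal way to pull them up from PowerShell, assuming the default install location:

# List the Rancher Desktop log files and tail the most recent k3s entries
Get-ChildItem "$env:LOCALAPPDATA\rancher-desktop\logs"
Get-Content "$env:LOCALAPPDATA\rancher-desktop\logs\k3s.log" -Tail 100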

patware commented 2 years ago

I see errors, but it's hard to say whether some are "normal errors" or "that's the problem" errors. Here is what I found that could be relevant:

wsl-exec.log:

 * /proc is already mounted
 * /run/openrc: creating directory
 * /run/lock: correcting mode
 * /run/lock: correcting owner
 * Caching service dependencies ...
 * /etc/fstab does not exist
Service `hwdrivers' needs non existent service `dev'
 [ ok ]
 * Starting busybox crond ...
 [ ok ]
 * Starting Rancher Desktop Guest Agent ...
 [ ok ]

 * /etc/fstab does not exist
Service `hwdrivers' needs non existent service `dev'
 * Caching service dependencies ... [ ok ]
 * Starting Rancher Desktop Docker Daemon ...
 * supervise-daemon: Please increase the value of --respawn-period to more than 50 to avoid infinite respawning
 [ ok ]
 * /mnt/c/Users/calvep/AppData/Local/rancher-desktop/logs/k3s.log: creating file
 * Starting k3s ... [ ok ]
 duplicate certificate in file ca-cert-rd-60.pem
WARNING: Skipping duplicate certificate in file ca-cert-rd-35.pem

I've seen this "/etc/fstab does not exist" message in other logs as well.
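
To check whether the file is really missing inside the distribution, here is a quick sketch using the same wsl.exe invocation style as the app's own commands above (a hypothetical check, not something from the original report):

# Look for /etc/fstab inside the rancher-desktop WSL distribution
wsl.exe --distribution rancher-desktop --exec ls -l /etc/fstab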

docker.log: (only the fun stuff)

time="2022-02-04T01:02:05.898976600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
time="2022-02-04T01:02:05Z" level=warning msg="deprecated version : `1`, please switch to version `2`"
time="2022-02-04T01:02:05.976119700Z" level=info msg="starting containerd" revision=1407cab509ff0d96baa4f0eb6ff9980270e6e620 version=v1.5.9

time="2022-02-04T01:02:05.999702800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
time="2022-02-04T01:02:05.999824000Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
time="2022-02-04T01:02:05.999919900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1

time="2022-02-04T01:02:06.000805400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
time="2022-02-04T01:02:06.000911000Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
time="2022-02-04T01:02:06.001000400Z" level=info msg="metadata content store policy set" policy=shared

time="2022-02-04T01:02:06.031833100Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
time="2022-02-04T01:02:06.103035900Z" level=warning msg="Your kernel does not support cgroup blkio weight"
time="2022-02-04T01:02:06.103193300Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
time="2022-02-04T01:02:06.103326000Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
time="2022-02-04T01:02:06.103444800Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
time="2022-02-04T01:02:06.103584200Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
time="2022-02-04T01:02:06.103688600Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
time="2022-02-04T01:02:06.104079400Z" level=info msg="Loading containers: start."
time="2022-02-04T01:02:06.420560500Z" level=info msg="Removing stale sandbox 5e152cbc5728499e4cccb0422686d0eaeed0f9623ae7ea0006398dac26201e6b (8eaf98e1eb5738ba00d1317e25354d73f0fe590b0a3fa9388c66736d9f69f68e)"
time="2022-02-04T01:02:06.428739800Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint addff4f21f7bd33008ecaae653f64fd592a592a2e406b455f0dc17b692c67c02 37cda5d5349051803df29b45d8c3804311c131c5dd9a125118f9804804fa72e4], retrying...."
time="2022-02-04T01:02:06.575640100Z" level=info msg="Removing stale sandbox 73216d81d02aedc221fb0b0a62fd89849cac35e5991d31768580c163a6fae81e (d8c00cca132016f0ea25e26ce3bb0a811603f9e5dbb4dd73f05828fc11e49231)"
time="2022-02-04T01:02:06.582142600Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint addff4f21f7bd33008ecaae653f64fd592a592a2e406b455f0dc17b692c67c02 b292a2cfec485b9fb8a5f65577b93e8bd7b526cd11f2beb4f62daae13e346d37], retrying...."
time="2022-02-04T01:02:06.695171900Z" level=info msg="Removing stale sandbox eae1b0cd1738d52d4debac1e2551c069414e6fa934bb0a82aa1b5434424f1fd5 (b76d1f60c2641ad643744b55c099a58cd68b35c0fc6c4ffee3e55eca661ad978)"
time="2022-02-04T01:02:06.701901500Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint addff4f21f7bd33008ecaae653f64fd592a592a2e406b455f0dc17b692c67c02 4f7899816502e2ad68b661bcbdea06be70e3a16bc2a0682e93d8a3eac854132e], retrying...."
time="2022-02-04T01:02:06.812745500Z" level=info msg="Removing stale sandbox 38ce05fd0e849f1f93f2f120ae39e97c2f4e9eaa96e5d92d9c39b39d8c620f85 (99e51ac22469accad3d0abb1ad1a3c398b2318ae065a8e52448e6f36eeac1378)"
time="2022-02-04T01:02:06.818763100Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint addff4f21f7bd33008ecaae653f64fd592a592a2e406b455f0dc17b692c67c02 7797e935e16ee0bea33ada41956b91b6c65bb6fbcae1d54d7b26f7e1e299a673], retrying...."
time="2022-02-04T01:02:06.905416700Z" level=info msg="Removing stale sandbox 4e35d9edff5512720ffcc86a357eccbad3aec27259e95f7bfb33f16d04554e81 (63cc1b4e3ced77ca31b06e2c45bf6ca98d6eef5756f41dbb886ac36c121e77c0)"
time="2022-02-04T01:02:06.911422900Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint addff4f21f7bd33008ecaae653f64fd592a592a2e406b455f0dc17b692c67c02 ecfb6599b5f0b300b86dc6a63e8ad735ca105089283ca4cbe04f42dbd4fd8b07], retrying...."
time="2022-02-04T01:02:06.960943100Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2022-02-04T01:02:07.040980200Z" level=info msg="Loading containers: done."

time="2022-02-04T01:02:17.368908900Z" level=info msg="ignoring event" container=9cb6a77724d27bf1799cfb048017a9740045eaa52358d471f50cbe671b5a3e53 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
time="2022-02-04T01:02:17.368945100Z" level=info msg="shim disconnected" id=9cb6a77724d27bf1799cfb048017a9740045eaa52358d471f50cbe671b5a3e53
time="2022-02-04T01:02:17.369306100Z" level=warning msg="cleaning up after shim disconnected" id=9cb6a77724d27bf1799cfb048017a9740045eaa52358d471f50cbe671b5a3e53 namespace=moby
time="2022-02-04T01:02:17.369421000Z" level=info msg="cleaning up dead shim"
time="2022-02-04T01:02:17.375675100Z" level=warning msg="cleanup warnings time=\"2022-02-04T01:02:17Z\" level=info msg=\"starting signal loop\" namespace=moby pid=2647\n"
time="2022-02-04T01:02:32.784483200Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1d07f6f3c6d6e57c167c3111098762fddd8633d07819fd6bb6b1bce8bde9e684 pid=2980

k3s.log: same thing, only showing warnings and errors

time="2022-02-04T01:02:06Z" level=info msg="Module nf_conntrack was already loaded"
time="2022-02-04T01:02:06Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
time="2022-02-04T01:02:06Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
W0204 01:02:06.249293     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
time="2022-02-04T01:02:06Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2022-02-04T01:02:06Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2022-02-04T01:02:06Z" level=info msg="Set sysctl 'net/ipv4/conf/all/forwarding' to 1"
time="2022-02-04T01:02:06Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2022-02-04T01:02:06Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2022-02-04T01:02:06Z" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"
I0204 01:02:06.268342     264 rest.go:130] the default service ipfamily for this cluster is: IPv4

I0204 01:02:11.531245     264 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
E0204 01:02:11.532996     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 65
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [0:0] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]
:OUTPUT ACCEPT [0:0] - [0:0]
:DOCKER - [0:0] - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0] - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0] - [0:0]
:DOCKER-USER - [0:0] - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0] - [0:0]
:KUBE-FORWARD - [0:0] - [0:0]
:KUBE-NODEPORTS - [0:0] - [0:0]
:KUBE-NWPLCY-DEFAULT - [0:0] - [0:0]
:KUBE-PROXY-CANARY - [0:0] - [0:0]
:KUBE-ROUTER-FORWARD - [0:0] - [0:0]
:KUBE-ROUTER-INPUT - [0:0] - [0:0]
:KUBE-ROUTER-OUTPUT - [0:0] - [0:0]
:KUBE-SERVICES - [0:0] - [0:0]
:KUBE-POD-FW-R3CRTNVZ2YX4TOXP - [0:0]
:KUBE-POD-FW-YK73VEXE57MOSSWS - [0:0]
:KUBE-POD-FW-CONVAAYXO3B3IMCB - [0:0]
:KUBE-POD-FW-67WQCADMMW6EC75V - [0:0]
:KUBE-POD-FW-CZ3647C2ARG3Q7B3 - [0:0]
-A INPUT -m comment --comment "kube-router netpol - 4IA2OSFRMVNDXBVV" -j KUBE-ROUTER-INPUT
-A INPUT -m comment --comment "kubernetes health check service ports" -j KUBE-NODEPORTS
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -m comment --comment "kube-router netpol - TEMCG2JMHZYE7H7T" -j KUBE-ROUTER-FORWARD
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A OUTPUT -m comment --comment "kube-router netpol - VEAAIY32XVBHCSCY" -j KUBE-ROUTER-OUTPUT
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-NWPLCY-DEFAULT -m comment --comment "rule to mark traffic matching a network policy" -j MARK --set-xmark 0x10000/0x10000
-A KUBE-ROUTER-FORWARD -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
-A KUBE-ROUTER-INPUT -d 10.43.0.0/16 -m comment --comment "allow traffic to cluster IP - M66LPN4N3KB5HTJR" -j RETURN
-A KUBE-ROUTER-INPUT -p tcp -m comment --comment "allow LOCAL TCP traffic to node ports - LR7XO7NXDBGQJD2M" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN
-A KUBE-ROUTER-INPUT -p udp -m comment --comment "allow LOCAL UDP traffic to node ports - 76UCBPIZNGJNWNUZ" -m addrtype --dst-type LOCAL -m multiport --dports 30000:32767 -j RETURN
-A KUBE-ROUTER-INPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
-A KUBE-ROUTER-OUTPUT -m comment --comment "rule to explicitly ACCEPT traffic that comply to network policies" -m mark --mark 0x20000/0x20000 -j ACCEPT
-I KUBE-POD-FW-R3CRTNVZ2YX4TOXP 1 -d 10.42.0.10 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-R3CRTNVZ2YX4TOXP 1 -s 10.42.0.10 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-R3CRTNVZ2YX4TOXP 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.10 -j ACCEPT 
-I KUBE-POD-FW-R3CRTNVZ2YX4TOXP 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -d 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -d 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -d 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP 
-I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -s 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -s 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP 
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -s 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP 
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:coredns-96cc4f57d-j2x9g namespace: kube-system to chain KUBE-POD-FW-R3CRTNVZ2YX4TOXP" -s 10.42.0.10 -j KUBE-POD-FW-R3CRTNVZ2YX4TOXP 
-A KUBE-POD-FW-R3CRTNVZ2YX4TOXP -m comment --comment "rule to log dropped traffic POD name:coredns-96cc4f57d-j2x9g namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 
-A KUBE-POD-FW-R3CRTNVZ2YX4TOXP -m comment --comment "rule to REJECT traffic destined for POD name:coredns-96cc4f57d-j2x9g namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT 
-A KUBE-POD-FW-R3CRTNVZ2YX4TOXP -j MARK --set-mark 0/0x10000 
-A KUBE-POD-FW-R3CRTNVZ2YX4TOXP -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
-I KUBE-POD-FW-YK73VEXE57MOSSWS 1 -d 10.42.0.11 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-YK73VEXE57MOSSWS 1 -s 10.42.0.11 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-YK73VEXE57MOSSWS 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.11 -j ACCEPT 
-I KUBE-POD-FW-YK73VEXE57MOSSWS 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -d 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -d 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -d 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS 
-I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -s 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -s 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS 
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -s 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS 
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system to chain KUBE-POD-FW-YK73VEXE57MOSSWS" -s 10.42.0.11 -j KUBE-POD-FW-YK73VEXE57MOSSWS 
-A KUBE-POD-FW-YK73VEXE57MOSSWS -m comment --comment "rule to log dropped traffic POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 
-A KUBE-POD-FW-YK73VEXE57MOSSWS -m comment --comment "rule to REJECT traffic destined for POD name:local-path-provisioner-84bb864455-jp4mm namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT 
-A KUBE-POD-FW-YK73VEXE57MOSSWS -j MARK --set-mark 0/0x10000 
-A KUBE-POD-FW-YK73VEXE57MOSSWS -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
-I KUBE-POD-FW-CONVAAYXO3B3IMCB 1 -d 10.42.0.12 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-CONVAAYXO3B3IMCB 1 -s 10.42.0.12 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-CONVAAYXO3B3IMCB 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.12 -j ACCEPT 
-I KUBE-POD-FW-CONVAAYXO3B3IMCB 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -d 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -d 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -d 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB 
-I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -s 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -s 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB 
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -s 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB 
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system to chain KUBE-POD-FW-CONVAAYXO3B3IMCB" -s 10.42.0.12 -j KUBE-POD-FW-CONVAAYXO3B3IMCB 
-A KUBE-POD-FW-CONVAAYXO3B3IMCB -m comment --comment "rule to log dropped traffic POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 
-A KUBE-POD-FW-CONVAAYXO3B3IMCB -m comment --comment "rule to REJECT traffic destined for POD name:traefik-55fdc6d984-f9lz8 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT 
-A KUBE-POD-FW-CONVAAYXO3B3IMCB -j MARK --set-mark 0/0x10000 
-A KUBE-POD-FW-CONVAAYXO3B3IMCB -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
-I KUBE-POD-FW-67WQCADMMW6EC75V 1 -d 10.42.0.13 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-67WQCADMMW6EC75V 1 -s 10.42.0.13 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-67WQCADMMW6EC75V 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.13 -j ACCEPT 
-I KUBE-POD-FW-67WQCADMMW6EC75V 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -d 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -d 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -d 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V 
-I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -s 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -s 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V 
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -s 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V 
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system to chain KUBE-POD-FW-67WQCADMMW6EC75V" -s 10.42.0.13 -j KUBE-POD-FW-67WQCADMMW6EC75V 
-A KUBE-POD-FW-67WQCADMMW6EC75V -m comment --comment "rule to log dropped traffic POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 
-A KUBE-POD-FW-67WQCADMMW6EC75V -m comment --comment "rule to REJECT traffic destined for POD name:metrics-server-ff9dbcb6c-5gcx4 namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT 
-A KUBE-POD-FW-67WQCADMMW6EC75V -j MARK --set-mark 0/0x10000 
-A KUBE-POD-FW-67WQCADMMW6EC75V -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
-I KUBE-POD-FW-CZ3647C2ARG3Q7B3 1 -d 10.42.0.9 -m comment --comment "run through default ingress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-CZ3647C2ARG3Q7B3 1 -s 10.42.0.9 -m comment --comment "run through default egress network policy  chain" -j KUBE-NWPLCY-DEFAULT 
-I KUBE-POD-FW-CZ3647C2ARG3Q7B3 1 -m comment --comment "rule to permit the traffic traffic to pods when source is the pod's local node" -m addrtype --src-type LOCAL -d 10.42.0.9 -j ACCEPT 
-I KUBE-POD-FW-CZ3647C2ARG3Q7B3 1 -m comment --comment "rule for stateful firewall for pod" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -d 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -d 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic destined to POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -d 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3 
-I KUBE-ROUTER-INPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -s 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3 
-I KUBE-ROUTER-FORWARD 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -s 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3 
-I KUBE-ROUTER-OUTPUT 1 -m comment --comment "rule to jump traffic from POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -s 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3 
-I KUBE-ROUTER-FORWARD 1 -m physdev --physdev-is-bridged -m comment --comment "rule to jump traffic from POD name:svclb-traefik-kfm9j namespace: kube-system to chain KUBE-POD-FW-CZ3647C2ARG3Q7B3" -s 10.42.0.9 -j KUBE-POD-FW-CZ3647C2ARG3Q7B3 
-A KUBE-POD-FW-CZ3647C2ARG3Q7B3 -m comment --comment "rule to log dropped traffic POD name:svclb-traefik-kfm9j namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j NFLOG --nflog-group 100 -m limit --limit 10/minute --limit-burst 10 
-A KUBE-POD-FW-CZ3647C2ARG3Q7B3 -m comment --comment "rule to REJECT traffic destined for POD name:svclb-traefik-kfm9j namespace: kube-system" -m mark ! --mark 0x10000/0x10000 -j REJECT 
-A KUBE-POD-FW-CZ3647C2ARG3Q7B3 -j MARK --set-mark 0/0x10000 
-A KUBE-POD-FW-CZ3647C2ARG3Q7B3 -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
COMMIT
I0204 01:02:11.561477     264 scope.go:110] "RemoveContainer" containerID="97de99745a8f04530d81cc0678a8f96d59c52ffa0d13290d99b7aeb54114d7c3"
I0204 01:02:11.561905     264 kubelet_node_status.go:71] "Attempting to register node" node="h303898"
I0204 01:02:11.570652     264 kubelet_node_status.go:109] "Node was previously registered" node="h303898"
I0204 01:02:11.571248     264 kubelet_node_status.go:74] "Successfully registered node" node="h303898"
I0204 01:02:11.583643     264 manager.go:609] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"

I0204 01:02:12.441653     264 reconciler.go:157] "Reconciler: start to sync state"
E0204 01:02:12.454320     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 72
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [0:0] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]
:OUTPUT ACCEPT [0:0] - [0:0]

I0204 01:02:13.397518     264 docker_sandbox.go:240] "Both sandbox container and checkpoint could not be found. Proceed without further sandbox information." podSandboxID="1fb9b10307e41fcd5c9045204471a71ef4855804d81950432f9f289662a9a56f"
I0204 01:02:13.398288     264 cni.go:333] "CNI failed to retrieve network namespace path" err="Error: No such container: 1fb9b10307e41fcd5c9045204471a71ef4855804d81950432f9f289662a9a56f"
E0204 01:02:13.635584     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 71
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [76:62966] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]

E0204 01:02:14.048654     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 70
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [120:106656] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]
:OUTPUT ACCEPT [134:107496] - [0:0]

E0204 01:02:14.450971     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 69
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [239:182841] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]
:OUTPUT ACCEPT [251:189529] - [0:0]

E0204 01:02:15.241431     264 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 69
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [615:291655] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]
:OUTPUT ACCEPT [616:294840] - [0:0]

W0204 01:02:28.820564     264 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.22.229.51]
time="2022-02-04T01:02:28Z" level=info msg="Stopped tunnel to 172.22.233.163:6443"
E0204 01:02:30.872085     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
E0204 01:02:30.872671     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
E0204 01:02:30.877824     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
I0204 01:02:32.656341     264 scope.go:110] "RemoveContainer" containerID="9cb6a77724d27bf1799cfb048017a9740045eaa52358d471f50cbe671b5a3e53"
E0204 01:02:38.745329     264 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: dial tcp 10.43.251.19:443: i/o timeout
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0204 01:02:38.745654     264 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0204 01:07:11.489355     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W0204 01:12:11.488820     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
>_ nerdctl info
FATA[0000] cannot access containerd socket "/run/k3s/containerd/containerd.sock" (hint: try running with `--address /var/run/docker/containerd/containerd.sock` to connect to Docker-managed containerd): no such file or directory

>_ nerdctl info --address /var/run/docker/containerd/containerd.sock                                                                                                                                 
Client:
 Namespace:     default
 Debug Mode:    false

Server:
 Server Version: v1.5.9
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Log: json-file
  Storage: native overlayfs
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.60.1-microsoft-standard-WSL2
 Operating System: Rancher Desktop WSL Distribution
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 24.75GiB
 Name: H303898
 ID: d2b0bcd5-bbed-4005-a44c-1682b8fef56e

>_ nerdctl run hell-world
FATA[0000] cannot access containerd socket "/run/k3s/containerd/containerd.sock" (hint: try running with `--address /var/run/docker/containerd/containerd.sock` to connect to Docker-managed containerd): no such file or directory

>_ nerdctl run --address /var/run/docker/containerd/containerd.sock hello-world                                                                                                          
docker.io/library/hello-world:latest:                                             resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:507ecde44b8eb741278274653120c2bf793b174c06ff4eaa672b713b3263477b:    done           |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:f54a58bc1aac5ea1a25d796ae155dc228b3f0e11d046ae276b39c4bf2f13d8c4: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:2db29710123e3e53a794f2694094b9b4338aa9ee5c40b930cb8063a1be392c54:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 1.2 s                                                                    total:  4.4 Ki (3.7 KiB/s)

Hello from Docker!
This message shows that your installation appears to be working correctly.
>_ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.6+k3s1", GitCommit:"3228d9cb9a4727d48f60de4f1ab472f7c50df904", GitTreeState:"clean", BuildDate:"2022-01-25T01:27:44Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

>_ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://172.22.229.51:6443
  name: rancher-desktop
contexts:
- context:
    cluster: rancher-desktop
    user: rancher-desktop
  name: rancher-desktop
current-context: rancher-desktop
kind: Config
preferences: {}
users:
- name: rancher-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

The server address https://172.22.229.51:6443 is tied to the Ethernet adapter vEthernet (WSL):

IPv4 Address. . . . . . . . . . . : 172.22.224.1(Preferred)
Subnet Mask . . . . . . . . . . . : 255.255.240.0
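
A hedged way to check that this endpoint actually answers from the Windows side is a plain port test in PowerShell (Test-NetConnection is assumed to be available, as it is on stock Windows 10):

# Test whether the k3s API server port responds on the WSL subnet
Test-NetConnection -ComputerName 172.22.229.51 -Port 6443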

>_ nerdctl network inspect bridge

[
  {
    "Name": "bridge",
    "Id": "0",
    "IPAM": {
      "Config": [
        {
          "Subnet": "10.4.0.0/24",
          "Gateway": "10.4.0.1"
        }
      ]
    },
    "Labels": {}
  }
]
>_ kubectl cluster-info
Kubernetes control plane is running at https://172.22.229.51:6443
CoreDNS is running at https://172.22.229.51:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://172.22.229.51:6443/api/v1/namespaces/kube-system/services/https:metrics-server:https/proxy

In the error logs above, I don't know what this 10.43.251.19 refers to.
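
For reference, 10.43.0.0/16 is the default k3s service CIDR (the iptables rules above even allow it as the "cluster IP" range), so 10.43.251.19 is most likely the ClusterIP of the metrics-server Service. A hedged way to confirm:

# The CLUSTER-IP column should show 10.43.251.19 if that guess is right
kubectl -n kube-system get service metrics-server
# Or search every namespace for the Service that owns that IP
kubectl get services --all-namespaces | findstr 10.43.251.19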

Last point: the nerdctl -h output mentions that the --address default is /run/containerd/containerd.sock:

-H, --H string      Alias of --address (default "/run/containerd/containerd.sock")
--address string    containerd address, optionally with "unix://" prefix [$CONTAINERD_ADDRESS] (default "/run/containerd/containerd.sock")

However, in the nerdctl commands above, I found the hint odd, because it complains about a different socket path entirely:

FATA[0000] cannot access containerd socket "/run/k3s/containerd/containerd.sock" (hint: try running with --address /var/run/docker/containerd/containerd.sock to connect to Docker-managed containerd)
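
Given that help text, a hedged workaround is to set the CONTAINERD_ADDRESS environment variable the option advertises instead of passing --address on every call. A sketch, assuming a shell inside the rancher-desktop distribution where nerdctl is on the path:

# Point nerdctl at the Docker-managed containerd socket for this shell session
export CONTAINERD_ADDRESS=/var/run/docker/containerd/containerd.sock
nerdctl info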

patware commented 2 years ago

More info from logs

k3s.log:

I0204 01:02:26.291386     264 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
E0204 01:02:28.310996     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: no route to host
W0204 01:02:28.820564     264 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.22.229.51]
time="2022-02-04T01:02:28Z" level=info msg="Stopped tunnel to 172.22.233.163:6443"
E0204 01:02:30.872085     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
E0204 01:02:30.872671     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
E0204 01:02:30.877824     264 available_controller.go:524] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1: Get "https://10.43.251.19:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.43.251.19:443: connect: connection refused
I0204 01:02:32.656341     264 scope.go:110] "RemoveContainer" containerID="9cb6a77724d27bf1799cfb048017a9740045eaa52358d471f50cbe671b5a3e53"
E0204 01:02:38.745329     264 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: dial tcp 10.43.251.19:443: i/o timeout
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0204 01:02:38.745654     264 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
W0204 01:07:11.489355     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W0204 01:12:11.488820     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
W0204 01:17:11.488905     264 sysinfo.go:203] Nodes topology is not available, providing CPU topology
patware commented 2 years ago

I switched back to containerd.

docker.log:

time="2022-02-04T01:02:32.784483200Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1d07f6f3c6d6e57c167c3111098762fddd8633d07819fd6bb6b1bce8bde9e684 pid=2980
time="2022-02-04T01:20:53.280640800Z" level=info msg="starting signal loop" namespace=default path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/default/3afa20ce6f0005f30c9fe668b518c1c5591e23a7806ce7da48fcd065ddf2aaef pid=10807

k3s.log: similar errors

-A KUBE-POD-FW-QOPUPQPF5ZTG7QKY -m comment --comment "set mark to ACCEPT traffic that comply to network policies" -j MARK --set-mark 0x20000/0x20000 
COMMIT
E0204 01:48:04.113081     236 network_policy_controller.go:252] Aborting sync. Failed to run iptables-restore: exit status 2 (iptables-restore v1.8.7 (legacy): Couldn't load match `limit':No such file or directory

Error occurred at line: 93
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
*filter
:INPUT ACCEPT [137:118136] - [0:0]
:FORWARD ACCEPT [0:0] - [0:0]

But now, nerdctl info works without specifying the --address:

>_ nerdctl info

Client:
 Namespace:     default
 Debug Mode:    false

Server:
 Server Version: v1.5.9-k3s1
 Storage Driver: overlayfs
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Log: json-file
  Storage: native overlayfs fuse-overlayfs stargz
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.10.60.1-microsoft-standard-WSL2
 Operating System: Rancher Desktop WSL Distribution
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 24.75GiB
 Name: H303898
 ID: f81e9c84-9b97-450c-95b2-8573a7a86fb8



>_ nerdctl run hello-world worked.
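
A hedged way to see which socket exists in each configuration (both paths come straight from the errors above):

# With the containerd runtime selected, the k3s-managed socket should be present;
# with dockerd selected, only the Docker-managed one is.
wsl.exe --distribution rancher-desktop --exec ls -l /run/k3s/containerd/containerd.sock /var/run/docker/containerd/containerd.sock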

Hoping this info helps.
obo-spi commented 2 years ago

I have the same error here on one of my macOS machines (10.15.7 Catalina), with both docker and containerd. I also tried several K8s versions.

It mainly starts here (k3s.log):

time="2022-02-04T07:46:24Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2022-02-04T07:46:24Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2022-02-04T07:46:24Z" level=error msg="Remotedialer proxy error" error="websocket: bad handshake"

...
k-randmaa commented 2 years ago

I get the same error, although I can use kubectl and the cluster actually works fine. It happens with both containerd and docker, with different k8s versions, as already described above.

mook-as commented 2 years ago

#1536 has been fixed, though I'm not 100% certain this is the same issue (which is why I opened a new one instead). If possible, please try the build at https://github.com/rancher-sandbox/rancher-desktop/actions/runs/1814793393 (you'll need a GitHub account — presumably anybody who can comment already has one) to double-check?

k-randmaa commented 2 years ago

> #1536 has been fixed, though I'm not 100% certain this is the same issue (which is why I opened a new one instead). If possible, please try the build at https://github.com/rancher-sandbox/rancher-desktop/actions/runs/1814793393 (you'll need a GitHub account — presumably anybody who can comment already has one) to double-check?

For me, this build has fixed it, thanks!

jandubois commented 2 years ago

It looks like the "no active cluster" error on Windows is fixed then, so I'm going to close this.

@obo-spi Since you are on macOS, that must be a different problem. Please file a new issue with additional information / more logs about it!

patware commented 2 years ago

Confirmed! Great job, everyone.

Quick note: the version is labelled 1.0.0-159-gcd00735; I believe it should be labelled at least "1.0.1-" so the auto-updater doesn't overwrite the manually installed fix/package.