k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[BUG] rootless podman with >1 server never starts #1178

Open ecksun opened 2 years ago

ecksun commented 2 years ago

What did you do

Created a cluster with three server nodes on rootless podman: k3d cluster create --servers 3 (full output below).

What did you expect to happen

I expected to get a working cluster with three nodes

What happened

After trying to start the cluster, the startup seems to hang indefinitely after the log line Starting Node 'k3d-k3s-default-server-1'. The k3d-k3s-default-server-0 node seems to be retrying some operation forever.

Some errors from the logs that might be of interest are included in the excerpts below.
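
The retry loop can be watched by following the node logs directly; a minimal sketch, using podman's docker-compatible CLI and the default node names from the output below:

$ docker logs -f k3d-k3s-default-server-0
$ docker logs -f k3d-k3s-default-server-1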

Screenshots or terminal output

$ k3d cluster create --servers 3
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
INFO[0000] Creating initializing server node            
INFO[0000] Creating node 'k3d-k3s-default-server-0'     
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-k3s-default-tools'        
INFO[0001] Creating node 'k3d-k3s-default-server-1'     
INFO[0003] Creating node 'k3d-k3s-default-server-2'     
INFO[0003] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
INFO[0003] Using the k3d-tools node to gather environment information 
INFO[0004] HostIP: using network gateway 10.89.1.1 address 
INFO[0004] Starting cluster 'k3s-default'               
INFO[0004] Starting the initializing server...          
INFO[0004] Starting Node 'k3d-k3s-default-server-0'     
INFO[0021] Starting servers...                          
INFO[0021] Starting Node 'k3d-k3s-default-server-1'     

The full logs of the two nodes are in this gist. Because of GitHub's issue size limits I could not include the entire logs directly in the issue. I have also removed the time and ts fields from the logs to reduce their size.
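
For reference, a rough sketch of how such fields can be stripped from the mixed plain/JSON log lines (jq and the file names are illustrative, not the exact command used):

$ jq -cRr 'fromjson? // . | if type == "object" then del(.time, .ts) else . end' server-0.log > server-0.trimmed.log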

k3d-k3s-default-server-0

...
level=info msg="Module overlay was already loaded"
level=info msg="Module nf_conntrack was already loaded"
level=warning msg="Failed to load kernel module br_netfilter with modprobe"
level=info msg="Module iptable_nat was already loaded"
level=info msg="Set sysctl 'net/bridge/bridge-nf-call-iptables' to 1"
level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 524288"
level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
{"level":"info","caller":"backend/backend.go:549","msg":"finished defragmenting directory","path":"/var/lib/rancher/k3s/server/db/etcd/member/snap/db","current-db-size-bytes-diff":0,"current-db-size-bytes":20480,"current-db-size":"20 kB","current-db-size-in-use-bytes-diff":-4096,"current-db-size-in-use-bytes":12288,"current-db-size-in-use":"12 kB","took":"81.46371ms"}
{"level":"info","caller":"v3rpc/maintenance.go:95","msg":"finished defragment"}
level=info msg="etcd data store connection OK"
level=info msg="Waiting for API server to become available"
...
level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=DevicePlugins=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-infra-container-image=rancher/mirrored-pause:3.6 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
I1031 18:29:11.669049       2 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I1031 18:29:11.669076       2 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I1031 18:29:11.669267       2 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I1031 18:29:11.669275       2 secure_serving.go:210] Serving securely on 127.0.0.1:6444
I1031 18:29:11.669318       2 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1031 18:29:11.669427       2 customresource_discovery_controller.go:209] Starting DiscoveryController
I1031 18:29:11.669466       2 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1031 18:29:11.669469       2 available_controller.go:491] Starting AvailableConditionController
I1031 18:29:11.669475       2 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1031 18:29:11.669479       2 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1031 18:29:11.669496       2 controller.go:85] Starting OpenAPI V3 controller
I1031 18:29:11.669517       2 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I1031 18:29:11.669520       2 establishing_controller.go:76] Starting EstablishingController
I1031 18:29:11.669557       2 crd_finalizer.go:266] Starting CRDFinalizer
I1031 18:29:11.669499       2 naming_controller.go:291] Starting NamingConditionController
I1031 18:29:11.669641       2 controller.go:80] Starting OpenAPI V3 AggregationController
I1031 18:29:11.669694       2 autoregister_controller.go:141] Starting autoregister controller
I1031 18:29:11.669682       2 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1031 18:29:11.669705       2 cache.go:32] Waiting for caches to sync for autoregister controller
I1031 18:29:11.669748       2 controller.go:85] Starting OpenAPI controller
I1031 18:29:11.669797       2 crdregistration_controller.go:111] Starting crd-autoregister controller
I1031 18:29:11.669801       2 apf_controller.go:317] Starting API Priority and Fairness config controller
I1031 18:29:11.669808       2 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
I1031 18:29:11.669810       2 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1031 18:29:11.669842       2 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I1031 18:29:11.669801       2 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1031 18:29:11.669866       2 controller.go:83] Starting OpenAPI AggregationController
I1031 18:29:11.669871       2 shared_informer.go:255] Waiting for caches to sync for cluster_authentication_trust_controller
I1031 18:29:11.669896       2 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I1031 18:29:11.683652       2 controller.go:611] quota admission added evaluator for: namespaces
E1031 18:29:11.702264       2 controller.go:166] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocate IP 10.43.0.1: cannot allocate resources of type serviceipallocations at this time
I1031 18:29:11.718607       2 shared_informer.go:262] Caches are synced for node_authorizer
I1031 18:29:11.769596       2 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1031 18:29:11.769647       2 cache.go:39] Caches are synced for AvailableConditionController controller
I1031 18:29:11.769762       2 cache.go:39] Caches are synced for autoregister controller
I1031 18:29:11.769862       2 shared_informer.go:262] Caches are synced for crd-autoregister
I1031 18:29:11.769912       2 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
I1031 18:29:11.770063       2 apf_controller.go:322] Running API Priority and Fairness config worker
{"level":"info","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abf2f7e3519cb866 switched to configuration voters=(12390238080548517990) learners=(5038257859986176226)"}
{"level":"info","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa6567adf24df8cc","local-member-id":"abf2f7e3519cb866","added-peer-id":"45eb7cd46f4bb8e2","added-peer-peer-urls":["https://10.89.1.3:2380"]}
{"level":"info","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2","remote-peer-urls":["https://10.89.1.3:2380"]}
{"level":"info","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"etcdserver/server.go:1922","msg":"applied a configuration change through raft","local-member-id":"abf2f7e3519cb866","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"abf2f7e3519cb866","to":"45eb7cd46f4bb8e2","stream-type":"stream MsgApp v2"}
{"level":"info","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
I1031 18:29:12.464483       2 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
{"level":"info","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"abf2f7e3519cb866","to":"45eb7cd46f4bb8e2","stream-type":"stream Message"}
{"level":"info","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"warn","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"10.89.1.3:43780","server-name":"","error":"EOF"}
{"level":"info","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
{"level":"info","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"abf2f7e3519cb866","remote-peer-id":"45eb7cd46f4bb8e2"}
I1031 18:29:12.674051       2 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I1031 18:29:12.691362       2 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I1031 18:29:12.691385       2 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I1031 18:29:13.366163       2 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1031 18:29:13.418580       2 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I1031 18:29:13.516075       2 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.43.0.1]
W1031 18:29:13.521613       2 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.89.1.2]
I1031 18:29:13.522681       2 controller.go:611] quota admission added evaluator for: endpoints
I1031 18:29:13.532395       2 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
level=info msg="Waiting for cloud-controller-manager privileges to become available"
level=info msg="Kube API server is now running"
level=info msg="ETCD server is now running"
level=info msg="k3s is up and running"
level=info msg="Applying CRD addons.k3s.cattle.io"
level=info msg="Applying CRD helmcharts.helm.cattle.io"
level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
I1031 18:29:13.857872       2 server.go:192] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
I1031 18:29:13.858148       2 server.go:395] "Kubelet version" kubeletVersion="v1.24.4+k3s1"
I1031 18:29:13.858160       2 server.go:397] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1031 18:29:13.858748       2 serving.go:355] Generated self-signed cert in-memory
W1031 18:29:13.858939       2 manager.go:159] Cannot detect current cgroup on cgroup v2
I1031 18:29:13.859014       2 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
W1031 18:29:13.861322       2 reflector.go:324] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
E1031 18:29:13.861371       2 reflector.go:138] k8s.io/client-go@v1.24.4-k3s1/tools/cache/reflector.go:167: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: endpoints "kubernetes" is forbidden: User "system:k3s-controller" cannot list resource "endpoints" in API group "" in the namespace "default"
E1031 18:29:13.863246       2 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W1031 18:29:13.864250       2 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I1031 18:29:13.864765       2 server.go:644] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I1031 18:29:13.864904       2 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I1031 18:29:13.864953       2 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I1031 18:29:13.864969       2 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I1031 18:29:13.864978       2 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=false
I1031 18:29:13.864990       2 state_mem.go:36] "Initialized new in-memory state store"
level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
I1031 18:29:14.065278       2 server.go:748] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
I1031 18:29:14.068285       2 kubelet.go:376] "Attempting to sync node with API server"
I1031 18:29:14.068303       2 kubelet.go:267] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I1031 18:29:14.068328       2 kubelet.go:278] "Adding apiserver pod source"
I1031 18:29:14.068344       2 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
E1031 18:29:14.068385       2 kubelet.go:456] "Failed to create an oomWatcher (running in UserNS, Hint: enable KubeletInUserNamespace feature flag to ignore the error)" err="open /dev/kmsg: operation not permitted"
Error: failed to run Kubelet: failed to create kubelet: open /dev/kmsg: operation not permitted
Usage:
  kubelet [flags]

Flags:
      ...

level=fatal msg="kubelet exited: failed to run Kubelet: failed to create kubelet: open /dev/kmsg: operation not permitted"
level=info msg="Starting k3s v1.24.4+k3s1 (c3f830e9)"
...
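
The fatal open /dev/kmsg: operation not permitted above is the real blocker: opening /dev/kmsg for reading requires root or CAP_SYSLOG on the host, which a rootless user namespace cannot grant, so the kubelet gives up unless the KubeletInUserNamespace feature gate tells it to tolerate this. The restriction is easy to reproduce outside of k3d; on a host with kernel.dmesg_restrict=1 this fails with the same error when run as the unprivileged user:

$ cat /dev/kmsg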

Which OS & Architecture

$ k3d runtime-info
arch: amd64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /run/user/1000/podman/podman.sock
filesystem: UNKNOWN
name: docker
os: debian
ostype: linux
version: 4.2.0
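
Note that runtime-info reports name: docker even though the endpoint is the rootless podman socket; k3d talks to podman through its docker-compatible API. A sketch of the socket setup assumed here, as described in the k3d podman docs:

$ systemctl --user enable --now podman.socket
$ export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock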

Which version of k3d

$ k3d version
k3d version v5.4.6
k3s version v1.24.4-k3s1 (default)

Which version of docker

$ docker version
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Client:       Podman Engine
Version:      4.2.0
API Version:  4.2.0
Go Version:   go1.19
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
$ docker info
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
host:
  arch: amd64
  buildahVersion: 1.27.0
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon_2.1.3+ds1-1_amd64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.3, commit: unknown'
  cpuUtilization:
    idlePercent: 97.74
    systemPercent: 1.04
    userPercent: 1.22
  cpus: 16
  distribution:
    codename: bookworm
    distribution: debian
    version: unknown
  eventLogger: journald
  hostname: vile
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.0-2-amd64
  linkmode: dynamic
  logDriver: journald
  memFree: 3543769088
  memTotal: 23964467200
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun_1.5+dfsg-1+b1_amd64
    path: /usr/bin/crun
    version: |-
      crun version 1.5
      commit: 54ebb8ca8bf7e6ddae2eb919f5b82d1d96863dea
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.0-1_amd64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 4244631552
  swapTotal: 4244631552
  uptime: 7h 58m 36.00s (Approximately 0.29 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  nimbraedge.azurecr.io:
    Blocked: false
    Insecure: false
    Location: nimbraedge.azurecr.io
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: nimbraedge.azurecr.io
    PullFromMirror: ""
  search:
  - docker.io
store:
  configFile: /home/linus/.config/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 0
    stopped: 5
  graphDriverName: vfs
  graphOptions: {}
  graphRoot: /home/linus/.local/share/containers/storage
  graphRootAllocated: 214397792256
  graphRootUsed: 74631761920
  graphStatus: {}
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 16
  runRoot: /run/user/1000/containers
  volumePath: /home/linus/.local/share/containers/storage/volumes
version:
  APIVersion: 4.2.0
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.19
  Os: linux
  OsArch: linux/amd64
  Version: 4.2.0
pavloos commented 1 year ago

The cluster can be started by following the hint provided in the error message:

Hint: enable KubeletInUserNamespace feature flag to ignore the error
~ k3d cluster create --servers 3 --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Creating initializing server node
INFO[0000] Creating node 'k3d-k3s-default-server-0'
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-1'
INFO[0002] Creating node 'k3d-k3s-default-server-2'
INFO[0002] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0002] Using the k3d-tools node to gather environment information
INFO[0002] HostIP: using network gateway 10.89.0.1 address
INFO[0002] Starting cluster 'k3s-default'
INFO[0002] Starting the initializing server...
INFO[0002] Starting Node 'k3d-k3s-default-server-0'
INFO[0004] Starting servers...
INFO[0004] Starting Node 'k3d-k3s-default-server-1'
INFO[0023] Starting Node 'k3d-k3s-default-server-2'
INFO[0040] All agents already running.
INFO[0040] Starting helpers...
INFO[0040] Starting Node 'k3d-k3s-default-serverlb'
INFO[0046] Injecting records for hostAliases (incl. host.k3d.internal) and for 4 network members into CoreDNS configmap...
INFO[0048] Cluster 'k3s-default' created successfully!
INFO[0048] You can now use it like this:
kubectl cluster-info
⎈ k3d-k3s-default () ~ k get nodes
NAME                       STATUS   ROLES                       AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,etcd,master   84s   v1.26.4+k3s1
k3d-k3s-default-server-1   Ready    control-plane,etcd,master   72s   v1.26.4+k3s1
k3d-k3s-default-server-2   Ready    control-plane,etcd,master   55s   v1.26.4+k3s1
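
The same workaround can also be kept in a cluster config file instead of retyping the flag; a minimal sketch, assuming the v1alpha4 simple-config schema used by k3d v5.4+:

$ cat > rootless-podman.yaml <<'EOF'
apiVersion: k3d.io/v1alpha4
kind: Simple
servers: 3
options:
  k3s:
    extraArgs:
      - arg: --kubelet-arg=feature-gates=KubeletInUserNamespace=true
        nodeFilters:
          - server:*
EOF
$ k3d cluster create --config rootless-podman.yaml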