k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

K3D Fails to Create Cluster with Podman on RHEL 9 #1105

Open BrentFathom5 opened 2 years ago

BrentFathom5 commented 2 years ago

What did you do

Tried to create a cluster with rootless Podman on RHEL 9 (judging by the debug output below, a default k3d cluster create run with debug logging).

Screenshots or terminal output

Here is the resulting terminal output:

DEBU[0000] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:unix:///run/user/1000/podman/podman.sock Version:4.0.2 OSType:linux OS:"rhel" Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:xfs} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: [] 
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.23.8-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: "" 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.23.8-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:42195} Image:docker.io/rancher/k3s:v1.23.8-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-k3s-default-server-0
settings:
  workerConnections: 1024 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00050aea0 0xc00050b040] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000387e00 ServerLoadBalancer:0xc0001d66e0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
DEBU[0000] [Docker] DockerHost: '' (unix:///run/user/1000/podman/podman.sock) 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
DEBU[0000] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
ERRO[0000] Failed to run tools container for cluster 'k3s-default' 
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
DEBU[0001] Created container k3d-k3s-default-server-0 (ID: 51b087953f873d6aff667ab4859ec59daba1ad822c5c8e0acfddb222de098ff4) 
DEBU[0001] Created node 'k3d-k3s-default-server-0'      
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
DEBU[0001] Created container k3d-k3s-default-serverlb (ID: 7366a2eac3538335fa10c56c584b6ead11e51624bc5dc3a974af3f3ce02f73db) 
DEBU[0001] Created loadbalancer 'k3d-k3s-default-serverlb' 
DEBU[0001] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] Starting new tools node...                   
DEBU[0001] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
DEBU[0001] DOCKER_SOCK=unix:///run/user/1000/podman/podman.sock 
ERRO[0001] Failed to run tools container for cluster 'k3s-default' 
ERRO[0001] failed to gather environment information used for cluster creation: failed to run k3d-tools node for cluster 'k3s-default': failed to create node 'k3d-k3s-default-tools': runtime failed to create node 'k3d-k3s-default-tools': failed to create container for node 'k3d-k3s-default-tools': docker failed to create container 'k3d-k3s-default-tools': Error response from daemon: fill out specgen: unix:///run/user/1000/podman/podman.sock:unix:///run/user/1000/podman/podman.sock: incorrect volume format, should be [host-dir:]ctr-dir[:option] 
ERRO[0001] Failed to create cluster >>> Rolling Back    
INFO[0001] Deleting cluster 'k3s-default'               
DEBU[0001] Cluster Details: &{Name:k3s-default Network:{Name:k3d-k3s-default ID:3bcfae1b84bf83cec350c341231119950ce06d08e693c6b7b9f2560486ac3e7a External:false IPAM:{IPPrefix:10.89.0.0/24 IPsUsed:[] Managed:false} Members:[]} Token:RinmRJXvQxLPufLaxgpw Nodes:[0xc00050aea0 0xc00050b040] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000387e00 ServerLoadBalancer:0xc0001d66e0 ImageVolume:k3d-k3s-default-images Volumes:[k3d-k3s-default-images k3d-k3s-default-images]} 
DEBU[0001] Deleting node k3d-k3s-default-serverlb ...   
DEBU[0001] Deleting node k3d-k3s-default-server-0 ...   
INFO[0001] Deleting cluster network 'k3d-k3s-default'   
INFO[0001] Deleting 2 attached volumes...               
DEBU[0001] Deleting volume k3d-k3s-default-images...    
DEBU[0001] Deleting volume k3d-k3s-default-images...    
WARN[0001] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually 
FATA[0001] Cluster creation FAILED, all changes have been rolled back!

I can also verify that the Podman API service is available by running systemctl status podman (note that it is socket-activated, so the service itself shows inactive between requests):

○ podman.service - Podman API Service
     Loaded: loaded (/usr/lib/systemd/system/podman.service; enabled; vendor preset: disabled)
     Active: inactive (dead) since Tue 2022-07-19 23:52:04 EDT; 1min 32s ago
TriggeredBy: ● podman.socket
       Docs: man:podman-system-service(1)
    Process: 914 ExecStart=/usr/bin/podman $LOGGING system service (code=exited, status=0/SUCCESS)
   Main PID: 914 (code=exited, status=0/SUCCESS)
        CPU: 92ms

Jul 19 23:51:59 localhost systemd[1]: Starting Podman API Service...
Jul 19 23:51:59 localhost systemd[1]: Started Podman API Service.
Jul 19 23:51:59 localhost podman[914]: time="2022-07-19T23:51:59-04:00" level=info msg="/usr/bin/podman>
Jul 19 23:51:59 localhost podman[914]: time="2022-07-19T23:51:59-04:00" level=info msg="Not using nativ>
Jul 19 23:51:59 localhost podman[914]: 2022-07-19 23:51:59.639150375 -0400 EDT m=+0.109647436 system re>
Jul 19 23:51:59 localhost podman[914]: time="2022-07-19T23:51:59-04:00" level=info msg="Setting paralle>
Jul 19 23:51:59 localhost podman[914]: time="2022-07-19T23:51:59-04:00" level=info msg="Using systemd s>
Jul 19 23:51:59 localhost podman[914]: time="2022-07-19T23:51:59-04:00" level=info msg="API service lis>
Jul 19 23:52:04 localhost.localdomain systemd[1]: podman.service: Deactivated successfully.
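
Because the service is socket-activated (TriggeredBy: podman.socket), a more direct check is to query the socket unit and the API itself. A hedged sketch, assuming the rootless socket path from the runtime info above:

# Verify the user-level socket unit and poke the Podman REST API through it
systemctl --user status podman.socket
curl --unix-socket /run/user/1000/podman/podman.sock http://d/v4.0.0/libpod/info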

Which OS & Architecture

Output of k3d runtime-info:

arch: amd64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: unix:///run/user/1000/podman/podman.sock
filesystem: xfs
name: docker
os: '"rhel"'
ostype: linux
version: 4.0.2

I'm also setting the following environment variables in my ~/.zshrc file:

...
# K3D
export XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
export DOCKER_SOCK=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export K3D_FIX_CGROUPV2=false

Which version of k3d

Output of k3d version:

k3d version v5.4.4
k3s version v1.23.8-k3s1 (default)

Which version of docker

Output of docker version:

Client:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.17.7

Built:      Thu May 19 14:18:11 2022
OS/Arch:    linux/amd64

Output of docker info:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-1.el9.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: 3a898eb433ae426e729088ccdc2bdae44a3164da'
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "9.0"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.14.0-70.17.1.el9_0.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 11473022976
  memTotal: 16078839808
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.4-2.el9_0.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.4
      commit: 6521fcc5806f20f6187eb933f9f45130c86da230
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-4.el9.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 8300523520
  swapTotal: 8300523520
  uptime: 4m 52.27s
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
  - docker.io
store:
  configFile: /home/brent/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/brent/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  volumePath: /home/brent/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1652984291
  BuiltTime: Thu May 19 14:18:11 2022
  GitCommit: ""
  GoVersion: go1.17.7
  OsArch: linux/amd64
  Version: 4.0.2
ekarlso commented 2 years ago

I think I am hitting the same issue on Ubuntu, too.

Cubxity commented 2 years ago

The same issue is reproducible on Fedora Workstation 36.

idlefella commented 2 years ago

I have the same issue on RHEL 8 when I try to use rootless Podman (the captured request below shows k3d v5.4.6).

I intercepted the call from k3d to Podman with socat and captured the request that fails.
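
A minimal sketch of such a socat interception (my assumption of the exact invocation; the podman2.sock name matches the socket seen in the captured request):

# Log traffic on a second socket and forward it to the real Podman socket;
# pointing DOCKER_SOCK/DOCKER_HOST at podman2.sock routes k3d through socat.
socat -v UNIX-LISTEN:$XDG_RUNTIME_DIR/podman/podman2.sock,fork \
      UNIX-CONNECT:$XDG_RUNTIME_DIR/podman/podman.sock

The captured request: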

{
    "Hostname": "k3d-mlp-tools",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
        "K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml"
    ],
    "Cmd": [
        "noop"
    ],
    "Image": "ghcr.io/k3d-io/k3d-tools:5.4.6",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": {
        "app": "k3d",
        "k3d.cluster": "mlp",
        "k3d.role": "noRole",
        "k3d.version": "v5.4.6"
    },
    "HostConfig": {
        "Binds": [
            "k3d-mlp-images:/k3d/images",
            "unix:///run/user/2625/podman/podman2.sock:unix:///run/user/2625/podman/podman2.sock"
        ],
        "ContainerIDFile": "",
        "LogConfig": {
            "Type": "",
            "Config": null
        },
        "NetworkMode": "bridge",
        "PortBindings": null,
        "RestartPolicy": {
            "Name": "",
            "MaximumRetryCount": 0
        },
        "AutoRemove": false,
        "VolumeDriver": "",
        "VolumesFrom": null,
        "CapAdd": null,
        "CapDrop": null,
        "CgroupnsMode": "",
        "Dns": null,
        "DnsOptions": null,
        "DnsSearch": null,
        "ExtraHosts": [
            "host.k3d.internal:host-gateway"
        ],
        "GroupAdd": null,
        "IpcMode": "",
        "Cgroup": "",
        "Links": null,
        "OomScoreAdj": 0,
        "PidMode": "",
        "Privileged": true,
        "PublishAllPorts": false,
        "ReadonlyRootfs": false,
        "SecurityOpt": null,
        "Tmpfs": {
            "/run": "",
            "/var/run": ""
        },
        "UTSMode": "",
        "UsernsMode": "",
        "ShmSize": 0,
        "ConsoleSize": [
            0,
            0
        ],
        "Isolation": "",
        "CpuShares": 0,
        "Memory": 0,
        "NanoCpus": 0,
        "CgroupParent": "",
        "BlkioWeight": 0,
        "BlkioWeightDevice": null,
        "BlkioDeviceReadBps": null,
        "BlkioDeviceWriteBps": null,
        "BlkioDeviceReadIOps": null,
        "BlkioDeviceWriteIOps": null,
        "CpuPeriod": 0,
        "CpuQuota": 0,
        "CpuRealtimePeriod": 0,
        "CpuRealtimeRuntime": 0,
        "CpusetCpus": "",
        "CpusetMems": "",
        "Devices": null,
        "DeviceCgroupRules": null,
        "DeviceRequests": null,
        "KernelMemory": 0,
        "KernelMemoryTCP": 0,
        "MemoryReservation": 0,
        "MemorySwap": 0,
        "MemorySwappiness": null,
        "OomKillDisable": null,
        "PidsLimit": null,
        "Ulimits": null,
        "CpuCount": 0,
        "CpuPercent": 0,
        "IOMaximumIOps": 0,
        "IOMaximumBandwidth": 0,
        "MaskedPaths": null,
        "ReadonlyPaths": null,
        "Init": true
    },
    "NetworkingConfig": {
        "EndpointsConfig": {
            "k3d-mlp": {
                "IPAMConfig": null,
                "Links": null,
                "Aliases": null,
                "NetworkID": "",
                "EndpointID": "",
                "Gateway": "",
                "IPAddress": "",
                "IPPrefixLen": 0,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "",
                "DriverOpts": null
            }
        }
    }
}

The response:

{
    "cause": "incorrect volume format, should be [host-dir:]ctr-dir[:option]",
    "message": "fill out specgen: unix:///run/user/2625/podman/podman2.sock:unix:///run/user/2625/podman/podman2.sock: incorrect volume format, should be [host-dir:]ctr-dir[:option]",
    "response": 500
}

The problem seems to be the volume format: the top-level Volumes field is null, and the second entry in HostConfig.Binds passes the socket as a unix:// URL on both sides instead of the expected [host-dir:]ctr-dir[:option] form.
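
To illustrate the format Podman rejects versus what it accepts, a hedged sketch (the /var/run/docker.sock target inside the container is my assumption, purely for illustration):

# Rejected: a unix:// URL on either side is not a valid bind-mount spec
podman run --rm -v unix:///run/user/2625/podman/podman2.sock:unix:///run/user/2625/podman/podman2.sock alpine true
# Accepted: plain host-path:container-path[:options]
podman run --rm -v /run/user/2625/podman/podman2.sock:/var/run/docker.sock alpine true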

idlefella commented 2 years ago

The error occurred because I had set the environment variable DOCKER_SOCK=unix://... (i.e., starting with unix://). The following settings worked for me, although I later ran into another issue:

DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock
DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
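
Putting it together, roughly the working setup (a sketch, assuming the rootless Podman socket is managed by systemd):

# Enable the rootless Podman API socket for the current user
systemctl --user enable --now podman.socket
# DOCKER_HOST takes a unix:// URL, but DOCKER_SOCK must be a plain path
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export DOCKER_SOCK=$XDG_RUNTIME_DIR/podman/podman.sock
k3d cluster create test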
testdruid commented 1 year ago

I ran into the same issue on a RHEL 8 setup with the latest versions.

What did you do

The commands here: https://k3d.io/v5.4.9/usage/advanced/podman/?h=podman#using-podman

Screenshots or terminal output

INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
INFO[0000] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
INFO[0001] HostIP: using network gateway 10.89.0.1 address
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0006] All agents already running.
INFO[0006] Starting helpers...
INFO[0007] Starting Node 'k3d-k3s-default-serverlb'
ERRO[0018] Failed Cluster Start: Failed to add one or more helper nodes: Node k3d-k3s-default-serverlb failed to get ready: error waiting for log line `start worker processes` from node 'k3d-k3s-default-serverlb': stopped returning log lines: node k3d-k3s-default-serverlb is running=true in status=running
ERRO[0018] Failed to create cluster >>> Rolling Back
INFO[0018] Deleting cluster 'k3s-default'
INFO[0019] Deleting cluster network 'k3d-k3s-default'
INFO[0019] Deleting 2 attached volumes...
WARN[0019] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually
FATA[0019] Cluster creation FAILED, all changes have been rolled back!

Versions

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
$ k3d --version
k3d version v5.4.6
k3s version v1.24.4-k3s1 (default)
$ podman --version
podman version 4.2.0
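
A hedged debugging note: the k3d serverlb is an nginx-based proxy, and `start worker processes` is the nginx log line k3d waits for; checking the load balancer container's own logs (before the rollback removes it) may show why that line never appears:

podman logs k3d-k3s-default-serverlb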

jonahbrawley commented 1 year ago

Running into the same error as @testdruid. I've tried following the docs on k3d's site to set up for use with Podman, and have also tried a clean reinstall. The Podman service is running, and I've created the symlink following the instructions in the previously mentioned docs.

Versions:

$ podman --version
podman version 4.0.2

$ k3d --version
k3d version v5.5.1
k3s version v1.26.4-k3s1 (default)

$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 (Ootpa)

Logs:

$ sudo $(which k3d) cluster create mycluster

INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network 'k3d-mycluster' (123f6c30a3b584f5192b402e975a4d4e5a95716c4be505eb92b5023a782c38de) 
INFO[0000] Created image volume k3d-mycluster-images    
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-mycluster-tools'          
INFO[0001] Creating node 'k3d-mycluster-server-0'       
INFO[0001] Creating LoadBalancer 'k3d-mycluster-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] HostIP: using network gateway 10.89.1.1 address 
INFO[0001] Starting cluster 'mycluster'                 
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-mycluster-server-0'       
INFO[0004] All agents already running.                  
INFO[0004] Starting helpers...                          
INFO[0004] Starting Node 'k3d-mycluster-serverlb'       
ERRO[0011] Failed Cluster Start: Failed to add one or more helper nodes: Node k3d-mycluster-serverlb failed to get ready: error waiting for log line `start worker processes` from node 'k3d-mycluster-serverlb': stopped returning log lines: node k3d-mycluster-serverlb is running=true in status=running 
ERRO[0011] Failed to create cluster >>> Rolling Back    
INFO[0011] Deleting cluster 'mycluster'                 
INFO[0012] Deleting 1 attached volumes...               
FATA[0012] Cluster creation FAILED, all changes have been rolled back!
gedw99 commented 1 year ago

Ah, me too :) on macOS.

spectdecomp commented 8 months ago

(quoting @jonahbrawley's comment and logs above in full)

The exact same error message appeared for me. More than six months have passed, and the issue is still unresolved.