k3d-io / k3d

Little helper to run CNCF's k3s in Docker
https://k3d.io/
MIT License

[Docs] Need compat matrix: v5.2.0+ can't start with images <1.19 #897

Closed lahabana closed 2 years ago

lahabana commented 2 years ago

What did you do

Created a cluster from a pre-1.19 k3s image (full command and trace in the terminal output below):

K3D_LOG_NODE_WAIT_LOGS=server k3d cluster create foo -i rancher/k3s:v1.18.20-k3s1 --trace --timeout 120s

What did you expect to happen

The cluster should have started. Instead I get:

ERRO[0132] Failed Cluster Start: error during post-start cluster preparation: errors waiting for log line Cluster dns configmap from node 'k3d-foo-server-0': stopped returning log lines: context deadline exceeded
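
A quick way to confirm where this hangs (a sketch, assuming the node name from the trace below): check whether the server container ever prints the log line k3d is waiting for. On k3s images older than v1.19 it apparently never appears, so the post-start cluster preparation times out.

# after the failed create, grep the server container's logs for the awaited line
docker logs k3d-foo-server-0 2>&1 | grep -i "cluster dns configmap" || echo "log line never printed by this k3s image"

The same create command with a rancher/k3s tag at v1.19 or newer should complete normally.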

Screenshots or terminal output

➜  k3d git:(main) ✗ K3D_LOG_NODE_WAIT_LOGS=server /Users/cmolter/code/k3d/bin/k3d cluster create  foo -i rancher/k3s:v1.18.20-k3s1  --trace --timeout 120s
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:20.10.7 OSType:linux OS:Docker Desktop Arch:x86_64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs}
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
DEBU[0000] Configuration:
agents: 0
image: rancher/k3s:v1.18.20-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 2m0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: ""
TRAC[0000] Trying to read config apiVersion='k3d.io/v1alpha3', kind='simple'
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:rancher/k3s:v1.18.20-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:2m0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
TRAC[0000] VolumeFilterMap: map[]
TRAC[0000] PortFilterMap: map[]
TRAC[0000] K3sNodeLabelFilterMap: map[]
TRAC[0000] RuntimeLabelFilterMap: map[]
TRAC[0000] EnvFilterMap: map[]
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha3} Name: Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:57575} Image:rancher/k3s:v1.18.20-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:2m0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:}}
==========================
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-foo-server-0
settings:
  workerConnections: 1024
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:foo Network:{Name:k3d-foo ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc00017a600 0xc00017b500] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0004a2940 ServerLoadBalancer:0xc0001c76f0 ImageVolume:} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:2m0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== =====
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-foo ID:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 Created:2021-12-17 13:28:42.8225665 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.27.0.0/16 IPRange: Gateway:172.27.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[com.docker.network.bridge.enable_ip_masquerade:true] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-foo' (edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787)
INFO[0000] Created volume 'k3d-foo-images'
TRAC[0000] Using Registries: []
TRAC[0000]
===== Creating Cluster =====

Runtime:
{}

Cluster:
&{Name:foo Network:{Name:k3d-foo ID:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 External:false IPAM:{IPPrefix:172.27.0.0/16 IPsUsed:[172.27.0.1] Managed:false} Members:[]} Token: Nodes:[0xc00017a600 0xc00017b500] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0004a2940 ServerLoadBalancer:0xc0001c76f0 ImageVolume:k3d-foo-images}

ClusterCreatOpts:
&{DisableImageVolume:false WaitForServer:true Timeout:2m0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d k3d.cluster.imageVolume:k3d-foo-images k3d.cluster.network:k3d-foo k3d.cluster.network.external:true k3d.cluster.network.id:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 k3d.cluster.network.iprange:172.27.0.0/16] GlobalEnv:[] Registries:{Create:<nil> Use:[] Config:<nil>}}

============================

TRAC[0000] Docker Machine not specified via DOCKER_MACHINE_NAME env var
TRAC[0000] [Docker] Not using docker-machine
TRAC[0000] [Docker] GetHost: DOCKER_HOST empty/unset
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] [Docker] Local DfD: using 'host.docker.internal'
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
TRAC[0000] Creating node from spec
&{Name:k3d-foo-tools Role:noRole Image:docker.io/rancher/k3d-tools:5.2.2 Volumes:[k3d-foo-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] Env:[] Cmd:[] Args:[noop] Ports:map[] Restart:false Created: RuntimeLabels:map[app:k3d k3d.cluster:foo k3d.version:v5.2.2] K3sNodeLabels:map[] Networks:[k3d-foo] ExtraHosts:[host.k3d.internal:host-gateway] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0000] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-foo-tools Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[noop] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-tools:5.2.2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:foo k3d.role:noRole k3d.version:v5.2.2] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-foo-images:/k3d/images /var/run/docker.sock:/var/run/docker.sock] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name: MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[host.k3d.internal:host-gateway] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00048f8bf} NetworkingConfig:{EndpointsConfig:map[k3d-foo:0xc00001c000]}}
DEBU[0000] Created container k3d-foo-tools (ID: 85a6412a5af28b9c01655dbf15ed414da71e1e3f4cd222748d785089f57e12fe)
DEBU[0000] Node k3d-foo-tools Start Time: 2021-12-17 14:28:58.306388 +0100 CET m=+0.159337738
TRAC[0000] Starting node 'k3d-foo-tools'
INFO[0000] Starting Node 'k3d-foo-tools'
DEBU[0000] Truncated 2021-12-17 13:28:58.8764767 +0000 UTC to 2021-12-17 13:28:58 +0000 UTC
DEBU[0010] [Docker] wanted to use 'host.docker.internal' as docker host, but it's not resolvable locally: lookup host.docker.internal on [2a01:cb00:1075:7000:229a:7dff:fe47:4ba0]:53: no such host
INFO[0011] Creating node 'k3d-foo-server-0'
DEBU[0011] DOCKER_SOCK=/var/run/docker.sock
TRAC[0011] Creating node from spec
&{Name:k3d-foo-server-0 Role:server Image:rancher/k3s:v1.18.20-k3s1 Volumes:[k3d-foo-images:/k3d/images] Env:[K3S_TOKEN=yztIjqyUabVuNdGoTgxR] Cmd:[] Args:[] Ports:map[] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:foo k3d.cluster.imageVolume:k3d-foo-images k3d.cluster.network:k3d-foo k3d.cluster.network.external:true k3d.cluster.network.id:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 k3d.cluster.network.iprange:172.27.0.0/16 k3d.cluster.token:yztIjqyUabVuNdGoTgxR k3d.cluster.url:https://k3d-foo-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-foo] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:0xc0004a2940} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[]}
TRAC[0011] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-foo-server-0 Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_TOKEN=yztIjqyUabVuNdGoTgxR K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[server --tls-san 0.0.0.0] Healthcheck:<nil> ArgsEscaped:false Image:rancher/k3s:v1.18.20-k3s1 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:foo k3d.cluster.imageVolume:k3d-foo-images k3d.cluster.network:k3d-foo k3d.cluster.network.external:true k3d.cluster.network.id:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 k3d.cluster.network.iprange:172.27.0.0/16 k3d.cluster.token:yztIjqyUabVuNdGoTgxR k3d.cluster.url:https://k3d-foo-server-0:6443 k3d.role:server k3d.server.api.host:0.0.0.0 k3d.server.api.hostIP:0.0.0.0 k3d.server.api.port:57575 k3d.version:v5.2.2] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-foo-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc00021c41f} NetworkingConfig:{EndpointsConfig:map[k3d-foo:0xc0000f60c0]}}
DEBU[0011] Created container k3d-foo-server-0 (ID: 46f598a03ca2fb64af31c5e007377f27c5258c2408b3cf041ba53cf9ec739833)
DEBU[0011] Created node 'k3d-foo-server-0'
INFO[0011] Creating LoadBalancer 'k3d-foo-serverlb'
DEBU[0011] DOCKER_SOCK=/var/run/docker.sock
TRAC[0011] Creating node from spec
&{Name:k3d-foo-serverlb Role:loadbalancer Image:docker.io/rancher/k3d-proxy:5.2.2 Volumes:[k3d-foo-images:/k3d/images] Env:[] Cmd:[] Args:[] Ports:map[6443:[{HostIP:0.0.0.0 HostPort:57575}]] Restart:true Created: RuntimeLabels:map[app:k3d k3d.cluster:foo k3d.cluster.imageVolume:k3d-foo-images k3d.cluster.network:k3d-foo k3d.cluster.network.external:true k3d.cluster.network.id:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 k3d.cluster.network.iprange:172.27.0.0/16 k3d.cluster.token:yztIjqyUabVuNdGoTgxR k3d.cluster.url:https://k3d-foo-server-0:6443] K3sNodeLabels:map[] Networks:[k3d-foo] ExtraHosts:[] ServerOpts:{IsInit:false KubeAPI:<nil>} AgentOpts:{} GPURequest: Memory: State:{Running:false Status: Started:} IP:{IP:zero IP Static:false} HookActions:[{Stage:preStart Action:{Runtime:{} Content:[112 111 114 116 115 58 10 32 32 54 52 52 51 46 116 99 112 58 10 32 32 45 32 107 51 100 45 102 111 111 45 115 101 114 118 101 114 45 48 10 115 101 116 116 105 110 103 115 58 10 32 32 119 111 114 107 101 114 67 111 110 110 101 99 116 105 111 110 115 58 32 49 48 50 52 10] Dest:/etc/confd/values.yaml Mode:-rwxr--r-- Description:Write Loadbalancer Configuration}}]}
TRAC[0011] Creating docker container with translated config
&{ContainerConfig:{Hostname:k3d-foo-serverlb Domainname: User: AttachStdin:false AttachStdout:false AttachStderr:false ExposedPorts:map[6443:{}] Tty:false OpenStdin:false StdinOnce:false Env:[K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml] Cmd:[] Healthcheck:<nil> ArgsEscaped:false Image:docker.io/rancher/k3d-proxy:5.2.2 Volumes:map[] WorkingDir: Entrypoint:[] NetworkDisabled:false MacAddress: OnBuild:[] Labels:map[app:k3d k3d.cluster:foo k3d.cluster.imageVolume:k3d-foo-images k3d.cluster.network:k3d-foo k3d.cluster.network.external:true k3d.cluster.network.id:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 k3d.cluster.network.iprange:172.27.0.0/16 k3d.cluster.token:yztIjqyUabVuNdGoTgxR k3d.cluster.url:https://k3d-foo-server-0:6443 k3d.role:loadbalancer k3d.version:v5.2.2] StopSignal: StopTimeout:<nil> Shell:[]} HostConfig:{Binds:[k3d-foo-images:/k3d/images] ContainerIDFile: LogConfig:{Type: Config:map[]} NetworkMode: PortBindings:map[6443:[{HostIP:0.0.0.0 HostPort:57575}]] RestartPolicy:{Name:unless-stopped MaximumRetryCount:0} AutoRemove:false VolumeDriver: VolumesFrom:[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[] IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false SecurityOpt:[] StorageOpt:map[] Tmpfs:map[/run: /var/run:] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime: ConsoleSize:[0 0] Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[] BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0 CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[] KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:<nil> OomKillDisable:<nil> PidsLimit:<nil> Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0 IOMaximumBandwidth:0} Mounts:[] MaskedPaths:[] ReadonlyPaths:[] Init:0xc0001bce8a} NetworkingConfig:{EndpointsConfig:map[k3d-foo:0xc0000f6000]}}
DEBU[0011] Created container k3d-foo-serverlb (ID: 6f31343d9242c2f1359a7662251b17b482ebe36031ba27795803ab13a295e165)
DEBU[0011] Created loadbalancer 'k3d-foo-serverlb'
DEBU[0011] DOCKER_SOCK=/var/run/docker.sock
INFO[0011] Using the k3d-tools node to gather environment information
TRAC[0011] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-foo-tools
DEBU[0011] no netlabel present on container /k3d-foo-tools
DEBU[0011] failed to get IP for container /k3d-foo-tools as we couldn't find the cluster network
DEBU[0011] DOCKER_SOCK=/var/run/docker.sock
TRAC[0011] GOOS: darwin / Runtime OS: linux (Docker Desktop)
TRAC[0011] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-foo-tools
DEBU[0011] no netlabel present on container /k3d-foo-tools
DEBU[0011] failed to get IP for container /k3d-foo-tools as we couldn't find the cluster network
DEBU[0011] Executing command '[sh -c getent ahostsv4 'host.k3d.internal']' in node 'k3d-foo-tools'
TRAC[0011] Exec process '[sh -c getent ahostsv4 'host.k3d.internal']' still running in node 'k3d-foo-tools'.. sleeping for 1 second...
DEBU[0012] Exec process in node 'k3d-foo-tools' exited with '0'
TRAC[0012] Scanning Log Line '192.168.65.2    STREAM host.k3d.internal'
TRAC[0012] -> Match(es): '[192.168.65.2    STREAM host.k3d.internal 192.168.65.2]' -> Submatch(es): map[:192.168.65.2    STREAM host.k3d.internal ip:192.168.65.2]
DEBU[0012] Hostname 'host.k3d.internal' resolved to address '192.168.65.2' inside node k3d-foo-tools
INFO[0012] Starting cluster 'foo'
INFO[0012] Starting servers...
DEBU[0012] Deleting node k3d-foo-tools ...
DEBU[0012] DOCKER_SOCK=/var/run/docker.sock
DEBU[0012] No fix enabled.
DEBU[0012] Node k3d-foo-server-0 Start Time: 2021-12-17 14:29:10.638661 +0100 CET m=+12.491281222
TRAC[0012] Starting node 'k3d-foo-server-0'
INFO[0012] Starting Node 'k3d-foo-server-0'
TRAC[0012] [Docker] Deleted Container k3d-foo-tools
DEBU[0013] Truncated 2021-12-17 13:29:11.2323925 +0000 UTC to 2021-12-17 13:29:11 +0000 UTC
DEBU[0013] Waiting for node k3d-foo-server-0 to get ready (Log: 'k3s is up and running')
TRAC[0013] NodeWaitForLogMessage: Node 'k3d-foo-server-0' waiting for log message 'k3s is up and running' since '2021-12-17 13:29:11 +0000 UTC'
TRAC[0013] >>> Parsing log line: `]time="2021-12-17T13:29:11.411116700Z" level=info msg="Starting k3s v1.18.20+k3s1 (57b73dc9)"`
TRAC[0013] >>> Parsing log line: `�time="2021-12-17T13:29:11.416641300Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"`
TRAC[0013] >>> Parsing log line: `�time="2021-12-17T13:29:11.416872600Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."`
TRAC[0013] >>> Parsing log line: `btime="2021-12-17T13:29:11.419086400Z" level=info msg="Database tables and indexes are up to date"`
TRAC[0013] >>> Parsing log line: `Ztime="2021-12-17T13:29:11.420534700Z" level=info msg="Kine listening on unix://kine.sock"`
TRAC[0013] >>> Parsing log line: `Rtime="2021-12-17T13:29:11.533829900Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.27.0.3:172.27.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:7c2dad3ff2b4ca3389602af9c224e2823369555a7256e3e567c1bb6d60b1265d]"`
TRAC[0013] >>> Parsing log line: `>time="2021-12-17T13:29:11.540297700Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"`
TRAC[0013] >>> Parsing log line: `�Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.`
TRAC[0013] >>> Parsing log line: `_I1217 13:29:11.540702       9 server.go:645] external host was not specified, using 172.27.0.3`
TRAC[0013] >>> Parsing log line: `DI1217 13:29:11.541001       9 server.go:162] Version: v1.18.20+k3s1`
TRAC[0013] >>> Parsing log line: ``I1217 13:29:11.746950       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0013] >>> Parsing log line: `<I1217 13:29:11.746976       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0013] >>> Parsing log line: ``I1217 13:29:11.748027       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0013] >>> Parsing log line: `<I1217 13:29:11.748084       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0013] >>> Parsing log line: `EI1217 13:29:11.773466       9 master.go:270] Using reconciler: lease`
TRAC[0013] >>> Parsing log line: `bI1217 13:29:11.818131       9 rest.go:113] the default service ipfamily for this cluster is: IPv4`
TRAC[0013] >>> Parsing log line: `pW1217 13:29:12.087524       9 genericapiserver.go:415] Skipping API batch/v2alpha1 because it has no resources.`
TRAC[0013] >>> Parsing log line: `{W1217 13:29:12.115771       9 genericapiserver.go:415] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.`
TRAC[0013] >>> Parsing log line: `vW1217 13:29:12.127311       9 genericapiserver.go:415] Skipping API node.k8s.io/v1alpha1 because it has no resources.`
TRAC[0013] >>> Parsing log line: `�W1217 13:29:12.143397       9 genericapiserver.go:415] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.`
TRAC[0013] >>> Parsing log line: `|W1217 13:29:12.147785       9 genericapiserver.go:415] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.`
TRAC[0014] >>> Parsing log line: `yW1217 13:29:12.159452       9 genericapiserver.go:415] Skipping API storage.k8s.io/v1alpha1 because it has no resources.`
TRAC[0014] >>> Parsing log line: `nW1217 13:29:12.176617       9 genericapiserver.go:415] Skipping API apps/v1beta2 because it has no resources.`
TRAC[0014] >>> Parsing log line: `nW1217 13:29:12.176676       9 genericapiserver.go:415] Skipping API apps/v1beta1 because it has no resources.`
TRAC[0014] >>> Parsing log line: ``I1217 13:29:12.184621       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0014] >>> Parsing log line: `<I1217 13:29:12.184676       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.557386       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.557483       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt`
TRAC[0015] >>> Parsing log line: `XI1217 13:29:13.557723       9 secure_serving.go:178] Serving securely on 127.0.0.1:6444`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.557730       9 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key`
TRAC[0015] >>> Parsing log line: `]I1217 13:29:13.557813       9 tlsconfig.go:240] Starting DynamicServingCertificateController`
TRAC[0015] >>> Parsing log line: `aI1217 13:29:13.558474       9 available_controller.go:404] Starting AvailableConditionController`
TRAC[0015] >>> Parsing log line: `rI1217 13:29:13.558520       9 cache.go:32] Waiting for caches to sync for AvailableConditionController controller`
TRAC[0015] >>> Parsing log line: `_I1217 13:29:13.558560       9 autoregister_controller.go:141] Starting autoregister controller`
TRAC[0015] >>> Parsing log line: `bI1217 13:29:13.558604       9 cache.go:32] Waiting for caches to sync for autoregister controller`
TRAC[0015] >>> Parsing log line: `fI1217 13:29:13.558686       9 crdregistration_controller.go:111] Starting crd-autoregister controller`
TRAC[0015] >>> Parsing log line: `fI1217 13:29:13.558737       9 shared_informer.go:223] Waiting for caches to sync for crd-autoregister`
TRAC[0015] >>> Parsing log line: `WI1217 13:29:13.559097       9 controller.go:81] Starting OpenAPI AggregationController`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.560120       9 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller`
TRAC[0015] >>> Parsing log line: `}I1217 13:29:13.560162       9 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.560318       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt`
TRAC[0015] >>> Parsing log line: `�I1217 13:29:13.560634       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt`
TRAC[0015] >>> Parsing log line: `eI1217 13:29:13.561391       9 apiservice_controller.go:94] Starting APIServiceRegistrationController`
TRAC[0015] >>> Parsing log line: `vI1217 13:29:13.561582       9 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller`
TRAC[0015] >>> Parsing log line: `JI1217 13:29:13.562161       9 crd_finalizer.go:266] Starting CRDFinalizer`
TRAC[0015] >>> Parsing log line: `LI1217 13:29:13.566676       9 controller.go:86] Starting OpenAPI controller`
TRAC[0015] >>> Parsing log line: `gI1217 13:29:13.567349       9 customresource_discovery_controller.go:209] Starting DiscoveryController`
TRAC[0015] >>> Parsing log line: `[I1217 13:29:13.567791       9 naming_controller.go:291] Starting NamingConditionController`
TRAC[0015] >>> Parsing log line: `]I1217 13:29:13.567916       9 establishing_controller.go:76] Starting EstablishingController`
TRAC[0015] >>> Parsing log line: `uI1217 13:29:13.568402       9 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController`
TRAC[0015] >>> Parsing log line: `I1217 13:29:13.569035       9 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController`
TRAC[0015] >>> Parsing log line: `�E1217 13:29:13.601353       9 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time`
TRAC[0015] >>> Parsing log line: `�E1217 13:29:13.602646       9 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.27.0.3, ResourceVersion: 0, AdditionalErrorMsg: `
TRAC[0015] >>> Parsing log line: `iI1217 13:29:13.661290       9 cache.go:39] Caches are synced for AvailableConditionController controller`
TRAC[0015] >>> Parsing log line: `^I1217 13:29:13.661379       9 shared_informer.go:230] Caches are synced for crd-autoregister `
TRAC[0015] >>> Parsing log line: `YI1217 13:29:13.661445       9 cache.go:39] Caches are synced for autoregister controller`
TRAC[0015] >>> Parsing log line: `mI1217 13:29:13.662294       9 cache.go:39] Caches are synced for APIServiceRegistrationController controller`
TRAC[0015] >>> Parsing log line: `uI1217 13:29:13.670781       9 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller `
TRAC[0016] >>> Parsing log line: `�I1217 13:29:14.562411       9 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).`
TRAC[0016] >>> Parsing log line: `�I1217 13:29:14.563885       9 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).`
TRAC[0016] >>> Parsing log line: `zI1217 13:29:14.578190       9 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000`
TRAC[0016] >>> Parsing log line: `}I1217 13:29:14.587717       9 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000`
TRAC[0016] >>> Parsing log line: `�I1217 13:29:14.587796       9 storage_scheduling.go:143] all system priority classes are created successfully or already exist.`
TRAC[0016] >>> Parsing log line: `vI1217 13:29:15.054872       9 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io`
TRAC[0016] >>> Parsing log line: `}I1217 13:29:15.105845       9 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io`
TRAC[0017] >>> Parsing log line: `pW1217 13:29:15.256695       9 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.27.0.3]`
TRAC[0017] >>> Parsing log line: ``I1217 13:29:15.259330       9 controller.go:609] quota admission added evaluator for: endpoints`
TRAC[0017] >>> Parsing log line: `vI1217 13:29:15.266083       9 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io`
TRAC[0017] >>> Parsing log line: `jI1217 13:29:15.583350       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0017] >>> Parsing log line: `jI1217 13:29:15.583487       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0017] >>> Parsing log line: `�time="2021-12-17T13:29:15.584767800Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"`
TRAC[0017] >>> Parsing log line: `ctime="2021-12-17T13:29:15.586432600Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"`
TRAC[0017] >>> Parsing log line: `OI1217 13:29:15.601409       9 controllermanager.go:161] Version: v1.18.20+k3s1`
TRAC[0017] >>> Parsing log line: `ktime="2021-12-17T13:29:15.602603300Z" level=info msg="Waiting for cloudcontroller rbac role to be created"`
TRAC[0017] >>> Parsing log line: `bI1217 13:29:15.603821       9 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252`
TRAC[0017] >>> Parsing log line: `jI1217 13:29:15.608412       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0017] >>> Parsing log line: `jI1217 13:29:15.608459       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0017] >>> Parsing log line: `Ytime="2021-12-17T13:29:15.611254400Z" level=info msg="Creating CRD addons.k3s.cattle.io"`
TRAC[0017] >>> Parsing log line: `MW1217 13:29:15.613375       9 authorization.go:47] Authorization is disabled`
TRAC[0017] >>> Parsing log line: `OW1217 13:29:15.613460       9 authentication.go:40] Authentication is disabled`
TRAC[0017] >>> Parsing log line: `jI1217 13:29:15.613794       9 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251`
TRAC[0017] >>> Parsing log line: `^time="2021-12-17T13:29:15.624764800Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"`
TRAC[0017] >>> Parsing log line: `dtime="2021-12-17T13:29:15.634550900Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"`
TRAC[0017] >>> Parsing log line: `utime="2021-12-17T13:29:15.650543900Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"`
TRAC[0017] >>> Parsing log line: `KI1217 13:29:15.882206       9 plugins.go:100] No cloud provider specified.`
TRAC[0017] >>> Parsing log line: `\I1217 13:29:15.887541       9 shared_informer.go:223] Waiting for caches to sync for tokens`
TRAC[0017] >>> Parsing log line: `fI1217 13:29:15.902818       9 controller.go:609] quota admission added evaluator for: serviceaccounts`
TRAC[0017] >>> Parsing log line: `LI1217 13:29:15.910501       9 controllermanager.go:533] Started "daemonset"`
TRAC[0017] >>> Parsing log line: `XI1217 13:29:15.910825       9 daemon_controller.go:286] Starting daemon sets controller`
TRAC[0017] >>> Parsing log line: `aI1217 13:29:15.910904       9 shared_informer.go:223] Waiting for caches to sync for daemon sets`
TRAC[0017] >>> Parsing log line: `NI1217 13:29:15.926473       9 controllermanager.go:533] Started "statefulset"`
TRAC[0017] >>> Parsing log line: `TI1217 13:29:15.926824       9 stateful_set.go:146] Starting stateful set controller`
TRAC[0017] >>> Parsing log line: `bI1217 13:29:15.926875       9 shared_informer.go:223] Waiting for caches to sync for stateful set`
TRAC[0017] >>> Parsing log line: `OI1217 13:29:15.942260       9 controllermanager.go:533] Started "csrapproving"`
TRAC[0017] >>> Parsing log line: `�W1217 13:29:15.942305       9 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.`
TRAC[0017] >>> Parsing log line: `IW1217 13:29:15.942322       9 controllermanager.go:525] Skipping "route"`
TRAC[0017] >>> Parsing log line: `lI1217 13:29:15.942481       9 certificate_controller.go:119] Starting certificate controller "csrapproving"`
TRAC[0017] >>> Parsing log line: `nI1217 13:29:15.942531       9 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving`
TRAC[0017] >>> Parsing log line: `ZI1217 13:29:15.957528       9 controllermanager.go:533] Started "persistentvolume-binder"`
TRAC[0017] >>> Parsing log line: `_I1217 13:29:15.958368       9 pv_controller_base.go:295] Starting persistent volume controller`
TRAC[0017] >>> Parsing log line: `gI1217 13:29:15.958435       9 shared_informer.go:223] Waiting for caches to sync for persistent volume`
TRAC[0017] >>> Parsing log line: `QI1217 13:29:15.974469       9 controllermanager.go:533] Started "pvc-protection"`
TRAC[0017] >>> Parsing log line: `VW1217 13:29:15.974517       9 controllermanager.go:525] Skipping "ttl-after-finished"`
TRAC[0017] >>> Parsing log line: `cI1217 13:29:15.974517       9 pvc_protection_controller.go:101] Starting PVC protection controller`
TRAC[0017] >>> Parsing log line: `dI1217 13:29:15.974618       9 shared_informer.go:223] Waiting for caches to sync for PVC protection`
TRAC[0017] >>> Parsing log line: `HI1217 13:29:15.984384       9 controllermanager.go:533] Started "podgc"`
TRAC[0017] >>> Parsing log line: `JI1217 13:29:15.984792       9 gc_controller.go:89] Starting GC controller`
TRAC[0017] >>> Parsing log line: `XI1217 13:29:15.984813       9 shared_informer.go:223] Waiting for caches to sync for GC`
TRAC[0017] >>> Parsing log line: `TI1217 13:29:15.988062       9 shared_informer.go:230] Caches are synced for tokens `
TRAC[0017] >>> Parsing log line: `FI1217 13:29:16.007419       9 controllermanager.go:533] Started "job"`
TRAC[0017] >>> Parsing log line: `MI1217 13:29:16.007614       9 job_controller.go:145] Starting job controller`
TRAC[0017] >>> Parsing log line: `YI1217 13:29:16.007653       9 shared_informer.go:223] Waiting for caches to sync for job`
TRAC[0017] >>> Parsing log line: `MI1217 13:29:16.031622       9 controllermanager.go:533] Started "disruption"`
TRAC[0017] >>> Parsing log line: `PI1217 13:29:16.031648       9 disruption.go:331] Starting disruption controller`
TRAC[0017] >>> Parsing log line: `SW1217 13:29:16.031668       9 controllermanager.go:512] "tokencleaner" is disabled`
TRAC[0017] >>> Parsing log line: ``I1217 13:29:16.031668       9 shared_informer.go:223] Waiting for caches to sync for disruption`
TRAC[0017] >>> Parsing log line: `�W1217 13:29:16.043819       9 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.`
TRAC[0017] >>> Parsing log line: `OI1217 13:29:16.044603       9 controllermanager.go:533] Started "attachdetach"`
TRAC[0017] >>> Parsing log line: `aI1217 13:29:16.044858       9 attach_detach_controller.go:348] Starting attach detach controller`
TRAC[0017] >>> Parsing log line: `cI1217 13:29:16.044899       9 shared_informer.go:223] Waiting for caches to sync for attach detach`
TRAC[0017] >>> Parsing log line: `ZI1217 13:29:16.060673       9 controllermanager.go:533] Started "clusterrole-aggregation"`
TRAC[0017] >>> Parsing log line: `gI1217 13:29:16.060724       9 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator`
TRAC[0017] >>> Parsing log line: `kI1217 13:29:16.060750       9 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator`
TRAC[0017] >>> Parsing log line: `PI1217 13:29:16.098385       9 controllermanager.go:533] Started "pv-protection"`
TRAC[0017] >>> Parsing log line: ``I1217 13:29:16.098577       9 pv_protection_controller.go:83] Starting PV protection controller`
TRAC[0017] >>> Parsing log line: `cI1217 13:29:16.098615       9 shared_informer.go:223] Waiting for caches to sync for PV protection`
TRAC[0017] >>> Parsing log line: `ztime="2021-12-17T13:29:16.157325100Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"`
TRAC[0017] >>> Parsing log line: `{time="2021-12-17T13:29:16.157412500Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"`
TRAC[0018] >>> Parsing log line: `PI1217 13:29:16.251733       9 controllermanager.go:533] Started "endpointslice"`
TRAC[0018] >>> Parsing log line: `bI1217 13:29:16.252134       9 endpointslice_controller.go:213] Starting endpoint slice controller`
TRAC[0018] >>> Parsing log line: `dI1217 13:29:16.252215       9 shared_informer.go:223] Waiting for caches to sync for endpoint_slice`
TRAC[0018] >>> Parsing log line: `ktime="2021-12-17T13:29:16.607484700Z" level=info msg="Waiting for cloudcontroller rbac role to be created"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.662204000Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.671644300Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.672165400Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.672345800Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.672868300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"`
TRAC[0018] >>> Parsing log line: `xtime="2021-12-17T13:29:16.673290100Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"`
TRAC[0018] >>> Parsing log line: `|time="2021-12-17T13:29:16.673545300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.673872500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.674099200Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.674325500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.674746700Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.675129300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.675336000Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"`
TRAC[0018] >>> Parsing log line: `|time="2021-12-17T13:29:16.675494300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"`
TRAC[0018] >>> Parsing log line: `htime="2021-12-17T13:29:16.776464200Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"`
TRAC[0018] >>> Parsing log line: `ttime="2021-12-17T13:29:16.776787700Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"`
TRAC[0018] >>> Parsing log line: `�time="2021-12-17T13:29:16.776860000Z" level=info msg="To join node to cluster: k3s agent -s https://172.27.0.3:6443 -t ${NODE_TOKEN}"`
TRAC[0018] >>> Parsing log line: `xtime="2021-12-17T13:29:16.777342400Z" level=info msg="Waiting for master node  startup: resource name may not be empty"`
TRAC[0018] >>> Parsing log line: `r2021-12-17 13:29:16.799873 I | http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate`
TRAC[0018] >>> Parsing log line: `}I1217 13:29:16.814761       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps`
TRAC[0018] >>> Parsing log line: `~I1217 13:29:16.814843       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps`
TRAC[0018] >>> Parsing log line: `|I1217 13:29:16.814891       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch`
TRAC[0018] >>> Parsing log line: `wI1217 13:29:16.814956       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints`
TRAC[0018] >>> Parsing log line: `zI1217 13:29:16.815200       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815260       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815372       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815436       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815497       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io`
TRAC[0018] >>> Parsing log line: `I1217 13:29:16.815587       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815820       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.815891       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.816646       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.816730       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps`
TRAC[0018] >>> Parsing log line: `xI1217 13:29:16.817179       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.817648       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.818151       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.818218       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.818364       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io`
TRAC[0018] >>> Parsing log line: `yI1217 13:29:16.818535       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges`
TRAC[0018] >>> Parsing log line: `�W1217 13:29:16.818573       9 shared_informer.go:461] resyncPeriod 45948566473703 is smaller than resyncCheckPeriod 73170852474962 and the informer has already started. Changing it to 73170852474962`
TRAC[0018] >>> Parsing log line: `}I1217 13:29:16.819901       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts`
TRAC[0018] >>> Parsing log line: `~I1217 13:29:16.820074       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.820159       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io`
TRAC[0018] >>> Parsing log line: `�I1217 13:29:16.820221       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io`
TRAC[0018] >>> Parsing log line: `PI1217 13:29:16.820267       9 controllermanager.go:533] Started "resourcequota"`
TRAC[0018] >>> Parsing log line: `cI1217 13:29:16.821345       9 resource_quota_controller.go:272] Starting resource quota controller`
TRAC[0018] >>> Parsing log line: `dI1217 13:29:16.821390       9 shared_informer.go:223] Waiting for caches to sync for resource quota`
TRAC[0018] >>> Parsing log line: `RI1217 13:29:16.821419       9 resource_quota_monitor.go:303] QuotaMonitor running`
TRAC[0018] >>> Parsing log line: ``time="2021-12-17T13:29:16.840311700Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"`
TRAC[0018] >>> Parsing log line: `Htime="2021-12-17T13:29:16.840425200Z" level=info msg="Run: k3s kubectl"`
TRAC[0018] >>> Parsing log line: `Mtime="2021-12-17T13:29:16.840468900Z" level=info msg="k3s is up and running"`
TRAC[0018] Found target message `k3s is up and running` in log line `Mtime="2021-12-17T13:29:16.840468900Z" level=info msg="k3s is up and running"`
DEBU[0018] Finished waiting for log message 'k3s is up and running' from node 'k3d-foo-server-0'
INFO[0018] All agents already running.
INFO[0018] Starting helpers...
DEBU[0018] Node k3d-foo-serverlb Start Time: 2021-12-17 14:29:16.829372 +0100 CET m=+18.681827395
TRAC[0018] Node k3d-foo-serverlb: Executing preStartAction 'types.NodeHook'
TRAC[0018] Starting node 'k3d-foo-serverlb'
INFO[0018] Starting Node 'k3d-foo-serverlb'
DEBU[0019] Truncated 2021-12-17 13:29:17.7072915 +0000 UTC to 2021-12-17 13:29:17 +0000 UTC
DEBU[0019] Waiting for node k3d-foo-serverlb to get ready (Log: 'start worker processes')
TRAC[0019] NodeWaitForLogMessage: Node 'k3d-foo-serverlb' waiting for log message 'start worker processes' since '2021-12-17 13:29:17 +0000 UTC'
TRAC[0025] Found target message `start worker processes` in log line `;2021/12/17 13:29:23 [notice] 36#36: start worker processes`
DEBU[0025] Finished waiting for log message 'start worker processes' from node 'k3d-foo-serverlb'
INFO[0025] Injecting '192.168.65.2 host.k3d.internal' into /etc/hosts of all nodes...
DEBU[0025] Executing command '[sh -c echo '192.168.65.2 host.k3d.internal' >> /etc/hosts]' in node 'k3d-foo-server-0'
DEBU[0025] Executing command '[sh -c echo '192.168.65.2 host.k3d.internal' >> /etc/hosts]' in node 'k3d-foo-serverlb'
DEBU[0025] Found network {Name:k3d-foo ID:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 Created:2021-12-17 13:28:42.8225665 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:172.27.0.0/16 IPRange: Gateway:172.27.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[46f598a03ca2fb64af31c5e007377f27c5258c2408b3cf041ba53cf9ec739833:{Name:k3d-foo-server-0 EndpointID:041d1fac98294cc67f1b45feac3537c015f850e953c64a7ce530ca85ffaa257d MacAddress:02:42:ac:1b:00:03 IPv4Address:172.27.0.3/16 IPv6Address:} 6f31343d9242c2f1359a7662251b17b482ebe36031ba27795803ab13a295e165:{Name:k3d-foo-serverlb EndpointID:d750426d4532ca4603804bf30cc4e66385aeae5bf4e6f2a88731644b63543ca0 MacAddress:02:42:ac:1b:00:02 IPv4Address:172.27.0.2/16 IPv6Address:}] Options:map[com.docker.network.bridge.enable_ip_masquerade:true] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0025] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
TRAC[0025] NodeWaitForLogMessage: Node 'k3d-foo-server-0' waiting for log message 'Cluster dns configmap' since '2021-12-17 13:29:11 +0000 UTC'
TRAC[0025] >>> Parsing log line: `]time="2021-12-17T13:29:11.411116700Z" level=info msg="Starting k3s v1.18.20+k3s1 (57b73dc9)"`
TRAC[0025] >>> Parsing log line: `�time="2021-12-17T13:29:11.416641300Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"`
TRAC[0025] >>> Parsing log line: `�time="2021-12-17T13:29:11.416872600Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."`
TRAC[0025] >>> Parsing log line: `btime="2021-12-17T13:29:11.419086400Z" level=info msg="Database tables and indexes are up to date"`
TRAC[0025] >>> Parsing log line: `Ztime="2021-12-17T13:29:11.420534700Z" level=info msg="Kine listening on unix://kine.sock"`
TRAC[0025] >>> Parsing log line: `Rtime="2021-12-17T13:29:11.533829900Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.27.0.3:172.27.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:7c2dad3ff2b4ca3389602af9c224e2823369555a7256e3e567c1bb6d60b1265d]"`
TRAC[0025] >>> Parsing log line: `>time="2021-12-17T13:29:11.540297700Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"`
TRAC[0025] >>> Parsing log line: `�Flag --basic-auth-file has been deprecated, Basic authentication mode is deprecated and will be removed in a future release. It is not recommended for production environments.`
TRAC[0025] >>> Parsing log line: `_I1217 13:29:11.540702       9 server.go:645] external host was not specified, using 172.27.0.3`
TRAC[0025] >>> Parsing log line: `DI1217 13:29:11.541001       9 server.go:162] Version: v1.18.20+k3s1`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:11.746950       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0025] >>> Parsing log line: `<I1217 13:29:11.746976       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:11.748027       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0025] >>> Parsing log line: `<I1217 13:29:11.748084       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0025] >>> Parsing log line: `EI1217 13:29:11.773466       9 master.go:270] Using reconciler: lease`
TRAC[0025] >>> Parsing log line: `bI1217 13:29:11.818131       9 rest.go:113] the default service ipfamily for this cluster is: IPv4`
TRAC[0025] >>> Parsing log line: `pW1217 13:29:12.087524       9 genericapiserver.go:415] Skipping API batch/v2alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `{W1217 13:29:12.115771       9 genericapiserver.go:415] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `vW1217 13:29:12.127311       9 genericapiserver.go:415] Skipping API node.k8s.io/v1alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `W1217 13:29:12.143397       9 genericapiserver.go:415] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `|W1217 13:29:12.147785       9 genericapiserver.go:415] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `yW1217 13:29:12.159452       9 genericapiserver.go:415] Skipping API storage.k8s.io/v1alpha1 because it has no resources.`
TRAC[0025] >>> Parsing log line: `nW1217 13:29:12.176617       9 genericapiserver.go:415] Skipping API apps/v1beta2 because it has no resources.`
TRAC[0025] >>> Parsing log line: `nW1217 13:29:12.176676       9 genericapiserver.go:415] Skipping API apps/v1beta1 because it has no resources.`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:12.184621       9 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.`
TRAC[0025] >>> Parsing log line: `<I1217 13:29:12.184676       9 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.557386       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.557483       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt`
TRAC[0025] >>> Parsing log line: `XI1217 13:29:13.557723       9 secure_serving.go:178] Serving securely on 127.0.0.1:6444`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.557730       9 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key`
TRAC[0025] >>> Parsing log line: `]I1217 13:29:13.557813       9 tlsconfig.go:240] Starting DynamicServingCertificateController`
TRAC[0025] >>> Parsing log line: `aI1217 13:29:13.558474       9 available_controller.go:404] Starting AvailableConditionController`
TRAC[0025] >>> Parsing log line: `rI1217 13:29:13.558520       9 cache.go:32] Waiting for caches to sync for AvailableConditionController controller`
TRAC[0025] >>> Parsing log line: `_I1217 13:29:13.558560       9 autoregister_controller.go:141] Starting autoregister controller`
TRAC[0025] >>> Parsing log line: `bI1217 13:29:13.558604       9 cache.go:32] Waiting for caches to sync for autoregister controller`
TRAC[0025] >>> Parsing log line: `fI1217 13:29:13.558686       9 crdregistration_controller.go:111] Starting crd-autoregister controller`
TRAC[0025] >>> Parsing log line: `fI1217 13:29:13.558737       9 shared_informer.go:223] Waiting for caches to sync for crd-autoregister`
TRAC[0025] >>> Parsing log line: `WI1217 13:29:13.559097       9 controller.go:81] Starting OpenAPI AggregationController`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.560120       9 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller`
TRAC[0025] >>> Parsing log line: `}I1217 13:29:13.560162       9 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.560318       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.560634       9 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt`
TRAC[0025] >>> Parsing log line: `eI1217 13:29:13.561391       9 apiservice_controller.go:94] Starting APIServiceRegistrationController`
TRAC[0025] >>> Parsing log line: `vI1217 13:29:13.561582       9 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller`
TRAC[0025] >>> Parsing log line: `JI1217 13:29:13.562161       9 crd_finalizer.go:266] Starting CRDFinalizer`
TRAC[0025] >>> Parsing log line: `LI1217 13:29:13.566676       9 controller.go:86] Starting OpenAPI controller`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:13.567349       9 customresource_discovery_controller.go:209] Starting DiscoveryController`
TRAC[0025] >>> Parsing log line: `[I1217 13:29:13.567791       9 naming_controller.go:291] Starting NamingConditionController`
TRAC[0025] >>> Parsing log line: `]I1217 13:29:13.567916       9 establishing_controller.go:76] Starting EstablishingController`
TRAC[0025] >>> Parsing log line: `uI1217 13:29:13.568402       9 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController`
TRAC[0025] >>> Parsing log line: `I1217 13:29:13.569035       9 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController`
TRAC[0025] >>> Parsing log line: `E1217 13:29:13.601353       9 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time`
TRAC[0025] >>> Parsing log line: `E1217 13:29:13.602646       9 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.27.0.3, ResourceVersion: 0, AdditionalErrorMsg: `
TRAC[0025] >>> Parsing log line: `iI1217 13:29:13.661290       9 cache.go:39] Caches are synced for AvailableConditionController controller`
TRAC[0025] >>> Parsing log line: `^I1217 13:29:13.661379       9 shared_informer.go:230] Caches are synced for crd-autoregister `
TRAC[0025] >>> Parsing log line: `YI1217 13:29:13.661445       9 cache.go:39] Caches are synced for autoregister controller`
TRAC[0025] >>> Parsing log line: `mI1217 13:29:13.662294       9 cache.go:39] Caches are synced for APIServiceRegistrationController controller`
TRAC[0025] >>> Parsing log line: `uI1217 13:29:13.670781       9 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller `
TRAC[0025] >>> Parsing log line: `I1217 13:29:14.562411       9 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).`
TRAC[0025] >>> Parsing log line: `I1217 13:29:14.563885       9 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).`
TRAC[0025] >>> Parsing log line: `zI1217 13:29:14.578190       9 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000`
TRAC[0025] >>> Parsing log line: `}I1217 13:29:14.587717       9 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000`
TRAC[0025] >>> Parsing log line: `I1217 13:29:14.587796       9 storage_scheduling.go:143] all system priority classes are created successfully or already exist.`
TRAC[0025] >>> Parsing log line: `vI1217 13:29:15.054872       9 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io`
TRAC[0025] >>> Parsing log line: `}I1217 13:29:15.105845       9 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io`
TRAC[0025] >>> Parsing log line: `pW1217 13:29:15.256695       9 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.27.0.3]`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:15.259330       9 controller.go:609] quota admission added evaluator for: endpoints`
TRAC[0025] >>> Parsing log line: `vI1217 13:29:15.266083       9 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:15.583350       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:15.583487       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:15.584767800Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --secure-port=0"`
TRAC[0025] >>> Parsing log line: `ctime="2021-12-17T13:29:15.586432600Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"`
TRAC[0025] >>> Parsing log line: `OI1217 13:29:15.601409       9 controllermanager.go:161] Version: v1.18.20+k3s1`
TRAC[0025] >>> Parsing log line: `ktime="2021-12-17T13:29:15.602603300Z" level=info msg="Waiting for cloudcontroller rbac role to be created"`
TRAC[0025] >>> Parsing log line: `bI1217 13:29:15.603821       9 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:15.608412       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:15.608459       9 registry.go:150] Registering EvenPodsSpread predicate and priority function`
TRAC[0025] >>> Parsing log line: `Ytime="2021-12-17T13:29:15.611254400Z" level=info msg="Creating CRD addons.k3s.cattle.io"`
TRAC[0025] >>> Parsing log line: `MW1217 13:29:15.613375       9 authorization.go:47] Authorization is disabled`
TRAC[0025] >>> Parsing log line: `OW1217 13:29:15.613460       9 authentication.go:40] Authentication is disabled`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:15.613794       9 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251`
TRAC[0025] >>> Parsing log line: `^time="2021-12-17T13:29:15.624764800Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"`
TRAC[0025] >>> Parsing log line: `dtime="2021-12-17T13:29:15.634550900Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"`
TRAC[0025] >>> Parsing log line: `utime="2021-12-17T13:29:15.650543900Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"`
TRAC[0025] >>> Parsing log line: `KI1217 13:29:15.882206       9 plugins.go:100] No cloud provider specified.`
TRAC[0025] >>> Parsing log line: `\I1217 13:29:15.887541       9 shared_informer.go:223] Waiting for caches to sync for tokens`
TRAC[0025] >>> Parsing log line: `fI1217 13:29:15.902818       9 controller.go:609] quota admission added evaluator for: serviceaccounts`
TRAC[0025] >>> Parsing log line: `LI1217 13:29:15.910501       9 controllermanager.go:533] Started "daemonset"`
TRAC[0025] >>> Parsing log line: `XI1217 13:29:15.910825       9 daemon_controller.go:286] Starting daemon sets controller`
TRAC[0025] >>> Parsing log line: `aI1217 13:29:15.910904       9 shared_informer.go:223] Waiting for caches to sync for daemon sets`
TRAC[0025] >>> Parsing log line: `NI1217 13:29:15.926473       9 controllermanager.go:533] Started "statefulset"`
TRAC[0025] >>> Parsing log line: `TI1217 13:29:15.926824       9 stateful_set.go:146] Starting stateful set controller`
TRAC[0025] >>> Parsing log line: `bI1217 13:29:15.926875       9 shared_informer.go:223] Waiting for caches to sync for stateful set`
TRAC[0025] >>> Parsing log line: `OI1217 13:29:15.942260       9 controllermanager.go:533] Started "csrapproving"`
TRAC[0025] >>> Parsing log line: `W1217 13:29:15.942305       9 core.go:243] configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes.`
TRAC[0025] >>> Parsing log line: `IW1217 13:29:15.942322       9 controllermanager.go:525] Skipping "route"`
TRAC[0025] >>> Parsing log line: `lI1217 13:29:15.942481       9 certificate_controller.go:119] Starting certificate controller "csrapproving"`
TRAC[0025] >>> Parsing log line: `nI1217 13:29:15.942531       9 shared_informer.go:223] Waiting for caches to sync for certificate-csrapproving`
TRAC[0025] >>> Parsing log line: `ZI1217 13:29:15.957528       9 controllermanager.go:533] Started "persistentvolume-binder"`
TRAC[0025] >>> Parsing log line: `_I1217 13:29:15.958368       9 pv_controller_base.go:295] Starting persistent volume controller`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:15.958435       9 shared_informer.go:223] Waiting for caches to sync for persistent volume`
TRAC[0025] >>> Parsing log line: `QI1217 13:29:15.974469       9 controllermanager.go:533] Started "pvc-protection"`
TRAC[0025] >>> Parsing log line: `VW1217 13:29:15.974517       9 controllermanager.go:525] Skipping "ttl-after-finished"`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:15.974517       9 pvc_protection_controller.go:101] Starting PVC protection controller`
TRAC[0025] >>> Parsing log line: `dI1217 13:29:15.974618       9 shared_informer.go:223] Waiting for caches to sync for PVC protection`
TRAC[0025] >>> Parsing log line: `HI1217 13:29:15.984384       9 controllermanager.go:533] Started "podgc"`
TRAC[0025] >>> Parsing log line: `JI1217 13:29:15.984792       9 gc_controller.go:89] Starting GC controller`
TRAC[0025] >>> Parsing log line: `XI1217 13:29:15.984813       9 shared_informer.go:223] Waiting for caches to sync for GC`
TRAC[0025] >>> Parsing log line: `TI1217 13:29:15.988062       9 shared_informer.go:230] Caches are synced for tokens `
TRAC[0025] >>> Parsing log line: `FI1217 13:29:16.007419       9 controllermanager.go:533] Started "job"`
TRAC[0025] >>> Parsing log line: `MI1217 13:29:16.007614       9 job_controller.go:145] Starting job controller`
TRAC[0025] >>> Parsing log line: `YI1217 13:29:16.007653       9 shared_informer.go:223] Waiting for caches to sync for job`
TRAC[0025] >>> Parsing log line: `MI1217 13:29:16.031622       9 controllermanager.go:533] Started "disruption"`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:16.031648       9 disruption.go:331] Starting disruption controller`
TRAC[0025] >>> Parsing log line: `SW1217 13:29:16.031668       9 controllermanager.go:512] "tokencleaner" is disabled`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:16.031668       9 shared_informer.go:223] Waiting for caches to sync for disruption`
TRAC[0025] >>> Parsing log line: `W1217 13:29:16.043819       9 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.`
TRAC[0025] >>> Parsing log line: `OI1217 13:29:16.044603       9 controllermanager.go:533] Started "attachdetach"`
TRAC[0025] >>> Parsing log line: `aI1217 13:29:16.044858       9 attach_detach_controller.go:348] Starting attach detach controller`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:16.044899       9 shared_informer.go:223] Waiting for caches to sync for attach detach`
TRAC[0025] >>> Parsing log line: `ZI1217 13:29:16.060673       9 controllermanager.go:533] Started "clusterrole-aggregation"`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:16.060724       9 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator`
TRAC[0025] >>> Parsing log line: `kI1217 13:29:16.060750       9 shared_informer.go:223] Waiting for caches to sync for ClusterRoleAggregator`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:16.098385       9 controllermanager.go:533] Started "pv-protection"`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:16.098577       9 pv_protection_controller.go:83] Starting PV protection controller`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:16.098615       9 shared_informer.go:223] Waiting for caches to sync for PV protection`
TRAC[0025] >>> Parsing log line: `ztime="2021-12-17T13:29:16.157325100Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"`
TRAC[0025] >>> Parsing log line: `{time="2021-12-17T13:29:16.157412500Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:16.251733       9 controllermanager.go:533] Started "endpointslice"`
TRAC[0025] >>> Parsing log line: `bI1217 13:29:16.252134       9 endpointslice_controller.go:213] Starting endpoint slice controller`
TRAC[0025] >>> Parsing log line: `dI1217 13:29:16.252215       9 shared_informer.go:223] Waiting for caches to sync for endpoint_slice`
TRAC[0025] >>> Parsing log line: `ktime="2021-12-17T13:29:16.607484700Z" level=info msg="Waiting for cloudcontroller rbac role to be created"`
TRAC[0025] >>> Parsing log line: `�time="2021-12-17T13:29:16.662204000Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.671644300Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.81.0.tgz"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.672165400Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.672345800Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.672868300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"`
TRAC[0025] >>> Parsing log line: `xtime="2021-12-17T13:29:16.673290100Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"`
TRAC[0025] >>> Parsing log line: `|time="2021-12-17T13:29:16.673545300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.673872500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.674099200Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.674325500Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.674746700Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.675129300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.675336000Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"`
TRAC[0025] >>> Parsing log line: `|time="2021-12-17T13:29:16.675494300Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"`
TRAC[0025] >>> Parsing log line: `htime="2021-12-17T13:29:16.776464200Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"`
TRAC[0025] >>> Parsing log line: `ttime="2021-12-17T13:29:16.776787700Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.776860000Z" level=info msg="To join node to cluster: k3s agent -s https://172.27.0.3:6443 -t ${NODE_TOKEN}"`
TRAC[0025] >>> Parsing log line: `xtime="2021-12-17T13:29:16.777342400Z" level=info msg="Waiting for master node  startup: resource name may not be empty"`
TRAC[0025] >>> Parsing log line: `r2021-12-17 13:29:16.799873 I | http: TLS handshake error from 127.0.0.1:36042: remote error: tls: bad certificate`
TRAC[0025] >>> Parsing log line: `}I1217 13:29:16.814761       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps`
TRAC[0025] >>> Parsing log line: `~I1217 13:29:16.814843       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps`
TRAC[0025] >>> Parsing log line: `|I1217 13:29:16.814891       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch`
TRAC[0025] >>> Parsing log line: `wI1217 13:29:16.814956       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints`
TRAC[0025] >>> Parsing log line: `zI1217 13:29:16.815200       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for podtemplates`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815260       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815372       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmchartconfigs.helm.cattle.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815436       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for helmcharts.helm.cattle.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815497       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for addons.k3s.cattle.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815587       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815820       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.815891       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.816646       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.816730       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for controllerrevisions.apps`
TRAC[0025] >>> Parsing log line: `xI1217 13:29:16.817179       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.817648       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.818151       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.818218       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.818364       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io`
TRAC[0025] >>> Parsing log line: `yI1217 13:29:16.818535       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges`
TRAC[0025] >>> Parsing log line: `W1217 13:29:16.818573       9 shared_informer.go:461] resyncPeriod 45948566473703 is smaller than resyncCheckPeriod 73170852474962 and the informer has already started. Changing it to 73170852474962`
TRAC[0025] >>> Parsing log line: `}I1217 13:29:16.819901       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts`
TRAC[0025] >>> Parsing log line: `~I1217 13:29:16.820074       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.820159       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io`
TRAC[0025] >>> Parsing log line: `I1217 13:29:16.820221       9 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:16.820267       9 controllermanager.go:533] Started "resourcequota"`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:16.821345       9 resource_quota_controller.go:272] Starting resource quota controller`
TRAC[0025] >>> Parsing log line: `dI1217 13:29:16.821390       9 shared_informer.go:223] Waiting for caches to sync for resource quota`
TRAC[0025] >>> Parsing log line: `RI1217 13:29:16.821419       9 resource_quota_monitor.go:303] QuotaMonitor running`
TRAC[0025] >>> Parsing log line: ``time="2021-12-17T13:29:16.840311700Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"`
TRAC[0025] >>> Parsing log line: `Htime="2021-12-17T13:29:16.840425200Z" level=info msg="Run: k3s kubectl"`
TRAC[0025] >>> Parsing log line: `Mtime="2021-12-17T13:29:16.840468900Z" level=info msg="k3s is up and running"`
TRAC[0025] >>> Parsing log line: `r2021-12-17 13:29:16.842882 I | http: TLS handshake error from 127.0.0.1:36050: remote error: tls: bad certificate`
TRAC[0025] >>> Parsing log line: `r2021-12-17 13:29:16.858646 I | http: TLS handshake error from 127.0.0.1:36056: remote error: tls: bad certificate`
TRAC[0025] >>> Parsing log line: `kI1217 13:29:16.872747       9 controller.go:609] quota admission added evaluator for: addons.k3s.cattle.io`
TRAC[0025] >>> Parsing log line: `stime="2021-12-17T13:29:16.880755900Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"`
TRAC[0025] >>> Parsing log line: `mtime="2021-12-17T13:29:16.880973600Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"`
TRAC[0025] >>> Parsing log line: `^time="2021-12-17T13:29:16.881014500Z" level=info msg="Starting batch/v1, Kind=Job controller"`
TRAC[0025] >>> Parsing log line: `Ztime="2021-12-17T13:29:16.881045600Z" level=info msg="Starting /v1, Kind=Node controller"`
TRAC[0025] >>> Parsing log line: `]time="2021-12-17T13:29:16.881265800Z" level=info msg="Starting /v1, Kind=Service controller"`
TRAC[0025] >>> Parsing log line: `Ytime="2021-12-17T13:29:16.881460900Z" level=info msg="Starting /v1, Kind=Pod controller"`
TRAC[0025] >>> Parsing log line: `_time="2021-12-17T13:29:16.881516000Z" level=info msg="Starting /v1, Kind=Endpoints controller"`
TRAC[0025] >>> Parsing log line: `Ytime="2021-12-17T13:29:16.894564500Z" level=info msg="module overlay was already loaded"`
TRAC[0025] >>> Parsing log line: `^time="2021-12-17T13:29:16.894623100Z" level=info msg="module nf_conntrack was already loaded"`
TRAC[0025] >>> Parsing log line: `rtime="2021-12-17T13:29:16.907856100Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"`
TRAC[0025] >>> Parsing log line: `qtime="2021-12-17T13:29:16.915174200Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"`
TRAC[0025] >>> Parsing log line: `~time="2021-12-17T13:29:16.917368700Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"`
TRAC[0025] >>> Parsing log line: `ftime="2021-12-17T13:29:16.917612400Z" level=info msg="Set sysctl 'net/ipv6/conf/all/forwarding' to 1"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.917802400Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.919977800Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.921261500Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"`
TRAC[0025] >>> Parsing log line: `LI1217 13:29:16.923214       9 controllermanager.go:533] Started "namespace"`
TRAC[0025] >>> Parsing log line: `YI1217 13:29:16.923547       9 namespace_controller.go:200] Starting namespace controller`
TRAC[0025] >>> Parsing log line: `_I1217 13:29:16.923655       9 shared_informer.go:223] Waiting for caches to sync for namespace`
TRAC[0025] >>> Parsing log line: ``
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:16.937446500Z" level=info msg="Waiting for containerd startup: rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory\""`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:17.017863       9 controller.go:609] quota admission added evaluator for: deployments.apps`
TRAC[0025] >>> Parsing log line: `[I1217 13:29:17.051254       9 controllermanager.go:533] Started "horizontalpodautoscaling"`
TRAC[0025] >>> Parsing log line: `II1217 13:29:17.051463       9 horizontal.go:169] Starting HPA controller`
TRAC[0025] >>> Parsing log line: `YI1217 13:29:17.051497       9 shared_informer.go:223] Waiting for caches to sync for HPA`
TRAC[0025] >>> Parsing log line: `E1217 13:29:17.377226       9 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server could not find the requested resource`
TRAC[0025] >>> Parsing log line: `pI1217 13:29:17.402355       9 controller.go:609] quota admission added evaluator for: helmcharts.helm.cattle.io`
TRAC[0025] >>> Parsing log line: `aI1217 13:29:17.436447       9 controller.go:609] quota admission added evaluator for: jobs.batch`
TRAC[0025] >>> Parsing log line: `jtime="2021-12-17T13:29:17.612595000Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --allow-untagged-cloud=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m --secure-port=0"`
TRAC[0025] >>> Parsing log line: `Flag --allow-untagged-cloud has been deprecated, This flag is deprecated and will be removed in a future release. A cluster-id will be required on cloud instances.`
TRAC[0025] >>> Parsing log line: `OI1217 13:29:17.616344       9 controllermanager.go:120] Version: v1.18.20+k3s1`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.616414       9 controllermanager.go:132] detected a cluster without a ClusterID.  A ClusterID will be required in the future.  Please tag your cluster to avoid any future issues`
TRAC[0025] >>> Parsing log line: `E1217 13:29:17.618513       9 core.go:90] Failed to start service controller: the cloud provider does not support external load balancers`
TRAC[0025] >>> Parsing log line: `KW1217 13:29:17.618555       9 controllermanager.go:244] Skipping "service"`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.618569       9 core.go:108] configure-cloud-routes is set, but cloud provider does not support routes. Will not configure cloud provider routes.`
TRAC[0025] >>> Parsing log line: `IW1217 13:29:17.618587       9 controllermanager.go:244] Skipping "route"`
TRAC[0025] >>> Parsing log line: `TI1217 13:29:17.619772       9 node_controller.go:110] Sending events to api server.`
TRAC[0025] >>> Parsing log line: `MI1217 13:29:17.619961       9 controllermanager.go:247] Started "cloud-node"`
TRAC[0025] >>> Parsing log line: `\I1217 13:29:17.621475       9 node_lifecycle_controller.go:78] Sending events to api server`
TRAC[0025] >>> Parsing log line: `WI1217 13:29:17.621531       9 controllermanager.go:247] Started "cloud-node-lifecycle"`
TRAC[0025] >>> Parsing log line: `\time="2021-12-17T13:29:17.668396700Z" level=info msg="Starting /v1, Kind=Secret controller"`
TRAC[0025] >>> Parsing log line: ``time="2021-12-17T13:29:17.673877900Z" level=info msg="Active TLS secret k3s-serving (ver=273) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.27.0.3:172.27.0.3 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/hash:7c2dad3ff2b4ca3389602af9c224e2823369555a7256e3e567c1bb6d60b1265d]"`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.709110       9 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server could not find the requested resource]`
TRAC[0025] >>> Parsing log line: `E1217 13:29:17.747130       9 memcache.go:206] couldn't get resource list for metrics.k8s.io/v1beta1: the server could not find the requested resource`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:17.781673000Z" level=info msg="Waiting for master node k3d-foo-server-0 startup: nodes \"k3d-foo-server-0\" not found"`
TRAC[0025] >>> Parsing log line: `I1217 13:29:17.956243       9 request.go:621] Throttling request took 1.0424218s, request: GET:https://127.0.0.1:6444/apis/rbac.authorization.k8s.io/v1?timeout=32s`
TRAC[0025] >>> Parsing log line: `vtime="2021-12-17T13:29:17.959272100Z" level=info msg="Connecting to proxy" url="wss://172.27.0.3:6443/v1-k3s/connect"`
TRAC[0025] >>> Parsing log line: `ntime="2021-12-17T13:29:17.962886100Z" level=info msg="Handling backend connection request [k3d-foo-server-0]"`
TRAC[0025] >>> Parsing log line: `ptime="2021-12-17T13:29:17.963891900Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"`
TRAC[0025] >>> Parsing log line: `wtime="2021-12-17T13:29:17.965019700Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=/run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-foo-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd --node-labels= --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --runtime-cgroups=/systemd --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"`
TRAC[0025] >>> Parsing log line: `Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.`
TRAC[0025] >>> Parsing log line: `gtime="2021-12-17T13:29:17.965188600Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-foo-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.965878       9 server.go:225] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.`
TRAC[0025] >>> Parsing log line: `DI1217 13:29:17.966689       9 server.go:413] Version: v1.18.20+k3s1`
TRAC[0025] >>> Parsing log line: `time="2021-12-17T13:29:17.975432100Z" level=info msg="waiting for node k3d-foo-server-0: nodes \"k3d-foo-server-0\" not found"`
TRAC[0025] >>> Parsing log line: `I1217 13:29:17.987214       9 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.993827       9 info.go:51] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"`
TRAC[0025] >>> Parsing log line: `~I1217 13:29:17.994459       9 server.go:645] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /`
TRAC[0025] >>> Parsing log line: `I1217 13:29:17.994895       9 container_manager_linux.go:277] container manager verified user specified cgroup-root exists: []`
TRAC[0025] >>> Parsing log line: `I1217 13:29:17.995022       9 container_manager_linux.go:282] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd SystemCgroupsName: KubeletCgroupsName:/systemd ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}`
TRAC[0025] >>> Parsing log line: `tI1217 13:29:17.995101       9 topology_manager.go:126] [topologymanager] Creating topology manager with none policy`
TRAC[0025] >>> Parsing log line: `I1217 13:29:17.995132       9 container_manager_linux.go:312] [topologymanager] Initializing Topology Manager with none policy`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:17.995170       9 container_manager_linux.go:317] Creating device plugin manager: true`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.995302       9 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".`
TRAC[0025] >>> Parsing log line: `W1217 13:29:17.995495       9 util_unix.go:103] Using "/run/k3s/containerd/containerd.sock" as endpoint is deprecated, please consider using full url format "unix:///run/k3s/containerd/containerd.sock".`
TRAC[0025] >>> Parsing log line: `VI1217 13:29:17.995658       9 kubelet.go:483] Attempting to sync node with API server`
TRAC[0025] >>> Parsing log line: `JI1217 13:29:17.995706       9 kubelet.go:313] Adding apiserver pod source`
TRAC[0025] >>> Parsing log line: `dI1217 13:29:17.995734       9 apiserver.go:43] Waiting for node sync before watching apiserver pods`
TRAC[0025] >>> Parsing log line: `0W1217 13:29:18.003900       9 proxier.go:625] Failed to read file /lib/modules/5.10.25-linuxkit/modules.builtin with error open /lib/modules/5.10.25-linuxkit/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.004700       9 proxier.go:635] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.005488       9 proxier.go:635] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.006309       9 proxier.go:635] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.008420       9 proxier.go:635] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.009287       9 proxier.go:635] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules`
TRAC[0025] >>> Parsing log line: `lE1217 13:29:18.018637       9 node.go:125] Failed to retrieve node info: nodes "k3d-foo-server-0" not found`
TRAC[0025] >>> Parsing log line: `I1217 13:29:18.027405       9 kuberuntime_manager.go:217] Container runtime containerd initialized, version: v1.3.10-k3s2, apiVersion: v1alpha2`
TRAC[0025] >>> Parsing log line: `>I1217 13:29:18.027926       9 server.go:1124] Started kubelet`
TRAC[0025] >>> Parsing log line: `QI1217 13:29:18.028347       9 server.go:145] Starting to listen on 0.0.0.0:10250`
TRAC[0025] >>> Parsing log line: `WI1217 13:29:18.029563       9 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer`
TRAC[0025] >>> Parsing log line: `UI1217 13:29:18.030091       9 volume_manager.go:265] Starting Kubelet Volume Manager`
TRAC[0025] >>> Parsing log line: `VI1217 13:29:18.030818       9 server.go:393] Adding debug handlers to kubelet server.`
TRAC[0025] >>> Parsing log line: `mI1217 13:29:18.033687       9 desired_state_of_world_populator.go:139] Desired state populator starts to run`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.035407       9 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.035476       9 kubelet.go:1321] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.047877       9 controller.go:228] failed to get node "k3d-foo-server-0" when trying to set owner ref to the node lease: nodes "k3d-foo-server-0" not found`
TRAC[0025] >>> Parsing log line: `|I1217 13:29:18.049384       9 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach`
TRAC[0025] >>> Parsing log line: `YI1217 13:29:18.052863       9 cpu_manager.go:184] [cpumanager] starting with none policy`
TRAC[0025] >>> Parsing log line: `UI1217 13:29:18.052988       9 cpu_manager.go:185] [cpumanager] reconciling every 10s`
TRAC[0025] >>> Parsing log line: `cI1217 13:29:18.053017       9 state_mem.go:36] [cpumanager] initializing new in-memory state store`
TRAC[0025] >>> Parsing log line: `QI1217 13:29:18.054982       9 policy_none.go:43] [cpumanager] none policy: Start`
TRAC[0025] >>> Parsing log line: `W1217 13:29:18.075751       9 manager.go:597] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found`
TRAC[0025] >>> Parsing log line: `UI1217 13:29:18.076050       9 plugin_manager.go:114] Starting Kubelet Plugin Manager`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.076499       9 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "k3d-foo-server-0" not found`
TRAC[0025] >>> Parsing log line: `]I1217 13:29:18.123780       9 garbagecollector.go:133] Starting garbage collector controller`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:18.123826       9 shared_informer.go:223] Waiting for caches to sync for garbage collector`
TRAC[0025] >>> Parsing log line: `II1217 13:29:18.123868       9 graph_builder.go:282] GraphBuilder running`
TRAC[0025] >>> Parsing log line: `SI1217 13:29:18.123815       9 controllermanager.go:533] Started "garbagecollector"`
TRAC[0025] >>> Parsing log line: `QE1217 13:29:18.135349       9 kubelet.go:2292] node "k3d-foo-server-0" not found`
TRAC[0025] >>> Parsing log line: `|I1217 13:29:18.135368       9 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach`
TRAC[0025] >>> Parsing log line: `fI1217 13:29:18.137250       9 kubelet_node_status.go:70] Attempting to register node k3d-foo-server-0`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.140912       9 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail`
TRAC[0025] >>> Parsing log line: `KW1217 13:29:18.140957       9 controllermanager.go:525] Skipping "service"`
TRAC[0025] >>> Parsing log line: `ZW1217 13:29:18.140980       9 controllermanager.go:525] Skipping "root-ca-cert-publisher"`
TRAC[0025] >>> Parsing log line: `KI1217 13:29:18.150740       9 controllermanager.go:533] Started "endpoint"`
TRAC[0025] >>> Parsing log line: `XI1217 13:29:18.151140       9 endpoints_controller.go:181] Starting endpoint controller`
TRAC[0025] >>> Parsing log line: `^I1217 13:29:18.151180       9 shared_informer.go:223] Waiting for caches to sync for endpoint`
TRAC[0025] >>> Parsing log line: `MI1217 13:29:18.155716       9 controllermanager.go:533] Started "csrsigning"`
TRAC[0025] >>> Parsing log line: `VW1217 13:29:18.155761       9 controllermanager.go:512] "bootstrapsigner" is disabled`
TRAC[0025] >>> Parsing log line: `jI1217 13:29:18.155908       9 certificate_controller.go:119] Starting certificate controller "csrsigning"`
TRAC[0025] >>> Parsing log line: `lI1217 13:29:18.156002       9 shared_informer.go:223] Waiting for caches to sync for certificate-csrsigning`
TRAC[0025] >>> Parsing log line: `I1217 13:29:18.156091       9 dynamic_serving_content.go:130] Starting csr-controller::/var/lib/rancher/k3s/server/tls/server-ca.crt::/var/lib/rancher/k3s/server/tls/server-ca.key`
TRAC[0025] >>> Parsing log line: `XI1217 13:29:18.165373       9 node_ipam_controller.go:94] Sending events to api server.`
TRAC[0025] >>> Parsing log line: `gI1217 13:29:18.202267       9 kubelet_node_status.go:73] Successfully registered node k3d-foo-server-0`
TRAC[0025] >>> Parsing log line: `mI1217 13:29:18.203284       9 node_controller.go:325] Initializing node k3d-foo-server-0 with cloud provider`
TRAC[0025] >>> Parsing log line: `utime="2021-12-17T13:29:18.206573300Z" level=info msg="couldn't find node internal ip label on node k3d-foo-server-0"`
TRAC[0025] >>> Parsing log line: `rtime="2021-12-17T13:29:18.206856200Z" level=info msg="couldn't find node hostname label on node k3d-foo-server-0"`
TRAC[0025] >>> Parsing log line: `vtime="2021-12-17T13:29:18.210432400Z" level=info msg="Updated coredns node hosts entry [172.27.0.3 k3d-foo-server-0]"`
TRAC[0025] >>> Parsing log line: `utime="2021-12-17T13:29:18.215388600Z" level=info msg="couldn't find node internal ip label on node k3d-foo-server-0"`
TRAC[0025] >>> Parsing log line: `rtime="2021-12-17T13:29:18.215445700Z" level=info msg="couldn't find node hostname label on node k3d-foo-server-0"`
TRAC[0025] >>> Parsing log line: `yI1217 13:29:18.215473       9 node_controller.go:397] Successfully initialized node k3d-foo-server-0 with cloud provider`
TRAC[0025] >>> Parsing log line: ``I1217 13:29:18.361710       9 status_manager.go:158] Starting to sync pod status with apiserver`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:18.362127       9 kubelet.go:1846] Starting kubelet main sync loop.`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.362825       9 kubelet.go:1870] skipping pod synchronization - PLEG is not healthy: pleg has yet to be successful`
TRAC[0025] >>> Parsing log line: `|time="2021-12-17T13:29:18.792845200Z" level=info msg="master role label has been set succesfully on node: k3d-foo-server-0"`
TRAC[0025] >>> Parsing log line: `XW1217 13:29:18.843222       9 handler_proxy.go:102] no RequestInfo found in the context`
TRAC[0025] >>> Parsing log line: `E1217 13:29:18.843975       9 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable`
TRAC[0025] >>> Parsing log line: `Y, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]`
TRAC[0025] >>> Parsing log line: `I1217 13:29:18.844506       9 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.`
TRAC[0025] >>> Parsing log line: `BI1217 13:29:18.996932       9 apiserver.go:53] Watching apiserver`
TRAC[0025] >>> Parsing log line: `VI1217 13:29:19.027106       9 node.go:136] Successfully retrieved node IP: 172.27.0.3`
TRAC[0025] >>> Parsing log line: `LI1217 13:29:19.027217       9 server_others.go:187] Using iptables Proxier.`
TRAC[0025] >>> Parsing log line: `DI1217 13:29:19.028762       9 server.go:583] Version: v1.18.20+k3s1`
TRAC[0025] >>> Parsing log line: `PI1217 13:29:19.030329       9 config.go:315] Starting service config controller`
TRAC[0025] >>> Parsing log line: `dI1217 13:29:19.030399       9 shared_informer.go:223] Waiting for caches to sync for service config`
TRAC[0025] >>> Parsing log line: `RI1217 13:29:19.030483       9 config.go:133] Starting endpoints config controller`
TRAC[0025] >>> Parsing log line: `fI1217 13:29:19.030527       9 shared_informer.go:223] Waiting for caches to sync for endpoints config`
TRAC[0025] >>> Parsing log line: `\I1217 13:29:19.131506       9 shared_informer.go:230] Caches are synced for service config `
TRAC[0025] >>> Parsing log line: `^I1217 13:29:19.133243       9 shared_informer.go:230] Caches are synced for endpoints config `
TRAC[0025] >>> Parsing log line: `QI1217 13:29:19.245319       9 reconciler.go:157] Reconciler: start to sync state`
TRAC[0025] >>> Parsing log line: `otime="2021-12-17T13:29:19.980287300Z" level=info msg="waiting for node k3d-foo-server-0 CIDR not assigned yet"`
TRAC[0025] >>> Parsing log line: `W1217 13:29:21.510257       9 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]`
TRAC[0025] >>> Parsing log line: `otime="2021-12-17T13:29:21.984763300Z" level=info msg="waiting for node k3d-foo-server-0 CIDR not assigned yet"`
TRAC[0025] Exec process '[sh -c echo '192.168.65.2 host.k3d.internal' >> /etc/hosts]' still running in node 'k3d-foo-server-0'.. sleeping for 1 second...
TRAC[0025] Exec process '[sh -c echo '192.168.65.2 host.k3d.internal' >> /etc/hosts]' still running in node 'k3d-foo-serverlb'.. sleeping for 1 second...
TRAC[0025] >>> Parsing log line: `otime="2021-12-17T13:29:23.988779500Z" level=info msg="waiting for node k3d-foo-server-0 CIDR not assigned yet"`
DEBU[0026] Exec process in node 'k3d-foo-server-0' exited with '0'
DEBU[0026] Exec process in node 'k3d-foo-serverlb' exited with '0'
DEBU[0026] Successfully added host record "192.168.65.2 host.k3d.internal" to /etc/hosts in all nodes
TRAC[0027] >>> Parsing log line: `otime="2021-12-17T13:29:25.999498500Z" level=info msg="waiting for node k3d-foo-server-0 CIDR not assigned yet"`
TRAC[0029] >>> Parsing log line: `otime="2021-12-17T13:29:28.004719900Z" level=info msg="waiting for node k3d-foo-server-0 CIDR not assigned yet"`
TRAC[0029] >>> Parsing log line: `qI1217 13:29:28.062506       9 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io`
TRAC[0030] >>> Parsing log line: `SI1217 13:29:28.191735       9 range_allocator.go:82] Sending events to api server.`
TRAC[0030] >>> Parsing log line: `zI1217 13:29:28.191895       9 range_allocator.go:110] No Service CIDR provided. Skipping filtering out service addresses.`
TRAC[0030] >>> Parsing log line: `I1217 13:29:28.192145       9 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.`
TRAC[0030] >>> Parsing log line: `KI1217 13:29:28.192295       9 controllermanager.go:533] Started "nodeipam"`
TRAC[0030] >>> Parsing log line: `TI1217 13:29:28.192961       9 node_ipam_controller.go:162] Starting ipam controller`
TRAC[0030] >>> Parsing log line: `ZI1217 13:29:28.193570       9 shared_informer.go:223] Waiting for caches to sync for node`
TRAC[0030] >>> Parsing log line: `^I1217 13:29:28.199092       9 node_lifecycle_controller.go:384] Sending events to api server.`
TRAC[0030] >>> Parsing log line: `RI1217 13:29:28.199368       9 taint_manager.go:163] Sending events to api server.`
TRAC[0030] >>> Parsing log line: `bI1217 13:29:28.199478       9 node_lifecycle_controller.go:512] Controller will reconcile labels.`
TRAC[0030] >>> Parsing log line: `PI1217 13:29:28.199590       9 controllermanager.go:533] Started "nodelifecycle"`
TRAC[0030] >>> Parsing log line: `YI1217 13:29:28.199925       9 node_lifecycle_controller.go:546] Starting node controller`
TRAC[0030] >>> Parsing log line: `[I1217 13:29:28.200057       9 shared_informer.go:223] Waiting for caches to sync for taint`
TRAC[0030] >>> Parsing log line: `MI1217 13:29:28.212148       9 controllermanager.go:533] Started "deployment"`
TRAC[0030] >>> Parsing log line: `[I1217 13:29:28.212335       9 deployment_controller.go:153] Starting deployment controller`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.212377       9 shared_informer.go:223] Waiting for caches to sync for deployment`
TRAC[0030] >>> Parsing log line: `MI1217 13:29:28.224616       9 controllermanager.go:533] Started "replicaset"`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.224809       9 replica_set.go:182] Starting replicaset controller`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.224844       9 shared_informer.go:223] Waiting for caches to sync for ReplicaSet`
TRAC[0030] >>> Parsing log line: `MI1217 13:29:28.228175       9 controllermanager.go:533] Started "csrcleaner"`
TRAC[0030] >>> Parsing log line: `MI1217 13:29:28.228624       9 cleaner.go:82] Starting CSR cleaner controller`
TRAC[0030] >>> Parsing log line: `FI1217 13:29:28.241931       9 controllermanager.go:533] Started "ttl"`
TRAC[0030] >>> Parsing log line: `MI1217 13:29:28.242532       9 ttl_controller.go:118] Starting TTL controller`
TRAC[0030] >>> Parsing log line: `YI1217 13:29:28.242601       9 shared_informer.go:223] Waiting for caches to sync for TTL`
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.253910       9 controllermanager.go:533] Started "persistentvolume-expander"`
TRAC[0030] >>> Parsing log line: `SI1217 13:29:28.254216       9 expand_controller.go:303] Starting expand controller`
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.254252       9 shared_informer.go:223] Waiting for caches to sync for expand`
TRAC[0030] >>> Parsing log line: `XI1217 13:29:28.262693       9 controllermanager.go:533] Started "replicationcontroller"`
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.263737       9 replica_set.go:182] Starting replicationcontroller controller`
TRAC[0030] >>> Parsing log line: `kI1217 13:29:28.263793       9 shared_informer.go:223] Waiting for caches to sync for ReplicationController`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.277051       9 controllermanager.go:533] Started "serviceaccount"`
TRAC[0030] >>> Parsing log line: `eI1217 13:29:28.277318       9 serviceaccounts_controller.go:117] Starting service account controller`
TRAC[0030] >>> Parsing log line: `eI1217 13:29:28.277359       9 shared_informer.go:223] Waiting for caches to sync for service account`
TRAC[0030] >>> Parsing log line: `JI1217 13:29:28.289495       9 controllermanager.go:533] Started "cronjob"`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.289679       9 cronjob_controller.go:97] Starting CronJob Manager`
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.296430       9 node_lifecycle_controller.go:78] Sending events to api server`
TRAC[0030] >>> Parsing log line: `wE1217 13:29:28.296499       9 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided`
TRAC[0030] >>> Parsing log line: `XW1217 13:29:28.296525       9 controllermanager.go:525] Skipping "cloud-node-lifecycle"`
TRAC[0030] >>> Parsing log line: `dI1217 13:29:28.296944       9 shared_informer.go:223] Waiting for caches to sync for resource quota`
TRAC[0030] >>> Parsing log line: `gI1217 13:29:28.305530       9 shared_informer.go:223] Waiting for caches to sync for garbage collector`
TRAC[0030] >>> Parsing log line: `�W1217 13:29:28.320986       9 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="k3d-foo-server-0" does not exist`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.348508       9 shared_informer.go:230] Caches are synced for TTL `
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.352248       9 shared_informer.go:230] Caches are synced for HPA `
TRAC[0030] >>> Parsing log line: `fI1217 13:29:28.352685       9 shared_informer.go:230] Caches are synced for certificate-csrapproving `
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.353358       9 shared_informer.go:230] Caches are synced for endpoint_slice `
TRAC[0030] >>> Parsing log line: `dI1217 13:29:28.356432       9 shared_informer.go:230] Caches are synced for certificate-csrsigning `
TRAC[0030] >>> Parsing log line: `cI1217 13:29:28.364197       9 shared_informer.go:230] Caches are synced for ReplicationController `
TRAC[0030] >>> Parsing log line: `PI1217 13:29:28.385463       9 shared_informer.go:230] Caches are synced for GC `
TRAC[0030] >>> Parsing log line: `RI1217 13:29:28.396299       9 shared_informer.go:230] Caches are synced for node `
TRAC[0030] >>> Parsing log line: `TI1217 13:29:28.396367       9 range_allocator.go:172] Starting range CIDR allocator`
TRAC[0030] >>> Parsing log line: `cI1217 13:29:28.396381       9 shared_informer.go:223] Waiting for caches to sync for cidrallocator`
TRAC[0030] >>> Parsing log line: `[I1217 13:29:28.396393       9 shared_informer.go:230] Caches are synced for cidrallocator `
TRAC[0030] >>> Parsing log line: `[I1217 13:29:28.399202       9 shared_informer.go:230] Caches are synced for PV protection `
TRAC[0030] >>> Parsing log line: `SI1217 13:29:28.400608       9 shared_informer.go:230] Caches are synced for taint `
TRAC[0030] >>> Parsing log line: `iI1217 13:29:28.400822       9 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: `
TRAC[0030] >>> Parsing log line: `�W1217 13:29:28.401902       9 node_lifecycle_controller.go:1048] Missing timestamp for Node k3d-foo-server-0. Assuming now as a timestamp.`
TRAC[0030] >>> Parsing log line: `xI1217 13:29:28.403135       9 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.`
TRAC[0030] >>> Parsing log line: `NI1217 13:29:28.403288       9 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k3d-foo-server-0", UID:"7bdc73ce-3918-4fc5-8fe5-193d8b3f1b67", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node k3d-foo-server-0 event: Registered Node k3d-foo-server-0 in Controller`
TRAC[0030] >>> Parsing log line: `SI1217 13:29:28.403469       9 taint_manager.go:187] Starting NoExecuteTaintManager`
TRAC[0030] >>> Parsing log line: `jI1217 13:29:28.403602       9 range_allocator.go:373] Set node k3d-foo-server-0 PodCIDR to [10.42.0.0/24]`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.407985       9 shared_informer.go:230] Caches are synced for job `
TRAC[0030] >>> Parsing log line: `CI1217 13:29:28.429122       9 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"helm-install-traefik", UID:"d0276316-95c5-429d-b11f-39b4a5be51e2", APIVersion:"batch/v1", ResourceVersion:"268", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: helm-install-traefik-dlfhc`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.436252       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0030] >>> Parsing log line: `kI1217 13:29:28.438909       9 controller.go:609] quota admission added evaluator for: events.events.k8s.io`
TRAC[0030] >>> Parsing log line: `xI1217 13:29:28.459704       9 kuberuntime_manager.go:995] updating runtime config through cri with podcidr 10.42.0.0/24`
TRAC[0030] >>> Parsing log line: `XI1217 13:29:28.461552       9 kubelet_network.go:77] Setting Pod CIDR:  -> 10.42.0.0/24`
TRAC[0030] >>> Parsing log line: `"I1217 13:29:28.472294       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-values") pod "helm-install-traefik-dlfhc" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0030] >>> Parsing log line: `$I1217 13:29:28.472557       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-content") pod "helm-install-traefik-dlfhc" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0030] >>> Parsing log line: `CI1217 13:29:28.472874       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "helm-traefik-token-2j92l" (UniqueName: "kubernetes.io/secret/f408f9c0-2664-4c13-87f3-623dd272648e-helm-traefik-token-2j92l") pod "helm-install-traefik-dlfhc" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0030] >>> Parsing log line: `]I1217 13:29:28.477711       9 shared_informer.go:230] Caches are synced for service account `
TRAC[0030] >>> Parsing log line: `WI1217 13:29:28.524288       9 shared_informer.go:230] Caches are synced for namespace `
TRAC[0030] >>> Parsing log line: `TI1217 13:29:28.555133       9 shared_informer.go:230] Caches are synced for expand `
TRAC[0030] >>> Parsing log line: `_I1217 13:29:28.559047       9 shared_informer.go:230] Caches are synced for persistent volume `
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.575172       9 shared_informer.go:230] Caches are synced for PVC protection `
TRAC[0030] >>> Parsing log line: `XI1217 13:29:28.613380       9 shared_informer.go:230] Caches are synced for deployment `
TRAC[0030] >>> Parsing log line: `XI1217 13:29:28.625154       9 shared_informer.go:230] Caches are synced for ReplicaSet `
TRAC[0030] >>> Parsing log line: `gI1217 13:29:28.630390       9 controller.go:609] quota admission added evaluator for: replicasets.apps`
TRAC[0030] >>> Parsing log line: `QI1217 13:29:28.635623       9 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"metrics-server", UID:"d9635973-c8d4-40af-a55f-ea145157cc75", APIVersion:"apps/v1", ResourceVersion:"247", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set metrics-server-7566d596c8 to 1`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.636373       9 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"local-path-provisioner", UID:"459a0907-b22c-43ba-8d7d-e66a786b8687", APIVersion:"apps/v1", ResourceVersion:"230", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set local-path-provisioner-6d59f47c7 to 1`
TRAC[0030] >>> Parsing log line: `CI1217 13:29:28.637773       9 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"75212af8-2fda-44d2-8f73-819571dd1b42", APIVersion:"apps/v1", ResourceVersion:"221", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-7944c66d8d to 1`
TRAC[0030] >>> Parsing log line: `[I1217 13:29:28.645396       9 shared_informer.go:230] Caches are synced for attach detach `
TRAC[0030] >>> Parsing log line: `aI1217 13:29:28.683486       9 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"local-path-provisioner-6d59f47c7", UID:"170ac94d-a027-4a2f-aaef-b195cfb60261", APIVersion:"apps/v1", ResourceVersion:"350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: local-path-provisioner-6d59f47c7-h45qx`
TRAC[0030] >>> Parsing log line: `SI1217 13:29:28.685590       9 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-7566d596c8", UID:"1f4833f7-8135-4ea5-976d-d773845d8a74", APIVersion:"apps/v1", ResourceVersion:"351", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-7566d596c8-r8qfn`
TRAC[0030] >>> Parsing log line: `EI1217 13:29:28.685853       9 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-7944c66d8d", UID:"553b1432-3867-41d5-947a-2c9aa1711cd5", APIVersion:"apps/v1", ResourceVersion:"352", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-7944c66d8d-7zhl9`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.693743       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.723203       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0030] >>> Parsing log line: ``I1217 13:29:28.742416       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0030] >>> Parsing log line: `VI1217 13:29:28.762700       9 shared_informer.go:230] Caches are synced for endpoint `
TRAC[0030] >>> Parsing log line: `cI1217 13:29:28.767182       9 shared_informer.go:230] Caches are synced for ClusterRoleAggregator `
TRAC[0030] >>> Parsing log line: `7I1217 13:29:28.779281       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-qrrhv" (UniqueName: "kubernetes.io/secret/8ca5950b-fce7-47e5-a6d6-4cad0f7b3ceb-coredns-token-qrrhv") pod "coredns-7944c66d8d-7zhl9" (UID: "8ca5950b-fce7-47e5-a6d6-4cad0f7b3ceb") `
TRAC[0030] >>> Parsing log line: `.I1217 13:29:28.779525       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8ca5950b-fce7-47e5-a6d6-4cad0f7b3ceb-config-volume") pod "coredns-7944c66d8d-7zhl9" (UID: "8ca5950b-fce7-47e5-a6d6-4cad0f7b3ceb") `
TRAC[0030] >>> Parsing log line: `<I1217 13:29:28.779644       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6d589fc5-f9cd-462e-b3ad-6f394f300474-config-volume") pod "local-path-provisioner-6d59f47c7-h45qx" (UID: "6d589fc5-f9cd-462e-b3ad-6f394f300474") `
TRAC[0030] >>> Parsing log line: `�I1217 13:29:28.779753       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-path-provisioner-service-account-token-dztx5" (UniqueName: "kubernetes.io/secret/6d589fc5-f9cd-462e-b3ad-6f394f300474-local-path-provisioner-service-account-token-dztx5") pod "local-path-provisioner-6d59f47c7-h45qx" (UID: "6d589fc5-f9cd-462e-b3ad-6f394f300474") `
TRAC[0030] >>> Parsing log line: `)I1217 13:29:28.779860       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/c6cf54cc-7bd9-4a48-832d-fc32dafda249-tmp-dir") pod "metrics-server-7566d596c8-r8qfn" (UID: "c6cf54cc-7bd9-4a48-832d-fc32dafda249") `
TRAC[0030] >>> Parsing log line: `LI1217 13:29:28.779934       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-8lvg8" (UniqueName: "kubernetes.io/secret/c6cf54cc-7bd9-4a48-832d-fc32dafda249-metrics-server-token-8lvg8") pod "metrics-server-7566d596c8-r8qfn" (UID: "c6cf54cc-7bd9-4a48-832d-fc32dafda249") `
TRAC[0030] >>> Parsing log line: `YI1217 13:29:28.811546       9 shared_informer.go:230] Caches are synced for daemon sets `
TRAC[0030] >>> Parsing log line: `ZI1217 13:29:28.827267       9 shared_informer.go:230] Caches are synced for stateful set `
TRAC[0030] >>> Parsing log line: `E1217 13:29:28.888138       9 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again`
TRAC[0030] >>> Parsing log line: `E1217 13:29:28.896426       9 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again`
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.900781       9 shared_informer.go:230] Caches are synced for resource quota `
TRAC[0030] >>> Parsing log line: `_I1217 13:29:28.906235       9 shared_informer.go:230] Caches are synced for garbage collector `
TRAC[0030] >>> Parsing log line: `\I1217 13:29:28.921785       9 shared_informer.go:230] Caches are synced for resource quota `
TRAC[0030] >>> Parsing log line: `_I1217 13:29:28.924178       9 shared_informer.go:230] Caches are synced for garbage collector `
TRAC[0030] >>> Parsing log line: `�I1217 13:29:28.924234       9 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage`
TRAC[0030] >>> Parsing log line: `XI1217 13:29:28.932123       9 shared_informer.go:230] Caches are synced for disruption `
TRAC[0030] >>> Parsing log line: `OI1217 13:29:28.932184       9 disruption.go:339] Sending events to api server.`
TRAC[0031] >>> Parsing log line: `XW1217 13:29:29.842353       9 handler_proxy.go:102] no RequestInfo found in the context`
TRAC[0031] >>> Parsing log line: `�E1217 13:29:29.842654       9 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable`
TRAC[0031] >>> Parsing log line: `Y, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]`
TRAC[0031] >>> Parsing log line: `�I1217 13:29:29.842803       9 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.`
TRAC[0031] >>> Parsing log line: `YI1217 13:29:30.015435       9 flannel.go:92] Determining IP address of default interface`
TRAC[0031] >>> Parsing log line: `dI1217 13:29:30.016491       9 flannel.go:105] Using interface with name eth0 and address 172.27.0.3`
TRAC[0031] >>> Parsing log line: `HI1217 13:29:30.026061       9 kube.go:300] Starting kube subnet manager`
TRAC[0031] >>> Parsing log line: `UI1217 13:29:30.026088       9 kube.go:117] Waiting 10m0s for node controller to sync`
TRAC[0031] >>> Parsing log line: `stime="2021-12-17T13:29:30.030362500Z" level=info msg="labels have been set successfully on node: k3d-foo-server-0"`
TRAC[0031] >>> Parsing log line: `cI1217 13:29:30.048499       9 network_policy_controller.go:149] Starting network policy controller`
TRAC[0032] >>> Parsing log line: `KI1217 13:29:31.027407       9 kube.go:124] Node controller sync successful`
TRAC[0032] >>> Parsing log line: `tI1217 13:29:31.027564       9 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false`
TRAC[0032] >>> Parsing log line: `ZI1217 13:29:31.049705       9 flannel.go:78] Wrote subnet file to /run/flannel/subnet.env`
TRAC[0032] >>> Parsing log line: `>I1217 13:29:31.049759       9 flannel.go:82] Running backend.`
TRAC[0032] >>> Parsing log line: `RI1217 13:29:31.049778       9 vxlan_network.go:60] watching for new subnet leases`
TRAC[0032] >>> Parsing log line: `nI1217 13:29:31.054128       9 iptables.go:145] Some iptables rules are missing; deleting and recreating rules`
TRAC[0032] >>> Parsing log line: `qI1217 13:29:31.054261       9 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN`
TRAC[0032] >>> Parsing log line: `nI1217 13:29:31.057178       9 iptables.go:145] Some iptables rules are missing; deleting and recreating rules`
TRAC[0032] >>> Parsing log line: `aI1217 13:29:31.057223       9 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT`
TRAC[0032] >>> Parsing log line: `�I1217 13:29:31.060805       9 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully`
TRAC[0032] >>> Parsing log line: `aI1217 13:29:31.063705       9 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT`
TRAC[0032] >>> Parsing log line: `sI1217 13:29:31.066251       9 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN`
TRAC[0032] >>> Parsing log line: `_I1217 13:29:31.069266       9 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT`
TRAC[0032] >>> Parsing log line: `�I1217 13:29:31.072094       9 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully`
TRAC[0032] >>> Parsing log line: `oI1217 13:29:31.076659       9 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN`
TRAC[0032] >>> Parsing log line: `_I1217 13:29:31.090165       9 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT`
TRAC[0032] >>> Parsing log line: `�I1217 13:29:31.109072       9 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully`
TRAC[0032] >>> Parsing log line: `qI1217 13:29:31.132487       9 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/24 -j RETURN`
TRAC[0032] >>> Parsing log line: `�I1217 13:29:31.155493       9 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.575109       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.582681       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.588396       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.612445       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.654823       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0046] >>> Parsing log line: `'E1217 13:29:44.738359       9 available_controller.go:437] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: Get https://10.43.89.208:443/apis/metrics.k8s.io/v1beta1: dial tcp 10.43.89.208:443: connect: connection refused`
TRAC[0047] >>> Parsing log line: `fI1217 13:29:46.151680       9 controller.go:609] quota admission added evaluator for: daemonsets.apps`
TRAC[0048] >>> Parsing log line: `CI1217 13:29:46.169051       9 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"traefik", UID:"4099c705-ca36-4332-ba98-7217a7079f55", APIVersion:"apps/v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set traefik-758cd5fc85 to 1`
TRAC[0048] >>> Parsing log line: `EI1217 13:29:46.193167       9 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"traefik-758cd5fc85", UID:"e9cfc9eb-6110-481a-8079-74851514732e", APIVersion:"apps/v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: traefik-758cd5fc85-xmv9k`
TRAC[0048] >>> Parsing log line: `oI1217 13:29:46.195134       9 controller.go:609] quota admission added evaluator for: controllerrevisions.apps`
TRAC[0048] >>> Parsing log line: ``I1217 13:29:46.227492       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0048] >>> Parsing log line: `:I1217 13:29:46.232336       9 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"svclb-traefik", UID:"b5b8fcc6-f076-4ce5-9436-4b9a07d8e8cf", APIVersion:"apps/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: svclb-traefik-wvqzv`
TRAC[0048] >>> Parsing log line: ``I1217 13:29:46.260162       9 topology_manager.go:235] [topologymanager] Topology Admit Handler`
TRAC[0048] >>> Parsing log line: `I1217 13:29:46.390545       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ssl" (UniqueName: "kubernetes.io/secret/9c05046f-9333-4a33-972b-9d58bf3b993c-ssl") pod "traefik-758cd5fc85-xmv9k" (UID: "9c05046f-9333-4a33-972b-9d58bf3b993c") `
TRAC[0048] >>> Parsing log line: `2I1217 13:29:46.390604       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-6ldcj" (UniqueName: "kubernetes.io/secret/54e1ae2b-6f19-4a7d-a5f9-e25c0e4cbce1-default-token-6ldcj") pod "svclb-traefik-wvqzv" (UID: "54e1ae2b-6f19-4a7d-a5f9-e25c0e4cbce1") `
TRAC[0048] >>> Parsing log line: ` I1217 13:29:46.390648       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/9c05046f-9333-4a33-972b-9d58bf3b993c-config") pod "traefik-758cd5fc85-xmv9k" (UID: "9c05046f-9333-4a33-972b-9d58bf3b993c") `
TRAC[0048] >>> Parsing log line: `7I1217 13:29:46.390712       9 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "traefik-token-pl4qm" (UniqueName: "kubernetes.io/secret/9c05046f-9333-4a33-972b-9d58bf3b993c-traefik-token-pl4qm") pod "traefik-758cd5fc85-xmv9k" (UID: "9c05046f-9333-4a33-972b-9d58bf3b993c") `
TRAC[0048] >>> Parsing log line: `�I1217 13:29:46.518572       9 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b6c479f40b913728afd9ee4495a39d83989ca4e45b5a6a04b3095ffeb6886553`
TRAC[0048] >>> Parsing log line: `"I1217 13:29:46.535221       9 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"kube-system", Name:"helm-install-traefik", UID:"d0276316-95c5-429d-b11f-39b4a5be51e2", APIVersion:"batch/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed`
TRAC[0048] >>> Parsing log line: `<I1217 13:29:46.692196       9 reconciler.go:196] operationExecutor.UnmountVolume started for volume "helm-traefik-token-2j92l" (UniqueName: "kubernetes.io/secret/f408f9c0-2664-4c13-87f3-623dd272648e-helm-traefik-token-2j92l") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0048] >>> Parsing log line: `1217 13:29:46.692268       9 reconciler.go:196] operationExecutor.UnmountVolume started for volume "values" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-values") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0048] >>> Parsing log line: `I1217 13:29:46.692304       9 reconciler.go:196] operationExecutor.UnmountVolume started for volume "content" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-content") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e") `
TRAC[0048] >>> Parsing log line: `�W1217 13:29:46.692458       9 empty_dir.go:469] Warning: Failed to clear quota on /var/lib/kubelet/pods/f408f9c0-2664-4c13-87f3-623dd272648e/volumes/kubernetes.io~configmap/content: ClearQuota called, but quotas disabled`
TRAC[0048] >>> Parsing log line: `I1217 13:29:46.692573       9 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-content" (OuterVolumeSpecName: "content") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e"). InnerVolumeSpecName "content". PluginName "kubernetes.io/configmap", VolumeGidValue ""`
TRAC[0048] >>> Parsing log line: `�W1217 13:29:46.692696       9 empty_dir.go:469] Warning: Failed to clear quota on /var/lib/kubelet/pods/f408f9c0-2664-4c13-87f3-623dd272648e/volumes/kubernetes.io~configmap/values: ClearQuota called, but quotas disabled`
TRAC[0048] >>> Parsing log line: `|I1217 13:29:46.692813       9 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-values" (OuterVolumeSpecName: "values") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e"). InnerVolumeSpecName "values". PluginName "kubernetes.io/configmap", VolumeGidValue ""`
TRAC[0048] >>> Parsing log line: `�I1217 13:29:46.693675       9 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f408f9c0-2664-4c13-87f3-623dd272648e-helm-traefik-token-2j92l" (OuterVolumeSpecName: "helm-traefik-token-2j92l") pod "f408f9c0-2664-4c13-87f3-623dd272648e" (UID: "f408f9c0-2664-4c13-87f3-623dd272648e"). InnerVolumeSpecName "helm-traefik-token-2j92l". PluginName "kubernetes.io/secret", VolumeGidValue ""`
TRAC[0048] >>> Parsing log line: `�I1217 13:29:46.792963       9 reconciler.go:319] Volume detached for volume "values" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-values") on node "k3d-foo-server-0" DevicePath ""`
TRAC[0048] >>> Parsing log line: `�I1217 13:29:46.793025       9 reconciler.go:319] Volume detached for volume "content" (UniqueName: "kubernetes.io/configmap/f408f9c0-2664-4c13-87f3-623dd272648e-content") on node "k3d-foo-server-0" DevicePath ""`
TRAC[0048] >>> Parsing log line: `�I1217 13:29:46.793052       9 reconciler.go:319] Volume detached for volume "helm-traefik-token-2j92l" (UniqueName: "kubernetes.io/secret/f408f9c0-2664-4c13-87f3-623dd272648e-helm-traefik-token-2j92l") on node "k3d-foo-server-0" DevicePath ""`
TRAC[0049] >>> Parsing log line: `�W1217 13:29:47.531132       9 pod_container_deletor.go:77] Container "d775679474c206ba5e596561553dab15fa875fc27c17d90b6a63074f1db8a5cb" not found in pod's containers`
ERRO[0132] Failed Cluster Start: error during post-start cluster preparation: errors waiting for log line `Cluster dns configmap` from node 'k3d-foo-server-0': stopped returning log lines: context deadline exceeded
ERRO[0132] Failed to create cluster >>> Rolling Back
INFO[0132] Deleting cluster 'foo'
TRAC[0132] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-foo-serverlb
TRAC[0132] TranslateContainerDetailsToNode: Checking for default object label app=k3d on container /k3d-foo-server-0
DEBU[0132] Cluster Details: &{Name:foo Network:{Name:k3d-foo ID:edbe45d281c90032da65b0b9f6c99a868f5621380ad2bb64c6979ddbf9e82787 External:true IPAM:{IPPrefix:172.27.0.0/16 IPsUsed:[172.27.0.1] Managed:false} Members:[]} Token:yztIjqyUabVuNdGoTgxR Nodes:[0xc00017a600 0xc00017b500] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc0004a2940 ServerLoadBalancer:0xc0001c76f0 ImageVolume:k3d-foo-images}
DEBU[0132] Deleting node k3d-foo-serverlb ...
TRAC[0132] [Docker] Deleted Container k3d-foo-serverlb
DEBU[0132] Deleting node k3d-foo-server-0 ...
TRAC[0133] [Docker] Deleted Container k3d-foo-server-0
DEBU[0133] Skip deletion of cluster network 'k3d-foo' because it's managed externally
INFO[0133] Deleting image volume 'k3d-foo-images'
FATA[0133] Cluster creation FAILED, all changes have been rolled back

Which OS & Architecture

macOS (darwin/amd64, per the Docker client info below)

Which version of k3d

main (0c57cf24ca65fecba4dd24624a27364def22fbca), which matches v5.2.2

Which version of docker

Client:
 Cloud integration: 1.0.17
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.16.4
 Git commit:        f0df350
 Built:             Wed Jun  2 11:56:22 2021
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
lahabana commented 2 years ago

This seems to be caused by the fact that k3s <1.19 doesn't include https://github.com/k3s-io/k3s/pull/1785.

Therefore the wait for that log line will never succeed here: https://github.com/rancher/k3d/blob/main/pkg/client/cluster.go#L1028

k3s <1.19 is old, but I don't see a minimum supported k3s version documented anywhere in k3d. Are we expecting this to work? If not, should there be a note in the docs?
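
To make the failure mode concrete: the post-start preparation tails the server node's logs and only continues once it sees the marker line `Cluster dns configmap`; since k3s <1.19 never prints that line, the wait simply runs until the cluster-create timeout and surfaces as "context deadline exceeded". A minimal Go sketch of that pattern (names like `waitForLogLine` are illustrative, not k3d's actual code):

```go
package main

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"strings"
	"time"
)

// waitForLogLine scans a log stream line by line until needle appears or ctx expires.
func waitForLogLine(ctx context.Context, logs io.Reader, needle string) error {
	found := make(chan struct{})
	go func() {
		scanner := bufio.NewScanner(logs)
		for scanner.Scan() {
			if strings.Contains(scanner.Text(), needle) {
				close(found)
				return
			}
		}
		// Stream ended (or keeps running) without the marker; the select below
		// will eventually hit the context deadline.
	}()
	select {
	case <-found:
		return nil
	case <-ctx.Done():
		// This is the path hit with k3s <1.19: the marker line is never printed.
		return fmt.Errorf("waiting for log line %q: %w", needle, ctx.Err())
	}
}

func main() {
	// A fake server log stream that never contains the marker line.
	logs := strings.NewReader(`time="2021-12-17T13:29:30Z" level=info msg="k3s is up and running"` + "\n")

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	if err := waitForLogLine(ctx, logs, "Cluster dns configmap"); err != nil {
		fmt.Println("Failed Cluster Start:", err) // -> ... context deadline exceeded
	}
}
```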

iwilltry42 commented 2 years ago

Hi @lahabana, thanks for opening this issue and for the research you did :+1: You're indeed right there. Usually I test new k3d releases against the latest patch releases of the past three Kubernetes minor versions, which roughly matches the Kubernetes support cycle. K8s v1.18 reached EOL quite some time ago and thus wasn't tested. We should definitely add a compatibility matrix somewhere in the docs/README to reflect this :+1:

The change on k3d's end was introduced in v5.0.2, so with k3d v5.0.1 you may be able to get it to work.
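
Purely illustrative (k3d does not do this today): a simple guard could parse the k3s image tag and warn when the minor version is below 1.19, the first minor that prints the marker line k3d >= v5.0.2 waits for, so the CLI fails fast instead of hanging until the timeout:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorFromTag extracts the Kubernetes minor version from a tag like "v1.18.20-k3s1".
func minorFromTag(tag string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(tag, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected tag format: %q", tag)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	image := "rancher/k3s:v1.18.20-k3s1" // the image used in this issue
	tag := image[strings.LastIndex(image, ":")+1:]

	minor, err := minorFromTag(tag)
	if err != nil {
		fmt.Println("warning:", err)
		return
	}
	if minor < 19 { // hypothetical floor: oldest minor that emits "Cluster dns configmap"
		fmt.Printf("warning: %s is older than v1.19 and may not start with k3d >= v5.0.2\n", image)
	}
}
```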

iwilltry42 commented 2 years ago