kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0
29.18k stars 4.87k forks

Multi-node cluster from minikube #14856

Closed Vikrant1020 closed 1 year ago

Vikrant1020 commented 2 years ago

What Happened?

Tried to create a multi-node cluster with minikube.

minikube start --nodes 3 -p Mulrinode-cluster

Got an unknown error.

Attach the log file

D:\K8S\testing\logs.txt

Operating System

Windows

Driver

Docker
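For reference, the usual way to create and then verify a multi-node cluster looks like the following (a sketch; the profile name `multinode-cluster` is illustrative, and an all-lowercase name avoids DNS-label naming issues that some minikube versions flag):

```shell
# Start a 3-node cluster under a dedicated profile
# (one control-plane node plus two workers).
minikube start --nodes 3 -p multinode-cluster --driver=docker

# List the nodes minikube created for this profile.
minikube node list -p multinode-cluster

# Confirm all three nodes registered with the API server.
kubectl get nodes -o wide
```

If a previous attempt under the same profile failed partway through, `minikube delete -p multinode-cluster` before retrying clears leftover containers, volumes, and networks.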

Vikrant1020 commented 2 years ago

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Wmiobject Win32_ComputerSystem).HypervisorPresent failed: Reason: Fix:Start PowerShell as an Administrator Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/ Version:} I0825 11:53:48.990461 10620 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:} I0825 11:53:48.990461 10620 driver.go:300] not recommending "ssh" due to default: false I0825 11:53:48.990461 10620 driver.go:335] Picked: docker I0825 11:53:48.990461 10620 driver.go:336] Alternatives: [ssh] I0825 11:53:48.990461 10620 driver.go:337] Rejects: [qemu2 virtualbox vmware hyperv podman] I0825 11:53:48.991993 10620 out.go:177] ✨ Automatically selected the docker driver I0825 11:53:48.992726 10620 start.go:284] selected driver: docker I0825 11:53:48.993258 10620 start.go:808] validating driver "docker" against I0825 11:53:48.993258 10620 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:} I0825 11:53:49.009350 10620 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0825 11:53:49.533801 10620 info.go:265] docker info: {ID:AZAS:SWGD:272W:ALSR:3HA2:L7OM:WE3T:LNS4:IMAE:7WBA:5TRM:4KPB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true 
CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:49 SystemTime:2022-08-25 06:23:49.389989988 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:13218865152 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. 
Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}} I0825 11:53:49.534365 10620 start_flags.go:296] no existing cluster config was found, will generate one from the flags I0825 11:53:49.645481 10620 start_flags.go:377] Using suggested 2200MB memory alloc based on sys=16163MB, container=12606MB I0825 11:53:49.645481 10620 start_flags.go:835] Wait components to verify : map[apiserver:true system_pods:true] I0825 11:53:49.646517 10620 out.go:177] 📌 Using Docker Desktop driver with root privileges I0825 11:53:49.647036 10620 cni.go:95] Creating CNI manager for "" I0825 11:53:49.647036 10620 cni.go:156] 0 nodes found, recommending kindnet I0825 11:53:49.647036 10620 start_flags.go:305] Found "CNI" CNI - setting NetworkPlugin=cni I0825 11:53:49.647036 10620 start_flags.go:310] config: {Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 
ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\vikrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0825 11:53:49.648100 10620 out.go:177] 👍 Starting control plane node Mulrinode-cluster in cluster Mulrinode-cluster I0825 11:53:49.649145 10620 cache.go:120] Beginning downloading kic base image for docker with docker I0825 11:53:49.649720 10620 out.go:177] 🚜 Pulling base image ... 
I0825 11:53:49.650255 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker I0825 11:53:49.650255 10620 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon I0825 11:53:49.650255 10620 preload.go:148] Found local preload: C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 I0825 11:53:49.650255 10620 cache.go:57] Caching tarball of preloaded images I0825 11:53:49.650784 10620 preload.go:174] Found C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0825 11:53:49.650784 10620 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker I0825 11:53:49.651175 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ... I0825 11:53:49.651175 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json: {Name:mka5101497bef9b09504fb1d379120211fb3963e Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:53:50.054122 10620 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull I0825 11:53:50.054122 10620 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load I0825 11:53:50.054122 10620 cache.go:208] Successfully downloaded all kic artifacts I0825 11:53:50.054627 10620 start.go:371] acquiring machines lock for Mulrinode-cluster: {Name:mk22a81efafae26d6a570325f2242db9fff6d916 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0825 11:53:50.054627 10620 start.go:375] acquired machines lock for "Mulrinode-cluster" in 0s I0825 11:53:50.054627 10620 start.go:92] Provisioning new machine 
with config: &{Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\vikrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker 
ControlPlane:true Worker:true} I0825 11:53:50.054627 10620 start.go:132] createHost starting for "" (driver="docker") I0825 11:53:50.055608 10620 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... I0825 11:53:50.056625 10620 start.go:166] libmachine.API.Create for "Mulrinode-cluster" (driver="docker") I0825 11:53:50.056625 10620 client.go:168] LocalClient.Create starting I0825 11:53:50.056625 10620 main.go:134] libmachine: Reading certificate data from C:\Users\vikrant.minikube\certs\ca.pem I0825 11:53:50.057124 10620 main.go:134] libmachine: Decoding PEM data... I0825 11:53:50.057124 10620 main.go:134] libmachine: Parsing certificate... I0825 11:53:50.057124 10620 main.go:134] libmachine: Reading certificate data from C:\Users\vikrant.minikube\certs\cert.pem I0825 11:53:50.057124 10620 main.go:134] libmachine: Decoding PEM data... I0825 11:53:50.057124 10620 main.go:134] libmachine: Parsing certificate... I0825 11:53:50.068110 10620 cli_runner.go:164] Run: docker network inspect Mulrinode-cluster --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" W0825 11:53:50.407052 10620 cli_runner.go:211] docker network inspect Mulrinode-cluster --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1 I0825 11:53:50.413957 10620 network_create.go:272] running [docker network inspect Mulrinode-cluster] to 
gather additional debugging logs... I0825 11:53:50.413957 10620 cli_runner.go:164] Run: docker network inspect Mulrinode-cluster W0825 11:53:50.744758 10620 cli_runner.go:211] docker network inspect Mulrinode-cluster returned with exit code 1 I0825 11:53:50.744874 10620 network_create.go:275] error running [docker network inspect Mulrinode-cluster]: docker network inspect Mulrinode-cluster: exit status 1 stdout: []

stderr: Error: No such network: Mulrinode-cluster I0825 11:53:50.744874 10620 network_create.go:277] output of [docker network inspect Mulrinode-cluster]: -- stdout -- []

-- /stdout -- stderr Error: No such network: Mulrinode-cluster

/stderr I0825 11:53:50.751904 10620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0825 11:53:51.177310 10620 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b8c848] misses:0} I0825 11:53:51.177310 10620 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0825 11:53:51.177545 10620 network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ... 
I0825 11:53:51.186105 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster W0825 11:53:51.556518 10620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster returned with exit code 1 W0825 11:53:51.556518 10620 network_create.go:107] failed to create docker network Mulrinode-cluster 192.168.49.0/24, will retry: subnet is taken I0825 11:53:51.594157 10620 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:false}} dirty:map[] misses:0} I0825 11:53:51.594157 10620 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0825 11:53:51.624375 10620 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718] misses:0} I0825 11:53:51.624375 10620 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0825 11:53:51.624375 10620 network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.58.0/24 with gateway 
192.168.58.1 and MTU of 1500 ... I0825 11:53:51.632298 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster W0825 11:53:52.063584 10620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster returned with exit code 1 W0825 11:53:52.063584 10620 network_create.go:107] failed to create docker network Mulrinode-cluster 192.168.58.0/24, will retry: subnet is taken I0825 11:53:52.092450 10620 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718] misses:1} I0825 11:53:52.092450 10620 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0825 11:53:52.122937 10620 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718 192.168.67.0:0xc0000067d0] misses:1} I0825 11:53:52.122937 10620 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}} I0825 11:53:52.122937 10620 
network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ... I0825 11:53:52.130469 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster I0825 11:53:53.237541 10620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster: (1.1070726s) I0825 11:53:53.237541 10620 network_create.go:99] docker network Mulrinode-cluster 192.168.67.0/24 created I0825 11:53:53.237541 10620 kic.go:106] calculated static IP "192.168.67.2" for the "Mulrinode-cluster" container I0825 11:53:53.249148 10620 cli_runner.go:164] Run: docker ps -a --format {{.Names}} I0825 11:53:53.630038 10620 cli_runner.go:164] Run: docker volume create Mulrinode-cluster --label name.minikube.sigs.k8s.io=Mulrinode-cluster --label created_by.minikube.sigs.k8s.io=true I0825 11:53:53.959929 10620 oci.go:103] Successfully created a docker volume Mulrinode-cluster I0825 11:53:53.968285 10620 cli_runner.go:164] Run: docker run --rm --name Mulrinode-cluster-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --entrypoint /usr/bin/test -v Mulrinode-cluster:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib I0825 11:53:55.332465 10620 cli_runner.go:217] Completed: docker run --rm --name Mulrinode-cluster-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --entrypoint /usr/bin/test -v 
Mulrinode-cluster:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib: (1.3641796s) I0825 11:53:55.332465 10620 oci.go:107] Successfully prepared a docker volume Mulrinode-cluster I0825 11:53:55.332465 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker I0825 11:53:55.332465 10620 kic.go:179] Starting extracting preloaded images to volume ... I0825 11:53:55.339949 10620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir I0825 11:56:05.473374 10620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (2m10.1334249s) I0825 11:56:05.473374 10620 kic.go:188] duration metric: took 130.140909 seconds to extract preloaded images to volume I0825 11:56:05.484263 10620 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0825 11:56:06.215083 10620 info.go:265] docker info: {ID:AZAS:SWGD:272W:ALSR:3HA2:L7OM:WE3T:LNS4:IMAE:7WBA:5TRM:4KPB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local 
logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:52 SystemTime:2022-08-25 06:26:06.000145256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:13218865152 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. 
Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}} I0825 11:56:06.226082 10620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0825 11:58:05.849736 10620 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (1m59.6235009s) I0825 11:58:05.867086 10620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname Mulrinode-cluster --name Mulrinode-cluster --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=Mulrinode-cluster --network Mulrinode-cluster --ip 192.168.67.2 --volume Mulrinode-cluster:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 I0825 11:58:09.140660 10620 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname Mulrinode-cluster --name Mulrinode-cluster --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=Mulrinode-cluster --network Mulrinode-cluster --ip 192.168.67.2 --volume Mulrinode-cluster:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e 
container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8: (3.273482s) I0825 11:58:09.148066 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Running}} I0825 11:58:09.490116 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:09.848007 10620 cli_runner.go:164] Run: docker exec Mulrinode-cluster stat /var/lib/dpkg/alternatives/iptables I0825 11:58:10.301137 10620 oci.go:144] the created container "Mulrinode-cluster" has a running status. I0825 11:58:10.301170 10620 kic.go:210] Creating ssh key for kic: C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa... I0825 11:58:10.550326 10620 kic_runner.go:191] docker (temp): C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0825 11:58:11.101953 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:11.443366 10620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0825 11:58:11.443366 10620 kic_runner.go:114] Args: [docker exec --privileged Mulrinode-cluster chown docker:docker /home/docker/.ssh/authorized_keys] I0825 11:58:11.918212 10620 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa... W0825 11:58:11.932700 10620 kic.go:256] unable to determine current user's SID. minikube tunnel may not work. I0825 11:58:11.949376 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:12.296008 10620 machine.go:88] provisioning docker machine ... 
I0825 11:58:12.296008 10620 ubuntu.go:169] provisioning hostname "Mulrinode-cluster" I0825 11:58:12.302493 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:12.637224 10620 main.go:134] libmachine: Using SSH client type: native I0825 11:58:12.641026 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64370 } I0825 11:58:12.641026 10620 main.go:134] libmachine: About to run SSH command: sudo hostname Mulrinode-cluster && echo "Mulrinode-cluster" | sudo tee /etc/hostname I0825 11:58:12.772961 10620 main.go:134] libmachine: SSH cmd err, output: : Mulrinode-cluster

I0825 11:58:12.781716 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:13.115511 10620 main.go:134] libmachine: Using SSH client type: native I0825 11:58:13.119916 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64370 } I0825 11:58:13.119916 10620 main.go:134] libmachine: About to run SSH command:

    if ! grep -xq '.*\sMulrinode-cluster' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 Mulrinode-cluster/g' /etc/hosts;
        else 
            echo '127.0.1.1 Mulrinode-cluster' | sudo tee -a /etc/hosts; 
        fi
    fi

I0825 11:58:13.243816 10620 main.go:134] libmachine: SSH cmd err, output: : I0825 11:58:13.243816 10620 ubuntu.go:175] set auth options {CertDir:C:\Users\vikrant.minikube CaCertPath:C:\Users\vikrant.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\vikrant.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\vikrant.minikube\machines\server.pem ServerKeyPath:C:\Users\vikrant.minikube\machines\server-key.pem ClientKeyPath:C:\Users\vikrant.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\vikrant.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\vikrant.minikube} I0825 11:58:13.243816 10620 ubuntu.go:177] setting up certificates I0825 11:58:13.243816 10620 provision.go:83] configureAuth start I0825 11:58:13.251315 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster I0825 11:58:13.573129 10620 provision.go:138] copyHostCerts I0825 11:58:13.573129 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/ca.pem, removing ... I0825 11:58:13.573129 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\ca.pem I0825 11:58:13.575078 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\ca.pem --> C:\Users\vikrant.minikube/ca.pem (1082 bytes) I0825 11:58:13.575655 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/cert.pem, removing ... I0825 11:58:13.575655 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\cert.pem I0825 11:58:13.576175 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\cert.pem --> C:\Users\vikrant.minikube/cert.pem (1123 bytes) I0825 11:58:13.577722 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/key.pem, removing ... 
I0825 11:58:13.577722 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\key.pem I0825 11:58:13.578239 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\key.pem --> C:\Users\vikrant.minikube/key.pem (1679 bytes) I0825 11:58:13.579271 10620 provision.go:112] generating server cert: C:\Users\vikrant.minikube\machines\server.pem ca-key=C:\Users\vikrant.minikube\certs\ca.pem private-key=C:\Users\vikrant.minikube\certs\ca-key.pem org=Vikrant.Mulrinode-cluster san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube Mulrinode-cluster] I0825 11:58:13.738331 10620 provision.go:172] copyRemoteCerts I0825 11:58:13.769102 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0825 11:58:13.781515 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:14.110970 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:14.163144 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes) I0825 11:58:14.185505 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes) I0825 11:58:14.205644 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0825 11:58:14.224533 10620 provision.go:86] duration metric: configureAuth took 980.7171ms I0825 11:58:14.224533 10620 ubuntu.go:193] setting minikube options for container-runtime I0825 11:58:14.225032 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 11:58:14.232549 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:14.558390 10620 
main.go:134] libmachine: Using SSH client type: native I0825 11:58:14.562890 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64370 } I0825 11:58:14.562890 10620 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0825 11:58:14.637266 10620 main.go:134] libmachine: SSH cmd err, output: : overlay

I0825 11:58:14.637266 10620 ubuntu.go:71] root file system type: overlay I0825 11:58:14.637545 10620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0825 11:58:14.644502 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:14.977335 10620 main.go:134] libmachine: Using SSH client type: native I0825 11:58:14.982261 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64370 } I0825 11:58:14.982261 10620 main.go:134] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new I0825 11:58:15.115094 10620 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
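The generated unit file above depends on a systemd rule that its own comments describe: outside `Type=oneshot`, `ExecStart=` may effectively appear only once, so an override that replaces the command must first reset the inherited value with an empty `ExecStart=`. A minimal standalone sketch of that clearing pattern, written to a temp directory rather than a real drop-in path (the file name and dockerd flags here are illustrative only):

```shell
# Hypothetical drop-in showing the ExecStart-clearing pattern from the log
# above; it only writes and inspects a file, so no systemd or sudo is needed.
DIR=$(mktemp -d)
cat > "$DIR/override.conf" <<'EOF'
[Service]
# The empty directive clears the ExecStart list inherited from the base unit;
# without it systemd rejects the unit with "more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
grep -c '^ExecStart=' "$DIR/override.conf"   # prints 2: one clear, one set
rm -r "$DIR"
```

This is also why the minikube-generated file keeps both `ExecStart=` lines back to back instead of a single replacement line.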

I0825 11:58:15.128108 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:15.454175 10620 main.go:134] libmachine: Using SSH client type: native I0825 11:58:15.458669 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64370 } I0825 11:58:15.458669 10620 main.go:134] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0825 11:58:18.504809 10620 main.go:134] libmachine: SSH cmd err, output: :
--- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-08-25 06:28:15.168787122 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0825 11:58:18.505140 10620 machine.go:91] provisioned docker machine in 6.2091321s I0825 11:58:18.505140 10620 client.go:171] LocalClient.Create took 4m28.4485147s I0825 11:58:18.505177 10620 start.go:174] duration metric: libmachine.API.Create for "Mulrinode-cluster" took 4m28.4485147s I0825 11:58:18.505177 10620 start.go:307] post-start starting for "Mulrinode-cluster" (driver="docker") I0825 11:58:18.505177 10620 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0825 11:58:18.522652 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0825 11:58:18.530494 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:18.850305 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:18.951408 10620 ssh_runner.go:195] Run: cat /etc/os-release I0825 11:58:18.955147 10620 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0825 11:58:18.955147 10620 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0825 11:58:18.955654 10620 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0825 11:58:18.955672 10620 info.go:137] Remote host: Ubuntu 20.04.4 LTS I0825 11:58:18.955676 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\addons for local assets ... 
I0825 11:58:18.955676 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\files for local assets ... I0825 11:58:18.956204 10620 start.go:310] post-start completed in 451.0273ms I0825 11:58:18.965722 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster I0825 11:58:19.296359 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ... I0825 11:58:19.300599 10620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0825 11:58:19.308101 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:19.632846 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:19.678387 10620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0825 11:58:19.683297 10620 start.go:135] duration metric: createHost completed in 4m29.6284029s I0825 11:58:19.683297 10620 start.go:82] releasing machines lock for "Mulrinode-cluster", held for 4m29.6286702s I0825 11:58:19.690547 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster I0825 11:58:20.038608 10620 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0825 11:58:20.050309 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:20.065978 10620 ssh_runner.go:195] Run: systemctl --version I0825 11:58:20.076370 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:20.388868 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 
SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:20.435603 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:21.021062 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0825 11:58:21.031320 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes) I0825 11:58:21.059814 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 11:58:21.179092 10620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0825 11:58:21.282942 10620 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0825 11:58:21.294431 10620 cruntime.go:273] skipping containerd shutdown because we are bound to it I0825 11:58:21.312919 10620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0825 11:58:21.325687 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0825 11:58:21.355360 10620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0825 11:58:21.465255 10620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0825 11:58:21.573351 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 11:58:21.704567 10620 ssh_runner.go:195] Run: sudo systemctl restart docker I0825 11:58:24.075151 10620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3705847s) I0825 11:58:24.094660 10620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0825 11:58:24.227975 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 11:58:24.361728 10620 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0825 11:58:24.376650 10620 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock I0825 
11:58:24.378165 10620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0825 11:58:24.383647 10620 start.go:471] Will wait 60s for crictl version I0825 11:58:24.400661 10620 ssh_runner.go:195] Run: sudo crictl version I0825 11:58:24.432555 10620 start.go:480] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.17 RuntimeApiVersion: 1.41.0 I0825 11:58:24.439921 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0825 11:58:24.479849 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0825 11:58:24.513923 10620 out.go:204] 🐳 Preparing Kubernetes v1.24.3 on Docker 20.10.17 ... I0825 11:58:24.522440 10620 cli_runner.go:164] Run: docker exec -t Mulrinode-cluster dig +short host.docker.internal I0825 11:58:25.017578 10620 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2 I0825 11:58:25.019505 10620 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts I0825 11:58:25.025009 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0825 11:58:25.042003 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:25.402332 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker I0825 11:58:25.413916 10620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0825 11:58:25.444470 10620 docker.go:611] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout -- I0825 11:58:25.444470 10620 docker.go:542] Images already preloaded, skipping extraction I0825 11:58:25.452470 10620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0825 11:58:25.480397 10620 docker.go:611] Got preloaded images: -- stdout -- k8s.gcr.io/kube-apiserver:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3 k8s.gcr.io/kube-controller-manager:v1.24.3 k8s.gcr.io/kube-proxy:v1.24.3 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/pause:3.7 k8s.gcr.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout -- I0825 11:58:25.480397 10620 cache_images.go:84] Images are preloaded, skipping loading I0825 11:58:25.488395 10620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0825 11:58:25.560036 10620 cni.go:95] Creating CNI manager for "" I0825 11:58:25.560036 10620 cni.go:156] 1 nodes found, recommending kindnet I0825 11:58:25.560036 10620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0825 11:58:25.560036 10620 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:Mulrinode-cluster NodeName:Mulrinode-cluster DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0825 11:58:25.560036 10620 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.2 bindPort: 8443 bootstrapTokens:

I0825 11:58:25.560562 10620 kubeadm.go:961] kubelet [Unit] Wants=docker.socket

[Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=Mulrinode-cluster --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m

[Install] config: {KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0825 11:58:25.580584 10620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3 I0825 11:58:25.592127 10620 binaries.go:44] Found k8s binaries, skipping transfer I0825 11:58:25.607143 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0825 11:58:25.616129 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (479 bytes) I0825 11:58:25.629628 10620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0825 11:58:25.644861 10620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes) I0825 11:58:25.661113 10620 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0825 11:58:25.665113 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0825 11:58:25.675115 10620 certs.go:54] Setting up C:\Users\vikrant.minikube\profiles\Mulrinode-cluster for IP: 192.168.67.2 I0825 11:58:25.675613 10620 certs.go:182] skipping minikubeCA CA generation: C:\Users\vikrant.minikube\ca.key I0825 11:58:25.676113 10620 certs.go:182] skipping proxyClientCA CA generation: C:\Users\vikrant.minikube\proxy-client-ca.key I0825 11:58:25.676612 10620 certs.go:302] generating minikube-user signed cert: C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.key I0825 
11:58:25.676612 10620 crypto.go:68] Generating cert C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt with IP's: [] I0825 11:58:25.937605 10620 crypto.go:156] Writing cert to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt ... I0825 11:58:25.937605 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt: {Name:mkbfdcd685e9d228fd75f04582b7f084eeab84e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:25.939264 10620 crypto.go:164] Writing key to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.key ... I0825 11:58:25.939264 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.key: {Name:mke785afb5b6b7f376069c4b21c1032243dd1058 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:25.941282 10620 certs.go:302] generating minikube signed cert: C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key.c7fa3a9e I0825 11:58:25.941282 10620 crypto.go:68] Generating cert C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1] I0825 11:58:26.135667 10620 crypto.go:156] Writing cert to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt.c7fa3a9e ... I0825 11:58:26.135667 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt.c7fa3a9e: {Name:mk77c26e80cea32a55721e879b8e8a359d64d76a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:26.138046 10620 crypto.go:164] Writing key to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key.c7fa3a9e ... 
I0825 11:58:26.138046 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key.c7fa3a9e: {Name:mke3398087f06e4e84433071b9a3b3d613e2b82f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:26.139094 10620 certs.go:320] copying C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt.c7fa3a9e -> C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt I0825 11:58:26.141908 10620 certs.go:324] copying C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key.c7fa3a9e -> C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key I0825 11:58:26.142918 10620 certs.go:302] generating aggregator signed cert: C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.key I0825 11:58:26.142918 10620 crypto.go:68] Generating cert C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.crt with IP's: [] I0825 11:58:26.283543 10620 crypto.go:156] Writing cert to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.crt ... I0825 11:58:26.283543 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.crt: {Name:mk8932c5598a999c18f03b0e878be1312ee4db2f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:26.286455 10620 crypto.go:164] Writing key to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.key ... 
I0825 11:58:26.286455 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.key: {Name:mkcbf9158dad4ff880d39a9f4f6a5abffe37427a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:26.291319 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca-key.pem (1675 bytes) I0825 11:58:26.292352 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca.pem (1082 bytes) I0825 11:58:26.292352 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\cert.pem (1123 bytes) I0825 11:58:26.293277 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\key.pem (1679 bytes) I0825 11:58:26.295316 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I0825 11:58:26.320008 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes) I0825 11:58:26.340201 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0825 11:58:26.358095 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I0825 11:58:26.376669 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0825 11:58:26.395356 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0825 11:58:26.414996 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0825 11:58:26.433046 10620 ssh_runner.go:362] scp 
C:\Users\vikrant.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0825 11:58:26.451472 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0825 11:58:26.470156 10620 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0825 11:58:26.484565 10620 ssh_runner.go:195] Run: openssl version I0825 11:58:26.507395 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0825 11:58:26.518843 10620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0825 11:58:26.523343 10620 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug 21 02:52 /usr/share/ca-certificates/minikubeCA.pem I0825 11:58:26.524341 10620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0825 11:58:26.544574 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0825 11:58:26.553923 10620 kubeadm.go:395] StartCluster: {Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root 
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\vikrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0825 11:58:26.561944 10620 sshrunner.go:195] Run: docker ps --filter status=paused --filter=name=k8s.*(kube-system) --format={{.ID}} I0825 11:58:26.605919 10620 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I0825 11:58:26.637621 10620 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0825 11:58:26.648080 10620 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver I0825 11:58:26.665460 10620 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0825 11:58:26.674847 10620 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf 
/etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout:

stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory I0825 11:58:26.674847 10620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" I0825 11:58:45.512940 10620 out.go:204] ▪ Generating certificates and keys ... I0825 11:58:45.517971 10620 out.go:204] ▪ Booting up control plane ... I0825 11:58:45.520938 10620 out.go:204] ▪ Configuring RBAC rules ... I0825 11:58:45.526496 10620 cni.go:95] Creating CNI manager for "" I0825 11:58:45.526496 10620 cni.go:156] 1 nodes found, recommending kindnet I0825 11:58:45.527494 10620 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ... I0825 11:58:45.532994 10620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap I0825 11:58:45.601698 10620 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ... 
I0825 11:58:45.601698 10620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes) I0825 11:58:45.735996 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml I0825 11:58:46.923757 10620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1877609s) I0825 11:58:46.923757 10620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0825 11:58:46.931257 10620 ops.go:34] apiserver oom_adj: -16 I0825 11:58:46.944867 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig I0825 11:58:46.945374 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc minikube.k8s.io/name=Mulrinode-cluster minikube.k8s.io/updated_at=2022_08_25T11_58_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig I0825 11:58:47.013210 10620 kubeadm.go:1045] duration metric: took 88.9531ms to wait for elevateKubeSystemPrivileges. 
I0825 11:58:47.013210 10620 kubeadm.go:397] StartCluster complete in 20.4592872s I0825 11:58:47.015183 10620 settings.go:142] acquiring lock: {Name:mk62b4a44a747932007f69757b68e27077f6efeb Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:47.015183 10620 settings.go:150] Updating kubeconfig: C:\Users\vikrant.kube\config I0825 11:58:47.019682 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.kube\config: {Name:mkd6646849a412337efd215e1895f2a58265870a Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0825 11:58:47.576445 10620 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "Mulrinode-cluster" rescaled to 1 I0825 11:58:47.576445 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml" I0825 11:58:47.576445 10620 start.go:211] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} I0825 11:58:47.577554 10620 out.go:177] 🔎 Verifying Kubernetes components... I0825 11:58:47.577007 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 11:58:47.578106 10620 addons.go:412] enableAddons start: toEnable=map[], additional=[] I0825 11:58:47.578743 10620 addons.go:65] Setting storage-provisioner=true in profile "Mulrinode-cluster" I0825 11:58:47.578743 10620 addons.go:65] Setting default-storageclass=true in profile "Mulrinode-cluster" I0825 11:58:47.578743 10620 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "Mulrinode-cluster" I0825 11:58:47.578743 10620 addons.go:153] Setting addon storage-provisioner=true in "Mulrinode-cluster" W0825 11:58:47.578743 10620 addons.go:162] addon storage-provisioner should already be in state true I0825 11:58:47.579222 10620 host.go:66] Checking if "Mulrinode-cluster" exists ... 
I0825 11:58:47.614779 10620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet I0825 11:58:47.616345 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:47.616841 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:47.659410 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf./i \ hosts {\n 192.168.65.2 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.24.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -" I0825 11:58:47.670886 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:47.870146 10620 start.go:809] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS I0825 11:58:48.098711 10620 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0825 11:58:48.099723 10620 addons.go:345] installing /etc/kubernetes/addons/storage-provisioner.yaml I0825 11:58:48.099723 10620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0825 11:58:48.107714 10620 addons.go:153] Setting addon default-storageclass=true in "Mulrinode-cluster" W0825 11:58:48.107714 10620 addons.go:162] addon default-storageclass should already be in state true I0825 11:58:48.107714 10620 host.go:66] Checking if "Mulrinode-cluster" exists ... 
I0825 11:58:48.111621 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:48.162875 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster --format={{.State.Status}} I0825 11:58:48.174919 10620 api_server.go:51] waiting for apiserver process to appear ... I0825 11:58:48.199703 10620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.minikube.* I0825 11:58:48.213528 10620 api_server.go:71] duration metric: took 636.7602ms to wait for apiserver process to appear ... I0825 11:58:48.213528 10620 api_server.go:87] waiting for apiserver healthz status ... I0825 11:58:48.213704 10620 api_server.go:240] Checking apiserver healthz at https://127.0.0.1:64374/healthz ... I0825 11:58:48.223203 10620 api_server.go:266] https://127.0.0.1:64374/healthz returned 200: ok I0825 11:58:48.225722 10620 api_server.go:140] control plane version: v1.24.3 I0825 11:58:48.225722 10620 api_server.go:130] duration metric: took 12.1947ms to wait for apiserver health ... I0825 11:58:48.225722 10620 system_pods.go:43] waiting for kube-system pods to appear ... 
I0825 11:58:48.237704 10620 system_pods.go:59] 4 kube-system pods found I0825 11:58:48.237704 10620 system_pods.go:61] "etcd-mulrinode-cluster" [7a79fa30-8050-462e-aa18-6573de9e94f2] Pending I0825 11:58:48.237704 10620 system_pods.go:61] "kube-apiserver-mulrinode-cluster" [3ef2dcfb-95ff-4126-9b6e-314b4c70684a] Pending I0825 11:58:48.237704 10620 system_pods.go:61] "kube-controller-manager-mulrinode-cluster" [56a7ee47-df5f-4ea6-ba7f-ca234482d984] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0825 11:58:48.237704 10620 system_pods.go:61] "kube-scheduler-mulrinode-cluster" [d0f98afa-ccf2-44a7-b302-b2cbce787158] Pending I0825 11:58:48.237704 10620 system_pods.go:74] duration metric: took 11.9813ms to wait for pod list to return data ... I0825 11:58:48.237704 10620 kubeadm.go:572] duration metric: took 661.2588ms to wait for : map[apiserver:true system_pods:true] ... I0825 11:58:48.237704 10620 node_conditions.go:102] verifying NodePressure condition ... I0825 11:58:48.242203 10620 node_conditions.go:122] node storage ephemeral capacity is 263174212Ki I0825 11:58:48.242703 10620 node_conditions.go:123] node cpu capacity is 8 I0825 11:58:48.242703 10620 node_conditions.go:105] duration metric: took 4.9996ms to run NodePressure ... I0825 11:58:48.242703 10620 start.go:216] waiting for startup goroutines ... 
I0825 11:58:48.605621 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:48.741443 10620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0825 11:58:48.849052 10620 addons.go:345] installing /etc/kubernetes/addons/storageclass.yaml I0825 11:58:48.849052 10620 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0825 11:58:48.856403 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 11:58:49.220552 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 11:58:49.336384 10620 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0825 11:58:49.447307 10620 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass I0825 11:58:49.447832 10620 addons.go:414] enableAddons completed in 1.8708255s I0825 11:58:49.449454 10620 out.go:177] I0825 11:58:49.451076 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 11:58:49.451149 10620 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 11:58:49.451663 10620 config.go:180] Loaded profile config "multinode-demo": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 11:58:49.451663 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ... 
I0825 11:58:49.455707 10620 out.go:177] 👍 Starting worker node Mulrinode-cluster-m02 in cluster Mulrinode-cluster I0825 11:58:49.456707 10620 cache.go:120] Beginning downloading kic base image for docker with docker I0825 11:58:49.457210 10620 out.go:177] 🚜 Pulling base image ... I0825 11:58:49.458214 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker I0825 11:58:49.458214 10620 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon I0825 11:58:49.458214 10620 cache.go:57] Caching tarball of preloaded images I0825 11:58:49.458709 10620 preload.go:174] Found C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0825 11:58:49.458709 10620 cache.go:60] Finished verifying existence of preloaded tar for v1.24.3 on docker I0825 11:58:49.459285 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ... 
I0825 11:58:49.820475 10620 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 in local docker daemon, skipping pull I0825 11:58:49.820482 10620 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 exists in daemon, skipping load I0825 11:58:49.820482 10620 cache.go:208] Successfully downloaded all kic artifacts I0825 11:58:49.820482 10620 start.go:371] acquiring machines lock for Mulrinode-cluster-m02: {Name:mk9f921d3486457ef6373ef804861fe5a62ea65e Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0825 11:58:49.820482 10620 start.go:375] acquired machines lock for "Mulrinode-cluster-m02" in 0s I0825 11:58:49.820997 10620 start.go:92] Provisioning new machine with config: &{Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] 
ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\vikrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0825 11:58:49.820997 10620 start.go:132] createHost starting for "m02" (driver="docker") I0825 11:58:49.821481 10620 out.go:204] 🔥 Creating docker container (CPUs=2, Memory=2200MB) ... I0825 11:58:49.821981 10620 start.go:166] libmachine.API.Create for "Mulrinode-cluster" (driver="docker") I0825 11:58:49.821981 10620 client.go:168] LocalClient.Create starting I0825 11:58:49.822481 10620 main.go:134] libmachine: Reading certificate data from C:\Users\vikrant.minikube\certs\ca.pem I0825 11:58:49.823000 10620 main.go:134] libmachine: Decoding PEM data... I0825 11:58:49.823000 10620 main.go:134] libmachine: Parsing certificate... I0825 11:58:49.823000 10620 main.go:134] libmachine: Reading certificate data from C:\Users\vikrant.minikube\certs\cert.pem I0825 11:58:49.823000 10620 main.go:134] libmachine: Decoding PEM data... I0825 11:58:49.823000 10620 main.go:134] libmachine: Parsing certificate... 
I0825 11:58:49.833027 10620 cli_runner.go:164] Run: docker network inspect Mulrinode-cluster --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" I0825 11:58:50.191560 10620 network_create.go:76] Found existing network {name:Mulrinode-cluster subnet:0xc0014306f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500} I0825 11:58:50.191560 10620 kic.go:106] calculated static IP "192.168.67.3" for the "Mulrinode-cluster-m02" container I0825 11:58:50.203488 10620 cli_runner.go:164] Run: docker ps -a --format {{.Names}} I0825 11:58:50.566823 10620 cli_runner.go:164] Run: docker volume create Mulrinode-cluster-m02 --label name.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --label created_by.minikube.sigs.k8s.io=true I0825 11:58:50.975066 10620 oci.go:103] Successfully created a docker volume Mulrinode-cluster-m02 I0825 11:58:50.982067 10620 cli_runner.go:164] Run: docker run --rm --name Mulrinode-cluster-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --entrypoint /usr/bin/test -v Mulrinode-cluster-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib I0825 11:58:53.362677 10620 cli_runner.go:217] Completed: docker run --rm --name Mulrinode-cluster-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --entrypoint /usr/bin/test -v Mulrinode-cluster-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib: (2.3805393s) I0825 11:58:53.362677 10620 oci.go:107] Successfully 
prepared a docker volume Mulrinode-cluster-m02 I0825 11:58:53.362677 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker I0825 11:58:53.362677 10620 kic.go:179] Starting extracting preloaded images to volume ... I0825 11:58:53.370108 10620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir I0825 11:59:53.024105 10620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (59.6539168s) I0825 11:59:53.024105 10620 kic.go:188] duration metric: took 59.661427 seconds to extract preloaded images to volume I0825 11:59:53.032566 10620 cli_runner.go:164] Run: docker system info --format "{{json .}}" I0825 11:59:53.748569 10620 info.go:265] docker info: {ID:AZAS:SWGD:272W:ALSR:3HA2:L7OM:WE3T:LNS4:IMAE:7WBA:5TRM:4KPB Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true 
BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:57 SystemTime:2022-08-25 06:29:53.461613302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.10.16.3-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:13218865152 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.12 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.7.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. 
Version:v2.2.3] map[Name:scan Path:C:\Program Files\Docker\cli-plugins\docker-scan.exe SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}} I0825 11:59:53.765522 10620 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'" I0825 11:59:54.434965 10620 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname Mulrinode-cluster-m02 --name Mulrinode-cluster-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --network Mulrinode-cluster --ip 192.168.67.3 --volume Mulrinode-cluster-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 I0825 12:00:02.617153 10620 cli_runner.go:217] Completed: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname Mulrinode-cluster-m02 --name Mulrinode-cluster-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=Mulrinode-cluster-m02 --network Mulrinode-cluster --ip 192.168.67.3 --volume Mulrinode-cluster-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 
gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8: (8.1821877s) I0825 12:00:02.632027 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster-m02 --format={{.State.Running}} I0825 12:00:03.166759 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster-m02 --format={{.State.Status}} I0825 12:00:03.649009 10620 cli_runner.go:164] Run: docker exec Mulrinode-cluster-m02 stat /var/lib/dpkg/alternatives/iptables I0825 12:00:04.502311 10620 oci.go:144] the created container "Mulrinode-cluster-m02" has a running status. I0825 12:00:04.502762 10620 kic.go:210] Creating ssh key for kic: C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa... I0825 12:00:04.667622 10620 kic_runner.go:191] docker (temp): C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I0825 12:00:05.237255 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster-m02 --format={{.State.Status}} I0825 12:00:05.655268 10620 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I0825 12:00:05.655268 10620 kic_runner.go:114] Args: [docker exec --privileged Mulrinode-cluster-m02 chown docker:docker /home/docker/.ssh/authorized_keys] I0825 12:00:06.345628 10620 kic.go:250] ensuring only current user has permissions to key file located at : C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa... W0825 12:00:06.366773 10620 kic.go:256] unable to determine current user's SID. minikube tunnel may not work. I0825 12:00:06.383297 10620 cli_runner.go:164] Run: docker container inspect Mulrinode-cluster-m02 --format={{.State.Status}} I0825 12:00:06.770227 10620 machine.go:88] provisioning docker machine ... 
I0825 12:00:06.772776 10620 ubuntu.go:169] provisioning hostname "Mulrinode-cluster-m02" I0825 12:00:06.781749 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:07.157687 10620 main.go:134] libmachine: Using SSH client type: native I0825 12:00:07.169496 10620 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1323da0] 0x1326c00 <nil> [] 0s} 127.0.0.1 64455 <nil> <nil>} I0825 12:00:07.169830 10620 main.go:134] libmachine: About to run SSH command: sudo hostname Mulrinode-cluster-m02 && echo "Mulrinode-cluster-m02" | sudo tee /etc/hostname I0825 12:00:07.499285 10620 main.go:134] libmachine: SSH cmd err, output: <nil>: Mulrinode-cluster-m02

I0825 12:00:07.510740 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:07.891323 10620 main.go:134] libmachine: Using SSH client type: native I0825 12:00:07.891867 10620 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1323da0] 0x1326c00 <nil> [] 0s} 127.0.0.1 64455 <nil> <nil>} I0825 12:00:07.891867 10620 main.go:134] libmachine: About to run SSH command:

    if ! grep -xq '.*\sMulrinode-cluster-m02' /etc/hosts; then
        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
            sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 Mulrinode-cluster-m02/g' /etc/hosts;
        else 
            echo '127.0.1.1 Mulrinode-cluster-m02' | sudo tee -a /etc/hosts; 
        fi
    fi
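
The `/etc/hosts` edit that minikube runs above can be exercised outside the node container; the sketch below applies the same grep/sed branching to a temporary copy (GNU sed's `-i` is assumed, and the file contents are illustrative, not from this cluster):

```shell
# Sketch of the hostname-injection logic from the log above, run against a
# temp copy of /etc/hosts so it needs no sudo.
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 placeholder\n' > "$hosts"
name="Mulrinode-cluster-m02"

if ! grep -q "[[:space:]]${name}\$" "$hosts"; then
  if grep -q '^127.0.1.1[[:space:]]' "$hosts"; then
    # An existing 127.0.1.1 entry is rewritten in place, as minikube's script does.
    sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
  else
    # Otherwise a fresh entry is appended.
    echo "127.0.1.1 ${name}" >> "$hosts"
  fi
fi
cat "$hosts"
```

If a `127.0.1.1` line already exists it is rewritten rather than duplicated, which matches the two branches of the script above.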

I0825 12:00:08.021989 10620 main.go:134] libmachine: SSH cmd err, output: : I0825 12:00:08.022491 10620 ubuntu.go:175] set auth options {CertDir:C:\Users\vikrant.minikube CaCertPath:C:\Users\vikrant.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\vikrant.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\vikrant.minikube\machines\server.pem ServerKeyPath:C:\Users\vikrant.minikube\machines\server-key.pem ClientKeyPath:C:\Users\vikrant.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\vikrant.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\vikrant.minikube} I0825 12:00:08.022991 10620 ubuntu.go:177] setting up certificates I0825 12:00:08.022991 10620 provision.go:83] configureAuth start I0825 12:00:08.031013 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster-m02 I0825 12:00:08.379882 10620 provision.go:138] copyHostCerts I0825 12:00:08.380807 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/ca.pem, removing ... I0825 12:00:08.380807 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\ca.pem I0825 12:00:08.382879 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\ca.pem --> C:\Users\vikrant.minikube/ca.pem (1082 bytes) I0825 12:00:08.384491 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/cert.pem, removing ... I0825 12:00:08.384491 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\cert.pem I0825 12:00:08.386388 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\cert.pem --> C:\Users\vikrant.minikube/cert.pem (1123 bytes) I0825 12:00:08.387977 10620 exec_runner.go:144] found C:\Users\vikrant.minikube/key.pem, removing ... 
I0825 12:00:08.387977 10620 exec_runner.go:207] rm: C:\Users\vikrant.minikube\key.pem I0825 12:00:08.389069 10620 exec_runner.go:151] cp: C:\Users\vikrant.minikube\certs\key.pem --> C:\Users\vikrant.minikube/key.pem (1679 bytes) I0825 12:00:08.389582 10620 provision.go:112] generating server cert: C:\Users\vikrant.minikube\machines\server.pem ca-key=C:\Users\vikrant.minikube\certs\ca.pem private-key=C:\Users\vikrant.minikube\certs\ca-key.pem org=Vikrant.Mulrinode-cluster-m02 san=[192.168.67.3 127.0.0.1 localhost 127.0.0.1 minikube Mulrinode-cluster-m02] I0825 12:00:08.551728 10620 provision.go:172] copyRemoteCerts I0825 12:00:08.571481 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0825 12:00:08.577656 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:08.921853 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker} I0825 12:00:09.026367 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes) I0825 12:00:09.047853 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes) I0825 12:00:09.067599 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I0825 12:00:09.086293 10620 provision.go:86] duration metric: configureAuth took 1.0633021s I0825 12:00:09.086601 10620 ubuntu.go:193] setting minikube options for container-runtime I0825 12:00:09.089100 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 12:00:09.098115 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 
12:00:09.446314 10620 main.go:134] libmachine: Using SSH client type: native I0825 12:00:09.446314 10620 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x1323da0] 0x1326c00 <nil> [] 0s} 127.0.0.1 64455 <nil> <nil>} I0825 12:00:09.446813 10620 main.go:134] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0825 12:00:09.521734 10620 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay

I0825 12:00:09.521734 10620 ubuntu.go:71] root file system type: overlay I0825 12:00:09.525570 10620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ... I0825 12:00:09.532584 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:09.890137 10620 main.go:134] libmachine: Using SSH client type: native I0825 12:00:09.890637 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64455 } I0825 12:00:09.890637 10620 main.go:134] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

Environment="NO_PROXY=192.168.67.2"

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0825 12:00:09.981295 10620 main.go:134] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

Environment=NO_PROXY=192.168.67.2

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
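The unit file echoed above relies on a standard systemd override pattern, described in its own comments: an empty `ExecStart=` first clears the command inherited from the base configuration, and the second `ExecStart=` sets the replacement, since a non-oneshot service may carry only one start command. A minimal drop-in illustrating just that pattern (the path and dockerd flags here are illustrative, not the ones minikube writes):

```ini
# /etc/systemd/system/docker.service.d/override.conf -- hypothetical drop-in
[Service]
# Clear the ExecStart inherited from the base unit; without this line, systemd
# rejects the unit with "Service has more than one ExecStart= setting".
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```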

I0825 12:00:09.990555 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:10.336740 10620 main.go:134] libmachine: Using SSH client type: native I0825 12:00:10.336740 10620 main.go:134] libmachine: &{{{ 0 [] [] []} docker [0x1323da0] 0x1326c00 [] 0s} 127.0.0.1 64455 } I0825 12:00:10.336740 10620 main.go:134] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0825 12:00:13.361708 10620 main.go:134] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2022-06-06 23:01:03.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-08-25 06:30:10.038787772 +0000
@@ -1,30 +1,33 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure

-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=NO_PROXY=192.168.67.2
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +35,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500

 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker

I0825 12:00:13.361708 10620 machine.go:91] provisioned docker machine in 6.5914806s I0825 12:00:13.361708 10620 client.go:171] LocalClient.Create took 1m23.5397265s I0825 12:00:13.362256 10620 start.go:174] duration metric: libmachine.API.Create for "Mulrinode-cluster" took 1m23.5402751s I0825 12:00:13.362256 10620 start.go:307] post-start starting for "Mulrinode-cluster-m02" (driver="docker") I0825 12:00:13.362256 10620 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0825 12:00:13.386287 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0825 12:00:13.392786 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:13.772510 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker} I0825 12:00:13.873966 10620 ssh_runner.go:195] Run: cat /etc/os-release I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I0825 12:00:13.879697 10620 info.go:137] Remote host: Ubuntu 20.04.4 LTS I0825 12:00:13.879697 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\addons for local assets ... 
I0825 12:00:13.880769 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\files for local assets ... I0825 12:00:13.881269 10620 start.go:310] post-start completed in 519.0126ms I0825 12:00:13.895122 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster-m02 I0825 12:00:14.264863 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ... I0825 12:00:14.271738 10620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I0825 12:00:14.278118 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:14.650986 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker} I0825 12:00:15.274868 10620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0029391s) I0825 12:00:15.276678 10620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'" I0825 12:00:15.281770 10620 start.go:135] duration metric: createHost completed in 1m25.4607726s I0825 12:00:15.282178 10620 start.go:82] releasing machines lock for "Mulrinode-cluster-m02", held for 1m25.4616961s I0825 12:00:15.292283 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster-m02 I0825 12:00:15.721450 10620 out.go:177] 🌐 Found network options: I0825 12:00:15.727451 10620 out.go:177] ▪ NO_PROXY=192.168.67.2 W0825 12:00:15.728950 10620 proxy.go:118] fail to check proxy env: Error ip not in block I0825 12:00:15.729950 10620 out.go:177] ▪ no_proxy=192.168.67.2 W0825 12:00:15.730449 10620 proxy.go:118] fail to check proxy env: Error ip not in block W0825 12:00:15.730449 10620 proxy.go:118] fail to check proxy env: Error ip not 
in block I0825 12:00:15.732450 10620 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/ I0825 12:00:15.748687 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:15.767076 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0825 12:00:15.781269 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 I0825 12:00:16.211359 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker} I0825 12:00:16.372117 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes) I0825 12:00:16.399486 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 12:00:16.499034 10620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker I0825 12:00:16.598968 10620 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0825 12:00:16.613961 10620 cruntime.go:273] skipping containerd shutdown because we are bound to it I0825 12:00:16.637940 10620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0825 12:00:16.668751 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0825 12:00:16.707777 10620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0825 12:00:16.817966 10620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0825 12:00:16.932603 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 12:00:17.047347 10620 ssh_runner.go:195] Run: sudo systemctl restart docker I0825 12:00:21.308074 10620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.2607266s) I0825 12:00:21.322925 
10620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0825 12:00:21.447356 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0825 12:00:21.562099 10620 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket I0825 12:00:21.574724 10620 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock I0825 12:00:21.577333 10620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0825 12:00:21.581834 10620 start.go:471] Will wait 60s for crictl version I0825 12:00:21.595332 10620 ssh_runner.go:195] Run: sudo crictl version I0825 12:00:21.626994 10620 start.go:480] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.17 RuntimeApiVersion: 1.41.0 I0825 12:00:21.633493 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0825 12:00:21.687963 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0825 12:00:21.724459 10620 out.go:204] 🐳 Preparing Kubernetes v1.24.3 on Docker 20.10.17 ... I0825 12:00:21.725459 10620 out.go:177] ▪ env NO_PROXY=192.168.67.2 I0825 12:00:21.735594 10620 cli_runner.go:164] Run: docker exec -t Mulrinode-cluster-m02 dig +short host.docker.internal W0825 12:00:21.901811 10620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 returned with exit code 1 I0825 12:00:21.901811 10620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02: (6.1531232s) W0825 12:00:21.904161 10620 start.go:734] [curl -sS -m 2 https://k8s.gcr.io/] failed: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "Mulrinode-cluster-m02": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02: exit status 1 stdout:

stderr: Error response from daemon: i/o timeout W0825 12:00:21.904677 10620 out.go:239] ❗ This container is having trouble accessing https://k8s.gcr.io W0825 12:00:21.905162 10620 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0825 12:00:22.288025 10620 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2 I0825 12:00:22.291953 10620 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts I0825 12:00:22.296940 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0825 12:00:22.308439 10620 certs.go:54] Setting up C:\Users\vikrant.minikube\profiles\Mulrinode-cluster for IP: 192.168.67.3 I0825 12:00:22.309454 10620 certs.go:182] skipping minikubeCA CA generation: C:\Users\vikrant.minikube\ca.key I0825 12:00:22.309939 10620 certs.go:182] skipping proxyClientCA CA generation: C:\Users\vikrant.minikube\proxy-client-ca.key I0825 12:00:22.310953 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca-key.pem (1675 bytes) I0825 12:00:22.311439 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca.pem (1082 bytes) I0825 12:00:22.311439 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\cert.pem (1123 bytes) I0825 12:00:22.311938 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\key.pem (1679 bytes) I0825 12:00:22.323504 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0825 12:00:22.344253 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0825 12:00:22.362754 10620 ssh_runner.go:362] scp 
C:\Users\vikrant.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0825 12:00:22.381266 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I0825 12:00:22.401267 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0825 12:00:22.421204 10620 ssh_runner.go:195] Run: openssl version I0825 12:00:22.442185 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0825 12:00:22.454685 10620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0825 12:00:22.459686 10620 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug 21 02:52 /usr/share/ca-certificates/minikubeCA.pem I0825 12:00:22.460687 10620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0825 12:00:22.480203 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0825 12:00:22.497202 10620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}} I0825 12:00:22.592461 10620 cni.go:95] Creating CNI manager for "" I0825 12:00:22.592461 10620 cni.go:156] 2 nodes found, recommending kindnet I0825 12:00:22.592989 10620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16 I0825 12:00:22.592989 10620 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.3 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:Mulrinode-cluster NodeName:Mulrinode-cluster-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer 
ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0825 12:00:22.593524 10620 kubeadm.go:162] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta3 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.67.3 bindPort: 8443 bootstrapTokens:

I0825 12:00:22.594037 10620 kubeadm.go:961] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=Mulrinode-cluster-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3 --runtime-request-timeout=15m

[Install]
config: {KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0825 12:00:22.607826 10620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3 I0825 12:00:22.618275 10620 binaries.go:44] Found k8s binaries, skipping transfer I0825 12:00:22.633758 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system I0825 12:00:22.643260 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes) I0825 12:00:22.657020 10620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes) I0825 12:00:22.672514 10620 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts I0825 12:00:22.676709 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0825 12:00:22.687813 10620 host.go:66] Checking if "Mulrinode-cluster" exists ... 
I0825 12:00:22.687813 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3 I0825 12:00:22.690257 10620 start.go:285] JoinCluster: &{Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 
CertExpiration:26280h0m0s Mount:false MountString:C:\Users\vikrant:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:} I0825 12:00:22.690257 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm token create --print-join-command --ttl=0" I0825 12:00:22.696760 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster I0825 12:00:23.025085 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster\id_rsa Username:docker} I0825 12:00:23.284421 10620 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0825 12:00:23.284421 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02" I0825 12:02:29.047638 10620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": (2m5.7631556s) E0825 12:02:29.047638 10620 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env 
PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1 stdout: [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed.

stderr: W0825 06:30:23.565952 1099 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found To see the stack trace of this error execute with --v=5 or higher I0825 12:02:29.052078 10620 start.go:311] resetting worker node "m02" before attempting to rejoin cluster... I0825 12:02:29.052078 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force" I0825 12:02:29.090708 10620 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1 stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0825 12:02:29.090708   10620 retry.go:31] will retry after 14.405090881s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

stderr:
W0825 06:30:23.565952    1099 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found
To see the stack trace of this error execute with --v=5 or higher
I0825 12:02:43.503749   10620 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}
I0825 12:02:43.504698   10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02"
I0825 12:04:44.018073   10620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": (2m0.5132732s)
E0825 12:04:44.018073   10620 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

stderr:
W0825 06:32:43.607165    2484 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found
To see the stack trace of this error execute with --v=5 or higher
I0825 12:04:44.021579   10620 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
I0825 12:04:44.022058   10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
I0825 12:04:44.063229   10620 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
stdout:

stderr:
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher
I0825 12:04:44.063229   10620 start.go:287] JoinCluster complete in 4m21.3729719s
I0825 12:04:44.089256   10620 out.go:177]
W0825 12:04:44.098948   10620 out.go:239] ❌  Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1
stdout:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.

stderr:
W0825 06:32:43.607165    2484 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Port-10250]: Port 10250 is in use
	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found
To see the stack trace of this error execute with --v=5 or higher

W0825 12:04:44.102461   10620 out.go:239]
W0825 12:04:44.117965   10620 out.go:239]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0825 12:04:44.121015   10620 out.go:177]
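Two distinct failures are visible in the log: kubeadm finds two CRI sockets on the node (containerd and cri-dockerd) and refuses to proceed until one is pinned, and the worker's kubelet never registers its Node object, producing `error uploading crisocket: nodes "Mulrinode-cluster-m02" not found`. One likely contributor worth verifying is the capitalized profile name: Kubernetes object names must be lowercase DNS-1123 labels, so a node named `Mulrinode-cluster-m02` cannot be created as-is. As a hedged sketch (not verified against this exact environment), the CRI ambiguity can be resolved the way the error message itself suggests, by setting `nodeRegistration.criSocket` in a kubeadm `JoinConfiguration` (the `v1beta3` API matches the kubeadm v1.24.3 used here; the discovery fields are placeholders, not values from this issue):

```yaml
# Sketch only: pin the CRI socket so kubeadm stops complaining about
# "Found multiple CRI endpoints on the host".
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
nodeRegistration:
  # Pick exactly one of the two sockets reported in the error:
  criSocket: unix:///var/run/cri-dockerd.sock
  # criSocket: unix:///var/run/containerd/containerd.sock
  name: mulrinode-cluster-m02   # lowercase: node names must be DNS-1123 labels
discovery:
  bootstrapToken:
    apiServerEndpoint: control-plane.minikube.internal:8443
    token: "<token>"                     # placeholder
    caCertHashes: ["sha256:<hash>"]      # placeholder
```

When driving the cluster through minikube rather than raw kubeadm, the equivalent is to select a single runtime and a lowercase profile up front, e.g. `minikube start --nodes 3 --container-runtime=containerd -p multinode-cluster` (flags that exist in current minikube releases, though this specific combination was not retested here).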

klaases commented 1 year ago

Hi @Vikrant1020 – is this issue still occurring? Were you able to find a solution?

For additional assistance, please consider reaching out to the minikube community:

https://minikube.sigs.k8s.io/community/

We also offer support through Slack, Groups, and office hours.

/triage needs-information
/kind support

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Close this issue or PR with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 1 year ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with [Issue Triage](https://www.kubernetes.dev/docs/guide/issue-triage/)

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 1 year ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/minikube/issues/14856#issuecomment-1464794775):

>The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
>This bot triages issues according to the following rules:
>- After 90d of inactivity, `lifecycle/stale` is applied
>- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
>- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
>You can:
>- Reopen this issue with `/reopen`
>- Mark this issue as fresh with `/remove-lifecycle rotten`
>- Offer to help out with [Issue Triage][1]
>
>Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
>/close not-planned
>
>[1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.