Closed: Vikrant1020 closed this issue 1 year ago
==> Audit <==
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|------|---------|------|---------|------------|----------|
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 08:09 IST | |
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 09:56 IST | |
| start | --driver docker | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 09:57 IST | |
| start | --driver docker | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 09:58 IST | 21 Aug 22 10:04 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 10:01 IST | |
| update-check | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 10:47 IST | 21 Aug 22 10:47 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 10:50 IST | |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 10:59 IST | 21 Aug 22 10:59 IST |
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 12:26 IST | 21 Aug 22 12:28 IST |
| dashboard | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 13:29 IST | |
| dashboard | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 21 Aug 22 13:36 IST | |
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 22 Aug 22 09:45 IST | |
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 22 Aug 22 09:47 IST | 22 Aug 22 09:53 IST |
| dashboard | --url | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 22 Aug 22 09:54 IST | |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 22 Aug 22 13:12 IST | 22 Aug 22 13:12 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 22 Aug 22 13:51 IST | |
| update-check | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 09:37 IST | 23 Aug 22 09:37 IST |
| start | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 09:41 IST | 23 Aug 22 09:43 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 09:41 IST | |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 09:52 IST | 23 Aug 22 09:52 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 11:01 IST | 23 Aug 22 11:01 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 11:35 IST | 23 Aug 22 11:35 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 12:01 IST | 23 Aug 22 12:01 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 23 Aug 22 12:23 IST | 23 Aug 22 12:23 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 24 Aug 22 12:01 IST | |
| update-check | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 07:40 IST | 25 Aug 22 07:40 IST |
| start | --download-only --output json | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 07:44 IST | |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 08:19 IST | 25 Aug 22 08:19 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:02 IST | 25 Aug 22 09:02 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:03 IST | 25 Aug 22 09:03 IST |
| ip | | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:06 IST | 25 Aug 22 09:06 IST |
| service | web --url | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:28 IST | |
| service | django --url | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:35 IST | 25 Aug 22 09:38 IST |
| service | django --url | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:38 IST | 25 Aug 22 09:40 IST |
| service | django | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:40 IST | 25 Aug 22 09:51 IST |
| service | django nginx-service | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 09:51 IST | 25 Aug 22 09:54 IST |
| service | django nginx-service kubernetes | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 10:12 IST | 25 Aug 22 10:28 IST |
| service | django nginx-service kubernetes | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 10:31 IST | 25 Aug 22 10:42 IST |
| service | react-service | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 11:11 IST | 25 Aug 22 11:11 IST |
| addons | list | minikube | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 11:25 IST | 25 Aug 22 11:25 IST |
| start | --nodes 3 -p multinode-demo | multinode-demo | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 11:43 IST | |
| start | --nodes 3 -p Mulrinode-cluster | Mulrinode-cluster | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 11:53 IST | |
| start | --nodes 3 -p Mulrinode-cluster | Mulrinode-cluster | SEASIAINFOTECH\Vikrant | v1.26.1 | 25 Aug 22 11:53 IST | |
==> Last Start <==
Log file created at: 2022/08/25 11:53:47
Running on machine: VIKRANT-HYLA-LT
Binary: Built with gc go1.18.3 for windows/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
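The log line format stated above can be unpacked mechanically when scanning a dump like this one. A minimal sketch of such a parser, assuming the documented klog layout `[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg` (the regex and field names below are my own, not part of minikube):

```python
import re

# Matches klog-style lines: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
KLOG_RE = re.compile(
    r"^(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) "
    r"(?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) +(?P<threadid>\d+) "
    r"(?P<file>[^:]+):(?P<line>\d+)\] (?P<msg>.*)$"
)

def parse_klog(line):
    """Return a dict of klog fields, or None if the line is not a klog entry."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

entry = parse_klog(
    "W0825 11:53:48.003972   10620 root.go:310] Error reading config file"
)
# entry["level"] is "W" (warning), entry["file"] is "root.go"
```

Filtering a dump to only `W`/`E` entries with this kind of parser is often the quickest way to find the failure buried in a long start log.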
I0825 11:53:47.988427 10620 out.go:296] Setting OutFile to fd 444 ...
I0825 11:53:47.989428 10620 out.go:348] isatty.IsTerminal(444) = true
I0825 11:53:47.989428 10620 out.go:309] Setting ErrFile to fd 444...
I0825 11:53:47.989428 10620 out.go:348] isatty.IsTerminal(444) = true
W0825 11:53:48.003972 10620 root.go:310] Error reading config file at C:\Users\vikrant\.minikube\config\config.json: open C:\Users\vikrant\.minikube\config\config.json: The system cannot find the file specified.
I0825 11:53:48.010452 10620 out.go:303] Setting JSON to false
I0825 11:53:48.020743 10620 start.go:115] hostinfo: {"hostname":"VIKRANT-HYLA-LT","uptime":701907,"bootTime":1660706721,"procs":300,"os":"windows","platform":"Microsoft Windows 10 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.19044 Build 19044","kernelVersion":"10.0.19044 Build 19044","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"c5cf8b5b-8450-45bd-8d71-eff4268c0719"}
W0825 11:53:48.020743 10620 start.go:123] gopshost.Virtualization returned error: not implemented yet
I0825 11:53:48.021743 10620 out.go:177] 😄 [Mulrinode-cluster] minikube v1.26.1 on Microsoft Windows 10 Pro 10.0.19044 Build 19044
I0825 11:53:48.022744 10620 notify.go:193] Checking for updates...
I0825 11:53:48.024744 10620 config.go:180] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0825 11:53:48.024744 10620 config.go:180] Loaded profile config "multinode-demo": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0825 11:53:48.024744 10620 driver.go:365] Setting default libvirt URI to qemu:///system
I0825 11:53:48.024744 10620 global.go:111] Querying for installed drivers using PATH=C:\Users\vikrant\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\local\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\usr\bin;C:\Program Files\Git\mingw64\bin;C:\Program Files\Git\usr\bin;C:\Users\vikrant\bin;C:\Users\vikrant\AppData\Local\cloud-code\installer\google-cloud-sdk\bin;C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\System32\OpenSSH;C:\Program Files\Java\jre1.8.0_271\bin;C:\Program Files\Docker\Docker\resources\bin;C:\ProgramData\DockerDesktop\version-bin;C:\Program Files\Git\cmd;C:\Program Files\Amazon\AWSCLI\bin;C:\Program Files\nodejs;C:\Program Files\php-8.1.4-nts-Win32-vs16-x64;C:\composer;D:\terraform;C:\Program Files\Kubernetes\Minikube;C:\Users\vikrant\AppData\Local\Programs\Python\Python310\Scripts;C:\Users\vikrant\AppData\Local\Programs\Python\Python310;C:\Users\vikrant\AppData\Local\Microsoft\WindowsApps;C:\Program Files\JetBrains\PyCharm 2021.3.2\bin;C:\Users\vikrant\AppData\Local\Programs\Microsoft VS Code\bin;C:\Program Files;C:\Users\vikrant\AppData\Roaming\npm;C:\Program Files\php-8.1.4\php-8.1.4;C:\Users\vikrant\AppData\Roaming\Composer\vendor\bin;C:\Users\vikrant\AppData\Local\GitHubDesktop\bin;C:\Program Files\Git\usr\bin\vendor_perl;C:\Program Files\Git\usr\bin\core_perl
I0825 11:53:48.042296 10620 global.go:119] qemu2 default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-x86_64": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu2/ Version:}
I0825 11:53:48.042296 10620 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Wmiobject Win32_ComputerSystem).HypervisorPresent failed:
Reason: Fix:Start PowerShell as an Administrator Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperv/ Version:}
I0825 11:53:48.990461 10620 global.go:119] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in %!P(MISSING)ATH%!R(MISSING)eason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0825 11:53:48.990461 10620 driver.go:300] not recommending "ssh" due to default: false
I0825 11:53:48.990461 10620 driver.go:335] Picked: docker
I0825 11:53:48.990461 10620 driver.go:336] Alternatives: [ssh]
I0825 11:53:48.990461 10620 driver.go:337] Rejects: [qemu2 virtualbox vmware hyperv podman]
I0825 11:53:48.991993 10620 out.go:177] ✨ Automatically selected the docker driver
I0825 11:53:48.992726 10620 start.go:284] selected driver: docker
I0825 11:53:48.993258 10620 start.go:808] validating driver "docker" against
stderr: Error: No such network: Mulrinode-cluster
I0825 11:53:50.744874 10620 network_create.go:277] output of [docker network inspect Mulrinode-cluster]:
-- stdout --
[]
-- /stdout --
stderr
Error: No such network: Mulrinode-cluster
/stderr
I0825 11:53:50.751904 10620 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0825 11:53:51.177310 10620 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000b8c848] misses:0}
I0825 11:53:51.177310 10620 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0825 11:53:51.177545 10620 network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0825 11:53:51.186105 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster
W0825 11:53:51.556518 10620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster returned with exit code 1
W0825 11:53:51.556518 10620 network_create.go:107] failed to create docker network Mulrinode-cluster 192.168.49.0/24, will retry: subnet is taken
I0825 11:53:51.594157 10620 network.go:279] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:false}} dirty:map[] misses:0}
I0825 11:53:51.594157 10620 network.go:238] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0825 11:53:51.624375 10620 network.go:288] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718] misses:0}
I0825 11:53:51.624375 10620 network.go:235] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0825 11:53:51.624375 10620 network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0825 11:53:51.632298 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster
W0825 11:53:52.063584 10620 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster returned with exit code 1
W0825 11:53:52.063584 10620 network_create.go:107] failed to create docker network Mulrinode-cluster 192.168.58.0/24, will retry: subnet is taken
I0825 11:53:52.092450 10620 network.go:279] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718] misses:1}
I0825 11:53:52.092450 10620 network.go:238] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0825 11:53:52.122937 10620 network.go:288] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000b8c848] amended:true}} dirty:map[192.168.49.0:0xc000b8c848 192.168.58.0:0xc000006718 192.168.67.0:0xc0000067d0] misses:1}
I0825 11:53:52.122937 10620 network.go:235] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0825 11:53:52.122937 10620 network_create.go:115] attempt to create docker network Mulrinode-cluster 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I0825 11:53:52.130469 10620 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster
I0825 11:53:53.237541 10620 cli_runner.go:217] Completed: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=Mulrinode-cluster Mulrinode-cluster: (1.1070726s)
I0825 11:53:53.237541 10620 network_create.go:99] docker network Mulrinode-cluster 192.168.67.0/24 created
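The three attempts above show the subnet probing visible in this log: 192.168.49.0/24 and 192.168.58.0/24 were already taken (or still held by an unexpired reservation), so minikube advanced to 192.168.67.0/24. A sketch of that stepping behavior, assuming the start subnet and the step of 9 in the third octet seen in this log (`next_free_subnet` and the `taken` set are illustrative, not minikube's actual API):

```python
def next_free_subnet(taken, start=49, step=9):
    """Walk 192.168.<octet>.0/24 candidates, stepping the third octet,
    until one is not in the `taken` set (a stand-in for the real check
    against existing Docker networks and unexpired reservations)."""
    octet = start
    while octet <= 255:
        candidate = f"192.168.{octet}.0/24"
        if candidate not in taken:
            return candidate
        octet += step
    raise RuntimeError("no free 192.168.x.0/24 subnet found")

# In this log, .49 and .58 were taken, so .67 was chosen:
print(next_free_subnet({"192.168.49.0/24", "192.168.58.0/24"}))  # → 192.168.67.0/24
```

This is why leftover networks from old profiles (visible via `docker network ls`) push new clusters onto later subnets.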
I0825 11:53:53.237541 10620 kic.go:106] calculated static IP "192.168.67.2" for the "Mulrinode-cluster" container
I0825 11:53:53.249148 10620 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0825 11:53:53.630038 10620 cli_runner.go:164] Run: docker volume create Mulrinode-cluster --label name.minikube.sigs.k8s.io=Mulrinode-cluster --label created_by.minikube.sigs.k8s.io=true
I0825 11:53:53.959929 10620 oci.go:103] Successfully created a docker volume Mulrinode-cluster
I0825 11:53:53.968285 10620 cli_runner.go:164] Run: docker run --rm --name Mulrinode-cluster-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --entrypoint /usr/bin/test -v Mulrinode-cluster:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib
I0825 11:53:55.332465 10620 cli_runner.go:217] Completed: docker run --rm --name Mulrinode-cluster-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=Mulrinode-cluster --entrypoint /usr/bin/test -v Mulrinode-cluster:/var gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -d /var/lib: (1.3641796s)
I0825 11:53:55.332465 10620 oci.go:107] Successfully prepared a docker volume Mulrinode-cluster
I0825 11:53:55.332465 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0825 11:53:55.332465 10620 kic.go:179] Starting extracting preloaded images to volume ...
I0825 11:53:55.339949 10620 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir
I0825 11:56:05.473374 10620 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\vikrant\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.24.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v Mulrinode-cluster:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 -I lz4 -xf /preloaded.tar -C /extractDir: (2m10.1334249s)
I0825 11:56:05.473374 10620 kic.go:188] duration metric: took 130.140909 seconds to extract preloaded images to volume
I0825 11:56:05.484263 10620 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0825 11:56:06.215083 10620 info.go:265] docker info: {ID:AZAS:SWGD:272W:ALSR:3HA2:L7OM:WE3T:LNS4:IMAE:7WBA:5TRM:4KPB Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:
I0825 11:58:12.781716 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:13.115511 10620 main.go:134] libmachine: Using SSH client type: native
I0825 11:58:13.119916 10620 main.go:134] libmachine: &{{{
if ! grep -xq '.*\sMulrinode-cluster' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 Mulrinode-cluster/g' /etc/hosts;
else
echo '127.0.1.1 Mulrinode-cluster' | sudo tee -a /etc/hosts;
fi
fi
I0825 11:58:13.243816 10620 main.go:134] libmachine: SSH cmd err, output:
I0825 11:58:14.637266 10620 ubuntu.go:71] root file system type: overlay
I0825 11:58:14.637545 10620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0825 11:58:14.644502 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:14.977335 10620 main.go:134] libmachine: Using SSH client type: native
I0825 11:58:14.982261 10620 main.go:134] libmachine: &{{{
[Service]
Type=notify
Restart=on-failure

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0825 11:58:15.115094 10620 main.go:134] libmachine: SSH cmd err, output:
[Service]
Type=notify
Restart=on-failure

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
I0825 11:58:15.128108 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:15.454175 10620 main.go:134] libmachine: Using SSH client type: native
I0825 11:58:15.458669 10620 main.go:134] libmachine: &{{{
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration. The base
+# configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
@@ -32,16 +34,16 @@
LimitNPROC=infinity
LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
TasksMax=infinity
+TimeoutStartSec=0
Delegate=yes
KillMode=process
-OOMScoreAdjust=-500
[Install]
WantedBy=multi-user.target

Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0825 11:58:18.505140 10620 machine.go:91] provisioned docker machine in 6.2091321s
I0825 11:58:18.505140 10620 client.go:171] LocalClient.Create took 4m28.4485147s
I0825 11:58:18.505177 10620 start.go:174] duration metric: libmachine.API.Create for "Mulrinode-cluster" took 4m28.4485147s
I0825 11:58:18.505177 10620 start.go:307] post-start starting for "Mulrinode-cluster" (driver="docker")
I0825 11:58:18.505177 10620 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0825 11:58:18.522652 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0825 11:58:18.530494 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:18.850305 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant\.minikube\machines\Mulrinode-cluster\id_rsa Username:docker}
I0825 11:58:18.951408 10620 ssh_runner.go:195] Run: cat /etc/os-release
I0825 11:58:18.955147 10620 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0825 11:58:18.955147 10620 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0825 11:58:18.955654 10620 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0825 11:58:18.955672 10620 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0825 11:58:18.955676 10620 filesync.go:126] Scanning C:\Users\vikrant\.minikube\addons for local assets ...
I0825 11:58:18.955676 10620 filesync.go:126] Scanning C:\Users\vikrant\.minikube\files for local assets ...
I0825 11:58:18.956204 10620 start.go:310] post-start completed in 451.0273ms
I0825 11:58:18.965722 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster
I0825 11:58:19.296359 10620 profile.go:148] Saving config to C:\Users\vikrant\.minikube\profiles\Mulrinode-cluster\config.json ...
I0825 11:58:19.300599 10620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0825 11:58:19.308101 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:19.632846 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant\.minikube\machines\Mulrinode-cluster\id_rsa Username:docker}
I0825 11:58:19.678387 10620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0825 11:58:19.683297 10620 start.go:135] duration metric: createHost completed in 4m29.6284029s
I0825 11:58:19.683297 10620 start.go:82] releasing machines lock for "Mulrinode-cluster", held for 4m29.6286702s
I0825 11:58:19.690547 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster
I0825 11:58:20.038608 10620 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0825 11:58:20.050309 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:20.065978 10620 ssh_runner.go:195] Run: systemctl --version
I0825 11:58:20.076370 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:20.388868 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant\.minikube\machines\Mulrinode-cluster\id_rsa Username:docker}
I0825 11:58:20.435603 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64370 SSHKeyPath:C:\Users\vikrant\.minikube\machines\Mulrinode-cluster\id_rsa Username:docker}
I0825 11:58:21.021062 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0825 11:58:21.031320 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
I0825 11:58:21.059814 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 11:58:21.179092 10620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0825 11:58:21.282942 10620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0825 11:58:21.294431 10620 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0825 11:58:21.312919 10620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0825 11:58:21.325687 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0825 11:58:21.355360 10620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0825 11:58:21.465255 10620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0825 11:58:21.573351 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 11:58:21.704567 10620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0825 11:58:24.075151 10620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.3705847s)
I0825 11:58:24.094660 10620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0825 11:58:24.227975 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 11:58:24.361728 10620 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0825 11:58:24.376650 10620 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0825 11:58:24.378165 10620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0825 11:58:24.383647 10620 start.go:471] Will wait 60s for crictl version
I0825 11:58:24.400661 10620 ssh_runner.go:195] Run: sudo crictl version
I0825 11:58:24.432555 10620 start.go:480] Version: 0.1.0
RuntimeName: docker
RuntimeVersion: 20.10.17
RuntimeApiVersion: 1.41.0
I0825 11:58:24.439921 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0825 11:58:24.479849 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0825 11:58:24.513923 10620 out.go:204] 🐳 Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
I0825 11:58:24.522440 10620 cli_runner.go:164] Run: docker exec -t Mulrinode-cluster dig +short host.docker.internal
I0825 11:58:25.017578 10620 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0825 11:58:25.019505 10620 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0825 11:58:25.025009 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0825 11:58:25.042003 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" Mulrinode-cluster
I0825 11:58:25.402332 10620 preload.go:132] Checking if preload exists for k8s version v1.24.3 and runtime docker
I0825 11:58:25.413916 10620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0825 11:58:25.444470 10620 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0825 11:58:25.444470 10620 docker.go:542] Images already preloaded, skipping extraction
I0825 11:58:25.452470 10620 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0825 11:58:25.480397 10620 docker.go:611] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.24.3
k8s.gcr.io/kube-scheduler:v1.24.3
k8s.gcr.io/kube-controller-manager:v1.24.3
k8s.gcr.io/kube-proxy:v1.24.3
k8s.gcr.io/etcd:3.5.3-0
k8s.gcr.io/pause:3.7
k8s.gcr.io/coredns/coredns:v1.8.6
gcr.io/k8s-minikube/storage-provisioner:v5
-- /stdout --
I0825 11:58:25.480397 10620 cache_images.go:84] Images are preloaded, skipping loading
I0825 11:58:25.488395 10620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0825 11:58:25.560036 10620 cni.go:95] Creating CNI manager for ""
I0825 11:58:25.560036 10620 cni.go:156] 1 nodes found, recommending kindnet
I0825 11:58:25.560036 10620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0825 11:58:25.560036 10620 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:Mulrinode-cluster NodeName:Mulrinode-cluster DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0825 11:58:25.560036 10620 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.2
  bindPort: 8443
bootstrapTokens:
apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: cgroupfs clusterDomain: "cluster.local"
apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "10.244.0.0/16" metricsBindAddress: 0.0.0.0:10249 conntrack: maxPerCore: 0
tcpEstablishedTimeout: 0s
tcpCloseWaitTimeout: 0s
I0825 11:58:25.560562 10620 kubeadm.go:961] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=Mulrinode-cluster --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0825 11:58:25.580584 10620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0825 11:58:25.592127 10620 binaries.go:44] Found k8s binaries, skipping transfer
I0825 11:58:25.607143 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0825 11:58:25.616129 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (479 bytes)
I0825 11:58:25.629628 10620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0825 11:58:25.644861 10620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2040 bytes)
I0825 11:58:25.661113 10620 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0825 11:58:25.665113 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
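The `/bin/bash -c` one-liner in the log entry above is how minikube keeps `/etc/hosts` entries idempotent: it filters out any stale line for the hostname, appends the current `IP name` pair, and copies the result back. A minimal standalone sketch of the same grep-filter-then-append pattern (the function name is illustrative, and it operates on an arbitrary file rather than the real `/etc/hosts`):

```shell
#!/bin/sh
# ensure_hosts_entry FILE IP NAME
# Drop any existing line ending in " NAME", then append "IP NAME".
# Running it twice leaves exactly one entry -- the same idempotent
# rewrite minikube performs on /etc/hosts inside the node.
ensure_hosts_entry() {
    file=$1 ip=$2 name=$3
    tmp=$(mktemp)
    { grep -v " ${name}\$" "$file" || true; printf '%s %s\n' "$ip" "$name"; } > "$tmp"
    cp "$tmp" "$file"
    rm -f "$tmp"
}
```

In the log, the final `cp` is run under `sudo` because `/etc/hosts` inside the node is root-owned; the filtering itself needs no privileges.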
I0825 11:58:25.675115 10620 certs.go:54] Setting up C:\Users\vikrant.minikube\profiles\Mulrinode-cluster for IP: 192.168.67.2
I0825 11:58:25.675613 10620 certs.go:182] skipping minikubeCA CA generation: C:\Users\vikrant.minikube\ca.key
I0825 11:58:25.676113 10620 certs.go:182] skipping proxyClientCA CA generation: C:\Users\vikrant.minikube\proxy-client-ca.key
I0825 11:58:25.676612 10620 certs.go:302] generating minikube-user signed cert: C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.key
I0825 11:58:25.676612 10620 crypto.go:68] Generating cert C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt with IP's: []
I0825 11:58:25.937605 10620 crypto.go:156] Writing cert to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt ...
I0825 11:58:25.937605 10620 lock.go:35] WriteFile acquiring C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\client.crt: {Name:mkbfdcd685e9d228fd75f04582b7f084eeab84e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0825 11:58:26.674847 10620 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0825 11:58:45.512940 10620 out.go:204] ▪ Generating certificates and keys ...
I0825 11:58:45.517971 10620 out.go:204] ▪ Booting up control plane ...
I0825 11:58:45.520938 10620 out.go:204] ▪ Configuring RBAC rules ...
I0825 11:58:45.526496 10620 cni.go:95] Creating CNI manager for ""
I0825 11:58:45.526496 10620 cni.go:156] 1 nodes found, recommending kindnet
I0825 11:58:45.527494 10620 out.go:177] 🔗 Configuring CNI (Container Networking Interface) ...
I0825 11:58:45.532994 10620 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
I0825 11:58:45.601698 10620 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.24.3/kubectl ...
I0825 11:58:45.601698 10620 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
I0825 11:58:45.735996 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I0825 11:58:46.923757 10620 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.24.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1877609s)
I0825 11:58:46.923757 10620 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0825 11:58:46.931257 10620 ops.go:34] apiserver oom_adj: -16
I0825 11:58:46.944867 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0825 11:58:46.945374 10620 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.3/kubectl label nodes minikube.k8s.io/version=v1.26.1 minikube.k8s.io/commit=62e108c3dfdec8029a890ad6d8ef96b6461426dc minikube.k8s.io/name=Mulrinode-cluster minikube.k8s.io/updated_at=2022_08_25T11_58_46_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0825 11:58:47.013210 10620 kubeadm.go:1045] duration metric: took 88.9531ms to wait for elevateKubeSystemPrivileges.
I0825 11:58:47.013210 10620 kubeadm.go:397] StartCluster complete in 20.4592872s
I0825 11:58:47.015183 10620 settings.go:142] acquiring lock: {Name:mk62b4a44a747932007f69757b68e27077f6efeb Clock:{} Delay:500ms Timeout:1m0s Cancel:
I0825 12:00:07.510740 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:07.891323 10620 main.go:134] libmachine: Using SSH client type: native
I0825 12:00:07.891867 10620 main.go:134] libmachine: &{{{
if ! grep -xq '.*\sMulrinode-cluster-m02' /etc/hosts; then
if grep -xq '127.0.1.1\s.*' /etc/hosts; then
sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 Mulrinode-cluster-m02/g' /etc/hosts;
else
echo '127.0.1.1 Mulrinode-cluster-m02' | sudo tee -a /etc/hosts;
fi
fi
I0825 12:00:08.021989 10620 main.go:134] libmachine: SSH cmd err, output:
I0825 12:00:09.521734 10620 ubuntu.go:71] root file system type: overlay
I0825 12:00:09.525570 10620 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0825 12:00:09.532584 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:09.890137 10620 main.go:134] libmachine: Using SSH client type: native
I0825 12:00:09.890637 10620 main.go:134] libmachine: &{{{
[Service]
Type=notify
Restart=on-failure
Environment="NO_PROXY=192.168.67.2"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0825 12:00:09.981295 10620 main.go:134] libmachine: SSH cmd err, output:
[Service]
Type=notify
Restart=on-failure
Environment=NO_PROXY=192.168.67.2
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
I0825 12:00:09.990555 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:10.336740 10620 main.go:134] libmachine: Using SSH client type: native
I0825 12:00:10.336740 10620 main.go:134] libmachine: &{{{
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+Environment=NO_PROXY=192.168.67.2
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
@@ -32,16 +35,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 Delegate=yes
 KillMode=process
-OOMScoreAdjust=-500
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0825 12:00:13.361708 10620 machine.go:91] provisioned docker machine in 6.5914806s
I0825 12:00:13.361708 10620 client.go:171] LocalClient.Create took 1m23.5397265s
I0825 12:00:13.362256 10620 start.go:174] duration metric: libmachine.API.Create for "Mulrinode-cluster" took 1m23.5402751s
I0825 12:00:13.362256 10620 start.go:307] post-start starting for "Mulrinode-cluster-m02" (driver="docker")
I0825 12:00:13.362256 10620 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0825 12:00:13.386287 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0825 12:00:13.392786 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:13.772510 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker}
I0825 12:00:13.873966 10620 ssh_runner.go:195] Run: cat /etc/os-release
I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0825 12:00:13.879697 10620 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0825 12:00:13.879697 10620 info.go:137] Remote host: Ubuntu 20.04.4 LTS
I0825 12:00:13.879697 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\addons for local assets ...
I0825 12:00:13.880769 10620 filesync.go:126] Scanning C:\Users\vikrant.minikube\files for local assets ...
I0825 12:00:13.881269 10620 start.go:310] post-start completed in 519.0126ms
I0825 12:00:13.895122 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster-m02
I0825 12:00:14.264863 10620 profile.go:148] Saving config to C:\Users\vikrant.minikube\profiles\Mulrinode-cluster\config.json ...
I0825 12:00:14.271738 10620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0825 12:00:14.278118 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:14.650986 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker}
I0825 12:00:15.274868 10620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (1.0029391s)
I0825 12:00:15.276678 10620 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0825 12:00:15.281770 10620 start.go:135] duration metric: createHost completed in 1m25.4607726s
I0825 12:00:15.282178 10620 start.go:82] releasing machines lock for "Mulrinode-cluster-m02", held for 1m25.4616961s
I0825 12:00:15.292283 10620 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" Mulrinode-cluster-m02
I0825 12:00:15.721450 10620 out.go:177] 🌐 Found network options:
I0825 12:00:15.727451 10620 out.go:177] ▪ NO_PROXY=192.168.67.2
W0825 12:00:15.728950 10620 proxy.go:118] fail to check proxy env: Error ip not in block
I0825 12:00:15.729950 10620 out.go:177] ▪ no_proxy=192.168.67.2
W0825 12:00:15.730449 10620 proxy.go:118] fail to check proxy env: Error ip not in block
W0825 12:00:15.730449 10620 proxy.go:118] fail to check proxy env: Error ip not in block
I0825 12:00:15.732450 10620 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0825 12:00:15.748687 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:15.767076 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0825 12:00:15.781269 10620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02
I0825 12:00:16.211359 10620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64455 SSHKeyPath:C:\Users\vikrant.minikube\machines\Mulrinode-cluster-m02\id_rsa Username:docker}
I0825 12:00:16.372117 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
I0825 12:00:16.399486 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 12:00:16.499034 10620 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
I0825 12:00:16.598968 10620 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0825 12:00:16.613961 10620 cruntime.go:273] skipping containerd shutdown because we are bound to it
I0825 12:00:16.637940 10620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0825 12:00:16.668751 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock image-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml"
I0825 12:00:16.707777 10620 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0825 12:00:16.817966 10620 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0825 12:00:16.932603 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 12:00:17.047347 10620 ssh_runner.go:195] Run: sudo systemctl restart docker
I0825 12:00:21.308074 10620 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.2607266s)
I0825 12:00:21.322925 10620 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0825 12:00:21.447356 10620 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0825 12:00:21.562099 10620 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
I0825 12:00:21.574724 10620 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0825 12:00:21.577333 10620 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0825 12:00:21.581834 10620 start.go:471] Will wait 60s for crictl version
I0825 12:00:21.595332 10620 ssh_runner.go:195] Run: sudo crictl version
I0825 12:00:21.626994 10620 start.go:480] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 20.10.17 RuntimeApiVersion: 1.41.0
I0825 12:00:21.633493 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0825 12:00:21.687963 10620 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0825 12:00:21.724459 10620 out.go:204] 🐳 Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
I0825 12:00:21.725459 10620 out.go:177] ▪ env NO_PROXY=192.168.67.2
I0825 12:00:21.735594 10620 cli_runner.go:164] Run: docker exec -t Mulrinode-cluster-m02 dig +short host.docker.internal
W0825 12:00:21.901811 10620 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02 returned with exit code 1
I0825 12:00:21.901811 10620 cli_runner.go:217] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02: (6.1531232s)
W0825 12:00:21.904161 10620 start.go:734] [curl -sS -m 2 https://k8s.gcr.io/] failed: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "Mulrinode-cluster-m02": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" Mulrinode-cluster-m02: exit status 1
stdout:

stderr:
Error response from daemon: i/o timeout
W0825 12:00:21.904677 10620 out.go:239] ❗ This container is having trouble accessing https://k8s.gcr.io
W0825 12:00:21.905162 10620 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
I0825 12:00:22.288025 10620 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
I0825 12:00:22.291953 10620 ssh_runner.go:195] Run: grep 192.168.65.2 host.minikube.internal$ /etc/hosts
I0825 12:00:22.296940 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0825 12:00:22.308439 10620 certs.go:54] Setting up C:\Users\vikrant.minikube\profiles\Mulrinode-cluster for IP: 192.168.67.3
I0825 12:00:22.309454 10620 certs.go:182] skipping minikubeCA CA generation: C:\Users\vikrant.minikube\ca.key
I0825 12:00:22.309939 10620 certs.go:182] skipping proxyClientCA CA generation: C:\Users\vikrant.minikube\proxy-client-ca.key
I0825 12:00:22.310953 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca-key.pem (1675 bytes)
I0825 12:00:22.311439 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\ca.pem (1082 bytes)
I0825 12:00:22.311439 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\cert.pem (1123 bytes)
I0825 12:00:22.311938 10620 certs.go:388] found cert: C:\Users\vikrant.minikube\certs\C:\Users\vikrant.minikube\certs\key.pem (1679 bytes)
I0825 12:00:22.323504 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0825 12:00:22.344253 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0825 12:00:22.362754 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0825 12:00:22.381266 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I0825 12:00:22.401267 10620 ssh_runner.go:362] scp C:\Users\vikrant.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0825 12:00:22.421204 10620 ssh_runner.go:195] Run: openssl version
I0825 12:00:22.442185 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0825 12:00:22.454685 10620 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0825 12:00:22.459686 10620 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Aug 21 02:52 /usr/share/ca-certificates/minikubeCA.pem
I0825 12:00:22.460687 10620 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0825 12:00:22.480203 10620 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0825 12:00:22.497202 10620 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0825 12:00:22.592461 10620 cni.go:95] Creating CNI manager for ""
I0825 12:00:22.592461 10620 cni.go:156] 2 nodes found, recommending kindnet
I0825 12:00:22.592989 10620 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0825 12:00:22.592989 10620 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.3 APIServerPort:8443 KubernetesVersion:v1.24.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:Mulrinode-cluster NodeName:Mulrinode-cluster-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0825 12:00:22.593524 10620 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.67.3
  bindPort: 8443
bootstrapTokens:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
  tcpEstablishedTimeout: 0s
  tcpCloseWaitTimeout: 0s
I0825 12:00:22.594037 10620 kubeadm.go:961] kubelet
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.24.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=Mulrinode-cluster-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0825 12:00:22.607826 10620 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.3
I0825 12:00:22.618275 10620 binaries.go:44] Found k8s binaries, skipping transfer
I0825 12:00:22.633758 10620 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
I0825 12:00:22.643260 10620 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes)
I0825 12:00:22.657020 10620 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0825 12:00:22.672514 10620 ssh_runner.go:195] Run: grep 192.168.67.2 control-plane.minikube.internal$ /etc/hosts
I0825 12:00:22.676709 10620 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0825 12:00:22.687813 10620 host.go:66] Checking if "Mulrinode-cluster" exists ...
I0825 12:00:22.687813 10620 config.go:180] Loaded profile config "Mulrinode-cluster": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.24.3
I0825 12:00:22.690257 10620 start.go:285] JoinCluster: &{Name:Mulrinode-cluster KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:Mulrinode-cluster Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:
stderr:
W0825 06:30:23.565952 1099 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found
To see the stack trace of this error execute with --v=5 or higher
I0825 12:02:29.052078 10620 start.go:311] resetting worker node "m02" before attempting to rejoin cluster...
I0825 12:02:29.052078 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force"
I0825 12:02:29.090708 10620 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1
stdout:
stderr: Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock To see the stack trace of this error execute with --v=5 or higher I0825 12:02:29.090708 10620 retry.go:31] will retry after 14.405090881s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1 stdout: [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed.
stderr: W0825 06:30:23.565952 1099 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found To see the stack trace of this error execute with --v=5 or higher I0825 12:02:43.503749 10620 start.go:306] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:0 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:false Worker:true} I0825 12:02:43.504698 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02" I0825 12:04:44.018073 10620 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": (2m0.5132732s) E0825 12:04:44.018073 10620 start.go:308] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 
wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1 stdout: [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed.
stderr: W0825 06:32:43.607165 2484 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' [WARNING Port-10250]: Port 10250 is in use [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found To see the stack trace of this error execute with --v=5 or higher I0825 12:04:44.021579 10620 start.go:311] resetting worker node "m02" before attempting to rejoin cluster... I0825 12:04:44.022058 10620 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force" I0825 12:04:44.063229 10620 start.go:313] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm reset --force": Process exited with status 1 stdout:
stderr: Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock To see the stack trace of this error execute with --v=5 or higher I0825 12:04:44.063229 10620 start.go:287] JoinCluster complete in 4m21.3729719s I0825 12:04:44.089256 10620 out.go:177] W0825 12:04:44.098948 10620 out.go:239] ❌ Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wrvjsg.dm1zdd1s099m42qf --discovery-token-ca-cert-hash sha256:3207081c9ab15eb393c99852b4d5d4b50400027e2410dcb49962bf938f004bb7 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=Mulrinode-cluster-m02": Process exited with status 1 stdout: [preflight] Running pre-flight checks [preflight] Reading configuration from the cluster... [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml' [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml" [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env" [kubelet-start] Starting the kubelet [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap... [kubelet-check] Initial timeout of 40s passed.
stderr: W0825 06:32:43.607165 2484 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration! [WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists [WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' [WARNING Port-10250]: Port 10250 is in use [WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists error execution phase kubelet-start: error uploading crisocket: nodes "Mulrinode-cluster-m02" not found To see the stack trace of this error execute with --v=5 or higher
W0825 12:04:44.102461 10620 out.go:239]
W0825 12:04:44.117965   10620 out.go:239]
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
I0825 12:04:44.121015 10620 out.go:177]
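One plausible cause of the repeated `nodes "Mulrinode-cluster-m02" not found` failures above is the mixed-case profile name: Kubernetes node names must be valid lowercase RFC 1123 DNS labels, and the kubelet lowercases the hostname when registering, so kubeadm's lookup for the mixed-case name can miss the registered node. This is a hypothesis from the log, not a confirmed diagnosis. A minimal sketch of the naming constraint (the regex is the standard DNS-label pattern, not minikube code):

```python
import re

# RFC 1123 DNS label: lowercase alphanumerics and '-', 1-63 characters,
# starting and ending with an alphanumeric character.
DNS_LABEL = re.compile(r"^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$")

def is_valid_node_name(name: str) -> bool:
    """Return True if `name` is a valid RFC 1123 DNS label."""
    return bool(DNS_LABEL.fullmatch(name))

print(is_valid_node_name("Mulrinode-cluster-m02"))  # False: uppercase 'M'
print(is_valid_node_name("mulrinode-cluster-m02"))  # True
```

If this is indeed the cause, deleting the profile and re-running `minikube start --nodes 3 -p` with an all-lowercase profile name may avoid the lookup failure.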
Hi @Vikrant1020 – is this issue still occurring? Were you able to find a solution?
For additional assistance, please consider reaching out to the minikube community:
https://minikube.sigs.k8s.io/community/
We also offer support through Slack, Groups, and office hours.
/triage needs-information
/kind support
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
What Happened?
Tried to create a multi-node cluster with minikube:
minikube start --nodes 3 -p Mulrinode-cluster
but it failed with an unknown error.
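The attached log also shows `kubeadm reset` failing with "Found multiple CRI endpoints on the host": both the containerd and cri-dockerd sockets exist in the node container, and kubeadm refuses to guess between them unless `--cri-socket` is passed explicitly. A hypothetical sketch of that disambiguation (the helper and mapping are illustrative, not minikube internals; the two socket paths are taken from the log):

```python
# Map each container runtime to its CRI socket. Both sockets below appear in
# the "Found multiple CRI endpoints" error; cri-dockerd matches the docker
# runtime this cluster is configured with.
RUNTIME_SOCKETS = {
    "docker": "unix:///var/run/cri-dockerd.sock",
    "containerd": "unix:///var/run/containerd/containerd.sock",
}

def reset_command(runtime: str) -> str:
    """Build a kubeadm reset command with an explicit CRI socket."""
    socket = RUNTIME_SOCKETS[runtime]
    return f"sudo kubeadm reset --force --cri-socket {socket}"

print(reset_command("docker"))
# sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock
```

Passing the socket explicitly this way would let the reset-before-rejoin step succeed instead of being skipped with "kubeadm reset failed, continuing anyway".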
Attach the log file
D:\K8S\testing\logs.txt
Operating System
Windows
Driver
Docker