kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Cloud Shell bug: gcloud alpha code dev fails flakily #9579

Closed: medyagh closed this issue 3 years ago

medyagh commented 3 years ago
DEBUG: Running [gcloud.alpha.code.dev] with arguments: [--verbosity: "debug"]
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 1
 Server Version: 19.03.13
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 8fba4e9a7d01810a393d5d25a3621dc101981175
 runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.4.49+
 Operating System: Debian GNU/Linux 10 (buster) (containerized)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.64GiB
 Name: cs-259633509134-default-boost-2sq6b
 ID: 3DCN:ZSHU:ORDR:5DQS:UQUU:QAM3:5WJL:KP6V:OK6V:G222:V7RQ:SKP7
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Registry Mirrors:
  https://us-mirror.gcr.io/
 Live Restore Enabled: false

Starting development environment 'gcloud-local-dev' ...
W1028 01:51:46.689638   69089 root.go:235] Error reading config file at /home/jtzwu/.minikube/config/config.json: open /home/jtzwu/.minikube/config/config.json: no such file or directory
I1028 01:51:46.690232   69089 out.go:192] Setting JSON to true
I1028 01:51:46.754142   69089 start.go:103] hostinfo: {"hostname":"cs-259633509134-default-boost-2sq6b","uptime":2620,"bootTime":1603847286,"procs":91,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"10.6","kernelVersion":"5.4.49+","virtualizationSystem":"","virtualizationRole":"","hostid":"40674502-5d9f-83d8-46c4-261600147460"}
I1028 01:51:46.754652   69089 start.go:113] virtualization:  
I1028 01:51:46.760691   69089 driver.go:288] Setting default libvirt URI to qemu:///system
I1028 01:51:46.823394   69089 docker.go:117] docker version: linux-19.03.13
I1028 01:51:46.823536   69089 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1028 01:51:46.936157   69089 info.go:253] docker info: {ID:3DCN:ZSHU:ORDR:5DQS:UQUU:QAM3:5WJL:KP6V:OK6V:G222:V7RQ:SKP7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:41 SystemTime:2020-10-28 01:51:46.870613613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.49+ OperatingSystem:Debian GNU/Linux 10 (buster) (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:4 MemTotal:16795111424 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-259633509134-default-boost-2sq6b Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} 
InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1028 01:51:46.936297   69089 docker.go:147] overlay module found
I1028 01:51:46.940373   69089 start.go:272] selected driver: docker
I1028 01:51:46.940415   69089 start.go:680] validating driver "docker" against &{Name:gcloud-local-dev KeepContext:true EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:gcloud-local-dev APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1028 01:51:46.940519   69089 start.go:691] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Fix: Doc:}
I1028 01:51:46.940691   69089 cli_runner.go:110] Run: docker system info --format "{{json .}}"
I1028 01:51:47.055158   69089 info.go:253] docker info: {ID:3DCN:ZSHU:ORDR:5DQS:UQUU:QAM3:5WJL:KP6V:OK6V:G222:V7RQ:SKP7 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:41 SystemTime:2020-10-28 01:51:46.990595087 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.49+ OperatingSystem:Debian GNU/Linux 10 (buster) (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[https://us-mirror.gcr.io/] Secure:true Official:true}} Mirrors:[https://us-mirror.gcr.io/]} NCPU:4 MemTotal:16795111424 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:cs-259633509134-default-boost-2sq6b Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} 
InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
I1028 01:51:47.056405   69089 start_flags.go:358] config:
{Name:gcloud-local-dev KeepContext:true EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:gcloud-local-dev APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1028 01:51:47.110429   69089 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f in local docker daemon, skipping pull
I1028 01:51:47.110468   69089 cache.go:115] gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f exists in daemon, skipping pull
I1028 01:51:47.110478   69089 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1028 01:51:47.110557   69089 preload.go:105] Found local preload: /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1028 01:51:47.110567   69089 cache.go:53] Caching tarball of preloaded images
I1028 01:51:47.110598   69089 preload.go:131] Found /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I1028 01:51:47.110605   69089 cache.go:56] Finished verifying existence of preloaded tar for  v1.19.2 on docker
I1028 01:51:47.110752   69089 profile.go:150] Saving config to /home/jtzwu/.minikube/profiles/gcloud-local-dev/config.json ...
I1028 01:51:47.111167   69089 cache.go:182] Successfully downloaded all kic artifacts
I1028 01:51:47.111311   69089 start.go:314] acquiring machines lock for gcloud-local-dev: {Name:mkd23d73954120c533ce5cd29090c1f4d84ba75d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1028 01:51:47.111518   69089 start.go:318] acquired machines lock for "gcloud-local-dev" in 168.019µs
I1028 01:51:47.111551   69089 start.go:94] Skipping create...Using existing machine configuration
I1028 01:51:47.111560   69089 fix.go:54] fixHost starting: 
I1028 01:51:47.112022   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:47.164860   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:47.164960   69089 fix.go:107] recreateIfNeeded on gcloud-local-dev: state= err=unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:47.164987   69089 fix.go:112] machineExists: false. err=machine does not exist
I1028 01:51:47.169277   69089 delete.go:124] DEMOLISHING gcloud-local-dev ...
I1028 01:51:47.169443   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:47.221018   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
W1028 01:51:47.221120   69089 stop.go:75] unable to get state: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:47.221145   69089 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:47.221777   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:47.272607   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:47.272686   69089 delete.go:82] Unable to get host status for gcloud-local-dev, assuming it has already been deleted: state: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:47.272787   69089 cli_runner.go:110] Run: docker container inspect -f {{.Id}} gcloud-local-dev
W1028 01:51:47.323737   69089 cli_runner.go:148] docker container inspect -f {{.Id}} gcloud-local-dev returned with exit code 1
I1028 01:51:47.323895   69089 kic.go:296] could not find the container gcloud-local-dev to remove it. will try anyways
I1028 01:51:47.324018   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:47.393820   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
W1028 01:51:47.393996   69089 oci.go:83] error getting container status, will try to delete anyways: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:47.394113   69089 cli_runner.go:110] Run: docker exec --privileged -t gcloud-local-dev /bin/bash -c "sudo init 0"
W1028 01:51:47.444956   69089 cli_runner.go:148] docker exec --privileged -t gcloud-local-dev /bin/bash -c "sudo init 0" returned with exit code 1
I1028 01:51:47.445010   69089 oci.go:595] error shutdown gcloud-local-dev: docker exec --privileged -t gcloud-local-dev /bin/bash -c "sudo init 0": exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:48.445378   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:48.494905   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:48.494976   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:48.494989   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:48.495014   69089 retry.go:31] will retry after 552.330144ms: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:49.047661   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:49.098116   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:49.098188   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:49.098213   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:49.098240   69089 retry.go:31] will retry after 1.080381816s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:50.179022   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:50.228159   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:50.228230   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:50.228242   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:50.228266   69089 retry.go:31] will retry after 1.31013006s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:51.538716   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:51.589267   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:51.589346   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:51.589367   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:51.589392   69089 retry.go:31] will retry after 1.582392691s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:53.172057   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:53.219526   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:53.219586   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:53.219597   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:53.219622   69089 retry.go:31] will retry after 2.340488664s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:55.560492   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:51:55.610217   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:51:55.610327   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:51:55.610344   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:51:55.610371   69089 retry.go:31] will retry after 4.506218855s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:00.116910   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:52:00.166445   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:52:00.166567   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:00.166579   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:52:00.166606   69089 retry.go:31] will retry after 3.221479586s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:03.388457   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:52:03.437047   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:52:03.437108   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:03.437119   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:52:03.437151   69089 retry.go:31] will retry after 5.608623477s: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:09.046051   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:52:09.095076   69089 cli_runner.go:148] docker container inspect gcloud-local-dev --format={{.State.Status}} returned with exit code 1
I1028 01:52:09.095147   69089 oci.go:607] temporary error verifying shutdown: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev
I1028 01:52:09.095168   69089 oci.go:609] temporary error: container gcloud-local-dev status is  but expect it to be exited
I1028 01:52:09.095199   69089 oci.go:87] couldn't shut down gcloud-local-dev (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "gcloud-local-dev": docker container inspect gcloud-local-dev --format={{.State.Status}}: exit status 1
stdout:

stderr:
Error: No such container: gcloud-local-dev

I1028 01:52:09.095281   69089 cli_runner.go:110] Run: docker rm -f -v gcloud-local-dev
W1028 01:52:09.148260   69089 cli_runner.go:148] docker rm -f -v gcloud-local-dev returned with exit code 1
W1028 01:52:09.148647   69089 delete.go:139] delete failed (probably ok) <nil>
I1028 01:52:09.148667   69089 fix.go:119] Sleeping 1 second for extra luck!
I1028 01:52:10.148866   69089 start.go:127] createHost starting for "" (driver="docker")
I1028 01:52:10.153508   69089 start.go:164] libmachine.API.Create for "gcloud-local-dev" (driver="docker")
I1028 01:52:10.153568   69089 client.go:165] LocalClient.Create starting
I1028 01:52:10.153637   69089 main.go:119] libmachine: Reading certificate data from /home/jtzwu/.minikube/certs/ca.pem
I1028 01:52:10.153697   69089 main.go:119] libmachine: Decoding PEM data...
I1028 01:52:10.153725   69089 main.go:119] libmachine: Parsing certificate...
I1028 01:52:10.153991   69089 main.go:119] libmachine: Reading certificate data from /home/jtzwu/.minikube/certs/cert.pem
I1028 01:52:10.154044   69089 main.go:119] libmachine: Decoding PEM data...
I1028 01:52:10.154069   69089 main.go:119] libmachine: Parsing certificate...
I1028 01:52:10.154526   69089 cli_runner.go:110] Run: docker network inspect gcloud-local-dev --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
W1028 01:52:10.203555   69089 cli_runner.go:148] docker network inspect gcloud-local-dev --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1
I1028 01:52:10.203722   69089 network_create.go:178] running [docker network inspect gcloud-local-dev] to gather additional debugging logs...
I1028 01:52:10.203779   69089 cli_runner.go:110] Run: docker network inspect gcloud-local-dev
W1028 01:52:10.254056   69089 cli_runner.go:148] docker network inspect gcloud-local-dev returned with exit code 1
I1028 01:52:10.254092   69089 network_create.go:181] error running [docker network inspect gcloud-local-dev]: docker network inspect gcloud-local-dev: exit status 1
stdout:
[]

stderr:
Error: No such network: gcloud-local-dev
I1028 01:52:10.254106   69089 network_create.go:183] output of [docker network inspect gcloud-local-dev]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: gcloud-local-dev

** /stderr **
I1028 01:52:10.254187   69089 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1028 01:52:10.304634   69089 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: gcloud-local-dev and gateway 192.168.49.1 and MTU of 1460 ...
I1028 01:52:10.304770   69089 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true gcloud-local-dev -o com.docker.network.driver.mtu=1460
I1028 01:52:10.399397   69089 kic.go:93] calculated static IP "192.168.49.2" for the "gcloud-local-dev" container
I1028 01:52:10.399531   69089 cli_runner.go:110] Run: docker ps -a --format {{.Names}}
I1028 01:52:10.450612   69089 cli_runner.go:110] Run: docker volume create gcloud-local-dev --label name.minikube.sigs.k8s.io=gcloud-local-dev --label created_by.minikube.sigs.k8s.io=true
I1028 01:52:10.501859   69089 oci.go:102] Successfully created a docker volume gcloud-local-dev
I1028 01:52:10.501979   69089 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v gcloud-local-dev:/var gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -d /var/lib
I1028 01:52:11.119916   69089 oci.go:106] Successfully prepared a docker volume gcloud-local-dev
W1028 01:52:11.119982   69089 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1028 01:52:11.120082   69089 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'"
I1028 01:52:11.120426   69089 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1028 01:52:11.120486   69089 preload.go:105] Found local preload: /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1028 01:52:11.120497   69089 kic.go:148] Starting extracting preloaded images to volume ...
I1028 01:52:11.120569   69089 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v gcloud-local-dev:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir
I1028 01:52:11.241322   69089 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname gcloud-local-dev --name gcloud-local-dev --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=gcloud-local-dev --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=gcloud-local-dev --network gcloud-local-dev --ip 192.168.49.2 --volume gcloud-local-dev:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f
I1028 01:52:11.808934   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Running}}
I1028 01:52:11.878496   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
I1028 01:52:11.956490   69089 cli_runner.go:110] Run: docker exec gcloud-local-dev stat /var/lib/dpkg/alternatives/iptables
I1028 01:52:12.118258   69089 oci.go:245] the created container "gcloud-local-dev" has a running status.
I1028 01:52:12.118303   69089 kic.go:179] Creating ssh key for kic: /home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa...
I1028 01:52:12.736795   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I1028 01:52:12.736873   69089 kic_runner.go:179] docker (temp): /home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1028 01:52:12.879891   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
I1028 01:52:12.948767   69089 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1028 01:52:12.948794   69089 kic_runner.go:114] Args: [docker exec --privileged gcloud-local-dev chown docker:docker /home/docker/.ssh/authorized_keys]
I1028 01:52:17.651397   69089 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v gcloud-local-dev:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f -I lz4 -xvf /preloaded.tar -C /extractDir: (6.530753145s)
I1028 01:52:17.651430   69089 kic.go:157] duration metric: took 6.530930 seconds to extract preloaded images to volume
I1028 01:52:17.651566   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
I1028 01:52:17.716498   69089 machine.go:88] provisioning docker machine ...
I1028 01:52:17.716550   69089 ubuntu.go:166] provisioning hostname "gcloud-local-dev"
I1028 01:52:17.716657   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:17.771086   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:17.771402   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:17.771435   69089 main.go:119] libmachine: About to run SSH command:
sudo hostname gcloud-local-dev && echo "gcloud-local-dev" | sudo tee /etc/hostname
I1028 01:52:17.926469   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: gcloud-local-dev

I1028 01:52:17.926593   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:17.981290   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:17.981535   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:17.981576   69089 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sgcloud-local-dev' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 gcloud-local-dev/g' /etc/hosts;
            else 
                echo '127.0.1.1 gcloud-local-dev' | sudo tee -a /etc/hosts; 
            fi
        fi
I1028 01:52:18.118679   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I1028 01:52:18.118713   69089 ubuntu.go:172] set auth options {CertDir:/home/jtzwu/.minikube CaCertPath:/home/jtzwu/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jtzwu/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jtzwu/.minikube/machines/server.pem ServerKeyPath:/home/jtzwu/.minikube/machines/server-key.pem ClientKeyPath:/home/jtzwu/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jtzwu/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jtzwu/.minikube}
I1028 01:52:18.118984   69089 ubuntu.go:174] setting up certificates
I1028 01:52:18.118998   69089 provision.go:82] configureAuth start
I1028 01:52:18.119105   69089 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" gcloud-local-dev
I1028 01:52:18.172674   69089 provision.go:131] copyHostCerts
I1028 01:52:18.172723   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/ca.pem -> /home/jtzwu/.minikube/ca.pem
I1028 01:52:18.172753   69089 exec_runner.go:91] found /home/jtzwu/.minikube/ca.pem, removing ...
I1028 01:52:18.172868   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/ca.pem --> /home/jtzwu/.minikube/ca.pem (1074 bytes)
I1028 01:52:18.173087   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/cert.pem -> /home/jtzwu/.minikube/cert.pem
I1028 01:52:18.173122   69089 exec_runner.go:91] found /home/jtzwu/.minikube/cert.pem, removing ...
I1028 01:52:18.173169   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/cert.pem --> /home/jtzwu/.minikube/cert.pem (1119 bytes)
I1028 01:52:18.173241   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/key.pem -> /home/jtzwu/.minikube/key.pem
I1028 01:52:18.173264   69089 exec_runner.go:91] found /home/jtzwu/.minikube/key.pem, removing ...
I1028 01:52:18.173299   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/key.pem --> /home/jtzwu/.minikube/key.pem (1679 bytes)
I1028 01:52:18.173426   69089 provision.go:105] generating server cert: /home/jtzwu/.minikube/machines/server.pem ca-key=/home/jtzwu/.minikube/certs/ca.pem private-key=/home/jtzwu/.minikube/certs/ca-key.pem org=jtzwu.gcloud-local-dev san=[192.168.49.2 localhost 127.0.0.1 minikube gcloud-local-dev]
I1028 01:52:18.612704   69089 provision.go:159] copyRemoteCerts
I1028 01:52:18.612792   69089 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 01:52:18.612884   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:18.665098   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:18.764858   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1028 01:52:18.764932   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I1028 01:52:18.787163   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/machines/server.pem -> /etc/docker/server.pem
I1028 01:52:18.787231   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I1028 01:52:18.809443   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1028 01:52:18.809511   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1028 01:52:18.830389   69089 provision.go:85] duration metric: configureAuth took 711.3686ms
I1028 01:52:18.830419   69089 ubuntu.go:190] setting minikube options for container-runtime
I1028 01:52:18.830701   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:18.881545   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:18.881770   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:18.881798   69089 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1028 01:52:19.017006   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: overlay

I1028 01:52:19.017039   69089 ubuntu.go:71] root file system type: overlay
I1028 01:52:19.017249   69089 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1028 01:52:19.017360   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:19.067034   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:19.067237   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:19.067358   69089 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1028 01:52:19.213376   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1028 01:52:19.213604   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:19.264926   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:19.265125   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:19.265159   69089 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1028 01:52:19.888954   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service   2020-03-10 19:42:48.000000000 +0000
+++ /lib/systemd/system/docker.service.new  2020-10-28 01:52:19.209979815 +0000
@@ -8,24 +8,22 @@

 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
-
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
+ExecReload=/bin/kill -s HUP 

 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -33,9 +31,10 @@
 LimitNPROC=infinity
 LimitCORE=infinity

-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0

 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes

I1028 01:52:19.889013   69089 machine.go:91] provisioned docker machine in 2.172484216s
I1028 01:52:19.889029   69089 client.go:168] LocalClient.Create took 9.735451481s
I1028 01:52:19.889049   69089 start.go:172] duration metric: libmachine.API.Create for "gcloud-local-dev" took 9.735544278s
I1028 01:52:19.889059   69089 start.go:268] post-start starting for "gcloud-local-dev" (driver="docker")
I1028 01:52:19.889067   69089 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 01:52:19.889165   69089 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 01:52:19.889233   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:19.939004   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:20.034077   69089 ssh_runner.go:148] Run: cat /etc/os-release
I1028 01:52:20.037534   69089 command_runner.go:109] > NAME="Ubuntu"
I1028 01:52:20.037559   69089 command_runner.go:109] > VERSION="20.04 LTS (Focal Fossa)"
I1028 01:52:20.037566   69089 command_runner.go:109] > ID=ubuntu
I1028 01:52:20.037574   69089 command_runner.go:109] > ID_LIKE=debian
I1028 01:52:20.037582   69089 command_runner.go:109] > PRETTY_NAME="Ubuntu 20.04 LTS"
I1028 01:52:20.037589   69089 command_runner.go:109] > VERSION_ID="20.04"
I1028 01:52:20.037604   69089 command_runner.go:109] > HOME_URL="https://www.ubuntu.com/"
I1028 01:52:20.037614   69089 command_runner.go:109] > SUPPORT_URL="https://help.ubuntu.com/"
I1028 01:52:20.037624   69089 command_runner.go:109] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I1028 01:52:20.037640   69089 command_runner.go:109] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I1028 01:52:20.037649   69089 command_runner.go:109] > VERSION_CODENAME=focal
I1028 01:52:20.037657   69089 command_runner.go:109] > UBUNTU_CODENAME=focal
I1028 01:52:20.037757   69089 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 01:52:20.037787   69089 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 01:52:20.037803   69089 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 01:52:20.037813   69089 info.go:97] Remote host: Ubuntu 20.04 LTS
I1028 01:52:20.037827   69089 filesync.go:118] Scanning /home/jtzwu/.minikube/addons for local assets ...
I1028 01:52:20.037916   69089 filesync.go:118] Scanning /home/jtzwu/.minikube/files for local assets ...
I1028 01:52:20.037953   69089 start.go:271] post-start completed in 148.880034ms
I1028 01:52:20.038521   69089 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" gcloud-local-dev
I1028 01:52:20.087786   69089 profile.go:150] Saving config to /home/jtzwu/.minikube/profiles/gcloud-local-dev/config.json ...
I1028 01:52:20.088206   69089 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 01:52:20.088289   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:20.138463   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:20.230319   69089 command_runner.go:109] > 77%
I1028 01:52:20.230490   69089 start.go:130] duration metric: createHost completed in 10.081584349s
I1028 01:52:20.230621   69089 cli_runner.go:110] Run: docker container inspect gcloud-local-dev --format={{.State.Status}}
W1028 01:52:20.279181   69089 fix.go:133] unexpected machine state, will restart: <nil>
I1028 01:52:20.279221   69089 machine.go:88] provisioning docker machine ...
I1028 01:52:20.279246   69089 ubuntu.go:166] provisioning hostname "gcloud-local-dev"
I1028 01:52:20.279339   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:20.330741   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:20.331037   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:20.331069   69089 main.go:119] libmachine: About to run SSH command:
sudo hostname gcloud-local-dev && echo "gcloud-local-dev" | sudo tee /etc/hostname
I1028 01:52:20.475456   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: gcloud-local-dev

I1028 01:52:20.475582   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:20.524714   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:20.524997   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:20.525034   69089 main.go:119] libmachine: About to run SSH command:

        if ! grep -xq '.*\sgcloud-local-dev' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 gcloud-local-dev/g' /etc/hosts;
            else 
                echo '127.0.1.1 gcloud-local-dev' | sudo tee -a /etc/hosts; 
            fi
        fi
I1028 01:52:20.660863   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I1028 01:52:20.660901   69089 ubuntu.go:172] set auth options {CertDir:/home/jtzwu/.minikube CaCertPath:/home/jtzwu/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jtzwu/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jtzwu/.minikube/machines/server.pem ServerKeyPath:/home/jtzwu/.minikube/machines/server-key.pem ClientKeyPath:/home/jtzwu/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jtzwu/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jtzwu/.minikube}
I1028 01:52:20.660920   69089 ubuntu.go:174] setting up certificates
I1028 01:52:20.660933   69089 provision.go:82] configureAuth start
I1028 01:52:20.661021   69089 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" gcloud-local-dev
I1028 01:52:20.722047   69089 provision.go:131] copyHostCerts
I1028 01:52:20.722097   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/ca.pem -> /home/jtzwu/.minikube/ca.pem
I1028 01:52:20.722135   69089 exec_runner.go:91] found /home/jtzwu/.minikube/ca.pem, removing ...
I1028 01:52:20.722201   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/ca.pem --> /home/jtzwu/.minikube/ca.pem (1074 bytes)
I1028 01:52:20.722324   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/cert.pem -> /home/jtzwu/.minikube/cert.pem
I1028 01:52:20.722356   69089 exec_runner.go:91] found /home/jtzwu/.minikube/cert.pem, removing ...
I1028 01:52:20.722389   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/cert.pem --> /home/jtzwu/.minikube/cert.pem (1119 bytes)
I1028 01:52:20.722463   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/key.pem -> /home/jtzwu/.minikube/key.pem
I1028 01:52:20.722494   69089 exec_runner.go:91] found /home/jtzwu/.minikube/key.pem, removing ...
I1028 01:52:20.722528   69089 exec_runner.go:98] cp: /home/jtzwu/.minikube/certs/key.pem --> /home/jtzwu/.minikube/key.pem (1679 bytes)
I1028 01:52:20.722603   69089 provision.go:105] generating server cert: /home/jtzwu/.minikube/machines/server.pem ca-key=/home/jtzwu/.minikube/certs/ca.pem private-key=/home/jtzwu/.minikube/certs/ca-key.pem org=jtzwu.gcloud-local-dev san=[192.168.49.2 localhost 127.0.0.1 minikube gcloud-local-dev]
I1028 01:52:20.947649   69089 provision.go:159] copyRemoteCerts
I1028 01:52:20.947752   69089 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1028 01:52:20.947854   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.004827   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:21.106622   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I1028 01:52:21.106697   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1074 bytes)
I1028 01:52:21.129771   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/machines/server.pem -> /etc/docker/server.pem
I1028 01:52:21.130101   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
I1028 01:52:21.151717   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I1028 01:52:21.151803   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I1028 01:52:21.174913   69089 provision.go:85] duration metric: configureAuth took 513.964328ms
I1028 01:52:21.174948   69089 ubuntu.go:190] setting minikube options for container-runtime
I1028 01:52:21.175220   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.225543   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:21.225792   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:21.225819   69089 main.go:119] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I1028 01:52:21.360035   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: overlay

I1028 01:52:21.360074   69089 ubuntu.go:71] root file system type: overlay
I1028 01:52:21.360284   69089 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ...
I1028 01:52:21.360422   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.408256   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:21.408478   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:21.408603   69089 main.go:119] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I1028 01:52:21.556424   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I1028 01:52:21.556535   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.607802   69089 main.go:119] libmachine: Using SSH client type: native
I1028 01:52:21.608042   69089 main.go:119] libmachine: &{{{<nil> 0 [] [] []} docker [0x7e4fa0] 0x7e4f60 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
I1028 01:52:21.608082   69089 main.go:119] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I1028 01:52:21.749232   69089 main.go:119] libmachine: SSH cmd err, output: <nil>: 
I1028 01:52:21.749267   69089 machine.go:91] provisioned docker machine in 1.47003745s
I1028 01:52:21.749281   69089 start.go:268] post-start starting for "gcloud-local-dev" (driver="docker")
I1028 01:52:21.749290   69089 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1028 01:52:21.749391   69089 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1028 01:52:21.749456   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.801940   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:21.896855   69089 ssh_runner.go:148] Run: cat /etc/os-release
I1028 01:52:21.900465   69089 command_runner.go:109] > NAME="Ubuntu"
I1028 01:52:21.900492   69089 command_runner.go:109] > VERSION="20.04 LTS (Focal Fossa)"
I1028 01:52:21.900500   69089 command_runner.go:109] > ID=ubuntu
I1028 01:52:21.900512   69089 command_runner.go:109] > ID_LIKE=debian
I1028 01:52:21.900521   69089 command_runner.go:109] > PRETTY_NAME="Ubuntu 20.04 LTS"
I1028 01:52:21.900529   69089 command_runner.go:109] > VERSION_ID="20.04"
I1028 01:52:21.900539   69089 command_runner.go:109] > HOME_URL="https://www.ubuntu.com/"
I1028 01:52:21.900548   69089 command_runner.go:109] > SUPPORT_URL="https://help.ubuntu.com/"
I1028 01:52:21.900558   69089 command_runner.go:109] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
I1028 01:52:21.900573   69089 command_runner.go:109] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
I1028 01:52:21.900581   69089 command_runner.go:109] > VERSION_CODENAME=focal
I1028 01:52:21.900589   69089 command_runner.go:109] > UBUNTU_CODENAME=focal
I1028 01:52:21.900687   69089 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1028 01:52:21.900716   69089 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1028 01:52:21.900731   69089 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1028 01:52:21.900738   69089 info.go:97] Remote host: Ubuntu 20.04 LTS
I1028 01:52:21.900752   69089 filesync.go:118] Scanning /home/jtzwu/.minikube/addons for local assets ...
I1028 01:52:21.900876   69089 filesync.go:118] Scanning /home/jtzwu/.minikube/files for local assets ...
I1028 01:52:21.900919   69089 start.go:271] post-start completed in 151.628458ms
I1028 01:52:21.901002   69089 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1028 01:52:21.901070   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:21.954422   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:22.065704   69089 command_runner.go:109] > 77%
I1028 01:52:22.065889   69089 fix.go:56] fixHost completed within 34.954324756s
I1028 01:52:22.065914   69089 start.go:81] releasing machines lock for "gcloud-local-dev", held for 34.954375657s
I1028 01:52:22.066036   69089 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" gcloud-local-dev
I1028 01:52:22.117657   69089 ssh_runner.go:148] Run: systemctl --version
I1028 01:52:22.117751   69089 ssh_runner.go:148] Run: curl -sS -m 2 https://k8s.gcr.io/
I1028 01:52:22.117756   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:22.117886   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:22.179077   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:22.179547   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:22.270751   69089 command_runner.go:109] > systemd 245 (245.4-4ubuntu3.2)
I1028 01:52:22.270798   69089 command_runner.go:109] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
I1028 01:52:22.297548   69089 command_runner.go:109] > <HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
I1028 01:52:22.297576   69089 command_runner.go:109] > <TITLE>302 Moved</TITLE></HEAD><BODY>
I1028 01:52:22.297586   69089 command_runner.go:109] > <H1>302 Moved</H1>
I1028 01:52:22.297593   69089 command_runner.go:109] > The document has moved
I1028 01:52:22.297604   69089 command_runner.go:109] > <A HREF="https://cloud.google.com/container-registry/">here</A>.
I1028 01:52:22.297611   69089 command_runner.go:109] > </BODY></HTML>
I1028 01:52:22.299554   69089 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd
I1028 01:52:22.314881   69089 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1028 01:52:22.326266   69089 command_runner.go:109] > # /lib/systemd/system/docker.service
I1028 01:52:22.326580   69089 command_runner.go:109] > [Unit]
I1028 01:52:22.326601   69089 command_runner.go:109] > Description=Docker Application Container Engine
I1028 01:52:22.326614   69089 command_runner.go:109] > Documentation=https://docs.docker.com
I1028 01:52:22.326624   69089 command_runner.go:109] > BindsTo=containerd.service
I1028 01:52:22.326636   69089 command_runner.go:109] > After=network-online.target firewalld.service containerd.service
I1028 01:52:22.326654   69089 command_runner.go:109] > Wants=network-online.target
I1028 01:52:22.326665   69089 command_runner.go:109] > Requires=docker.socket
I1028 01:52:22.326673   69089 command_runner.go:109] > [Service]
I1028 01:52:22.326682   69089 command_runner.go:109] > Type=notify
I1028 01:52:22.326701   69089 command_runner.go:109] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1028 01:52:22.326716   69089 command_runner.go:109] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1028 01:52:22.326740   69089 command_runner.go:109] > # here is to clear out that command inherited from the base configuration. Without this,
I1028 01:52:22.326755   69089 command_runner.go:109] > # the command from the base configuration and the command specified here are treated as
I1028 01:52:22.326770   69089 command_runner.go:109] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1028 01:52:22.326785   69089 command_runner.go:109] > # will catch this invalid input and refuse to start the service with an error like:
I1028 01:52:22.326802   69089 command_runner.go:109] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1028 01:52:22.326817   69089 command_runner.go:109] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1028 01:52:22.327032   69089 command_runner.go:109] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1028 01:52:22.327049   69089 command_runner.go:109] > ExecStart=
I1028 01:52:22.327090   69089 command_runner.go:109] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
I1028 01:52:22.327102   69089 command_runner.go:109] > ExecReload=/bin/kill -s HUP 
I1028 01:52:22.327117   69089 command_runner.go:109] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1028 01:52:22.327132   69089 command_runner.go:109] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1028 01:52:22.327141   69089 command_runner.go:109] > LimitNOFILE=infinity
I1028 01:52:22.327150   69089 command_runner.go:109] > LimitNPROC=infinity
I1028 01:52:22.327159   69089 command_runner.go:109] > LimitCORE=infinity
I1028 01:52:22.327172   69089 command_runner.go:109] > # Uncomment TasksMax if your systemd version supports it.
I1028 01:52:22.327184   69089 command_runner.go:109] > # Only systemd 226 and above support this version.
I1028 01:52:22.327193   69089 command_runner.go:109] > TasksMax=infinity
I1028 01:52:22.327202   69089 command_runner.go:109] > TimeoutStartSec=0
I1028 01:52:22.327217   69089 command_runner.go:109] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1028 01:52:22.327226   69089 command_runner.go:109] > Delegate=yes
I1028 01:52:22.327242   69089 command_runner.go:109] > # kill only the docker process, not all processes in the cgroup
I1028 01:52:22.327251   69089 command_runner.go:109] > KillMode=process
I1028 01:52:22.327259   69089 command_runner.go:109] > [Install]
I1028 01:52:22.327269   69089 command_runner.go:109] > WantedBy=multi-user.target
I1028 01:52:22.328777   69089 cruntime.go:193] skipping containerd shutdown because we are bound to it
I1028 01:52:22.328897   69089 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio
I1028 01:52:22.341066   69089 ssh_runner.go:148] Run: sudo systemctl cat docker.service
I1028 01:52:22.351622   69089 command_runner.go:109] > # /lib/systemd/system/docker.service
I1028 01:52:22.352053   69089 command_runner.go:109] > [Unit]
I1028 01:52:22.352099   69089 command_runner.go:109] > Description=Docker Application Container Engine
I1028 01:52:22.352111   69089 command_runner.go:109] > Documentation=https://docs.docker.com
I1028 01:52:22.352120   69089 command_runner.go:109] > BindsTo=containerd.service
I1028 01:52:22.352132   69089 command_runner.go:109] > After=network-online.target firewalld.service containerd.service
I1028 01:52:22.352142   69089 command_runner.go:109] > Wants=network-online.target
I1028 01:52:22.352157   69089 command_runner.go:109] > Requires=docker.socket
I1028 01:52:22.352164   69089 command_runner.go:109] > [Service]
I1028 01:52:22.352171   69089 command_runner.go:109] > Type=notify
I1028 01:52:22.352187   69089 command_runner.go:109] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
I1028 01:52:22.352201   69089 command_runner.go:109] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
I1028 01:52:22.352217   69089 command_runner.go:109] > # here is to clear out that command inherited from the base configuration. Without this,
I1028 01:52:22.352232   69089 command_runner.go:109] > # the command from the base configuration and the command specified here are treated as
I1028 01:52:22.352246   69089 command_runner.go:109] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
I1028 01:52:22.352260   69089 command_runner.go:109] > # will catch this invalid input and refuse to start the service with an error like:
I1028 01:52:22.352276   69089 command_runner.go:109] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
I1028 01:52:22.352289   69089 command_runner.go:109] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
I1028 01:52:22.352304   69089 command_runner.go:109] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
I1028 01:52:22.352311   69089 command_runner.go:109] > ExecStart=
I1028 01:52:22.352360   69089 command_runner.go:109] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
I1028 01:52:22.352370   69089 command_runner.go:109] > ExecReload=/bin/kill -s HUP 
I1028 01:52:22.352384   69089 command_runner.go:109] > # Having non-zero Limit*s causes performance problems due to accounting overhead
I1028 01:52:22.352395   69089 command_runner.go:109] > # in the kernel. We recommend using cgroups to do container-local accounting.
I1028 01:52:22.352402   69089 command_runner.go:109] > LimitNOFILE=infinity
I1028 01:52:22.352409   69089 command_runner.go:109] > LimitNPROC=infinity
I1028 01:52:22.352416   69089 command_runner.go:109] > LimitCORE=infinity
I1028 01:52:22.353313   69089 command_runner.go:109] > # Uncomment TasksMax if your systemd version supports it.
I1028 01:52:22.353338   69089 command_runner.go:109] > # Only systemd 226 and above support this version.
I1028 01:52:22.353346   69089 command_runner.go:109] > TasksMax=infinity
I1028 01:52:22.353354   69089 command_runner.go:109] > TimeoutStartSec=0
I1028 01:52:22.353375   69089 command_runner.go:109] > # set delegate yes so that systemd does not reset the cgroups of docker containers
I1028 01:52:22.353383   69089 command_runner.go:109] > Delegate=yes
I1028 01:52:22.353395   69089 command_runner.go:109] > # kill only the docker process, not all processes in the cgroup
I1028 01:52:22.353403   69089 command_runner.go:109] > KillMode=process
I1028 01:52:22.353411   69089 command_runner.go:109] > [Install]
I1028 01:52:22.353419   69089 command_runner.go:109] > WantedBy=multi-user.target
I1028 01:52:22.353523   69089 ssh_runner.go:148] Run: sudo systemctl daemon-reload
I1028 01:52:22.440375   69089 ssh_runner.go:148] Run: sudo systemctl start docker
I1028 01:52:22.452323   69089 ssh_runner.go:148] Run: docker version --format {{.Server.Version}}
I1028 01:52:22.512418   69089 command_runner.go:109] > 19.03.8
I1028 01:52:22.516374   69089 cli_runner.go:110] Run: docker network inspect gcloud-local-dev --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}"
I1028 01:52:22.568660   69089 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1028 01:52:22.572814   69089 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1    host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1028 01:52:22.584357   69089 preload.go:97] Checking if preload exists for k8s version v1.19.2 and runtime docker
I1028 01:52:22.584404   69089 preload.go:105] Found local preload: /home/jtzwu/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.2-docker-overlay2-amd64.tar.lz4
I1028 01:52:22.584496   69089 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1028 01:52:22.632965   69089 command_runner.go:109] > k8s.gcr.io/kube-proxy:v1.19.2
I1028 01:52:22.632995   69089 command_runner.go:109] > k8s.gcr.io/kube-apiserver:v1.19.2
I1028 01:52:22.633006   69089 command_runner.go:109] > k8s.gcr.io/kube-controller-manager:v1.19.2
I1028 01:52:22.633016   69089 command_runner.go:109] > k8s.gcr.io/kube-scheduler:v1.19.2
I1028 01:52:22.633028   69089 command_runner.go:109] > gcr.io/k8s-minikube/storage-provisioner:v3
I1028 01:52:22.633036   69089 command_runner.go:109] > k8s.gcr.io/etcd:3.4.13-0
I1028 01:52:22.633044   69089 command_runner.go:109] > kubernetesui/dashboard:v2.0.3
I1028 01:52:22.633052   69089 command_runner.go:109] > k8s.gcr.io/coredns:1.7.0
I1028 01:52:22.633059   69089 command_runner.go:109] > kubernetesui/metrics-scraper:v1.0.4
I1028 01:52:22.633066   69089 command_runner.go:109] > k8s.gcr.io/pause:3.2
I1028 01:52:22.636007   69089 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1028 01:52:22.636036   69089 docker.go:319] Images already preloaded, skipping extraction
I1028 01:52:22.636119   69089 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}}
I1028 01:52:22.692388   69089 command_runner.go:109] > k8s.gcr.io/kube-proxy:v1.19.2
I1028 01:52:22.692420   69089 command_runner.go:109] > k8s.gcr.io/kube-controller-manager:v1.19.2
I1028 01:52:22.692429   69089 command_runner.go:109] > k8s.gcr.io/kube-apiserver:v1.19.2
I1028 01:52:22.692438   69089 command_runner.go:109] > k8s.gcr.io/kube-scheduler:v1.19.2
I1028 01:52:22.692446   69089 command_runner.go:109] > gcr.io/k8s-minikube/storage-provisioner:v3
I1028 01:52:22.692454   69089 command_runner.go:109] > k8s.gcr.io/etcd:3.4.13-0
I1028 01:52:22.692462   69089 command_runner.go:109] > kubernetesui/dashboard:v2.0.3
I1028 01:52:22.692470   69089 command_runner.go:109] > k8s.gcr.io/coredns:1.7.0
I1028 01:52:22.692479   69089 command_runner.go:109] > kubernetesui/metrics-scraper:v1.0.4
I1028 01:52:22.692486   69089 command_runner.go:109] > k8s.gcr.io/pause:3.2
I1028 01:52:22.692754   69089 docker.go:381] Got preloaded images: -- stdout --
k8s.gcr.io/kube-proxy:v1.19.2
k8s.gcr.io/kube-controller-manager:v1.19.2
k8s.gcr.io/kube-apiserver:v1.19.2
k8s.gcr.io/kube-scheduler:v1.19.2
gcr.io/k8s-minikube/storage-provisioner:v3
k8s.gcr.io/etcd:3.4.13-0
kubernetesui/dashboard:v2.0.3
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I1028 01:52:22.692776   69089 cache_images.go:74] Images are preloaded, skipping loading
I1028 01:52:22.692887   69089 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}}
I1028 01:52:22.760477   69089 command_runner.go:109] > cgroupfs
I1028 01:52:22.763088   69089 cni.go:74] Creating CNI manager for ""
I1028 01:52:22.763117   69089 cni.go:117] CNI unnecessary in this configuration, recommending no CNI
I1028 01:52:22.763149   69089 kubeadm.go:84] Using pod CIDR: 
I1028 01:52:22.763206   69089 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:gcloud-local-dev NodeName:gcloud-local-dev DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1028 01:52:22.763440   69089 kubeadm.go:154] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "gcloud-local-dev"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.19.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: ""
metricsBindAddress: 192.168.49.2:10249

I1028 01:52:22.763554   69089 kubeadm.go:822] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.19.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=gcloud-local-dev --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.19.2 ClusterName:gcloud-local-dev APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1028 01:52:22.763653   69089 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.2
I1028 01:52:22.772411   69089 command_runner.go:109] > kubeadm
I1028 01:52:22.772436   69089 command_runner.go:109] > kubectl
I1028 01:52:22.772443   69089 command_runner.go:109] > kubelet
I1028 01:52:22.773279   69089 binaries.go:44] Found k8s binaries, skipping transfer
I1028 01:52:22.773389   69089 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1028 01:52:22.781777   69089 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (342 bytes)
I1028 01:52:22.798024   69089 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes)
I1028 01:52:22.813663   69089 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1795 bytes)
I1028 01:52:22.829767   69089 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1028 01:52:22.833458   69089 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2   control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts"
I1028 01:52:22.845473   69089 certs.go:52] Setting up /home/jtzwu/.minikube/profiles/gcloud-local-dev for IP: 192.168.49.2
I1028 01:52:22.845544   69089 certs.go:169] skipping minikubeCA CA generation: /home/jtzwu/.minikube/ca.key
I1028 01:52:22.845569   69089 certs.go:169] skipping proxyClientCA CA generation: /home/jtzwu/.minikube/proxy-client-ca.key
I1028 01:52:22.845649   69089 certs.go:269] skipping minikube-user signed cert generation: /home/jtzwu/.minikube/profiles/gcloud-local-dev/client.key
I1028 01:52:22.845691   69089 certs.go:269] skipping minikube signed cert generation: /home/jtzwu/.minikube/profiles/gcloud-local-dev/apiserver.key.dd3b5fb2
I1028 01:52:22.845717   69089 certs.go:269] skipping aggregator signed cert generation: /home/jtzwu/.minikube/profiles/gcloud-local-dev/proxy-client.key
I1028 01:52:22.845751   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/profiles/gcloud-local-dev/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I1028 01:52:22.845773   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/profiles/gcloud-local-dev/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I1028 01:52:22.845791   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/profiles/gcloud-local-dev/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I1028 01:52:22.845808   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/profiles/gcloud-local-dev/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I1028 01:52:22.845825   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I1028 01:52:22.845886   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I1028 01:52:22.845905   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I1028 01:52:22.845924   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I1028 01:52:22.845999   69089 certs.go:348] found cert: /home/jtzwu/.minikube/certs/home/jtzwu/.minikube/certs/ca-key.pem (1675 bytes)
I1028 01:52:22.846061   69089 certs.go:348] found cert: /home/jtzwu/.minikube/certs/home/jtzwu/.minikube/certs/ca.pem (1074 bytes)
I1028 01:52:22.846108   69089 certs.go:348] found cert: /home/jtzwu/.minikube/certs/home/jtzwu/.minikube/certs/cert.pem (1119 bytes)
I1028 01:52:22.846143   69089 certs.go:348] found cert: /home/jtzwu/.minikube/certs/home/jtzwu/.minikube/certs/key.pem (1679 bytes)
I1028 01:52:22.846192   69089 vm_assets.go:96] NewFileAsset: /home/jtzwu/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I1028 01:52:22.847784   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/profiles/gcloud-local-dev/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1028 01:52:22.869073   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/profiles/gcloud-local-dev/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1028 01:52:22.890563   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/profiles/gcloud-local-dev/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1028 01:52:22.911954   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/profiles/gcloud-local-dev/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1028 01:52:22.933710   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1028 01:52:22.955222   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I1028 01:52:22.977066   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1028 01:52:22.997907   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1028 01:52:23.020307   69089 ssh_runner.go:215] scp /home/jtzwu/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1028 01:52:23.042231   69089 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig (392 bytes)
I1028 01:52:23.057933   69089 ssh_runner.go:148] Run: openssl version
I1028 01:52:23.063611   69089 command_runner.go:109] > OpenSSL 1.1.1f  31 Mar 2020
I1028 01:52:23.064012   69089 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1028 01:52:23.073250   69089 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1028 01:52:23.077278   69089 command_runner.go:109] > -rw-r--r-- 1 root root 1111 Oct 28 01:26 /usr/share/ca-certificates/minikubeCA.pem
I1028 01:52:23.077371   69089 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Oct 28 01:26 /usr/share/ca-certificates/minikubeCA.pem
I1028 01:52:23.077466   69089 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1028 01:52:23.083534   69089 command_runner.go:109] > b5213941
I1028 01:52:23.083794   69089 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1028 01:52:23.092673   69089 kubeadm.go:324] StartCluster: {Name:gcloud-local-dev KeepContext:true EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.13@sha256:4d43acbd0050148d4bc399931f1b15253b5e73815b63a67b8ab4a5c9e523403f Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.2 ClusterName:gcloud-local-dev APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ExposedPorts:[]}
I1028 01:52:23.092890   69089 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I1028 01:52:23.142243   69089 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1028 01:52:23.151072   69089 command_runner.go:109] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
I1028 01:52:23.151107   69089 command_runner.go:109] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
I1028 01:52:23.151121   69089 command_runner.go:109] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
I1028 01:52:23.151228   69089 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1028 01:52:23.160656   69089 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1028 01:52:23.160759   69089 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1028 01:52:23.169516   69089 command_runner.go:109] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I1028 01:52:23.169544   69089 command_runner.go:109] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I1028 01:52:23.169554   69089 command_runner.go:109] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I1028 01:52:23.169565   69089 command_runner.go:109] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1028 01:52:23.169608   69089 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1028 01:52:23.169639   69089 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I1028 01:52:23.297106   69089 command_runner.go:109] > [init] Using Kubernetes version: v1.19.2
I1028 01:52:23.297136   69089 command_runner.go:109] > [preflight] Running pre-flight checks
I1028 01:52:23.599322   69089 command_runner.go:109] > [preflight] Pulling images required for setting up a Kubernetes cluster
I1028 01:52:23.599365   69089 command_runner.go:109] > [preflight] This might take a minute or two, depending on the speed of your internet connection
I1028 01:52:23.599381   69089 command_runner.go:109] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1028 01:52:23.985774   69089 command_runner.go:109] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
I1028 01:52:23.986208   69089 command_runner.go:109] > [certs] Using existing ca certificate authority
I1028 01:52:23.986735   69089 command_runner.go:109] > [certs] Using existing apiserver certificate and key on disk
I1028 01:52:24.164088   69089 command_runner.go:109] > [certs] Generating "apiserver-kubelet-client" certificate and key
I1028 01:52:24.517455   69089 command_runner.go:109] > [certs] Generating "front-proxy-ca" certificate and key
I1028 01:52:24.692703   69089 command_runner.go:109] > [certs] Generating "front-proxy-client" certificate and key
I1028 01:52:24.957405   69089 command_runner.go:109] > [certs] Generating "etcd/ca" certificate and key
I1028 01:52:25.385111   69089 command_runner.go:109] > [certs] Generating "etcd/server" certificate and key
I1028 01:52:25.385152   69089 command_runner.go:109] > [certs] etcd/server serving cert is signed for DNS names [gcloud-local-dev localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1028 01:52:25.537535   69089 command_runner.go:109] > [certs] Generating "etcd/peer" certificate and key
I1028 01:52:25.537612   69089 command_runner.go:109] > [certs] etcd/peer serving cert is signed for DNS names [gcloud-local-dev localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
I1028 01:52:25.727394   69089 command_runner.go:109] > [certs] Generating "etcd/healthcheck-client" certificate and key
I1028 01:52:25.865648   69089 command_runner.go:109] > [certs] Generating "apiserver-etcd-client" certificate and key
I1028 01:52:26.025887   69089 command_runner.go:109] > [certs] Generating "sa" key and public key
I1028 01:52:26.026126   69089 command_runner.go:109] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1028 01:52:26.191225   69089 command_runner.go:109] > [kubeconfig] Writing "admin.conf" kubeconfig file
I1028 01:52:26.339413   69089 command_runner.go:109] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
I1028 01:52:26.688558   69089 command_runner.go:109] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1028 01:52:27.106756   69089 command_runner.go:109] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
I1028 01:52:27.121776   69089 command_runner.go:109] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I1028 01:52:27.123577   69089 command_runner.go:109] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I1028 01:52:27.123600   69089 command_runner.go:109] > [kubelet-start] Starting the kubelet
I1028 01:52:27.223859   69089 command_runner.go:109] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I1028 01:52:27.223898   69089 command_runner.go:109] > [control-plane] Creating static Pod manifest for "kube-apiserver"
I1028 01:52:27.233241   69089 command_runner.go:109] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
I1028 01:52:27.234907   69089 command_runner.go:109] > [control-plane] Creating static Pod manifest for "kube-scheduler"
I1028 01:52:27.236801   69089 command_runner.go:109] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1028 01:52:27.241780   69089 command_runner.go:109] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I1028 01:52:36.756347   69089 command_runner.go:109] ! W1028 01:52:23.296083     807 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
I1028 01:52:36.756396   69089 command_runner.go:109] !  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I1028 01:52:36.756411   69089 command_runner.go:109] !  [WARNING Swap]: running with swap on is not supported. Please disable swap
I1028 01:52:36.756474   69089 command_runner.go:109] !  [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I1028 01:52:36.756517   69089 ssh_runner.go:188] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": (13.586849502s)
I1028 01:52:36.761761   69089 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
W1028 01:52:36.761820   69089 ssh_runner.go:82] session error, resetting client: EOF
I1028 01:52:36.761868   69089 retry.go:31] will retry after 149.242379ms: EOF
I1028 01:52:36.911424   69089 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" gcloud-local-dev
I1028 01:52:36.972985   69089 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jtzwu/.minikube/machines/gcloud-local-dev/id_rsa Username:docker}
I1028 01:52:37.474512   69089 command_runner.go:109] ! W1028 01:52:37.474046    2153 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
I1028 01:52:37.474550   69089 command_runner.go:109] ! W1028 01:52:37.474178    2153 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
════════════════════╝
DEBUG: (gcloud.alpha.code.dev) No subprocess output for 90 seconds
Traceback (most recent call last):
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/cli.py", line 983, in Execute
    resources = calliope_command.Run(cli=self, args=args)
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/calliope/backend.py", line 808, in Run
    resources = command_instance.Run(args)
  File "/usr/lib/google-cloud-sdk/lib/surface/code/dev.py", line 142, in Run
    self._GetKubernetesEngine(args) as kube_context, \
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/code/kubernetes.py", line 182, in __enter__
    _StartMinikubeCluster(self._cluster_name, self._vm_driver, self._debug)
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/code/kubernetes.py", line 253, in _StartMinikubeCluster
    six.reraise(MinikubeStartError, e, sys.exc_info()[2])
  File "/usr/bin/../lib/google-cloud-sdk/lib/third_party/six/__init__.py", line 693, in reraise
    raise value
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/code/kubernetes.py", line 247, in _StartMinikubeCluster
    cmd, event_timeout_sec=90, show_stderr=debug):
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/code/run_subprocess.py", line 197, in StreamOutputJson
    p.wait()
  File "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/code/run_subprocess.py", line 103, in __exit__
    self.error_format.format(timeout_sec=self.timeout_sec))
googlecloudsdk.api_lib.compute.utils.TimeoutError: No subprocess output for 90 seconds
ERROR: (gcloud.alpha.code.dev) No subprocess output for 90 seconds
minikube {'data': {'currentstep': '0', 'message': '[gcloud-local-dev] minikube v1.14.1 on Debian 10.6', 'name': 'Initial Minikube Setup', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': 'b08fd4ba-0722-4778-a2d3-c11151acc60f', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'currentstep': '1', 'message': 'Using the docker driver based on existing profile', 'name': 'Selecting Driver', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': 'caa4d0fa-cbb3-4ef9-acd3-545b5339c655', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'currentstep': '3', 'message': 'Starting control plane node gcloud-local-dev in cluster gcloud-local-dev', 'name': 'Starting Node', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': 'daa0f3e6-e3ba-40ee-923a-104af03a9378', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'currentstep': '3', 'message': 'docker "gcloud-local-dev" container is missing, will recreate.', 'name': 'Starting Node', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': 'cd2811f6-703f-4c18-8c1b-270d8097429f', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'currentstep': '6', 'message': 'Creating docker container (CPUs=2, Memory=4000MB) ...', 'name': 'Creating Container', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': '80226ba3-a209-4f8d-813a-5ec0a12f7dec', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'currentstep': '8', 'message': 'Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...', 'name': 'Preparing Kubernetes', 'totalsteps': '12'}, 'datacontenttype': 'application/json', 'id': 'fa07717c-f4b8-447b-9af2-d534ae2db9d0', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.step'}
minikube {'data': {'message': 'initialization failed, will try again: run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": wait: remote command exited without exit status or exit signal\nstdout:\n[init] Using Kubernetes version: v1.19.2\n[preflight] Running pre-flight checks\n[preflight] Pulling images required for setting up a Kubernetes cluster\n[preflight] This might take a minute or two, depending on the speed of your internet connection\n[preflight] You can also perform this action in beforehand using \'kubeadm config images pull\'\n[certs] Using certificateDir folder "/var/lib/minikube/certs"\n[certs] Using existing ca certificate authority\n[certs] Using existing apiserver certificate and key on disk\n[certs] Generating "apiserver-kubelet-client" certificate and key\n[certs] Generating "front-proxy-ca" certificate and key\n[certs] Generating "front-proxy-client" certificate and key\n[certs] Generating "etcd/ca" certificate and key\n[certs] Generating "etcd/server" certificate and key\n[certs] etcd/server serving cert is signed for DNS names [gcloud-local-dev localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating "etcd/peer" certificate and key\n[certs] etcd/peer serving cert is signed for DNS names [gcloud-local-dev localhost] and IPs [192.168.49.2 127.0.0.1 ::1]\n[certs] Generating "etcd/healthcheck-client" certificate and key\n[certs] Generating "apiserver-etcd-client" certificate and key\n[certs] Generating "sa" 
key and public key\n[kubeconfig] Using kubeconfig folder "/etc/kubernetes"\n[kubeconfig] Writing "admin.conf" kubeconfig file\n[kubeconfig] Writing "kubelet.conf" kubeconfig file\n[kubeconfig] Writing "controller-manager.conf" kubeconfig file\n[kubeconfig] Writing "scheduler.conf" kubeconfig file\n[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"\n[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"\n[kubelet-start] Starting the kubelet\n[control-plane] Using manifest folder "/etc/kubernetes/manifests"\n[control-plane] Creating static Pod manifest for "kube-apiserver"\n[control-plane] Creating static Pod manifest for "kube-controller-manager"\n[control-plane] Creating static Pod manifest for "kube-scheduler"\n[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"\n[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s\n\nstderr:\nW1028 01:52:23.296083     807 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]\n\t[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/\n\t[WARNING Swap]: running with swap on is not supported. Please disable swap\n\t[WARNING Service-Kubelet]: kubelet service is not enabled, please run \'systemctl enable kubelet.service\''}, 'datacontenttype': 'application/json', 'id': '4ad1a0a5-6bdc-4306-ac8f-d8881f6b44a5', 'source': 'https://minikube.sigs.k8s.io/', 'specversion': '1.0', 'type': 'io.k8s.sigs.minikube.error'}
medyagh commented 3 years ago

update: in the logs we see

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported.

that means

gcloud alpha code dev

is not starting minikube with the --force-systemd flag, which is required for Cloud Shell.

However, when I run minikube manually inside Cloud Shell, it picks up the global env var set for force-systemd:

MINIKUBE_FORCE_SYSTEMD=true

and it works:

medya@cloudshell:~/helloworld-nodejs (k8s-minikube)$ minikube start
* minikube v1.14.1 on Debian 10.6
  - MINIKUBE_FORCE_SYSTEMD=true
  - MINIKUBE_HOME=/google/minikube
  - MINIKUBE_WANTUPDATENOTIFICATION=false
* Automatically selected the docker driver
* Starting control plane node minikube in cluster minikube
* Creating docker container (CPUs=2, Memory=4000MB) ...
* Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" by default

But when I run the sample app https://codelabs.developers.google.com/codelabs/cloud-run-hello#2 and run

gcloud alpha code dev --verbosity=debug

it fails similarly to the first comment. When I kill the command and exec into the docker container, I can verify that it is using "cgroupfs" instead of "systemd", which is required for Cloud Shell:

medya@cloudshell:~ (k8s-minikube)$ docker exec -it gcloud-local-dev /bin/bash
root@gcloud-local-dev:/# docker info | grep Cgrou
 Cgroup Driver: cgroupfs

That could mean gcloud alpha code dev is not picking up the same global env var that Cloud Shell sets for minikube (maybe it is running as a separate user?).
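To illustrate the "separate environment" hypothesis, here is a minimal shell sketch (it does not invoke gcloud or minikube) showing the two common ways an env var can fail to reach a child process: it was never exported, or the child was launched with a sanitized environment:

```shell
# Hypothetical sketch: two ways MINIKUBE_FORCE_SYSTEMD can get lost before
# reaching a subprocess such as minikube. Only demonstrates env inheritance.

MINIKUBE_FORCE_SYSTEMD=true        # set in this shell, but NOT exported
sh -c 'echo "child sees: ${MINIKUBE_FORCE_SYSTEMD:-unset}"'      # unset

export MINIKUBE_FORCE_SYSTEMD      # now exported: children inherit it
sh -c 'echo "child sees: ${MINIKUBE_FORCE_SYSTEMD:-unset}"'      # true

# A wrapper that sanitizes the environment (or switches user without
# preserving env, e.g. sudo without -E) drops the variable again:
env -i sh -c 'echo "child sees: ${MINIKUBE_FORCE_SYSTEMD:-unset}"'  # unset
```

If gcloud spawns minikube in any of these ways, an interactive shell would still see MINIKUBE_FORCE_SYSTEMD=true while the spawned minikube would not.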

medyagh commented 3 years ago

Given that info, I recommend making sure gcloud alpha code dev picks up the same environment variable that was meant for Cloud Shell.

Alternatively, minikube could refuse to start in Cloud Shell when the correct flag is not set, and provide a better error message.
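As a sketch of that alternative (purely illustrative: CLOUD_SHELL=true is the variable Google Cloud Shell sets, and MINIKUBE_FORCE_SYSTEMD is the real minikube env var, but this guard function itself is hypothetical, not minikube's actual code):

```shell
#!/bin/sh
# Hypothetical preflight guard: refuse to start in Cloud Shell unless the
# systemd cgroup driver is being forced, and print an actionable message.
require_systemd_in_cloudshell() {
  if [ "${CLOUD_SHELL:-false}" = "true" ] \
     && [ "${MINIKUBE_FORCE_SYSTEMD:-false}" != "true" ]; then
    echo "ERROR: Cloud Shell detected. Set MINIKUBE_FORCE_SYSTEMD=true (or pass --force-systemd) before 'minikube start'." >&2
    return 1
  fi
}
```

A caller would run `require_systemd_in_cloudshell || exit 1` before invoking minikube start, so users get a clear error instead of a flaky kubeadm init timeout.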

medyagh commented 3 years ago

The root cause was found.