kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

corp proxy: Get "https://control-plane.minikube.internal": x509: certificate signed by unknown authority #9874

Status: Closed (ucohen closed this issue 3 years ago)

ucohen commented 3 years ago

Trying to start minikube behind a corporate proxy, I cannot get it to start. I tried to follow https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy - not sure I got it right.

Steps to reproduce the issue:

  1. running Ubuntu 18.04
  2. using corporate proxy settings (see the sketch after the command below)
minikube start      --alsologtostderr -v=3 --driver docker \
                    --docker-env HTTP_PROXY=http://XXX.com:1111 \
                    --docker-env HTTPS_PROXY=http://XXX.com:1112 \
                    --docker-env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24,192.168.99.100/24
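
For reference, the pattern the vpn_and_proxy handbook recommends looks roughly like this (a sketch only, reusing the placeholder proxy hosts/ports from the command above): the variables are set in the shell itself, so the minikube binary, and not just the Docker daemon, honors them:

# Sketch per the vpn_and_proxy handbook; XXX.com:1111/1112 are placeholders.
export HTTP_PROXY=http://XXX.com:1111
export HTTPS_PROXY=http://XXX.com:1112
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.39.0/24,192.168.49.0/24
minikube start --driver=docker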

Full output of minikube start command used, if not already included:

I1207 18:05:08.069732 639720 out.go:185] Setting OutFile to fd 1 ...
I1207 18:05:08.070052 639720 out.go:237] isatty.IsTerminal(1) = true
I1207 18:05:08.070061 639720 out.go:198] Setting ErrFile to fd 2...
I1207 18:05:08.070069 639720 out.go:237] isatty.IsTerminal(2) = false
I1207 18:05:20.359605 639720 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1207 18:05:20.373015 639720 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1207 18:05:20.386173 639720 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver
I1207 18:05:20.386287 639720 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1207 18:05:20.400214 639720 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1207 18:05:20.400307 639720 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Mon 2020-12-07 15:51:58 UTC, end at Mon 2020-12-07 15:53:17 UTC. -- Dec 07 15:51:58 minikube systemd[1]: Starting Docker Application Container Engine... Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.843144985Z" level=info msg="Starting up" Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.846119982Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.846202087Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.846262240Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.846303919Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.848781792Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.848815292Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.848843231Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 07 15:51:58 minikube dockerd[182]: time="2020-12-07T15:51:58.848868470Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.240310774Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276106646Z" level=warning msg="Your kernel does not support swap memory limit" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276143070Z" level=warning msg="Your kernel does not support cgroup rt period" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276152973Z" level=warning msg="Your kernel does not support cgroup rt runtime" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276165825Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276173713Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.276374407Z" level=info msg="Loading containers: start." Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.389740537Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 07 15:52:00 minikube dockerd[182]: time="2020-12-07T15:52:00.463006725Z" level=info msg="Loading containers: done." Dec 07 15:52:01 minikube dockerd[182]: time="2020-12-07T15:52:01.562759926Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13 Dec 07 15:52:01 minikube dockerd[182]: time="2020-12-07T15:52:01.563017561Z" level=info msg="Daemon has completed initialization" Dec 07 15:52:01 minikube dockerd[182]: time="2020-12-07T15:52:01.603647720Z" level=info msg="API listen on /run/docker.sock" Dec 07 15:52:01 minikube systemd[1]: Started Docker Application Container Engine. 
Dec 07 15:52:03 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed. Dec 07 15:52:03 minikube systemd[1]: Stopping Docker Application Container Engine... Dec 07 15:52:03 minikube dockerd[182]: time="2020-12-07T15:52:03.954859244Z" level=info msg="Processing signal 'terminated'" Dec 07 15:52:03 minikube dockerd[182]: time="2020-12-07T15:52:03.956915483Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby Dec 07 15:52:03 minikube dockerd[182]: time="2020-12-07T15:52:03.958459358Z" level=info msg="Daemon shutdown complete" Dec 07 15:52:03 minikube dockerd[182]: time="2020-12-07T15:52:03.958591281Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby Dec 07 15:52:03 minikube systemd[1]: docker.service: Succeeded. Dec 07 15:52:03 minikube systemd[1]: Stopped Docker Application Container Engine. Dec 07 15:52:03 minikube systemd[1]: Starting Docker Application Container Engine... Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.035636671Z" level=info msg="Starting up" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.038822551Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.038877359Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.038919624Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.038951521Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.043269788Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.043339076Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.043372319Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.043392178Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.057326606Z" level=info msg="[graphdriver] using prior storage driver: overlay2" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072002373Z" level=warning msg="Your kernel does not support swap memory limit" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072039099Z" level=warning msg="Your kernel does not support cgroup rt period" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072052419Z" level=warning msg="Your kernel does not support cgroup rt runtime" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072063194Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072073831Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.072320132Z" level=info msg="Loading containers: start." 
Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.251003659Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.325039205Z" level=info msg="Loading containers: done." Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.365857598Z" level=info msg="Docker daemon" commit=4484c46d9d graphdriver(s)=overlay2 version=19.03.13 Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.365969968Z" level=info msg="Daemon has completed initialization" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.400774210Z" level=info msg="API listen on /var/run/docker.sock" Dec 07 15:52:04 minikube dockerd[434]: time="2020-12-07T15:52:04.400852604Z" level=info msg="API listen on [::]:2376" Dec 07 15:52:04 minikube systemd[1]: Started Docker Application Container Engine. ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID e48f5dacf4207 14cd22f7abe78 58 seconds ago Running kube-scheduler 0 b4fc745406c6d 512a60779fcb4 4830ab6185860 58 seconds ago Running kube-controller-manager 0 9e15164d8d8e2 4d3a8f10c1f79 0369cf4303ffd 58 seconds ago Running etcd 0 5fd1b7f953d11 f905e40071053 b15c6247777d7 58 seconds ago Running kube-apiserver 0 b18ac53ff422a ==> describe nodes <== No resources found in default namespace. ==> dmesg <== [ +0.000004] No Local Variables are initialized for Method [_STA] [ +0.000001] No Arguments are initialized for method [_STA] [ +0.000001] ACPI Error: Aborting method \SHAD._STA due to previous error (AE_NOT_FOUND) (20190816/psparse-531) [ +0.462493] usb: port power management may be unreliable [ +0.058469] platform eisa.0: EISA: Cannot allocate resource for mainboard [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 1 [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 2 [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3 [ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 4 [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 5 [ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 6 [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7 [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8 [ +0.271186] acpi PNP0C14:03: duplicate WMI GUID DEADBEEF-4001-0000-00A0-C90629100000 (first instance was on PNP0C14:03) [ +0.733815] ata3.00: supports DRM functions and may not be fully accessible [ +0.000040] ata2.00: supports DRM functions and may not be fully accessible [ +0.003134] ata3.00: supports DRM functions and may not be fully accessible [ +0.000032] ata2.00: supports DRM functions and may not be fully accessible [ +0.898938] nvidia: loading out-of-tree module taints kernel. [ +0.000008] nvidia: module license 'NVIDIA' taints kernel. 
[ +0.000001] Disabling lock debugging due to kernel taint [ +0.058377] EDAC skx: ECC is disabled on imc 0 [ +0.197257] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 450.80.02 Wed Sep 23 01:13:39 UTC 2020 [ +0.058136] EDAC skx: ECC is disabled on imc 0 [ +0.656096] EDAC skx: ECC is disabled on imc 0 [ +0.327210] EDAC skx: ECC is disabled on imc 0 [ +0.222647] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c4000-0x000c7fff window] [ +0.000163] caller os_map_kernel_space.part.10+0x6d/0x80 [nvidia] mapping multiple BARs [ +0.497131] EDAC skx: ECC is disabled on imc 0 [ +0.095872] EDAC skx: ECC is disabled on imc 0 [ +0.188124] EDAC skx: ECC is disabled on imc 0 [ +0.100035] EDAC skx: ECC is disabled on imc 0 [ +0.111877] EDAC skx: ECC is disabled on imc 0 [ +0.115979] EDAC skx: ECC is disabled on imc 0 [ +0.092005] EDAC skx: ECC is disabled on imc 0 [ +0.116053] EDAC skx: ECC is disabled on imc 0 [ +0.080108] EDAC skx: ECC is disabled on imc 0 [ +0.100332] EDAC skx: ECC is disabled on imc 0 [ +0.099705] EDAC skx: ECC is disabled on imc 0 [ +0.063834] EDAC skx: ECC is disabled on imc 0 [ +0.096095] EDAC skx: ECC is disabled on imc 0 [ +0.072026] EDAC skx: ECC is disabled on imc 0 [ +0.099931] EDAC skx: ECC is disabled on imc 0 [ +0.095930] EDAC skx: ECC is disabled on imc 0 [ +0.100059] EDAC skx: ECC is disabled on imc 0 [ +0.067946] EDAC skx: ECC is disabled on imc 0 [ +0.124389] EDAC skx: ECC is disabled on imc 0 [ +0.067651] EDAC skx: ECC is disabled on imc 0 [ +0.063961] EDAC skx: ECC is disabled on imc 0 [ +0.064186] EDAC skx: ECC is disabled on imc 0 [ +0.072121] EDAC skx: ECC is disabled on imc 0 [ +0.051984] EDAC skx: ECC is disabled on imc 0 [ +0.072009] EDAC skx: ECC is disabled on imc 0 [ +0.068000] EDAC skx: ECC is disabled on imc 0 [ +0.075995] EDAC skx: ECC is disabled on imc 0 [ +0.083867] EDAC skx: ECC is disabled on imc 0 [Dec 6 16:34] kauditd_printk_skb: 46 callbacks suppressed [ +0.289580] Started bpfilter [Dec 6 16:37] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality. 
[Dec 7 10:52] kauditd_printk_skb: 41 callbacks suppressed ==> etcd [4d3a8f10c1f7] <== [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-12-07 15:52:20.416835 I | etcdmain: etcd Version: 3.4.13 2020-12-07 15:52:20.416902 I | etcdmain: Git SHA: ae9734ed2 2020-12-07 15:52:20.416910 I | etcdmain: Go Version: go1.12.17 2020-12-07 15:52:20.416919 I | etcdmain: Go OS/Arch: linux/amd64 2020-12-07 15:52:20.416931 I | etcdmain: setting maximum number of CPUs to 32, total number of available CPUs is 32 [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead 2020-12-07 15:52:20.417079 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-12-07 15:52:20.418456 I | embed: name = minikube 2020-12-07 15:52:20.418480 I | embed: data dir = /var/lib/minikube/etcd 2020-12-07 15:52:20.418488 I | embed: member dir = /var/lib/minikube/etcd/member 2020-12-07 15:52:20.418495 I | embed: heartbeat = 100ms 2020-12-07 15:52:20.418502 I | embed: election = 1000ms 2020-12-07 15:52:20.418508 I | embed: snapshot count = 10000 2020-12-07 15:52:20.418542 I | embed: advertise client URLs = https://192.168.49.2:2379 2020-12-07 15:52:20.507243 I | etcdserver: starting member aec36adc501070cc in cluster fa54960ea34d58be raft2020/12/07 15:52:20 INFO: aec36adc501070cc switched to configuration voters=() raft2020/12/07 15:52:20 INFO: aec36adc501070cc became follower at term 0 raft2020/12/07 15:52:20 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] raft2020/12/07 15:52:20 INFO: aec36adc501070cc became follower at term 1 raft2020/12/07 15:52:20 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-12-07 15:52:20.512187 W | auth: simple token is not cryptographically signed 2020-12-07 15:52:20.521623 I | etcdserver: starting server... 
[version: 3.4.13, cluster version: to_be_decided] 2020-12-07 15:52:20.521815 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10) raft2020/12/07 15:52:20 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892) 2020-12-07 15:52:20.522789 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be 2020-12-07 15:52:20.525845 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 2020-12-07 15:52:20.525986 I | embed: listening for peers on 192.168.49.2:2380 2020-12-07 15:52:20.526149 I | embed: listening for metrics on http://127.0.0.1:2381 raft2020/12/07 15:52:21 INFO: aec36adc501070cc is starting a new election at term 1 raft2020/12/07 15:52:21 INFO: aec36adc501070cc became candidate at term 2 raft2020/12/07 15:52:21 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2 raft2020/12/07 15:52:21 INFO: aec36adc501070cc became leader at term 2 raft2020/12/07 15:52:21 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2 2020-12-07 15:52:21.408935 I | etcdserver: setting up the initial cluster version to 3.4 2020-12-07 15:52:21.411968 N | etcdserver/membership: set the initial cluster version to 3.4 2020-12-07 15:52:21.412058 I | etcdserver/api: enabled capabilities for version 3.4 2020-12-07 15:52:21.412089 I | embed: ready to serve client requests 2020-12-07 15:52:21.412104 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be 2020-12-07 15:52:21.412162 I | embed: ready to serve client requests 2020-12-07 15:52:21.414857 I | embed: serving client requests on 192.168.49.2:2379 2020-12-07 15:52:21.414889 I | embed: serving client requests on 127.0.0.1:2379 2020-12-07 15:52:30.408457 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-07 15:52:36.570736 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-07 15:52:46.570674 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-07 15:52:56.570684 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-07 15:53:06.570654 I | etcdserver/api/etcdhttp: /health OK (status code 200) 2020-12-07 15:53:16.570657 I | etcdserver/api/etcdhttp: /health OK (status code 200) ==> kernel <== 15:53:18 up 23:19, 0 users, load average: 0.85, 0.55, 0.42 Linux minikube 5.4.0-56-generic #62~18.04.1-Ubuntu SMP Tue Nov 24 10:07:50 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.1 LTS" ==> kube-apiserver [f905e4007105] <== W1207 15:52:24.191667 1 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources. W1207 15:52:24.203368 1 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W1207 15:52:24.216099 1 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1207 15:52:24.230313 1 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1207 15:52:24.310266 1 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1207 15:52:24.333932 1 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1207 15:52:24.350474 1 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources. 
W1207 15:52:24.350492 1 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources. I1207 15:52:24.358303 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I1207 15:52:24.358318 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota. I1207 15:52:24.361857 1 client.go:360] parsed scheme: "endpoint" I1207 15:52:24.361879 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I1207 15:52:24.375867 1 client.go:360] parsed scheme: "endpoint" I1207 15:52:24.375887 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }] I1207 15:52:26.491798 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1207 15:52:26.491869 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I1207 15:52:26.491923 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I1207 15:52:26.492192 1 secure_serving.go:197] Serving securely on [::]:8443 I1207 15:52:26.492224 1 available_controller.go:457] Starting AvailableConditionController I1207 15:52:26.492228 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1207 15:52:26.492234 1 tlsconfig.go:240] Starting DynamicServingCertificateController I1207 15:52:26.492276 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key I1207 15:52:26.492310 1 customresource_discovery_controller.go:209] Starting DiscoveryController I1207 15:52:26.492249 1 apiservice_controller.go:97] Starting APIServiceRegistrationController I1207 15:52:26.492326 1 controller.go:86] Starting OpenAPI controller I1207 15:52:26.492327 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1207 15:52:26.492334 1 naming_controller.go:291] Starting NamingConditionController I1207 15:52:26.492345 1 establishing_controller.go:76] Starting EstablishingController I1207 15:52:26.492351 1 controller.go:83] Starting OpenAPI AggregationController I1207 15:52:26.492360 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController I1207 15:52:26.492369 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController I1207 15:52:26.492381 1 crd_finalizer.go:266] Starting CRDFinalizer I1207 15:52:26.492758 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1207 15:52:26.492768 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller I1207 15:52:26.492792 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I1207 15:52:26.492809 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1207 15:52:26.493141 1 crdregistration_controller.go:111] Starting 
crd-autoregister controller I1207 15:52:26.493158 1 autoregister_controller.go:141] Starting autoregister controller I1207 15:52:26.493184 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister I1207 15:52:26.493183 1 cache.go:32] Waiting for caches to sync for autoregister controller E1207 15:52:26.495377 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg: I1207 15:52:26.592408 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I1207 15:52:26.592487 1 cache.go:39] Caches are synced for AvailableConditionController controller I1207 15:52:26.607230 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller I1207 15:52:26.607235 1 cache.go:39] Caches are synced for autoregister controller I1207 15:52:26.607238 1 shared_informer.go:247] Caches are synced for crd-autoregister I1207 15:52:27.491938 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1207 15:52:27.491989 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1207 15:52:27.499626 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000 I1207 15:52:27.505137 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000 I1207 15:52:27.505168 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. I1207 15:52:28.317366 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1207 15:52:28.374022 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io W1207 15:52:28.536229 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2] I1207 15:52:28.537754 1 controller.go:606] quota admission added evaluator for: endpoints I1207 15:52:28.542850 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io I1207 15:52:28.883311 1 controller.go:606] quota admission added evaluator for: serviceaccounts I1207 15:52:59.005753 1 client.go:360] parsed scheme: "passthrough" I1207 15:52:59.005822 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } I1207 15:52:59.005837 1 clientconn.go:948] ClientConn switching balancer to "pick_first" ==> kube-controller-manager [512a60779fcb] <== W1207 15:52:33.779930 1 controllermanager.go:541] Skipping "service" I1207 15:52:34.036108 1 controllermanager.go:549] Started "namespace" I1207 15:52:34.036210 1 namespace_controller.go:200] Starting namespace controller I1207 15:52:34.036236 1 shared_informer.go:240] Waiting for caches to sync for namespace I1207 15:52:34.429101 1 garbagecollector.go:128] Starting garbage collector controller I1207 15:52:34.429140 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1207 15:52:34.429190 1 graph_builder.go:282] GraphBuilder running I1207 15:52:34.429369 1 controllermanager.go:549] Started "garbagecollector" I1207 15:52:34.679575 1 controllermanager.go:549] Started "cronjob" I1207 15:52:34.679661 1 cronjob_controller.go:96] Starting CronJob Manager I1207 15:52:34.929725 1 controllermanager.go:549] Started "ttl" I1207 15:52:34.929809 1 ttl_controller.go:118] Starting TTL controller I1207 
15:52:34.929823 1 shared_informer.go:240] Waiting for caches to sync for TTL I1207 15:52:35.179772 1 controllermanager.go:549] Started "serviceaccount" I1207 15:52:35.179864 1 serviceaccounts_controller.go:117] Starting service account controller I1207 15:52:35.179895 1 shared_informer.go:240] Waiting for caches to sync for service account I1207 15:52:35.429589 1 controllermanager.go:549] Started "job" I1207 15:52:35.429665 1 job_controller.go:148] Starting job controller I1207 15:52:35.429674 1 shared_informer.go:240] Waiting for caches to sync for job I1207 15:52:35.828699 1 controllermanager.go:549] Started "disruption" W1207 15:52:35.828735 1 controllermanager.go:541] Skipping "ttl-after-finished" I1207 15:52:35.829084 1 shared_informer.go:240] Waiting for caches to sync for resource quota I1207 15:52:35.829136 1 disruption.go:331] Starting disruption controller I1207 15:52:35.829168 1 shared_informer.go:240] Waiting for caches to sync for disruption I1207 15:52:35.907182 1 shared_informer.go:247] Caches are synced for PVC protection I1207 15:52:35.907214 1 shared_informer.go:247] Caches are synced for endpoint_slice I1207 15:52:35.907253 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I1207 15:52:35.907283 1 shared_informer.go:247] Caches are synced for service account I1207 15:52:35.907303 1 shared_informer.go:247] Caches are synced for stateful set I1207 15:52:35.907294 1 shared_informer.go:247] Caches are synced for expand I1207 15:52:35.907337 1 shared_informer.go:247] Caches are synced for attach detach I1207 15:52:35.907402 1 shared_informer.go:247] Caches are synced for taint I1207 15:52:35.907347 1 shared_informer.go:247] Caches are synced for persistent volume I1207 15:52:35.907488 1 taint_manager.go:187] Starting NoExecuteTaintManager I1207 15:52:35.908988 1 shared_informer.go:247] Caches are synced for daemon sets I1207 15:52:35.918885 1 shared_informer.go:247] Caches are synced for deployment I1207 15:52:35.921649 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving I1207 15:52:35.922346 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client I1207 15:52:35.923868 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown I1207 15:52:35.923911 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client I1207 15:52:35.926088 1 shared_informer.go:247] Caches are synced for GC I1207 15:52:35.929231 1 shared_informer.go:247] Caches are synced for disruption I1207 15:52:35.929260 1 disruption.go:339] Sending events to api server. 
I1207 15:52:35.929686 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator I1207 15:52:35.929717 1 shared_informer.go:247] Caches are synced for job I1207 15:52:35.929864 1 shared_informer.go:247] Caches are synced for TTL I1207 15:52:35.929896 1 shared_informer.go:247] Caches are synced for ReplicationController I1207 15:52:35.929942 1 shared_informer.go:247] Caches are synced for ReplicaSet I1207 15:52:35.929981 1 shared_informer.go:247] Caches are synced for PV protection I1207 15:52:35.930234 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring I1207 15:52:35.930347 1 shared_informer.go:247] Caches are synced for HPA I1207 15:52:35.936657 1 shared_informer.go:247] Caches are synced for namespace I1207 15:52:35.941168 1 shared_informer.go:247] Caches are synced for bootstrap_signer I1207 15:52:35.979806 1 shared_informer.go:247] Caches are synced for endpoint I1207 15:52:36.084393 1 shared_informer.go:247] Caches are synced for resource quota I1207 15:52:36.129264 1 shared_informer.go:247] Caches are synced for resource quota I1207 15:52:36.185665 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I1207 15:52:36.485839 1 shared_informer.go:247] Caches are synced for garbage collector I1207 15:52:36.529457 1 shared_informer.go:247] Caches are synced for garbage collector I1207 15:52:36.529492 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. Proceeding to collect garbage ==> kube-scheduler [e48f5dacf420] <== I1207 15:52:20.622399 1 registry.go:173] Registering SelectorSpread plugin I1207 15:52:20.622840 1 registry.go:173] Registering SelectorSpread plugin I1207 15:52:21.641523 1 serving.go:331] Generated self-signed cert in-memory W1207 15:52:26.612359 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' W1207 15:52:26.612413 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" W1207 15:52:26.612434 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. 
W1207 15:52:26.612450 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false I1207 15:52:26.721041 1 registry.go:173] Registering SelectorSpread plugin I1207 15:52:26.721071 1 registry.go:173] Registering SelectorSpread plugin I1207 15:52:26.725826 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1207 15:52:26.725853 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1207 15:52:26.726532 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 I1207 15:52:26.726614 1 tlsconfig.go:240] Starting DynamicServingCertificateController E1207 15:52:26.727830 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1207 15:52:26.807644 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1207 15:52:26.809470 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope E1207 15:52:26.809881 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1207 15:52:26.809922 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope E1207 15:52:26.810541 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1207 15:52:26.810691 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1207 15:52:26.810764 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1207 15:52:26.810787 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope E1207 15:52:26.810875 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is 
forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope E1207 15:52:26.811046 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope E1207 15:52:26.811428 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1207 15:52:26.811909 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1207 15:52:27.707591 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope E1207 15:52:27.728276 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope E1207 15:52:27.745132 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope E1207 15:52:27.808325 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope E1207 15:52:27.935226 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system" E1207 15:52:28.007766 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope E1207 15:52:28.008714 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope E1207 15:52:28.034108 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope I1207 15:52:30.726000 1 shared_informer.go:247] Caches are 
synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file ==> kubelet <== -- Logs begin at Mon 2020-12-07 15:51:58 UTC, end at Mon 2020-12-07 15:53:18 UTC. -- Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.328641 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.428824 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.529139 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.629374 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.729599 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.829929 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:13 minikube kubelet[1254]: E1207 15:53:13.930231 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.030468 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.130823 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.231057 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.331224 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.431580 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.531792 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.631972 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.732191 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.832488 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:14 minikube kubelet[1254]: E1207 15:53:14.932727 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.032898 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.133241 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.233359 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.333593 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.433872 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.534189 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.634515 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.734741 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.835021 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:15 minikube kubelet[1254]: E1207 15:53:15.935257 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.035580 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.135862 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.236169 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.336481 1254 
kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.436762 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.537028 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.637229 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.737483 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.837684 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: E1207 15:53:16.937954 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:16 minikube kubelet[1254]: I1207 15:53:16.964710 1254 kubelet_node_status.go:70] Attempting to register node minikube Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.038295 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.138503 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.238833 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.338915 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.439231 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.539555 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.639802 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.740096 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.840357 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:17 minikube kubelet[1254]: E1207 15:53:17.940519 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.040736 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.141049 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.241321 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.341593 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.441755 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.541913 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.548508 1254 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.642227 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.742546 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.807947 1254 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/minikube?timeout=10s": x509: certificate signed by unknown authority Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.842791 1254 kubelet.go:2183] node "minikube" not found Dec 07 15:53:18 minikube kubelet[1254]: E1207 15:53:18.943130 1254 kubelet.go:2183] node "minikube" not found
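
The decisive entry in the kubelet log above is the controller.go:136 error: the request to https://control-plane.minikube.internal:8443 fails with "x509: certificate signed by unknown authority", which suggests the kubelet's API-server traffic is going through the TLS-intercepting corporate proxy instead of straight to the node. A quick way to check (a sketch, assuming the docker driver's default container name "minikube"):

# Inspect which proxy variables are visible inside the minikube container
docker exec minikube env | grep -i proxy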
tstromberg commented 3 years ago

Instead of using docker-env, which only Docker respects, set the environment variables so that the Go program (minikube) respects them as well. Try:


HTTPS_PROXY=http://proxy-iil.intel.com:912 \
NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24,192.168.99.100/24 \
minikube start --driver=docker

You can also set these variables permanently in your shell environment.
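
For example, to persist them for bash (a sketch only; the proxy host/port below are placeholders, adapt them to your environment):

# Append the proxy settings to ~/.bashrc so every new shell picks them up
cat >> ~/.bashrc <<'EOF'
export HTTPS_PROXY=http://proxy.example.com:912
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24
EOF
source ~/.bashrc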
tstromberg commented 3 years ago

Please let me know if this helps!

ucohen commented 3 years ago

Thanks for your reply @tstromberg. I think there is progress; now I get an error about non-existing directories. Could you have a look?

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
...
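
(These ls errors match the kubeadm.go:147 entry in the first log, "config check failed, skipping stale config cleanup", so they look like minikube's routine check for leftover kubeadm configs rather than the failure itself.) If stale state is suspected, a clean retry could look like this (a sketch, assuming the existing local cluster can be discarded; proxy values as in the suggestion above):

# Discard the existing local cluster and retry the proxied start from scratch
minikube delete
HTTPS_PROXY=http://proxy-iil.intel.com:912 \
NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 \
minikube start --driver=docker --alsologtostderr -v=3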

full log:

I1213 16:07:20.567948 1771486 out.go:185] Setting OutFile to fd 1 ... I1213 16:07:20.568230 1771486 out.go:237] isatty.IsTerminal(1) = true I1213 16:07:20.568247 1771486 out.go:198] Setting ErrFile to fd 2... I1213 16:07:20.568262 1771486 out.go:237] isatty.IsTerminal(2) = false I1213 16:07:20.568446 1771486 root.go:279] Updating PATH: /home/ucohen/.minikube/bin I1213 16:07:20.568962 1771486 out.go:192] Setting JSON to false I1213 16:07:20.597448 1771486 start.go:103] hostinfo: {"hostname":"ucohen-lx","uptime":596028,"bootTime":1607272412,"procs":843,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.4.0-56-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"2be680f4-2a8b-47cd-a874-9bc340444fd0"} I1213 16:07:20.600101 1771486 start.go:113] virtualization: kvm host I1213 16:07:20.605646 1771486 out.go:110] * minikube v1.15.1 on Ubuntu 18.04 I1213 16:07:20.605913 1771486 notify.go:126] Checking for updates... I1213 16:07:20.605955 1771486 driver.go:302] Setting default libvirt URI to qemu:///system I1213 16:07:20.674716 1771486 docker.go:117] docker version: linux-19.03.14 I1213 16:07:20.674895 1771486 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1213 16:07:20.811432 1771486 info.go:253] docker info: {ID:AATZ:2GLN:IE6N:VWTQ:UHTQ:2OMG:NENA:2VSK:NYYB:GOVE:Q2XA:SK2E Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:66 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-13 16:07:20.731617464 +0200 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-56-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:134774636544 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://proxy-chain.intel.com:911/ HTTPSProxy:http://proxy-chain.intel.com:911/ NoProxy: Name:ucohen-lx Labels:[] ExperimentalBuild:false ServerVersion:19.03.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:nvidia Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1213 16:07:20.811574 1771486 docker.go:147] overlay module found I1213 16:07:20.816059 1771486 out.go:110] * 
Using the docker driver based on user configuration I1213 16:07:20.816132 1771486 start.go:272] selected driver: docker I1213 16:07:20.816148 1771486 start.go:686] validating driver "docker" against I1213 16:07:20.816179 1771486 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:} I1213 16:07:20.816374 1771486 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1213 16:07:20.950769 1771486 info.go:253] docker info: {ID:AATZ:2GLN:IE6N:VWTQ:UHTQ:2OMG:NENA:2VSK:NYYB:GOVE:Q2XA:SK2E Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:66 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-13 16:07:20.866760059 +0200 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-56-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:134774636544 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://proxy-chain.intel.com:911/ HTTPSProxy:http://proxy-chain.intel.com:911/ NoProxy: Name:ucohen-lx Labels:[] ExperimentalBuild:false ServerVersion:19.03.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:nvidia Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1213 16:07:20.950944 1771486 start_flags.go:233] no existing cluster config was found, will generate one from the flags I1213 16:07:20.955816 1771486 start_flags.go:251] Using suggested 32100MB memory alloc based on sys=128531MB, container=128531MB I1213 16:07:20.956029 1771486 start_flags.go:641] Wait components to verify : map[apiserver:true system_pods:true] I1213 16:07:20.956071 1771486 cni.go:74] Creating CNI manager for "" I1213 16:07:20.956085 1771486 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1213 16:07:20.956109 1771486 start_flags.go:364] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] 
InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1213 16:07:20.960391 1771486 out.go:110] * Starting control plane node minikube in cluster minikube I1213 16:07:21.032871 1771486 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull I1213 16:07:21.032935 1771486 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull I1213 16:07:21.032963 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:21.033017 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:21.033030 1771486 cache.go:54] Caching tarball of preloaded images I1213 16:07:21.033051 1771486 preload.go:131] Found /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I1213 16:07:21.033062 1771486 cache.go:57] Finished verifying existence of preloaded tar for v1.19.4 on docker I1213 16:07:21.033581 1771486 profile.go:150] Saving config to /home/ucohen/.minikube/profiles/minikube/config.json ... 
I1213 16:07:21.033620 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/config.json: {Name:mkb95bb470e96ef849260eca00d7f1270e6380f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:21.033917 1771486 cache.go:184] Successfully downloaded all kic artifacts I1213 16:07:21.033951 1771486 start.go:314] acquiring machines lock for minikube: {Name:mk936277234fc2eae69dee06be3d5658f5f3a331 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1213 16:07:21.034032 1771486 start.go:318] acquired machines lock for "minikube" in 60.055µs I1213 16:07:21.034058 1771486 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true} I1213 16:07:21.034143 1771486 start.go:127] createHost starting for "" (driver="docker") I1213 16:07:21.036662 1771486 out.go:110] * Creating docker container (CPUs=2, Memory=32100MB) ... I1213 16:07:21.037064 1771486 start.go:164] libmachine.API.Create for "minikube" (driver="docker") I1213 16:07:21.037112 1771486 client.go:165] LocalClient.Create starting I1213 16:07:21.037177 1771486 main.go:119] libmachine: Reading certificate data from /home/ucohen/.minikube/certs/ca.pem I1213 16:07:21.037231 1771486 main.go:119] libmachine: Decoding PEM data... I1213 16:07:21.037265 1771486 main.go:119] libmachine: Parsing certificate... I1213 16:07:21.037479 1771486 main.go:119] libmachine: Reading certificate data from /home/ucohen/.minikube/certs/cert.pem I1213 16:07:21.037518 1771486 main.go:119] libmachine: Decoding PEM data... I1213 16:07:21.037544 1771486 main.go:119] libmachine: Parsing certificate... 
I1213 16:07:21.038152 1771486 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" W1213 16:07:21.096700 1771486 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1 I1213 16:07:21.097020 1771486 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs... I1213 16:07:21.097054 1771486 cli_runner.go:110] Run: docker network inspect minikube W1213 16:07:21.161372 1771486 cli_runner.go:148] docker network inspect minikube returned with exit code 1 I1213 16:07:21.161438 1771486 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I1213 16:07:21.161463 1771486 network_create.go:183] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I1213 16:07:21.161571 1771486 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1213 16:07:21.221918 1771486 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ... I1213 16:07:21.222123 1771486 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500 I1213 16:07:21.353399 1771486 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container I1213 16:07:21.353634 1771486 cli_runner.go:110] Run: docker ps -a --format {{.Names}} I1213 16:07:21.422894 1771486 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I1213 16:07:21.490528 1771486 oci.go:102] Successfully created a docker volume minikube I1213 16:07:21.490664 1771486 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib I1213 16:07:22.446273 1771486 oci.go:106] Successfully prepared a docker volume minikube W1213 16:07:22.446392 1771486 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I1213 16:07:22.446409 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:22.446515 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:22.446530 1771486 kic.go:148] Starting extracting preloaded images to volume ... 
I1213 16:07:22.446558 1771486 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'" I1213 16:07:22.446636 1771486 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir I1213 16:07:22.599320 1771486 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=32100mb --memory-swap=32100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e I1213 16:07:23.266558 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}} I1213 16:07:23.323623 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:23.388890 1771486 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I1213 16:07:23.590655 1771486 oci.go:245] the created container "minikube" has a running status. I1213 16:07:23.590738 1771486 kic.go:179] Creating ssh key for kic: /home/ucohen/.minikube/machines/minikube/id_rsa... I1213 16:07:23.784750 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys I1213 16:07:23.784783 1771486 kic_runner.go:179] docker (temp): /home/ucohen/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I1213 16:07:23.912026 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:23.958984 1771486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I1213 16:07:23.959033 1771486 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I1213 16:07:27.405069 1771486 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir: (4.9583476s) I1213 16:07:27.405145 1771486 kic.go:157] duration metric: took 4.958610 seconds to extract preloaded images to volume I1213 16:07:27.405320 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:27.482679 1771486 machine.go:88] provisioning docker machine ... 
I1213 16:07:27.482762 1771486 ubuntu.go:166] provisioning hostname "minikube" I1213 16:07:27.482891 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:27.532935 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:27.533360 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:27.533393 1771486 main.go:119] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I1213 16:07:27.726682 1771486 main.go:119] libmachine: SSH cmd err, output: : minikube I1213 16:07:27.726864 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:27.800598 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:27.800901 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:27.800943 1771486 main.go:119] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I1213 16:07:27.967921 1771486 main.go:119] libmachine: SSH cmd err, output: : I1213 16:07:27.967998 1771486 ubuntu.go:172] set auth options {CertDir:/home/ucohen/.minikube CaCertPath:/home/ucohen/.minikube/certs/ca.pem CaPrivateKeyPath:/home/ucohen/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/ucohen/.minikube/machines/server.pem ServerKeyPath:/home/ucohen/.minikube/machines/server-key.pem ClientKeyPath:/home/ucohen/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/ucohen/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/ucohen/.minikube} I1213 16:07:27.968063 1771486 ubuntu.go:174] setting up certificates I1213 16:07:27.968086 1771486 provision.go:82] configureAuth start I1213 16:07:27.968195 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:28.030722 1771486 provision.go:131] copyHostCerts I1213 16:07:28.030803 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/ca.pem -> /home/ucohen/.minikube/ca.pem I1213 16:07:28.030855 1771486 exec_runner.go:91] found /home/ucohen/.minikube/ca.pem, removing ... I1213 16:07:28.030949 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/ca.pem --> /home/ucohen/.minikube/ca.pem (1078 bytes) I1213 16:07:28.031062 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/cert.pem -> /home/ucohen/.minikube/cert.pem I1213 16:07:28.031093 1771486 exec_runner.go:91] found /home/ucohen/.minikube/cert.pem, removing ... I1213 16:07:28.031144 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/cert.pem --> /home/ucohen/.minikube/cert.pem (1119 bytes) I1213 16:07:28.031217 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/key.pem -> /home/ucohen/.minikube/key.pem I1213 16:07:28.031246 1771486 exec_runner.go:91] found /home/ucohen/.minikube/key.pem, removing ... 
I1213 16:07:28.031294 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/key.pem --> /home/ucohen/.minikube/key.pem (1675 bytes) I1213 16:07:28.031366 1771486 provision.go:105] generating server cert: /home/ucohen/.minikube/machines/server.pem ca-key=/home/ucohen/.minikube/certs/ca.pem private-key=/home/ucohen/.minikube/certs/ca-key.pem org=ucohen.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube] I1213 16:07:28.203249 1771486 provision.go:159] copyRemoteCerts I1213 16:07:28.203295 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I1213 16:07:28.203327 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.254698 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:28.370219 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/ca.pem -> /etc/docker/ca.pem I1213 16:07:28.370304 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I1213 16:07:28.404987 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/server.pem -> /etc/docker/server.pem I1213 16:07:28.405057 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes) I1213 16:07:28.440061 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I1213 16:07:28.440144 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I1213 16:07:28.477164 1771486 provision.go:85] duration metric: configureAuth took 509.038085ms I1213 16:07:28.477214 1771486 ubuntu.go:190] setting minikube options for container-runtime I1213 16:07:28.477562 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.537610 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:28.537901 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:28.537929 1771486 main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I1213 16:07:28.708796 1771486 main.go:119] libmachine: SSH cmd err, output: : overlay I1213 16:07:28.708880 1771486 ubuntu.go:71] root file system type: overlay I1213 16:07:28.709188 1771486 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ... 
I1213 16:07:28.709311 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.781740 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:28.782046 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:28.782245 1771486 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Environment="HTTP_PROXY=http://proxy-iil.intel.com:911" Environment="HTTPS_PROXY=http://proxy-iil.intel.com:912" Environment="NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24" Environment="NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24" # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I1213 16:07:28.963600 1771486 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Environment=HTTP_PROXY=http://proxy-iil.intel.com:911 Environment=HTTPS_PROXY=http://proxy-iil.intel.com:912 Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I1213 16:07:28.963770 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:29.031075 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:29.031399 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:29.031442 1771486 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I1213 16:07:30.138853 1771486 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-09-16 17:01:20.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2020-12-13 14:07:28.957716591 +0000 @@ -8,24 +8,26 @@ [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 - -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +Environment=HTTP_PROXY=http://proxy-iil.intel.com:911 +Environment=HTTPS_PROXY=http://proxy-iil.intel.com:912 +Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 +Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -33,9 +35,10 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I1213 16:07:30.138963 1771486 machine.go:91] provisioned docker machine in 2.65622896s I1213 16:07:30.138987 1771486 client.go:168] LocalClient.Create took 9.101862405s I1213 16:07:30.139018 1771486 start.go:172] duration metric: libmachine.API.Create for "minikube" took 9.101954177s I1213 16:07:30.139035 1771486 start.go:268] post-start starting for "minikube" (driver="docker") I1213 16:07:30.139047 1771486 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I1213 16:07:30.139164 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I1213 16:07:30.139250 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.204327 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.324084 1771486 ssh_runner.go:148] Run: cat /etc/os-release I1213 16:07:30.329678 1771486 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I1213 16:07:30.329732 1771486 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I1213 16:07:30.329756 1771486 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I1213 16:07:30.329770 1771486 info.go:97] Remote host: Ubuntu 20.04.1 LTS I1213 16:07:30.329789 1771486 filesync.go:118] Scanning /home/ucohen/.minikube/addons for local assets ... I1213 16:07:30.329873 1771486 filesync.go:118] Scanning /home/ucohen/.minikube/files for local assets ... 
I1213 16:07:30.330072 1771486 filesync.go:141] local asset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem -> IntelCA5A-base64.pem in /etc/ssl/certs I1213 16:07:30.330105 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem -> /etc/ssl/certs/IntelCA5A-base64.pem I1213 16:07:30.330143 1771486 filesync.go:141] local asset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem -> IntelCA5B-base64.pem in /etc/ssl/certs I1213 16:07:30.330155 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem -> /etc/ssl/certs/IntelCA5B-base64.pem I1213 16:07:30.330237 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/ssl/certs /etc/ssl/certs I1213 16:07:30.344768 1771486 ssh_runner.go:148] Run: stat -c "%s %y" /etc/ssl/certs/IntelCA5A-base64.pem I1213 16:07:30.350047 1771486 ssh_runner.go:205] existence check for /etc/ssl/certs/IntelCA5A-base64.pem: stat -c "%s %y" /etc/ssl/certs/IntelCA5A-base64.pem: Process exited with status 1 stdout: stderr: stat: cannot stat '/etc/ssl/certs/IntelCA5A-base64.pem': No such file or directory I1213 16:07:30.350117 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem --> /etc/ssl/certs/IntelCA5A-base64.pem (2416 bytes) I1213 16:07:30.386833 1771486 ssh_runner.go:148] Run: stat -c "%s %y" /etc/ssl/certs/IntelCA5B-base64.pem I1213 16:07:30.392796 1771486 ssh_runner.go:205] existence check for /etc/ssl/certs/IntelCA5B-base64.pem: stat -c "%s %y" /etc/ssl/certs/IntelCA5B-base64.pem: Process exited with status 1 stdout: stderr: stat: cannot stat '/etc/ssl/certs/IntelCA5B-base64.pem': No such file or directory I1213 16:07:30.392861 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem --> /etc/ssl/certs/IntelCA5B-base64.pem (2416 bytes) I1213 16:07:30.429634 1771486 start.go:271] post-start completed in 290.579375ms I1213 16:07:30.430269 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:30.500198 1771486 profile.go:150] Saving config to /home/ucohen/.minikube/profiles/minikube/config.json ... 
I1213 16:07:30.500663 1771486 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I1213 16:07:30.500760 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.567685 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.676960 1771486 start.go:130] duration metric: createHost completed in 9.642793421s I1213 16:07:30.677010 1771486 start.go:81] releasing machines lock for "minikube", held for 9.642960331s I1213 16:07:30.677183 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:30.738189 1771486 out.go:110] * Found network options: I1213 16:07:30.740588 1771486 out.go:110] - HTTP_PROXY=http://proxy-iil.intel.com:911 W1213 16:07:30.740678 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.740711 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.740734 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.745025 1771486 out.go:110] - HTTPS_PROXY=http://proxy-iil.intel.com:912 W1213 16:07:30.745105 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.745133 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.745156 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.749568 1771486 out.go:110] - NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 W1213 16:07:30.749651 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.749689 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.749742 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.754622 1771486 out.go:110] - http_proxy=http://proxy-iil.intel.com:911 W1213 16:07:30.754702 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.754730 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.754753 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.759231 1771486 out.go:110] - https_proxy=http://proxy-iil.intel.com:912 W1213 16:07:30.759307 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.759333 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.759356 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.763959 1771486 out.go:110] - no_proxy=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 W1213 16:07:30.764044 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764075 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764099 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764139 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764162 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764203 1771486 proxy.go:118] fail to check proxy env: Error ip not 
in block I1213 16:07:30.764315 1771486 ssh_runner.go:148] Run: systemctl --version I1213 16:07:30.764316 1771486 ssh_runner.go:148] Run: curl -x http://proxy-iil.intel.com:912 -sS -m 2 https://k8s.gcr.io/ I1213 16:07:30.764406 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.764461 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.821649 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.832209 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.932179 1771486 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I1213 16:07:31.280776 1771486 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1213 16:07:31.300031 1771486 cruntime.go:193] skipping containerd shutdown because we are bound to it I1213 16:07:31.300158 1771486 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I1213 16:07:31.319783 1771486 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1213 16:07:31.338088 1771486 ssh_runner.go:148] Run: sudo systemctl daemon-reload I1213 16:07:31.444319 1771486 ssh_runner.go:148] Run: sudo systemctl start docker I1213 16:07:31.461007 1771486 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} I1213 16:07:31.571064 1771486 out.go:110] * Preparing Kubernetes v1.19.4 on Docker 19.03.13 ... I1213 16:07:31.573607 1771486 out.go:110] - env HTTP_PROXY=http://proxy-iil.intel.com:911 I1213 16:07:31.576086 1771486 out.go:110] - env HTTPS_PROXY=http://proxy-iil.intel.com:912 I1213 16:07:31.578619 1771486 out.go:110] - env NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 I1213 16:07:31.581002 1771486 out.go:110] - env NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 I1213 16:07:31.581132 1771486 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1213 16:07:31.635944 1771486 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I1213 16:07:31.642168 1771486 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I1213 16:07:31.660274 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:31.660329 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:31.660442 1771486 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1213 16:07:31.727487 1771486 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 
kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1213 16:07:31.727543 1771486 docker.go:319] Images already preloaded, skipping extraction I1213 16:07:31.727634 1771486 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1213 16:07:31.784337 1771486 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1213 16:07:31.784399 1771486 cache_images.go:74] Images are preloaded, skipping loading I1213 16:07:31.784503 1771486 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}} I1213 16:07:31.907297 1771486 cni.go:74] Creating CNI manager for "" I1213 16:07:31.907343 1771486 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1213 16:07:31.907358 1771486 kubeadm.go:84] Using pod CIDR: I1213 16:07:31.907381 1771486 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I1213 16:07:31.907607 1771486 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.19.4 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: 
cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "" metricsBindAddress: 192.168.49.2:10249 I1213 16:07:31.907785 1771486 kubeadm.go:822] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I1213 16:07:31.907879 1771486 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4 I1213 16:07:31.921870 1771486 binaries.go:44] Found k8s binaries, skipping transfer I1213 16:07:31.921991 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I1213 16:07:31.935465 1771486 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I1213 16:07:31.962079 1771486 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I1213 16:07:31.988414 1771486 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes) I1213 16:07:32.014956 1771486 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I1213 16:07:32.020798 1771486 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I1213 16:07:32.040339 1771486 certs.go:52] Setting up /home/ucohen/.minikube/profiles/minikube for IP: 192.168.49.2 I1213 16:07:32.040426 1771486 certs.go:169] skipping minikubeCA CA generation: /home/ucohen/.minikube/ca.key I1213 16:07:32.040479 1771486 certs.go:169] skipping proxyClientCA CA generation: /home/ucohen/.minikube/proxy-client-ca.key I1213 16:07:32.040555 1771486 certs.go:273] generating minikube-user signed cert: /home/ucohen/.minikube/profiles/minikube/client.key I1213 16:07:32.040583 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/client.crt with IP's: [] I1213 16:07:32.284758 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/client.crt ... I1213 16:07:32.284787 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/client.crt: {Name:mk210d434693f6be82f7fd362be41dca53e22bce Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.284892 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/client.key ... 
I1213 16:07:32.284900 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/client.key: {Name:mk5fce7af04f142a21441b5cbf07dc46367a6ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.284953 1771486 certs.go:273] generating minikube signed cert: /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I1213 16:07:32.284963 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I1213 16:07:32.469911 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I1213 16:07:32.469927 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk8872c323a6718fb1417f765d626111e4841581 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.470001 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I1213 16:07:32.470006 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk5a02dea70a9ac6ecc668d4d9268dacf617162c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.470050 1771486 certs.go:284] copying /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/ucohen/.minikube/profiles/minikube/apiserver.crt I1213 16:07:32.470085 1771486 certs.go:288] copying /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/ucohen/.minikube/profiles/minikube/apiserver.key I1213 16:07:32.470112 1771486 certs.go:273] generating aggregator signed cert: /home/ucohen/.minikube/profiles/minikube/proxy-client.key I1213 16:07:32.470116 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I1213 16:07:32.565701 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/proxy-client.crt ... I1213 16:07:32.565720 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc0552aca2652498e7c2601f1c2d6b0391b0774 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.565795 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/proxy-client.key ... 
I1213 16:07:32.565801 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/proxy-client.key: {Name:mkcf3fa7d4ffad6c3566d1b3eabcd538f478c38f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.565844 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I1213 16:07:32.565854 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I1213 16:07:32.565862 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I1213 16:07:32.565869 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I1213 16:07:32.565876 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I1213 16:07:32.565884 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I1213 16:07:32.565891 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I1213 16:07:32.565898 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I1213 16:07:32.565930 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/ca-key.pem (1675 bytes) I1213 16:07:32.565951 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/ca.pem (1078 bytes) I1213 16:07:32.565969 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/cert.pem (1119 bytes) I1213 16:07:32.565984 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/key.pem (1675 bytes) I1213 16:07:32.566001 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.566525 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I1213 16:07:32.599489 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I1213 16:07:32.632641 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I1213 16:07:32.665279 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I1213 16:07:32.698409 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I1213 16:07:32.732018 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I1213 16:07:32.767191 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I1213 16:07:32.802984 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I1213 16:07:32.839496 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I1213 16:07:32.875950 1771486 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig 
(392 bytes) I1213 16:07:32.902108 1771486 ssh_runner.go:148] Run: openssl version I1213 16:07:32.912160 1771486 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I1213 16:07:32.927054 1771486 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.933132 1771486 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec 6 15:53 /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.933213 1771486 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.943371 1771486 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I1213 16:07:32.957989 1771486 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1213 16:07:32.958187 1771486 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I1213 16:07:33.015355 1771486 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I1213 16:07:33.027729 1771486 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I1213 16:07:33.040971 1771486 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver I1213 16:07:33.041077 1771486 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I1213 16:07:33.053769 1771486 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory 
I1213 16:07:33.053822 1771486 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
tstromberg commented 3 years ago

Looks like we don't know how to handle certificate file names containing parentheses.
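
A minimal shell repro of what's going on (the path is illustrative, not minikube's exact provisioning command): bash treats an unquoted `(` as the start of a subshell, so the command string aborts with a syntax error before anything runs, while quoting the path inside the command string avoids it.

# fails: '(' is a shell metacharacter in an unquoted word
bash -c 'sudo touch /etc/ssl/certs/IntelCA5A(1)-base64.crt'
# bash: -c: line 0: syntax error near unexpected token `('

# works: the path is quoted, so bash sees one literal argument
bash -c 'sudo touch "/etc/ssl/certs/IntelCA5A(1)-base64.crt"'

(The proper fix on our side is presumably to shell-quote file names before passing them through bash -c.)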

On Sun, Dec 13, 2020, 5:23 AM ucohen wrote:

Thanks for your reply @tstromberg https://github.com/tstromberg. I think there is progress; now I have a certificate parsing issue. Could you have a look?

➜ minikube start --driver=docker
😄 minikube v1.15.1 on Ubuntu 18.04
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=32100MB) ...
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🔥 Deleting "minikube" in docker ...
🤦 StartHost failed, but will try again: post-start: sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt: Process exited with status 1
output: bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt'
🔥 Creating docker container (CPUs=2, Memory=32100MB) ...
😿 Failed to start docker container. Running "minikube delete" may fix it: post-start: sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt: Process exited with status 1
output: bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt'
❌ Exiting due to GUEST_PROVISION: Failed to start host: post-start: sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt: Process exited with status 1
output: bash: -c: line 0: syntax error near unexpected token `('
bash: -c: line 0: `sudo test -d /etc/ssl/certs && sudo scp -t /etc/ssl/certs && sudo touch -d "2018-08-22 09:34:44 +0300" /etc/ssl/certs/IntelCA5A(1)-base64.crt'
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose


ucohen commented 3 years ago

Looks like we don't know how to handle certificate file names containing parentheses.

I figured this was the issue, so I updated the certificate name (sketched below, after the error) and got this error:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
...
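
(The rename itself was roughly the following; this assumes the certs live under ~/.minikube/files/etc/ssl/certs, the directory minikube syncs into the node, and the exact filenames are illustrative.)

# drop the shell metacharacters from the file name, then recreate the cluster
cd ~/.minikube/files/etc/ssl/certs
mv 'IntelCA5A(1)-base64.crt' IntelCA5A-base64.pem
minikube delete
minikube start --driver=docker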
I1213 16:07:20.567948 1771486 out.go:185] Setting OutFile to fd 1 ... I1213 16:07:20.568230 1771486 out.go:237] isatty.IsTerminal(1) = true I1213 16:07:20.568247 1771486 out.go:198] Setting ErrFile to fd 2... I1213 16:07:20.568262 1771486 out.go:237] isatty.IsTerminal(2) = false I1213 16:07:20.568446 1771486 root.go:279] Updating PATH: /home/ucohen/.minikube/bin I1213 16:07:20.568962 1771486 out.go:192] Setting JSON to false I1213 16:07:20.597448 1771486 start.go:103] hostinfo: {"hostname":"ucohen-lx","uptime":596028,"bootTime":1607272412,"procs":843,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"18.04","kernelVersion":"5.4.0-56-generic","virtualizationSystem":"kvm","virtualizationRole":"host","hostid":"2be680f4-2a8b-47cd-a874-9bc340444fd0"} I1213 16:07:20.600101 1771486 start.go:113] virtualization: kvm host I1213 16:07:20.605646 1771486 out.go:110] * minikube v1.15.1 on Ubuntu 18.04 I1213 16:07:20.605913 1771486 notify.go:126] Checking for updates... I1213 16:07:20.605955 1771486 driver.go:302] Setting default libvirt URI to qemu:///system I1213 16:07:20.674716 1771486 docker.go:117] docker version: linux-19.03.14 I1213 16:07:20.674895 1771486 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1213 16:07:20.811432 1771486 info.go:253] docker info: {ID:AATZ:2GLN:IE6N:VWTQ:UHTQ:2OMG:NENA:2VSK:NYYB:GOVE:Q2XA:SK2E Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:66 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-13 16:07:20.731617464 +0200 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-56-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:134774636544 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://proxy-chain.intel.com:911/ HTTPSProxy:http://proxy-chain.intel.com:911/ NoProxy: Name:ucohen-lx Labels:[] ExperimentalBuild:false ServerVersion:19.03.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:nvidia Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1213 16:07:20.811574 1771486 docker.go:147] overlay module found I1213 16:07:20.816059 1771486 out.go:110] * 
Using the docker driver based on user configuration I1213 16:07:20.816132 1771486 start.go:272] selected driver: docker I1213 16:07:20.816148 1771486 start.go:686] validating driver "docker" against I1213 16:07:20.816179 1771486 start.go:697] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Fix: Doc:} I1213 16:07:20.816374 1771486 cli_runner.go:110] Run: docker system info --format "{{json .}}" I1213 16:07:20.950769 1771486 info.go:253] docker info: {ID:AATZ:2GLN:IE6N:VWTQ:UHTQ:2OMG:NENA:2VSK:NYYB:GOVE:Q2XA:SK2E Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:66 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:21 OomKillDisable:true NGoroutines:35 SystemTime:2020-12-13 16:07:20.866760059 +0200 IST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-56-generic OperatingSystem:Ubuntu 18.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:32 MemTotal:134774636544 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http://proxy-chain.intel.com:911/ HTTPSProxy:http://proxy-chain.intel.com:911/ NoProxy: Name:ucohen-lx Labels:[] ExperimentalBuild:false ServerVersion:19.03.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:nvidia Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ea765aba0d05254012b0b9e595e995c09186427f Expected:ea765aba0d05254012b0b9e595e995c09186427f} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:}} I1213 16:07:20.950944 1771486 start_flags.go:233] no existing cluster config was found, will generate one from the flags I1213 16:07:20.955816 1771486 start_flags.go:251] Using suggested 32100MB memory alloc based on sys=128531MB, container=128531MB I1213 16:07:20.956029 1771486 start_flags.go:641] Wait components to verify : map[apiserver:true system_pods:true] I1213 16:07:20.956071 1771486 cni.go:74] Creating CNI manager for "" I1213 16:07:20.956085 1771486 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1213 16:07:20.956109 1771486 start_flags.go:364] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] 
InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1213 16:07:20.960391 1771486 out.go:110] * Starting control plane node minikube in cluster minikube I1213 16:07:21.032871 1771486 image.go:92] Found gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e in local docker daemon, skipping pull I1213 16:07:21.032935 1771486 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e exists in daemon, skipping pull I1213 16:07:21.032963 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:21.033017 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:21.033030 1771486 cache.go:54] Caching tarball of preloaded images I1213 16:07:21.033051 1771486 preload.go:131] Found /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 in cache, skipping download I1213 16:07:21.033062 1771486 cache.go:57] Finished verifying existence of preloaded tar for v1.19.4 on docker I1213 16:07:21.033581 1771486 profile.go:150] Saving config to /home/ucohen/.minikube/profiles/minikube/config.json ... 
I1213 16:07:21.033620 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/config.json: {Name:mkb95bb470e96ef849260eca00d7f1270e6380f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:21.033917 1771486 cache.go:184] Successfully downloaded all kic artifacts I1213 16:07:21.033951 1771486 start.go:314] acquiring machines lock for minikube: {Name:mk936277234fc2eae69dee06be3d5658f5f3a331 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I1213 16:07:21.034032 1771486 start.go:318] acquired machines lock for "minikube" in 60.055µs I1213 16:07:21.034058 1771486 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} &{Name: IP: Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true} I1213 16:07:21.034143 1771486 start.go:127] createHost starting for "" (driver="docker") I1213 16:07:21.036662 1771486 out.go:110] * Creating docker container (CPUs=2, Memory=32100MB) ... I1213 16:07:21.037064 1771486 start.go:164] libmachine.API.Create for "minikube" (driver="docker") I1213 16:07:21.037112 1771486 client.go:165] LocalClient.Create starting I1213 16:07:21.037177 1771486 main.go:119] libmachine: Reading certificate data from /home/ucohen/.minikube/certs/ca.pem I1213 16:07:21.037231 1771486 main.go:119] libmachine: Decoding PEM data... I1213 16:07:21.037265 1771486 main.go:119] libmachine: Parsing certificate... I1213 16:07:21.037479 1771486 main.go:119] libmachine: Reading certificate data from /home/ucohen/.minikube/certs/cert.pem I1213 16:07:21.037518 1771486 main.go:119] libmachine: Decoding PEM data... I1213 16:07:21.037544 1771486 main.go:119] libmachine: Parsing certificate... 
I1213 16:07:21.038152 1771486 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" W1213 16:07:21.096700 1771486 cli_runner.go:148] docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" returned with exit code 1 I1213 16:07:21.097020 1771486 network_create.go:178] running [docker network inspect minikube] to gather additional debugging logs... I1213 16:07:21.097054 1771486 cli_runner.go:110] Run: docker network inspect minikube W1213 16:07:21.161372 1771486 cli_runner.go:148] docker network inspect minikube returned with exit code 1 I1213 16:07:21.161438 1771486 network_create.go:181] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1 stdout: [] stderr: Error: No such network: minikube I1213 16:07:21.161463 1771486 network_create.go:183] output of [docker network inspect minikube]: -- stdout -- [] -- /stdout -- ** stderr ** Error: No such network: minikube ** /stderr ** I1213 16:07:21.161571 1771486 cli_runner.go:110] Run: docker network inspect bridge --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1213 16:07:21.221918 1771486 network_create.go:96] attempt to create network 192.168.49.0/24 with subnet: minikube and gateway 192.168.49.1 and MTU of 1500 ... I1213 16:07:21.222123 1771486 cli_runner.go:110] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc --label=created_by.minikube.sigs.k8s.io=true minikube -o com.docker.network.driver.mtu=1500 I1213 16:07:21.353399 1771486 kic.go:93] calculated static IP "192.168.49.2" for the "minikube" container I1213 16:07:21.353634 1771486 cli_runner.go:110] Run: docker ps -a --format {{.Names}} I1213 16:07:21.422894 1771486 cli_runner.go:110] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true I1213 16:07:21.490528 1771486 oci.go:102] Successfully created a docker volume minikube I1213 16:07:21.490664 1771486 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -d /var/lib I1213 16:07:22.446273 1771486 oci.go:106] Successfully prepared a docker volume minikube W1213 16:07:22.446392 1771486 oci.go:153] Your kernel does not support swap limit capabilities or the cgroup is not mounted. I1213 16:07:22.446409 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:22.446515 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:22.446530 1771486 kic.go:148] Starting extracting preloaded images to volume ... 
I1213 16:07:22.446558 1771486 cli_runner.go:110] Run: docker info --format "'{{json .SecurityOptions}}'" I1213 16:07:22.446636 1771486 cli_runner.go:110] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir I1213 16:07:22.599320 1771486 cli_runner.go:110] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=32100mb --memory-swap=32100mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e I1213 16:07:23.266558 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Running}} I1213 16:07:23.323623 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:23.388890 1771486 cli_runner.go:110] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables I1213 16:07:23.590655 1771486 oci.go:245] the created container "minikube" has a running status. I1213 16:07:23.590738 1771486 kic.go:179] Creating ssh key for kic: /home/ucohen/.minikube/machines/minikube/id_rsa... I1213 16:07:23.784750 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys I1213 16:07:23.784783 1771486 kic_runner.go:179] docker (temp): /home/ucohen/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes) I1213 16:07:23.912026 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:23.958984 1771486 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys I1213 16:07:23.959033 1771486 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys] I1213 16:07:27.405069 1771486 cli_runner.go:154] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e -I lz4 -xvf /preloaded.tar -C /extractDir: (4.9583476s) I1213 16:07:27.405145 1771486 kic.go:157] duration metric: took 4.958610 seconds to extract preloaded images to volume I1213 16:07:27.405320 1771486 cli_runner.go:110] Run: docker container inspect minikube --format={{.State.Status}} I1213 16:07:27.482679 1771486 machine.go:88] provisioning docker machine ... 
I1213 16:07:27.482762 1771486 ubuntu.go:166] provisioning hostname "minikube" I1213 16:07:27.482891 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:27.532935 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:27.533360 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:27.533393 1771486 main.go:119] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I1213 16:07:27.726682 1771486 main.go:119] libmachine: SSH cmd err, output: : minikube I1213 16:07:27.726864 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:27.800598 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:27.800901 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:27.800943 1771486 main.go:119] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I1213 16:07:27.967921 1771486 main.go:119] libmachine: SSH cmd err, output: : I1213 16:07:27.967998 1771486 ubuntu.go:172] set auth options {CertDir:/home/ucohen/.minikube CaCertPath:/home/ucohen/.minikube/certs/ca.pem CaPrivateKeyPath:/home/ucohen/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/ucohen/.minikube/machines/server.pem ServerKeyPath:/home/ucohen/.minikube/machines/server-key.pem ClientKeyPath:/home/ucohen/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/ucohen/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/ucohen/.minikube} I1213 16:07:27.968063 1771486 ubuntu.go:174] setting up certificates I1213 16:07:27.968086 1771486 provision.go:82] configureAuth start I1213 16:07:27.968195 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:28.030722 1771486 provision.go:131] copyHostCerts I1213 16:07:28.030803 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/ca.pem -> /home/ucohen/.minikube/ca.pem I1213 16:07:28.030855 1771486 exec_runner.go:91] found /home/ucohen/.minikube/ca.pem, removing ... I1213 16:07:28.030949 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/ca.pem --> /home/ucohen/.minikube/ca.pem (1078 bytes) I1213 16:07:28.031062 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/cert.pem -> /home/ucohen/.minikube/cert.pem I1213 16:07:28.031093 1771486 exec_runner.go:91] found /home/ucohen/.minikube/cert.pem, removing ... I1213 16:07:28.031144 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/cert.pem --> /home/ucohen/.minikube/cert.pem (1119 bytes) I1213 16:07:28.031217 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/key.pem -> /home/ucohen/.minikube/key.pem I1213 16:07:28.031246 1771486 exec_runner.go:91] found /home/ucohen/.minikube/key.pem, removing ... 
I1213 16:07:28.031294 1771486 exec_runner.go:98] cp: /home/ucohen/.minikube/certs/key.pem --> /home/ucohen/.minikube/key.pem (1675 bytes) I1213 16:07:28.031366 1771486 provision.go:105] generating server cert: /home/ucohen/.minikube/machines/server.pem ca-key=/home/ucohen/.minikube/certs/ca.pem private-key=/home/ucohen/.minikube/certs/ca-key.pem org=ucohen.minikube san=[192.168.49.2 localhost 127.0.0.1 minikube minikube] I1213 16:07:28.203249 1771486 provision.go:159] copyRemoteCerts I1213 16:07:28.203295 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I1213 16:07:28.203327 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.254698 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:28.370219 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/certs/ca.pem -> /etc/docker/ca.pem I1213 16:07:28.370304 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes) I1213 16:07:28.404987 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/server.pem -> /etc/docker/server.pem I1213 16:07:28.405057 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes) I1213 16:07:28.440061 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem I1213 16:07:28.440144 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes) I1213 16:07:28.477164 1771486 provision.go:85] duration metric: configureAuth took 509.038085ms I1213 16:07:28.477214 1771486 ubuntu.go:190] setting minikube options for container-runtime I1213 16:07:28.477562 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.537610 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:28.537901 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:28.537929 1771486 main.go:119] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I1213 16:07:28.708796 1771486 main.go:119] libmachine: SSH cmd err, output: : overlay I1213 16:07:28.708880 1771486 ubuntu.go:71] root file system type: overlay I1213 16:07:28.709188 1771486 provision.go:290] Updating docker unit: /lib/systemd/system/docker.service ... 
I1213 16:07:28.709311 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:28.781740 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:28.782046 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:28.782245 1771486 main.go:119] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %s "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Environment="HTTP_PROXY=http://proxy-iil.intel.com:911" Environment="HTTPS_PROXY=http://proxy-iil.intel.com:912" Environment="NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24" Environment="NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24" # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I1213 16:07:28.963600 1771486 main.go:119] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com BindsTo=containerd.service After=network-online.target firewalld.service containerd.service Wants=network-online.target Requires=docker.socket [Service] Type=notify Environment=HTTP_PROXY=http://proxy-iil.intel.com:911 Environment=HTTPS_PROXY=http://proxy-iil.intel.com:912 Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I1213 16:07:28.963770 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:29.031075 1771486 main.go:119] libmachine: Using SSH client type: native I1213 16:07:29.031399 1771486 main.go:119] libmachine: &{{{ 0 [] [] []} docker [0x808c20] 0x808be0 [] 0s} 127.0.0.1 32859 } I1213 16:07:29.031442 1771486 main.go:119] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I1213 16:07:30.138853 1771486 main.go:119] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service 2020-09-16 17:01:20.000000000 +0000 +++ /lib/systemd/system/docker.service.new 2020-12-13 14:07:28.957716591 +0000 @@ -8,24 +8,26 @@ [Service] Type=notify -# the default is not to use systemd for cgroups because the delegate issues still -# exists and systemd currently does not support the cgroup feature set required -# for containers run by docker -ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock -ExecReload=/bin/kill -s HUP $MAINPID -TimeoutSec=0 -RestartSec=2 -Restart=always - -# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229. -# Both the old, and new location are accepted by systemd 229 and up, so using the old location -# to make them work for either version of systemd. -StartLimitBurst=3 - -# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230. -# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make -# this option work for either version of systemd. -StartLimitInterval=60s + +Environment=HTTP_PROXY=http://proxy-iil.intel.com:911 +Environment=HTTPS_PROXY=http://proxy-iil.intel.com:912 +Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 +Environment=NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 + + +# This file is a systemd drop-in unit that inherits from the base dockerd configuration. +# The base configuration already specifies an 'ExecStart=...' command. The first directive +# here is to clear out that command inherited from the base configuration. Without this, +# the command from the base configuration and the command specified here are treated as +# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd +# will catch this invalid input and refuse to start the service with an error like: +# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. + +# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other +# container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
+ExecStart= +ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 +ExecReload=/bin/kill -s HUP # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. @@ -33,9 +35,10 @@ LimitNPROC=infinity LimitCORE=infinity -# Comment TasksMax if your systemd version does not support it. -# Only systemd 226 and above support this option. +# Uncomment TasksMax if your systemd version supports it. +# Only systemd 226 and above support this version. TasksMax=infinity +TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install. Executing: /lib/systemd/systemd-sysv-install enable docker I1213 16:07:30.138963 1771486 machine.go:91] provisioned docker machine in 2.65622896s I1213 16:07:30.138987 1771486 client.go:168] LocalClient.Create took 9.101862405s I1213 16:07:30.139018 1771486 start.go:172] duration metric: libmachine.API.Create for "minikube" took 9.101954177s I1213 16:07:30.139035 1771486 start.go:268] post-start starting for "minikube" (driver="docker") I1213 16:07:30.139047 1771486 start.go:278] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I1213 16:07:30.139164 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I1213 16:07:30.139250 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.204327 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.324084 1771486 ssh_runner.go:148] Run: cat /etc/os-release I1213 16:07:30.329678 1771486 main.go:119] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found I1213 16:07:30.329732 1771486 main.go:119] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found I1213 16:07:30.329756 1771486 main.go:119] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found I1213 16:07:30.329770 1771486 info.go:97] Remote host: Ubuntu 20.04.1 LTS I1213 16:07:30.329789 1771486 filesync.go:118] Scanning /home/ucohen/.minikube/addons for local assets ... I1213 16:07:30.329873 1771486 filesync.go:118] Scanning /home/ucohen/.minikube/files for local assets ... 
I1213 16:07:30.330072 1771486 filesync.go:141] local asset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem -> IntelCA5A-base64.pem in /etc/ssl/certs I1213 16:07:30.330105 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem -> /etc/ssl/certs/IntelCA5A-base64.pem I1213 16:07:30.330143 1771486 filesync.go:141] local asset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem -> IntelCA5B-base64.pem in /etc/ssl/certs I1213 16:07:30.330155 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem -> /etc/ssl/certs/IntelCA5B-base64.pem I1213 16:07:30.330237 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/ssl/certs /etc/ssl/certs I1213 16:07:30.344768 1771486 ssh_runner.go:148] Run: stat -c "%s %y" /etc/ssl/certs/IntelCA5A-base64.pem I1213 16:07:30.350047 1771486 ssh_runner.go:205] existence check for /etc/ssl/certs/IntelCA5A-base64.pem: stat -c "%s %y" /etc/ssl/certs/IntelCA5A-base64.pem: Process exited with status 1 stdout: stderr: stat: cannot stat '/etc/ssl/certs/IntelCA5A-base64.pem': No such file or directory I1213 16:07:30.350117 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5A-base64.pem --> /etc/ssl/certs/IntelCA5A-base64.pem (2416 bytes) I1213 16:07:30.386833 1771486 ssh_runner.go:148] Run: stat -c "%s %y" /etc/ssl/certs/IntelCA5B-base64.pem I1213 16:07:30.392796 1771486 ssh_runner.go:205] existence check for /etc/ssl/certs/IntelCA5B-base64.pem: stat -c "%s %y" /etc/ssl/certs/IntelCA5B-base64.pem: Process exited with status 1 stdout: stderr: stat: cannot stat '/etc/ssl/certs/IntelCA5B-base64.pem': No such file or directory I1213 16:07:30.392861 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/files/etc/ssl/certs/IntelCA5B-base64.pem --> /etc/ssl/certs/IntelCA5B-base64.pem (2416 bytes) I1213 16:07:30.429634 1771486 start.go:271] post-start completed in 290.579375ms I1213 16:07:30.430269 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:30.500198 1771486 profile.go:150] Saving config to /home/ucohen/.minikube/profiles/minikube/config.json ... 
I1213 16:07:30.500663 1771486 ssh_runner.go:148] Run: sh -c "df -h /var | awk 'NR==2{print $5}'" I1213 16:07:30.500760 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.567685 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.676960 1771486 start.go:130] duration metric: createHost completed in 9.642793421s I1213 16:07:30.677010 1771486 start.go:81] releasing machines lock for "minikube", held for 9.642960331s I1213 16:07:30.677183 1771486 cli_runner.go:110] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube I1213 16:07:30.738189 1771486 out.go:110] * Found network options: I1213 16:07:30.740588 1771486 out.go:110] - HTTP_PROXY=http://proxy-iil.intel.com:911 W1213 16:07:30.740678 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.740711 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.740734 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.745025 1771486 out.go:110] - HTTPS_PROXY=http://proxy-iil.intel.com:912 W1213 16:07:30.745105 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.745133 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.745156 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.749568 1771486 out.go:110] - NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 W1213 16:07:30.749651 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.749689 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.749742 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.754622 1771486 out.go:110] - http_proxy=http://proxy-iil.intel.com:911 W1213 16:07:30.754702 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.754730 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.754753 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.759231 1771486 out.go:110] - https_proxy=http://proxy-iil.intel.com:912 W1213 16:07:30.759307 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.759333 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.759356 1771486 proxy.go:118] fail to check proxy env: Error ip not in block I1213 16:07:30.763959 1771486 out.go:110] - no_proxy=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 W1213 16:07:30.764044 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764075 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764099 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764139 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764162 1771486 proxy.go:118] fail to check proxy env: Error ip not in block W1213 16:07:30.764203 1771486 proxy.go:118] fail to check proxy env: Error ip not 
in block I1213 16:07:30.764315 1771486 ssh_runner.go:148] Run: systemctl --version I1213 16:07:30.764316 1771486 ssh_runner.go:148] Run: curl -x http://proxy-iil.intel.com:912 -sS -m 2 https://k8s.gcr.io/ I1213 16:07:30.764406 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.764461 1771486 cli_runner.go:110] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube I1213 16:07:30.821649 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.832209 1771486 sshutil.go:45] new ssh client: &{IP:127.0.0.1 Port:32859 SSHKeyPath:/home/ucohen/.minikube/machines/minikube/id_rsa Username:docker} I1213 16:07:30.932179 1771486 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service containerd I1213 16:07:31.280776 1771486 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1213 16:07:31.300031 1771486 cruntime.go:193] skipping containerd shutdown because we are bound to it I1213 16:07:31.300158 1771486 ssh_runner.go:148] Run: sudo systemctl is-active --quiet service crio I1213 16:07:31.319783 1771486 ssh_runner.go:148] Run: sudo systemctl cat docker.service I1213 16:07:31.338088 1771486 ssh_runner.go:148] Run: sudo systemctl daemon-reload I1213 16:07:31.444319 1771486 ssh_runner.go:148] Run: sudo systemctl start docker I1213 16:07:31.461007 1771486 ssh_runner.go:148] Run: docker version --format {{.Server.Version}} I1213 16:07:31.571064 1771486 out.go:110] * Preparing Kubernetes v1.19.4 on Docker 19.03.13 ... I1213 16:07:31.573607 1771486 out.go:110] - env HTTP_PROXY=http://proxy-iil.intel.com:911 I1213 16:07:31.576086 1771486 out.go:110] - env HTTPS_PROXY=http://proxy-iil.intel.com:912 I1213 16:07:31.578619 1771486 out.go:110] - env NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,192.168.49.0/24 I1213 16:07:31.581002 1771486 out.go:110] - env NO_PROXY=intel.com,.intel.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16,134.134.0.0/16,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24 I1213 16:07:31.581132 1771486 cli_runner.go:110] Run: docker network inspect minikube --format "{{(index .IPAM.Config 0).Subnet}},{{(index .IPAM.Config 0).Gateway}},{{(index .Options "com.docker.network.driver.mtu")}}" I1213 16:07:31.635944 1771486 ssh_runner.go:148] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts I1213 16:07:31.642168 1771486 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\thost.minikube.internal$' /etc/hosts; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I1213 16:07:31.660274 1771486 preload.go:97] Checking if preload exists for k8s version v1.19.4 and runtime docker I1213 16:07:31.660329 1771486 preload.go:105] Found local preload: /home/ucohen/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v6-v1.19.4-docker-overlay2-amd64.tar.lz4 I1213 16:07:31.660442 1771486 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1213 16:07:31.727487 1771486 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 
kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1213 16:07:31.727543 1771486 docker.go:319] Images already preloaded, skipping extraction I1213 16:07:31.727634 1771486 ssh_runner.go:148] Run: docker images --format {{.Repository}}:{{.Tag}} I1213 16:07:31.784337 1771486 docker.go:382] Got preloaded images: -- stdout -- k8s.gcr.io/kube-proxy:v1.19.4 k8s.gcr.io/kube-controller-manager:v1.19.4 k8s.gcr.io/kube-apiserver:v1.19.4 k8s.gcr.io/kube-scheduler:v1.19.4 gcr.io/k8s-minikube/storage-provisioner:v3 k8s.gcr.io/etcd:3.4.13-0 kubernetesui/dashboard:v2.0.3 k8s.gcr.io/coredns:1.7.0 kubernetesui/metrics-scraper:v1.0.4 k8s.gcr.io/pause:3.2 -- /stdout -- I1213 16:07:31.784399 1771486 cache_images.go:74] Images are preloaded, skipping loading I1213 16:07:31.784503 1771486 ssh_runner.go:148] Run: docker info --format {{.CgroupDriver}} I1213 16:07:31.907297 1771486 cni.go:74] Creating CNI manager for "" I1213 16:07:31.907343 1771486 cni.go:117] CNI unnecessary in this configuration, recommending no CNI I1213 16:07:31.907358 1771486 kubeadm.go:84] Using pod CIDR: I1213 16:07:31.907381 1771486 kubeadm.go:150] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet: AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.19.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket: ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I1213 16:07:31.907607 1771486 kubeadm.go:154] kubeadm config: apiVersion: kubeadm.k8s.io/v1beta2 kind: InitConfiguration localAPIEndpoint: advertiseAddress: 192.168.49.2 bindPort: 8443 bootstrapTokens: - groups: - system:bootstrappers:kubeadm:default-node-token ttl: 24h0m0s usages: - signing - authentication nodeRegistration: criSocket: /var/run/dockershim.sock name: "minikube" kubeletExtraArgs: node-ip: 192.168.49.2 taints: [] --- apiVersion: kubeadm.k8s.io/v1beta2 kind: ClusterConfiguration apiServer: certSANs: ["127.0.0.1", "localhost", "192.168.49.2"] extraArgs: enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" controllerManager: extraArgs: leader-elect: "false" scheduler: extraArgs: leader-elect: "false" certificatesDir: /var/lib/minikube/certs clusterName: mk controlPlaneEndpoint: control-plane.minikube.internal:8443 dns: type: CoreDNS etcd: local: dataDir: /var/lib/minikube/etcd extraArgs: proxy-refresh-interval: "70000" kubernetesVersion: v1.19.4 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 --- apiVersion: kubelet.config.k8s.io/v1beta1 kind: KubeletConfiguration authentication: x509: clientCAFile: /var/lib/minikube/certs/ca.crt cgroupDriver: 
cgroupfs clusterDomain: "cluster.local" # disable disk resource management by default imageGCHighThresholdPercent: 100 evictionHard: nodefs.available: "0%" nodefs.inodesFree: "0%" imagefs.available: "0%" failSwapOn: false staticPodPath: /etc/kubernetes/manifests --- apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration clusterCIDR: "" metricsBindAddress: 192.168.49.2:10249 I1213 16:07:31.907785 1771486 kubeadm.go:822] kubelet [Unit] Wants=docker.socket [Service] ExecStart= ExecStart=/var/lib/minikube/binaries/v1.19.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2 [Install] config: {KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I1213 16:07:31.907879 1771486 ssh_runner.go:148] Run: sudo ls /var/lib/minikube/binaries/v1.19.4 I1213 16:07:31.921870 1771486 binaries.go:44] Found k8s binaries, skipping transfer I1213 16:07:31.921991 1771486 ssh_runner.go:148] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I1213 16:07:31.935465 1771486 ssh_runner.go:215] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes) I1213 16:07:31.962079 1771486 ssh_runner.go:215] scp memory --> /lib/systemd/system/kubelet.service (349 bytes) I1213 16:07:31.988414 1771486 ssh_runner.go:215] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1787 bytes) I1213 16:07:32.014956 1771486 ssh_runner.go:148] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts I1213 16:07:32.020798 1771486 ssh_runner.go:148] Run: /bin/bash -c "{ grep -v '\tcontrol-plane.minikube.internal$' /etc/hosts; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts" I1213 16:07:32.040339 1771486 certs.go:52] Setting up /home/ucohen/.minikube/profiles/minikube for IP: 192.168.49.2 I1213 16:07:32.040426 1771486 certs.go:169] skipping minikubeCA CA generation: /home/ucohen/.minikube/ca.key I1213 16:07:32.040479 1771486 certs.go:169] skipping proxyClientCA CA generation: /home/ucohen/.minikube/proxy-client-ca.key I1213 16:07:32.040555 1771486 certs.go:273] generating minikube-user signed cert: /home/ucohen/.minikube/profiles/minikube/client.key I1213 16:07:32.040583 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/client.crt with IP's: [] I1213 16:07:32.284758 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/client.crt ... I1213 16:07:32.284787 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/client.crt: {Name:mk210d434693f6be82f7fd362be41dca53e22bce Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.284892 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/client.key ... 
I1213 16:07:32.284900 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/client.key: {Name:mk5fce7af04f142a21441b5cbf07dc46367a6ee4 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.284953 1771486 certs.go:273] generating minikube signed cert: /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 I1213 16:07:32.284963 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1] I1213 16:07:32.469911 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ... I1213 16:07:32.469927 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mk8872c323a6718fb1417f765d626111e4841581 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.470001 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ... I1213 16:07:32.470006 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mk5a02dea70a9ac6ecc668d4d9268dacf617162c Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.470050 1771486 certs.go:284] copying /home/ucohen/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/ucohen/.minikube/profiles/minikube/apiserver.crt I1213 16:07:32.470085 1771486 certs.go:288] copying /home/ucohen/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/ucohen/.minikube/profiles/minikube/apiserver.key I1213 16:07:32.470112 1771486 certs.go:273] generating aggregator signed cert: /home/ucohen/.minikube/profiles/minikube/proxy-client.key I1213 16:07:32.470116 1771486 crypto.go:69] Generating cert /home/ucohen/.minikube/profiles/minikube/proxy-client.crt with IP's: [] I1213 16:07:32.565701 1771486 crypto.go:157] Writing cert to /home/ucohen/.minikube/profiles/minikube/proxy-client.crt ... I1213 16:07:32.565720 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc0552aca2652498e7c2601f1c2d6b0391b0774 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.565795 1771486 crypto.go:165] Writing key to /home/ucohen/.minikube/profiles/minikube/proxy-client.key ... 
I1213 16:07:32.565801 1771486 lock.go:36] WriteFile acquiring /home/ucohen/.minikube/profiles/minikube/proxy-client.key: {Name:mkcf3fa7d4ffad6c3566d1b3eabcd538f478c38f Clock:{} Delay:500ms Timeout:1m0s Cancel:} I1213 16:07:32.565844 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt I1213 16:07:32.565854 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key I1213 16:07:32.565862 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt I1213 16:07:32.565869 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key I1213 16:07:32.565876 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt I1213 16:07:32.565884 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.key -> /var/lib/minikube/certs/ca.key I1213 16:07:32.565891 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt I1213 16:07:32.565898 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key I1213 16:07:32.565930 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/ca-key.pem (1675 bytes) I1213 16:07:32.565951 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/ca.pem (1078 bytes) I1213 16:07:32.565969 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/cert.pem (1119 bytes) I1213 16:07:32.565984 1771486 certs.go:348] found cert: /home/ucohen/.minikube/certs/home/ucohen/.minikube/certs/key.pem (1675 bytes) I1213 16:07:32.566001 1771486 vm_assets.go:96] NewFileAsset: /home/ucohen/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.566525 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes) I1213 16:07:32.599489 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I1213 16:07:32.632641 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I1213 16:07:32.665279 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes) I1213 16:07:32.698409 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I1213 16:07:32.732018 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes) I1213 16:07:32.767191 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I1213 16:07:32.802984 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes) I1213 16:07:32.839496 1771486 ssh_runner.go:215] scp /home/ucohen/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I1213 16:07:32.875950 1771486 ssh_runner.go:215] scp memory --> /var/lib/minikube/kubeconfig 
(392 bytes) I1213 16:07:32.902108 1771486 ssh_runner.go:148] Run: openssl version I1213 16:07:32.912160 1771486 ssh_runner.go:148] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I1213 16:07:32.927054 1771486 ssh_runner.go:148] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.933132 1771486 certs.go:389] hashing: -rw-r--r-- 1 root root 1111 Dec 6 15:53 /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.933213 1771486 ssh_runner.go:148] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I1213 16:07:32.943371 1771486 ssh_runner.go:148] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I1213 16:07:32.957989 1771486 kubeadm.go:324] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.14@sha256:2bd97b482faf5b6a403ac39dd5e7c6fe2006425c6663a12f94f64f5f81a7787e Memory:32100 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.19.4 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.19.4 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[]} I1213 16:07:32.958187 1771486 ssh_runner.go:148] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I1213 16:07:33.015355 1771486 ssh_runner.go:148] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd I1213 16:07:33.027729 1771486 ssh_runner.go:148] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I1213 16:07:33.040971 1771486 kubeadm.go:211] ignoring SystemVerification for kubeadm because of docker driver I1213 16:07:33.041077 1771486 ssh_runner.go:148] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I1213 16:07:33.053769 1771486 kubeadm.go:147] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2 stdout: stderr: ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory 
I1213 16:07:33.053822 1771486 ssh_runner.go:148] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.19.4:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
tstromberg commented 3 years ago

That particular error is benign. I suggest running minikube delete to clean up the existing state, and then running minikube start. If it still fails, please let me know what the final 5 lines of output are.
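For concreteness, a minimal sketch of that reset sequence, assuming a POSIX shell (the tail pipe is just one convenient way to capture the last lines of output, not part of the suggestion itself):

# wipe the existing cluster state, then start fresh
minikube delete
# rerun with verbose logging and keep only the final lines
minikube start --driver docker --alsologtostderr -v=3 2>&1 | tail -n 5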

ucohen commented 3 years ago

@tstromberg here is the output I get

minikube start --driver docker

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.503121 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.

stderr:
W1214 19:34:51.075228     868 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.4.0-56-generic\n", err: exit status 1
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition
To see the stack trace of this error execute with --v=5 or higher
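For reference, the Swap and Service-Kubelet preflight warnings above map to host- and node-side settings; a sketch of how one might clear them before retrying, assuming an Ubuntu host (silencing these warnings may not by itself resolve the Crisocket timeout):

# host: disable swap for the current boot (addresses the Swap warning)
sudo swapoff -a
# node: enable the kubelet unit inside the minikube container (Service-Kubelet warning)
minikube ssh -- sudo systemctl enable kubelet.service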
medyagh commented 3 years ago

@ucohen I am curious whether you have already tried signing the minikube cert with your corp cert?
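(For anyone hitting the same x509 error: the minikube handbook describes a flow for untrusted/corporate root certificates along these lines; a sketch, where corp-root-ca.pem is a placeholder for your exported corporate CA:)

mkdir -p $HOME/.minikube/certs
cp corp-root-ca.pem $HOME/.minikube/certs/   # your corporate CA, exported as PEM
minikube delete
minikube start --driver docker --embed-certs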

medyagh commented 3 years ago

@ucohen I haven't heard from you; do you still have this issue?

spowelljr commented 3 years ago

Hi @ucohen, we haven't heard back from you; do you still have this issue? There isn't enough information in this issue to make it actionable, and enough time has passed that it would likely be difficult to replicate.

I will close this issue for now, but feel free to reopen it when you are ready to provide more information.