kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Include --apiserver-name names in Docker daemon certificate #6367

Closed: creckord closed this issue 4 years ago

creckord commented 4 years ago

Hi!

We have a Minikube setup using VirtualBox, where we create/start the cluster with --apiserver-name=minikube through a start script that also puts the cluster IP into the local hosts file under the minikube name. This lets us configure the K8s connection using the stable name instead of the potentially changing IP, in the many places where we can't easily perform dynamic configuration via minikube ip and friends.

This works well for the K8s / kubectl side of things, but unfortunately it breaks when we try to do the same for the Docker daemon: the generated server certificate is missing all of the Subject Alternative Name entries that the K8s apiserver.crt has.

Apiserver Certificate:

Certificate:
    Data:
        Issuer: CN = minikubeCA
        Subject: O = system:masters, CN = minikube
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:minikube, DNS:kubernetes.default.svc.cluster.local, DNS:kubernetes.default.svc, DNS:kubernetes.default, DNS:kubernetes, DNS:localhost, IP Address:192.168.99.103, IP Address:10.96.0.1, IP Address:10.0.0.1

Docker Daemon Certificate:

Certificate:
    Data:
        Issuer: O = Reck062
        Subject: O = Reck062.minikube
        X509v3 extensions:
            X509v3 Subject Alternative Name:
                DNS:localhost, IP Address:192.168.99.103
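
For reference, here is how the SANs above can be dumped (a sketch: the apiserver.crt path is confirmed by the apiserver logs below, while /etc/docker/server.pem is an assumption based on the docker-machine provisioning default):

# Apiserver certificate SANs (path from /var/lib/minikube/certs, see logs below)
minikube ssh "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text" | grep -A1 'Subject Alternative Name'

# Docker daemon server certificate SANs
# (assumed location: /etc/docker/server.pem, the docker-machine default)
minikube ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'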

Expected: A certificate with the same alternative names as the apiserver, or at least with DNS:minikube.

The exact commands to reproduce the issue:

Setup:

minikube start --apiserver-name=minikube
echo "$(minikube ip) minikube" | sudo tee -a /etc/hosts
eval $(minikube docker-env)
export DOCKER_HOST="tcp://minikube:2376"
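
For comparison, the kubectl side accepts the stable name just fine (a quick sanity check; port 8443 matches the "Serving securely on [::]:8443" line in the apiserver logs below):

# Succeeds: apiserver.crt contains DNS:minikube, so TLS verification passes
kubectl --server=https://minikube:8443 get nodes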

Docker issue:

docker images

The full output of the command that failed:

error during connect: Get https://minikube:2376/v1.40/images/json: x509: certificate is valid for localhost, not minikube
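
As an interim workaround we can point DOCKER_HOST back at the raw IP, which the daemon certificate does cover, but that defeats the purpose of the stable name:

# Works, because the daemon cert contains IP Address:192.168.99.103,
# but ties us back to the potentially changing cluster IP
export DOCKER_HOST="tcp://$(minikube ip):2376"
docker images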

The output of the minikube logs command:

  • ==> Docker <==
  • -- Logs begin at Wed 2020-01-22 12:03:42 UTC, end at Wed 2020-01-22 12:11:56 UTC. --
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.455946488Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456010229Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456044937Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456077662Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456108266Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456140532Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456196304Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456228700Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456259046Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456287882Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456404766Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456450870Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.456501404Z" level=info msg="containerd successfully booted in 0.007905s"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.467758985Z" level=info msg="parsed scheme: \"unix\"" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.467786935Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.467800722Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.467809873Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.468773189Z" level=info msg="parsed scheme: \"unix\"" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.468800550Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.468811026Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.468817456Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486848242Z" level=warning msg="Your kernel does not support cgroup blkio weight"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486874615Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486879815Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486883768Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486887903Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.486891775Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.487007412Z" level=info msg="Loading containers: start."
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.546385383Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.589868720Z" level=info msg="Loading containers: done."
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.624385289Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.624495161Z" level=info msg="Daemon has completed initialization"
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.642688345Z" level=info msg="API listen on /var/run/docker.sock"
  • Jan 22 12:03:57 minikube-test systemd[1]: Started Docker Application Container Engine.
  • Jan 22 12:03:57 minikube-test dockerd[2715]: time="2020-01-22T12:03:57.642729207Z" level=info msg="API listen on [::]:2376"
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.404155040Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b72f64fdd55a4f59fbf869218f49e22b2353a9be93599fcdbc813c78efa7335b/shim.sock" debug=false pid=4833
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.404790130Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d3b4a4473ea87dc080f3154ef881655b47ef3314d608b7e6c32132d346a7cef/shim.sock" debug=false pid=4834
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.407170304Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e43c7f405e6eb33d223a6390bbbd0479282385500e4af71c16ba1846d80ceacd/shim.sock" debug=false pid=4847
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.411628283Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab960fce7bdf2c0fb27a40daf4d35a80ae12562e62f5b8ed3b6adb96e36f39a5/shim.sock" debug=false pid=4867
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.419960771Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20355247d5a0228226cdfb02502494a282da20c7700693bccf50f44cbcce87fe/shim.sock" debug=false pid=4871
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.925649197Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1c6c2f1566bc0cc66b1b747a7f1224a34cb7dc3e6edf1c0ef1009791c146eff3/shim.sock" debug=false pid=5175
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.926363696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d960669e72f699b5e577fec4cb25e157dbe9bb43c337c223be22726d2252d978/shim.sock" debug=false pid=5179
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.928089854Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8608cc541cb6d40b1eaaa2097373167936ef76ff6b631c938a80cf731a56dd68/shim.sock" debug=false pid=5184
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.929173980Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ebf5f2986242fc7e58d02aa7c18c99fc2f1409d66508e34d4a230836ed25175d/shim.sock" debug=false pid=5191
  • Jan 22 12:08:40 minikube-test dockerd[2715]: time="2020-01-22T12:08:40.937351917Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e7549080d5efd8ce423279b763ee0c8fc664944ff02d302f5d2928fe917c26fb/shim.sock" debug=false pid=5222
  • Jan 22 12:08:53 minikube-test dockerd[2715]: time="2020-01-22T12:08:53.860668677Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24632203255e94878add5e6b0a10c5e85501ca3d0172f669172979881528e545/shim.sock" debug=false pid=6403
  • Jan 22 12:08:54 minikube-test dockerd[2715]: time="2020-01-22T12:08:54.157540089Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/372bdf3a9e73405c0c47fed74861cf9ef6365800d528668371f01bd8b3b97f80/shim.sock" debug=false pid=6469
  • Jan 22 12:08:55 minikube-test dockerd[2715]: time="2020-01-22T12:08:55.055837443Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ca0bb8e6abc1067b9ebccb72921e60b7d45ce997a694e8808cf1feefe9983218/shim.sock" debug=false pid=6642
  • Jan 22 12:08:55 minikube-test dockerd[2715]: time="2020-01-22T12:08:55.276547465Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9014c535fe0cce9347b27acd1706ed21518a937e429787753f88ea3d8705deaa/shim.sock" debug=false pid=6686
  • Jan 22 12:08:55 minikube-test dockerd[2715]: time="2020-01-22T12:08:55.627717451Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0e312266fe762601e1fd4e295a574718a60beb8b60ac3830b4f3257f25f62f32/shim.sock" debug=false pid=6769
  • Jan 22 12:08:55 minikube-test dockerd[2715]: time="2020-01-22T12:08:55.628287036Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6d649240828c8c0d3d380ed96cbfb6a4ea59faf1137bb636c4dcd047506b9ca4/shim.sock" debug=false pid=6773
  • Jan 22 12:08:56 minikube-test dockerd[2715]: time="2020-01-22T12:08:56.015057818Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/cbca7b9a4b52c9b97156a9d167649f6bca8969b391b0df121b9fccfd319680a7/shim.sock" debug=false pid=6901
  • Jan 22 12:08:56 minikube-test dockerd[2715]: time="2020-01-22T12:08:56.053072651Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/aa94cb0886023814d744a3588311585dc59dc0b9aba4f8449880f227f34233f6/shim.sock" debug=false pid=6918
  • Jan 22 12:08:59 minikube-test dockerd[2715]: time="2020-01-22T12:08:59.453671114Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6ae1d8b9cacbad0e4471552e0a950b3dc1137ed6ee61337d503fb34d0c6ec4cf/shim.sock" debug=false pid=7160
  • Jan 22 12:08:59 minikube-test dockerd[2715]: time="2020-01-22T12:08:59.476727843Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9c4445c01d4634972180261e39e8dd3e808ad06a2f958bc380da7c16f0550153/shim.sock" debug=false pid=7175
  • Jan 22 12:08:59 minikube-test dockerd[2715]: time="2020-01-22T12:08:59.908160606Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3e7a80d055655c82f51ee1ab8f4ddd7f9b5aa8df9d0ccd6c1a84b91dc923f26d/shim.sock" debug=false pid=7274
  • Jan 22 12:09:00 minikube-test dockerd[2715]: time="2020-01-22T12:09:00.033763321Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/316bfd7c7ae0125fa1c0d9ee28a35b632ee4403210fe14061100871f09317f2d/shim.sock" debug=false pid=7325
  • Jan 22 12:11:46 minikube-test dockerd[2715]: http: TLS handshake error from 192.168.99.1:60896: remote error: tls: bad certificate
  • Jan 22 12:11:46 minikube-test dockerd[2715]: http: TLS handshake error from 192.168.99.1:60897: remote error: tls: bad certificate
  • Jan 22 12:11:46 minikube-test dockerd[2715]: http: TLS handshake error from 192.168.99.1:60898: remote error: tls: bad certificate
  • ==> container status <==
  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
  • 316bfd7c7ae01 3b08661dc379d 2 minutes ago Running dashboard-metrics-scraper 0 6ae1d8b9cacba
  • 3e7a80d055655 eb51a35975256 2 minutes ago Running kubernetes-dashboard 0 9c4445c01d463
  • aa94cb0886023 70f311871ae12 3 minutes ago Running coredns 0 6d649240828c8
  • cbca7b9a4b52c 70f311871ae12 3 minutes ago Running coredns 0 0e312266fe762
  • 9014c535fe0cc 4689081edb103 3 minutes ago Running storage-provisioner 0 ca0bb8e6abc10
  • 372bdf3a9e734 7d54289267dc5 3 minutes ago Running kube-proxy 0 24632203255e9
  • ebf5f2986242f 0cae8d5cc64c7 3 minutes ago Running kube-apiserver 0 e43c7f405e6eb
  • 8608cc541cb6d 78c190f736b11 3 minutes ago Running kube-scheduler 0 ab960fce7bdf2
  • 1c6c2f1566bc0 5eb3b74868724 3 minutes ago Running kube-controller-manager 0 20355247d5a02
  • e7549080d5efd 303ce5db0e90d 3 minutes ago Running etcd 0 6d3b4a4473ea8
  • d960669e72f69 bd12a212f9dcb 3 minutes ago Running kube-addon-manager 0 b72f64fdd55a4
  • ==> coredns ["aa94cb088602"] <==
  • .:53
  • [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  • CoreDNS-1.6.5
  • linux/amd64, go1.13.4, c2fd1b2
  • ==> coredns ["cbca7b9a4b52"] <==
  • .:53
  • [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
  • CoreDNS-1.6.5
  • linux/amd64, go1.13.4, c2fd1b2
  • ==> dmesg <==
  • [ +0.450551] hpet1: lost 28 rtc interrupts
  • [ +0.467332] hpet1: lost 15 rtc interrupts
  • [ +1.131825] hpet1: lost 27 rtc interrupts
  • [ +0.118127] hpet1: lost 7 rtc interrupts
  • [ +1.928050] hpet1: lost 57 rtc interrupts
  • [ +0.113996] hpet1: lost 7 rtc interrupts
  • [ +0.045213] hpet1: lost 1 rtc interrupts
  • [ +0.221038] hpet1: lost 10 rtc interrupts
  • [ +0.336575] hpet1: lost 8 rtc interrupts
  • [ +0.515522] hpet1: lost 31 rtc interrupts
  • [ +0.815715] hpet1: lost 4 rtc interrupts
  • [ +0.346297] hpet1: lost 22 rtc interrupts
  • [ +1.189807] hpet1: lost 61 rtc interrupts
  • [ +0.273418] hpet1: lost 10 rtc interrupts
  • [ +0.238842] hpet1: lost 14 rtc interrupts
  • [ +0.311744] hpet1: lost 19 rtc interrupts
  • [ +0.509375] hpet1: lost 16 rtc interrupts
  • [ +0.801642] hpet1: lost 50 rtc interrupts
  • [ +0.424143] hpet1: lost 26 rtc interrupts
  • [ +0.282071] hpet1: lost 17 rtc interrupts
  • [ +1.766696] hpet1: lost 45 rtc interrupts
  • [ +0.248093] hpet1: lost 15 rtc interrupts
  • [ +1.172876] hpet1: lost 52 rtc interrupts
  • [ +2.163823] hpet1: lost 23 rtc interrupts
  • [ +0.275537] hpet1: lost 17 rtc interrupts
  • [ +1.274848] hpet1: lost 18 rtc interrupts
  • [ +0.290355] hpet1: lost 17 rtc interrupts
  • [ +0.240555] hpet1: lost 9 rtc interrupts
  • [ +0.478921] hpet1: lost 29 rtc interrupts
  • [ +0.148535] hpet1: lost 8 rtc interrupts
  • [ +1.236963] hpet1: lost 62 rtc interrupts
  • [ +0.661428] hpet1: lost 40 rtc interrupts
  • [ +0.084715] hpet1: lost 4 rtc interrupts
  • [ +1.962911] hpet1: lost 67 rtc interrupts
  • [ +0.083715] hpet1: lost 4 rtc interrupts
  • [ +1.607988] hpet1: lost 81 rtc interrupts
  • [ +0.090025] hpet1: lost 4 rtc interrupts
  • [ +0.267890] hpet1: lost 11 rtc interrupts
  • [ +0.050070] hpet1: lost 2 rtc interrupts
  • [ +1.339004] hpet1: lost 69 rtc interrupts
  • [ +1.774685] hpet1: lost 49 rtc interrupts
  • [ +0.212602] hpet1: lost 7 rtc interrupts
  • [ +0.710889] hpet1: lost 44 rtc interrupts
  • [ +1.157009] hpet1: lost 63 rtc interrupts
  • [ +0.549514] hpet1: lost 33 rtc interrupts
  • [ +2.285726] hpet1: lost 83 rtc interrupts
  • [ +0.034880] hpet1: lost 1 rtc interrupts
  • [ +0.726456] hpet1: lost 30 rtc interrupts
  • [ +0.805006] hpet1: lost 36 rtc interrupts
  • [ +0.483093] hpet1: lost 24 rtc interrupts
  • [ +1.068676] hpet1: lost 48 rtc interrupts
  • [ +0.943251] hpet1: lost 14 rtc interrupts
  • [ +1.044200] hpet1: lost 53 rtc interrupts
  • [ +0.191375] hpet1: lost 6 rtc interrupts
  • [ +0.779243] hpet1: lost 48 rtc interrupts
  • [ +1.093432] hpet1: lost 54 rtc interrupts
  • [ +0.920230] hpet1: lost 57 rtc interrupts
  • [ +0.426656] hpet1: lost 10 rtc interrupts
  • [ +1.481771] hpet1: lost 47 rtc interrupts
  • [ +0.874267] hpet1: lost 28 rtc interrupts
  • ==> kernel <==
  • 12:11:56 up 8 min, 0 users, load average: 0.22, 0.42, 0.29
  • Linux minikube-test 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2019.02.7"
  • ==> kube-addon-manager ["d960669e72f6"] <==
  • error: no objects passed to apply
  • error: no objects passed to apply
  • error: no objects passed to apply
  • deployment.apps/dashboard-metrics-scraper unchanged
  • deployment.apps/kubernetes-dashboard unchanged
  • namespace/kubernetes-dashboard unchanged
  • role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • serviceaccount/kubernetes-dashboard unchanged
  • secret/kubernetes-dashboard-certs unchanged
  • secret/kubernetes-dashboard-csrf unchanged
  • secret/kubernetes-dashboard-key-holder unchanged
  • service/kubernetes-dashboard unchanged
  • service/dashboard-metrics-scraper unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2020-01-22T12:11:43+00:00 ==
  • INFO: Leader election disabled.
  • INFO: == Kubernetes addon ensure completed at 2020-01-22T12:11:43+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • configmap/kubernetes-dashboard-settings unchanged
  • deployment.apps/dashboard-metrics-scraper unchanged
  • deployment.apps/kubernetes-dashboard unchanged
  • namespace/kubernetes-dashboard unchanged
  • role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • serviceaccount/kubernetes-dashboard unchanged
  • secret/kubernetes-dashboard-certs unchanged
  • secret/kubernetes-dashboard-csrf unchanged
  • secret/kubernetes-dashboard-key-holder unchanged
  • service/kubernetes-dashboard unchanged
  • service/dashboard-metrics-scraper unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2020-01-22T12:11:47+00:00 ==
  • INFO: Leader election disabled.
  • INFO: == Kubernetes addon ensure completed at 2020-01-22T12:11:48+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • configmap/kubernetes-dashboard-settings unchanged
  • deployment.apps/dashboard-metrics-scraper unchanged
  • deployment.apps/kubernetes-dashboard unchanged
  • namespace/kubernetes-dashboard unchanged
  • role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
  • serviceaccount/kubernetes-dashboard unchanged
  • secret/kubernetes-dashboard-certs unchanged
  • secret/kubernetes-dashboard-csrf unchanged
  • secret/kubernetes-dashboard-key-holder unchanged
  • service/kubernetes-dashboard unchanged
  • service/dashboard-metrics-scraper unchanged
  • serviceaccount/storage-provisioner unchanged
  • INFO: == Kubernetes addon reconcile completed at 2020-01-22T12:11:52+00:00 ==
  • INFO: Leader election disabled.
  • INFO: == Kubernetes addon ensure completed at 2020-01-22T12:11:54+00:00 ==
  • INFO: == Reconciling with deprecated label ==
  • INFO: == Reconciling with addon-manager label ==
  • ==> kube-apiserver ["ebf5f2986242"] <==
  • W0122 12:08:42.503202 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
  • W0122 12:08:42.519669 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
  • W0122 12:08:42.543442 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
  • W0122 12:08:42.548635 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
  • W0122 12:08:42.577000 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
  • W0122 12:08:42.589974 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
  • W0122 12:08:42.590007 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
  • I0122 12:08:42.596310 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
  • I0122 12:08:42.596397 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
  • I0122 12:08:42.597610 1 client.go:361] parsed scheme: "endpoint"
  • I0122 12:08:42.597641 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0122 12:08:42.603785 1 client.go:361] parsed scheme: "endpoint"
  • I0122 12:08:42.603819 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0122 12:08:44.084496 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
  • I0122 12:08:44.084513 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
  • I0122 12:08:44.084807 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
  • I0122 12:08:44.085224 1 secure_serving.go:178] Serving securely on [::]:8443
  • I0122 12:08:44.085264 1 tlsconfig.go:219] Starting DynamicServingCertificateController
  • I0122 12:08:44.085269 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
  • I0122 12:08:44.085276 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
  • I0122 12:08:44.085369 1 crd_finalizer.go:263] Starting CRDFinalizer
  • I0122 12:08:44.085442 1 establishing_controller.go:73] Starting EstablishingController
  • I0122 12:08:44.085654 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
  • I0122 12:08:44.085792 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
  • I0122 12:08:44.085904 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
  • I0122 12:08:44.086008 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
  • I0122 12:08:44.086067 1 available_controller.go:386] Starting AvailableConditionController
  • I0122 12:08:44.086186 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
  • I0122 12:08:44.086199 1 controller.go:81] Starting OpenAPI AggregationController
  • I0122 12:08:44.086785 1 autoregister_controller.go:140] Starting autoregister controller
  • I0122 12:08:44.086914 1 cache.go:32] Waiting for caches to sync for autoregister controller
  • I0122 12:08:44.087083 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
  • I0122 12:08:44.087139 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
  • I0122 12:08:44.085672 1 controller.go:85] Starting OpenAPI controller
  • I0122 12:08:44.085681 1 customresource_discovery_controller.go:208] Starting DiscoveryController
  • I0122 12:08:44.085690 1 naming_controller.go:288] Starting NamingConditionController
  • I0122 12:08:44.090222 1 crdregistration_controller.go:111] Starting crd-autoregister controller
  • I0122 12:08:44.090675 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
  • E0122 12:08:44.096635 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.104, ResourceVersion: 0, AdditionalErrorMsg:
  • I0122 12:08:44.186049 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  • I0122 12:08:44.186617 1 cache.go:39] Caches are synced for AvailableConditionController controller
  • I0122 12:08:44.187108 1 cache.go:39] Caches are synced for autoregister controller
  • I0122 12:08:44.188198 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
  • I0122 12:08:44.191172 1 shared_informer.go:204] Caches are synced for crd-autoregister
  • I0122 12:08:45.084938 1 controller.go:107] OpenAPI AggregationController: Processing item
  • I0122 12:08:45.085007 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  • I0122 12:08:45.085026 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  • I0122 12:08:45.093387 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
  • I0122 12:08:45.103784 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
  • I0122 12:08:45.103807 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
  • I0122 12:08:45.376029 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  • I0122 12:08:45.419115 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  • W0122 12:08:45.527202 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.104]
  • I0122 12:08:45.528002 1 controller.go:606] quota admission added evaluator for: endpoints
  • I0122 12:08:46.233045 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
  • I0122 12:08:46.500151 1 controller.go:606] quota admission added evaluator for: serviceaccounts
  • I0122 12:08:46.771436 1 controller.go:606] quota admission added evaluator for: deployments.apps
  • I0122 12:08:46.985558 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
  • I0122 12:08:53.422983 1 controller.go:606] quota admission added evaluator for: replicasets.apps
  • I0122 12:08:53.465696 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
  • ==> kube-controller-manager ["1c6c2f1566bc"] <==
  • I0122 12:08:52.202936 1 resource_quota_monitor.go:303] QuotaMonitor running
  • I0122 12:08:52.219836 1 controllermanager.go:533] Started "deployment"
  • I0122 12:08:52.219925 1 deployment_controller.go:152] Starting deployment controller
  • I0122 12:08:52.219934 1 shared_informer.go:197] Waiting for caches to sync for deployment
  • I0122 12:08:52.447981 1 controllermanager.go:533] Started "cronjob"
  • I0122 12:08:52.448035 1 cronjob_controller.go:97] Starting CronJob Manager
  • I0122 12:08:52.700795 1 controllermanager.go:533] Started "replicationcontroller"
  • I0122 12:08:52.700925 1 replica_set.go:180] Starting replicationcontroller controller
  • I0122 12:08:52.700931 1 shared_informer.go:197] Waiting for caches to sync for ReplicationController
  • I0122 12:08:52.949496 1 controllermanager.go:533] Started "statefulset"
  • I0122 12:08:52.952325 1 stateful_set.go:145] Starting stateful set controller
  • I0122 12:08:52.952373 1 shared_informer.go:197] Waiting for caches to sync for stateful set
  • I0122 12:08:52.958293 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
  • I0122 12:08:52.972536 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
  • I0122 12:08:52.991779 1 shared_informer.go:204] Caches are synced for expand
  • I0122 12:08:53.000533 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
  • I0122 12:08:53.000640 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
  • E0122 12:08:53.009851 1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
  • E0122 12:08:53.021175 1 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
  • I0122 12:08:53.048490 1 shared_informer.go:204] Caches are synced for PV protection
  • I0122 12:08:53.150288 1 shared_informer.go:204] Caches are synced for bootstrap_signer
  • I0122 12:08:53.381681 1 shared_informer.go:204] Caches are synced for ReplicaSet
  • I0122 12:08:53.399152 1 shared_informer.go:204] Caches are synced for endpoint
  • I0122 12:08:53.399628 1 shared_informer.go:204] Caches are synced for disruption
  • I0122 12:08:53.399695 1 disruption.go:338] Sending events to api server.
  • I0122 12:08:53.401231 1 shared_informer.go:204] Caches are synced for ReplicationController
  • W0122 12:08:53.403880 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
  • I0122 12:08:53.421542 1 shared_informer.go:204] Caches are synced for deployment
  • I0122 12:08:53.426191 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"5abff468-d5a9-4ea6-ae0d-d07c851c90d2", APIVersion:"apps/v1", ResourceVersion:"203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
  • I0122 12:08:53.428885 1 shared_informer.go:204] Caches are synced for job
  • I0122 12:08:53.433680 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"990f7147-8eec-42b6-aefd-5866946eb046", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-qmfzf
  • I0122 12:08:53.442330 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"990f7147-8eec-42b6-aefd-5866946eb046", APIVersion:"apps/v1", ResourceVersion:"315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-jxc7c
  • I0122 12:08:53.445836 1 shared_informer.go:204] Caches are synced for GC
  • I0122 12:08:53.448130 1 shared_informer.go:204] Caches are synced for HPA
  • I0122 12:08:53.448272 1 shared_informer.go:204] Caches are synced for PVC protection
  • I0122 12:08:53.448327 1 shared_informer.go:204] Caches are synced for taint
  • I0122 12:08:53.448476 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
  • W0122 12:08:53.448514 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
  • I0122 12:08:53.448557 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
  • I0122 12:08:53.448614 1 taint_manager.go:186] Starting NoExecuteTaintManager
  • I0122 12:08:53.448733 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"5d8ea08f-1e9d-489b-99c2-c8f5db78f549", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
  • I0122 12:08:53.454430 1 shared_informer.go:204] Caches are synced for stateful set
  • I0122 12:08:53.456481 1 shared_informer.go:204] Caches are synced for namespace
  • I0122 12:08:53.459514 1 shared_informer.go:204] Caches are synced for daemon sets
  • I0122 12:08:53.469818 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"2f032995-4acc-4e4f-a221-32034f7a11bd", APIVersion:"apps/v1", ResourceVersion:"210", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-drskv
  • E0122 12:08:53.481649 1 daemon_controller.go:290] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"2f032995-4acc-4e4f-a221-32034f7a11bd", ResourceVersion:"210", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715291726, loc:(time.Location)(0x6b951c0)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc000b628a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(nil), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(0xc000325e40), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc000b628c0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), 
AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc000b628e0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.17.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(v1.EnvVarSource)(0xc000b62920)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), StartupProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc00013b1d0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc000b87718), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"beta.kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", 
AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc000fa92c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil), PreemptionPolicy:(v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(v1.RollingUpdateDaemonSet)(0xc00000ed00)}, MinReadySeconds:0, RevisionHistoryLimit:(int32)(0xc000b87758)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
  • I0122 12:08:53.498950 1 shared_informer.go:204] Caches are synced for attach detach
  • I0122 12:08:53.499577 1 shared_informer.go:204] Caches are synced for TTL
  • I0122 12:08:53.500196 1 shared_informer.go:204] Caches are synced for persistent volume
  • I0122 12:08:53.510529 1 shared_informer.go:204] Caches are synced for service account
  • I0122 12:08:53.556421 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0122 12:08:53.556556 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
  • I0122 12:08:53.560421 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0122 12:08:53.605105 1 shared_informer.go:204] Caches are synced for resource quota
  • I0122 12:08:53.953625 1 shared_informer.go:197] Waiting for caches to sync for resource quota
  • I0122 12:08:53.953668 1 shared_informer.go:204] Caches are synced for resource quota
  • I0122 12:08:59.053160 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"82d92ae8-26cc-45ca-abe2-7eb7ef49600f", APIVersion:"apps/v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
  • I0122 12:08:59.063213 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"982b7e42-fa69-4535-a77e-854310c64dae", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-xr9rg
  • I0122 12:08:59.068251 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"0ad209cd-863b-46e6-aa2f-72796ff00a76", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
  • I0122 12:08:59.078903 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"f09be0a7-9e2e-41df-8506-d55e683a8fae", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-jczf4
  • ==> kube-proxy ["372bdf3a9e73"] <==
  • W0122 12:08:54.358290 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
  • I0122 12:08:54.368635 1 node.go:135] Successfully retrieved node IP: 192.168.99.104
  • I0122 12:08:54.368662 1 server_others.go:145] Using iptables Proxier.
  • W0122 12:08:54.368763 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
  • I0122 12:08:54.368948 1 server.go:571] Version: v1.17.0
  • I0122 12:08:54.369259 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 327680
  • I0122 12:08:54.369293 1 conntrack.go:52] Setting nf_conntrack_max to 327680
  • I0122 12:08:54.369721 1 conntrack.go:83] Setting conntrack hashsize to 81920
  • I0122 12:08:54.382011 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
  • I0122 12:08:54.382075 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
  • I0122 12:08:54.382211 1 config.go:131] Starting endpoints config controller
  • I0122 12:08:54.382236 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
  • I0122 12:08:54.382587 1 config.go:313] Starting service config controller
  • I0122 12:08:54.382591 1 shared_informer.go:197] Waiting for caches to sync for service config
  • I0122 12:08:54.482540 1 shared_informer.go:204] Caches are synced for endpoints config
  • I0122 12:08:54.482848 1 shared_informer.go:204] Caches are synced for service config
  • ==> kube-scheduler ["8608cc541cb6"] <==
  • I0122 12:08:41.548290 1 serving.go:312] Generated self-signed cert in-memory
  • W0122 12:08:41.985407 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
  • W0122 12:08:41.985455 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
  • W0122 12:08:44.102435 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
  • W0122 12:08:44.102467 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
  • W0122 12:08:44.102473 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
  • W0122 12:08:44.102476 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
  • W0122 12:08:44.124648 1 authorization.go:47] Authorization is disabled
  • W0122 12:08:44.124669 1 authentication.go:92] Authentication is disabled
  • I0122 12:08:44.124679 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
  • I0122 12:08:44.126164 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
  • I0122 12:08:44.127188 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0122 12:08:44.128565 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0122 12:08:44.131444 1 tlsconfig.go:219] Starting DynamicServingCertificateController
  • E0122 12:08:44.141198 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0122 12:08:44.142860 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0122 12:08:44.145901 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0122 12:08:44.146356 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0122 12:08:44.146381 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0122 12:08:44.146839 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0122 12:08:44.147136 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
  • E0122 12:08:44.147652 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0122 12:08:44.150263 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0122 12:08:44.150330 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0122 12:08:44.150446 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0122 12:08:44.150578 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0122 12:08:45.142783 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0122 12:08:45.144522 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0122 12:08:45.147165 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0122 12:08:45.148164 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0122 12:08:45.150020 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0122 12:08:45.152881 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0122 12:08:45.153594 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0122 12:08:45.155558 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
  • E0122 12:08:45.157581 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0122 12:08:45.158419 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0122 12:08:45.158595 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0122 12:08:45.159667 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • I0122 12:08:46.226622 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
  • I0122 12:08:46.229744 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
  • I0122 12:08:46.235977 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
  • ==> kubelet <==
  • -- Logs begin at Wed 2020-01-22 12:03:42 UTC, end at Wed 2020-01-22 12:11:57 UTC. --
  • Jan 22 12:08:46 minikube-test kubelet[5926]: I0122 12:08:46.998115 5926 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.5, apiVersion: 1.40.0
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.014721 5926 server.go:1113] Started kubelet
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.014812 5926 server.go:143] Starting to listen on 0.0.0.0:10250
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.018230 5926 server.go:354] Adding debug handlers to kubelet server.
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.018913 5926 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.019328 5926 volume_manager.go:265] Starting Kubelet Volume Manager
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.020655 5926 desired_state_of_world_populator.go:138] Desired state populator starts to run
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.044485 5926 status_manager.go:157] Starting to sync pod status with apiserver
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.045092 5926 kubelet.go:1820] Starting kubelet main sync loop.
  • Jan 22 12:08:47 minikube-test kubelet[5926]: E0122 12:08:47.045347 5926 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.119677 5926 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
  • Jan 22 12:08:47 minikube-test kubelet[5926]: E0122 12:08:47.146188 5926 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.168533 5926 kubelet_node_status.go:70] Attempting to register node minikube
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.175896 5926 kubelet_node_status.go:112] Node minikube was previously registered
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.175975 5926 kubelet_node_status.go:73] Successfully registered node minikube
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.187804 5926 cpu_manager.go:173] [cpumanager] starting with none policy
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.187826 5926 cpu_manager.go:174] [cpumanager] reconciling every 10s
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.187833 5926 policy_none.go:43] [cpumanager] none policy: Start
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.189147 5926 plugin_manager.go:114] Starting Kubelet Plugin Manager
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425266 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-kubeconfig") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425458 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b1803568f47ccacae75665e8eec6e2e3-ca-certs") pod "kube-apiserver-minikube" (UID: "b1803568f47ccacae75665e8eec6e2e3")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425486 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b1803568f47ccacae75665e8eec6e2e3-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "b1803568f47ccacae75665e8eec6e2e3")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425522 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-kubeconfig") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425599 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-addons") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425614 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/b5e50f91d1f9ce5b602ec847bf53c891-etcd-certs") pod "etcd-minikube" (UID: "b5e50f91d1f9ce5b602ec847bf53c891")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425628 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/b5e50f91d1f9ce5b602ec847bf53c891-etcd-data") pod "etcd-minikube" (UID: "b5e50f91d1f9ce5b602ec847bf53c891")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425641 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b1803568f47ccacae75665e8eec6e2e3-k8s-certs") pod "kube-apiserver-minikube" (UID: "b1803568f47ccacae75665e8eec6e2e3")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425656 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-ca-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425670 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425684 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-k8s-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425699 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425740 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff67867321338ffd885039e188f6b424-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff67867321338ffd885039e188f6b424")
  • Jan 22 12:08:47 minikube-test kubelet[5926]: I0122 12:08:47.425769 5926 reconciler.go:156] Reconciler: start to sync state
  • Jan 22 12:08:53 minikube-test kubelet[5926]: I0122 12:08:53.591651 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zknpq" (UniqueName: "kubernetes.io/secret/da52960e-c29d-4250-a8ff-0e07c7679a93-kube-proxy-token-zknpq") pod "kube-proxy-drskv" (UID: "da52960e-c29d-4250-a8ff-0e07c7679a93")
  • Jan 22 12:08:53 minikube-test kubelet[5926]: I0122 12:08:53.591705 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/da52960e-c29d-4250-a8ff-0e07c7679a93-xtables-lock") pod "kube-proxy-drskv" (UID: "da52960e-c29d-4250-a8ff-0e07c7679a93")
  • Jan 22 12:08:53 minikube-test kubelet[5926]: I0122 12:08:53.591724 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/da52960e-c29d-4250-a8ff-0e07c7679a93-kube-proxy") pod "kube-proxy-drskv" (UID: "da52960e-c29d-4250-a8ff-0e07c7679a93")
  • Jan 22 12:08:53 minikube-test kubelet[5926]: I0122 12:08:53.591738 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/da52960e-c29d-4250-a8ff-0e07c7679a93-lib-modules") pod "kube-proxy-drskv" (UID: "da52960e-c29d-4250-a8ff-0e07c7679a93")
  • Jan 22 12:08:54 minikube-test kubelet[5926]: E0122 12:08:54.134710 5926 remote_runtime.go:295] ContainerStatus "372bdf3a9e73405c0c47fed74861cf9ef6365800d528668371f01bd8b3b97f80" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 372bdf3a9e73405c0c47fed74861cf9ef6365800d528668371f01bd8b3b97f80
  • Jan 22 12:08:54 minikube-test kubelet[5926]: E0122 12:08:54.134749 5926 kuberuntime_manager.go:955] getPodContainerStatuses for pod "kube-proxy-drskv_kube-system(da52960e-c29d-4250-a8ff-0e07c7679a93)" failed: rpc error: code = Unknown desc = Error: No such container: 372bdf3a9e73405c0c47fed74861cf9ef6365800d528668371f01bd8b3b97f80
  • Jan 22 12:08:54 minikube-test kubelet[5926]: I0122 12:08:54.802486 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/765bd388-6b16-49c6-9e8a-61b2c075ac09-tmp") pod "storage-provisioner" (UID: "765bd388-6b16-49c6-9e8a-61b2c075ac09")
  • Jan 22 12:08:54 minikube-test kubelet[5926]: I0122 12:08:54.802537 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-m6k6m" (UniqueName: "kubernetes.io/secret/765bd388-6b16-49c6-9e8a-61b2c075ac09-storage-provisioner-token-m6k6m") pod "storage-provisioner" (UID: "765bd388-6b16-49c6-9e8a-61b2c075ac09")
  • Jan 22 12:08:55 minikube-test kubelet[5926]: W0122 12:08:55.208755 5926 pod_container_deletor.go:75] Container "ca0bb8e6abc1067b9ebccb72921e60b7d45ce997a694e8808cf1feefe9983218" not found in pod's containers
  • Jan 22 12:08:55 minikube-test kubelet[5926]: I0122 12:08:55.306727 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a385de55-f647-4c42-8392-f2298f1a464a-config-volume") pod "coredns-6955765f44-jxc7c" (UID: "a385de55-f647-4c42-8392-f2298f1a464a")
  • Jan 22 12:08:55 minikube-test kubelet[5926]: I0122 12:08:55.306807 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/aeb489ed-4e06-4508-8fa6-1250bfa3afb4-config-volume") pod "coredns-6955765f44-qmfzf" (UID: "aeb489ed-4e06-4508-8fa6-1250bfa3afb4")
  • Jan 22 12:08:55 minikube-test kubelet[5926]: I0122 12:08:55.306827 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vkxxj" (UniqueName: "kubernetes.io/secret/aeb489ed-4e06-4508-8fa6-1250bfa3afb4-coredns-token-vkxxj") pod "coredns-6955765f44-qmfzf" (UID: "aeb489ed-4e06-4508-8fa6-1250bfa3afb4")
  • Jan 22 12:08:55 minikube-test kubelet[5926]: I0122 12:08:55.306893 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vkxxj" (UniqueName: "kubernetes.io/secret/a385de55-f647-4c42-8392-f2298f1a464a-coredns-token-vkxxj") pod "coredns-6955765f44-jxc7c" (UID: "a385de55-f647-4c42-8392-f2298f1a464a")
  • Jan 22 12:08:55 minikube-test kubelet[5926]: W0122 12:08:55.947951 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-jxc7c through plugin: invalid network status for
  • Jan 22 12:08:55 minikube-test kubelet[5926]: W0122 12:08:55.985616 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qmfzf through plugin: invalid network status for
  • Jan 22 12:08:56 minikube-test kubelet[5926]: W0122 12:08:56.229095 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qmfzf through plugin: invalid network status for
  • Jan 22 12:08:56 minikube-test kubelet[5926]: W0122 12:08:56.243279 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-jxc7c through plugin: invalid network status for
  • Jan 22 12:08:57 minikube-test kubelet[5926]: W0122 12:08:57.263158 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-qmfzf through plugin: invalid network status for
  • Jan 22 12:08:59 minikube-test kubelet[5926]: I0122 12:08:59.263872 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-7blx4" (UniqueName: "kubernetes.io/secret/ed681daf-3270-48f5-9af9-dfa6a053d25c-kubernetes-dashboard-token-7blx4") pod "kubernetes-dashboard-79d9cd965-jczf4" (UID: "ed681daf-3270-48f5-9af9-dfa6a053d25c")
  • Jan 22 12:08:59 minikube-test kubelet[5926]: I0122 12:08:59.264020 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/f35a92c6-1c8c-4315-bace-9709f643e4b1-tmp-volume") pod "dashboard-metrics-scraper-7b64584c5c-xr9rg" (UID: "f35a92c6-1c8c-4315-bace-9709f643e4b1")
  • Jan 22 12:08:59 minikube-test kubelet[5926]: I0122 12:08:59.264039 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-7blx4" (UniqueName: "kubernetes.io/secret/f35a92c6-1c8c-4315-bace-9709f643e4b1-kubernetes-dashboard-token-7blx4") pod "dashboard-metrics-scraper-7b64584c5c-xr9rg" (UID: "f35a92c6-1c8c-4315-bace-9709f643e4b1")
  • Jan 22 12:08:59 minikube-test kubelet[5926]: I0122 12:08:59.264063 5926 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ed681daf-3270-48f5-9af9-dfa6a053d25c-tmp-volume") pod "kubernetes-dashboard-79d9cd965-jczf4" (UID: "ed681daf-3270-48f5-9af9-dfa6a053d25c")
  • Jan 22 12:08:59 minikube-test kubelet[5926]: W0122 12:08:59.848074 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-jczf4 through plugin: invalid network status for
  • Jan 22 12:08:59 minikube-test kubelet[5926]: W0122 12:08:59.976865 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-xr9rg through plugin: invalid network status for
  • Jan 22 12:09:00 minikube-test kubelet[5926]: W0122 12:09:00.285929 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-jczf4 through plugin: invalid network status for
  • Jan 22 12:09:00 minikube-test kubelet[5926]: W0122 12:09:00.290649 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-xr9rg through plugin: invalid network status for
  • Jan 22 12:09:01 minikube-test kubelet[5926]: W0122 12:09:01.345838 5926 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-xr9rg through plugin: invalid network status for
  • ==> kubernetes-dashboard ["3e7a80d05565"] <==
  • 2020/01/22 12:09:00 Using namespace: kubernetes-dashboard
  • 2020/01/22 12:09:00 Using in-cluster config to connect to apiserver
  • 2020/01/22 12:09:00 Using secret token for csrf signing
  • 2020/01/22 12:09:00 Initializing csrf token from kubernetes-dashboard-csrf secret
  • 2020/01/22 12:09:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
  • 2020/01/22 12:09:00 Successful initial request to the apiserver, version: v1.17.0
  • 2020/01/22 12:09:00 Generating JWE encryption key
  • 2020/01/22 12:09:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
  • 2020/01/22 12:09:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
  • 2020/01/22 12:09:00 Initializing JWE encryption key from synchronized object
  • 2020/01/22 12:09:00 Starting overwatch
  • 2020/01/22 12:09:00 Creating in-cluster Sidecar client
  • 2020/01/22 12:09:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
  • 2020/01/22 12:09:00 Serving insecurely on HTTP port: 9090
  • 2020/01/22 12:09:30 Successful request to sidecar
  • ==> storage-provisioner ["9014c535fe0c"] <==
creckord commented 4 years ago

This seems related to #208 and the change in #225. I couldn't really make heads or tails of what's happening there; otherwise I would have tried to provide a PR.

tstromberg commented 4 years ago

Recent refactors make this trivial to add:

https://github.com/kubernetes/minikube/blob/095ccbe562d78fe8aa2fcd9239e2d0ab3b9c8cff/pkg/minikube/bootstrapper/certs.go#L192

Help wanted!
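
For anyone picking this up, here is a minimal, self-contained sketch of what the extra SANs amount to at the x509 level, using only Go's standard library. The concrete names, the RSA key size, and the self-signed signing step are illustrative only; minikube signs with its own CA and wires the names in elsewhere, so treat this as a sketch of the end result, not the actual certs.go code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// The SANs the issue asks for: the --apiserver-name plus the usual entries.
	dnsNames := []string{"minikube", "localhost"}
	ipAddresses := []net.IP{net.ParseIP("192.168.99.103")}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"minikube"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,    // DNS SANs, e.g. the --apiserver-name
		IPAddresses:  ipAddresses, // IP SANs, e.g. the VM IP
	}

	// Self-signed here for brevity; minikube would sign with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames, "IP SANs:", cert.IPAddresses)
}
```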

linkvt commented 4 years ago

@tstromberg I looked into this, and it seems that the line you mentioned in certs.go is quite far away from the Docker certificate logic in https://github.com/kubernetes/minikube/blob/b94d673ae2704efe82141aba2c0511eed0e05b32/pkg/provision/provision.go#L105. The auth options where you could define the SANs are also not close to the k8s bootstrap logic: https://github.com/kubernetes/minikube/blob/b94d673ae2704efe82141aba2c0511eed0e05b32/pkg/minikube/machine/client.go#L104-L112

An easier way could be to add "minikube" and the machineName in provision.go#105: the first so that one stable name always works, and the second because it seems reasonable, if you have multiple minikube VMs, to be able to address them by their profile names.

What do you think about this?
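
A rough, standalone sketch of that idea. The local authOptions struct here only mirrors the SAN-related part of libmachine's auth.Options (the type minikube's provisioner configures); the field name is assumed from docker-machine, and the struct is simplified so the sketch runs on its own:

```go
package main

import "fmt"

// authOptions stands in for libmachine's auth.Options; only the assumed
// ServerCertSANs field matters for this sketch.
type authOptions struct {
	ServerCertSANs []string
}

// withExtraSANs appends the stable "minikube" name and the machine (profile)
// name to the Docker daemon cert SANs, skipping duplicates.
func withExtraSANs(opts authOptions, machineName string) authOptions {
	for _, san := range []string{"minikube", machineName} {
		if !contains(opts.ServerCertSANs, san) {
			opts.ServerCertSANs = append(opts.ServerCertSANs, san)
		}
	}
	return opts
}

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}

func main() {
	opts := authOptions{ServerCertSANs: []string{"localhost"}}
	opts = withExtraSANs(opts, "minikube-test")
	fmt.Println(opts.ServerCertSANs) // [localhost minikube minikube-test]
}
```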

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

sharifelgamal commented 4 years ago

@linkvt Seems like a reasonable approach to me, feel free to open a PR.

/remove-lifecycle stale

linkvt commented 4 years ago

/assign