kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

kubectl: "The connection to the server ip:8443 was refused" #2479

Closed: toby-griffiths closed this issue 6 years ago

toby-griffiths commented 6 years ago

I appreciate that there are already tickets open for this issue, but I couldn't see one open for v0.24.1, so please accept my apologies if this is considered a duplicate…

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Please provide the following details:

Environment:

Minikube version (use minikube version): v0.24.1

What happened: Running kubectl commands such as kubectl get pods or kubectl get services returns the error…

The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

What you expected to happen: I expected to see the (empty) list of pods or services

How to reproduce it (as minimally and precisely as possible): Install minikube via Homebrew (brew cask install minikube), then run minikube start
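For anyone triaging this, a hedged diagnostic sketch: a "connection refused" on ip:8443 usually means either the VM/localkube isn't up yet or kubectl's kubeconfig points at a stale address. The minikube and kubectl invocations below are standard CLI commands; the small server_hostport helper is just an illustration for comparing the two addresses.

```shell
#!/usr/bin/env sh
# Strip the scheme from a kubeconfig server URL so it can be compared
# against the VM address reported by `minikube ip`.
# e.g. "https://192.168.99.100:8443" -> "192.168.99.100:8443"
server_hostport() {
    printf '%s\n' "${1#*://}"
}

# The live checks only run when both tools are installed.
if command -v minikube >/dev/null 2>&1 && command -v kubectl >/dev/null 2>&1; then
    minikube status   # is the VM and localkube actually running?
    server=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
    echo "kubectl talks to: $(server_hostport "$server")"
    echo "minikube VM ip:   $(minikube ip)"   # should match the host above
fi
```

If the two addresses disagree, re-running minikube start (or, as a last resort, minikube delete followed by minikube start) regenerates the kubeconfig entry; if they agree and the error persists, the apiserver inside the VM likely hasn't come up, which is what the logs below show during startup.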

Output of minikube logs (if applicable):

-- Logs begin at Mon 2018-01-29 09:07:18 UTC, end at Mon 2018-01-29 09:18:04 UTC. --
Jan 29 09:08:12 minikube systemd[1]: Starting Localkube...
Jan 29 09:08:12 minikube localkube[3773]: listening for peers on http://localhost:2380
Jan 29 09:08:12 minikube localkube[3773]: listening for client requests on localhost:2379
Jan 29 09:08:12 minikube localkube[3773]: name = default
Jan 29 09:08:12 minikube localkube[3773]: data dir = /var/lib/localkube/etcd
Jan 29 09:08:12 minikube localkube[3773]: member dir = /var/lib/localkube/etcd/member
Jan 29 09:08:12 minikube localkube[3773]: heartbeat = 100ms
Jan 29 09:08:12 minikube localkube[3773]: election = 1000ms
Jan 29 09:08:12 minikube localkube[3773]: snapshot count = 10000
Jan 29 09:08:12 minikube localkube[3773]: advertise client URLs = http://localhost:2379
Jan 29 09:08:12 minikube localkube[3773]: initial advertise peer URLs = http://localhost:2380
Jan 29 09:08:12 minikube localkube[3773]: initial cluster = default=http://localhost:2380
Jan 29 09:08:12 minikube localkube[3773]: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
Jan 29 09:08:12 minikube localkube[3773]: 8e9e05c52164694d became follower at term 0
Jan 29 09:08:12 minikube localkube[3773]: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
Jan 29 09:08:12 minikube localkube[3773]: 8e9e05c52164694d became follower at term 1
Jan 29 09:08:12 minikube localkube[3773]: starting server... [version: 3.1.10, cluster version: to_be_decided]
Jan 29 09:08:12 minikube localkube[3773]: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Jan 29 09:08:13 minikube localkube[3773]: 8e9e05c52164694d is starting a new election at term 1
Jan 29 09:08:13 minikube localkube[3773]: 8e9e05c52164694d became candidate at term 2
Jan 29 09:08:13 minikube localkube[3773]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
Jan 29 09:08:13 minikube localkube[3773]: 8e9e05c52164694d became leader at term 2
Jan 29 09:08:13 minikube localkube[3773]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
Jan 29 09:08:13 minikube localkube[3773]: setting up the initial cluster version to 3.1
Jan 29 09:08:13 minikube localkube[3773]: set the initial cluster version to 3.1
Jan 29 09:08:13 minikube localkube[3773]: enabled capabilities for version 3.1
Jan 29 09:08:13 minikube localkube[3773]: I0129 09:08:13.031568    3773 etcd.go:58] Etcd server is ready
Jan 29 09:08:13 minikube localkube[3773]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Jan 29 09:08:13 minikube localkube[3773]: localkube host ip address: 10.0.2.15
Jan 29 09:08:13 minikube localkube[3773]: ready to serve client requests
Jan 29 09:08:13 minikube localkube[3773]: Starting apiserver...
Jan 29 09:08:13 minikube localkube[3773]: Waiting for apiserver to be healthy...
Jan 29 09:08:13 minikube localkube[3773]: I0129 09:08:13.032226    3773 server.go:114] Version: v1.8.0
Jan 29 09:08:13 minikube localkube[3773]: W0129 09:08:13.032433    3773 authentication.go:380] AnonymousAuth is not allowed with the AllowAll authorizer.  Resetting AnonymousAuth to false. You should use a different authorizer
Jan 29 09:08:13 minikube localkube[3773]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Jan 29 09:08:13 minikube localkube[3773]: I0129 09:08:13.032837    3773 plugins.go:101] No cloud provider specified.
Jan 29 09:08:13 minikube localkube[3773]: [restful] 2018/01/29 09:08:13 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Jan 29 09:08:13 minikube localkube[3773]: [restful] 2018/01/29 09:08:13 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Jan 29 09:08:14 minikube localkube[3773]: I0129 09:08:14.032801    3773 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Jan 29 09:08:14 minikube localkube[3773]: E0129 09:08:14.033680    3773 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jan 29 09:08:14 minikube localkube[3773]: [restful] 2018/01/29 09:08:14 log.go:33: [restful/swagger] listing is available at https://10.0.2.15:8443/swaggerapi
Jan 29 09:08:14 minikube localkube[3773]: [restful] 2018/01/29 09:08:14 log.go:33: [restful/swagger] https://10.0.2.15:8443/swaggerui/ is mapped to folder /swagger-ui/
Jan 29 09:08:15 minikube localkube[3773]: I0129 09:08:15.033973    3773 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Jan 29 09:08:15 minikube localkube[3773]: E0129 09:08:15.035492    3773 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.033573    3773 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Jan 29 09:08:16 minikube localkube[3773]: E0129 09:08:16.034466    3773 ready.go:40] Error performing healthcheck: Get https://localhost:8443/healthz: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.273802    3773 aggregator.go:138] Skipping APIService creation for scheduling.k8s.io/v1alpha1
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.274426    3773 serve.go:85] Serving securely on 0.0.0.0:8443
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.274752    3773 available_controller.go:192] Starting AvailableConditionController
Jan 29 09:08:16 minikube systemd[1]: Started Localkube.
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.275307    3773 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.275952    3773 controller.go:84] Starting OpenAPI AggregationController
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.276327    3773 apiservice_controller.go:112] Starting APIServiceRegistrationController
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.276430    3773 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.276911    3773 crdregistration_controller.go:112] Starting crd-autoregister controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.276937    3773 controller_utils.go:1041] Waiting for caches to sync for crd-autoregister controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.277011    3773 customresource_discovery_controller.go:152] Starting DiscoveryController
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.277026    3773 naming_controller.go:277] Starting NamingConditionController
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.276463    3773 crd_finalizer.go:242] Starting CRDFinalizer
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.375684    3773 cache.go:39] Caches are synced for AvailableConditionController controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.376720    3773 cache.go:39] Caches are synced for APIServiceRegistrationController controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.377166    3773 controller_utils.go:1048] Caches are synced for crd-autoregister controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.377210    3773 autoregister_controller.go:136] Starting autoregister controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.377224    3773 cache.go:32] Waiting for caches to sync for autoregister controller
Jan 29 09:08:16 minikube localkube[3773]: I0129 09:08:16.479511    3773 cache.go:39] Caches are synced for autoregister controller
Jan 29 09:08:17 minikube localkube[3773]: I0129 09:08:17.032721    3773 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Jan 29 09:08:17 minikube localkube[3773]: I0129 09:08:17.042414    3773 ready.go:49] Got healthcheck response: [+]ping ok
Jan 29 09:08:17 minikube localkube[3773]: [+]etcd ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/start-apiextensions-informers ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/start-apiextensions-controllers ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/bootstrap-controller ok
Jan 29 09:08:17 minikube localkube[3773]: [-]poststarthook/ca-registration failed: reason withheld
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/start-kube-apiserver-informers ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/apiservice-registration-controller ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/apiservice-status-available-controller ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/apiservice-openapi-controller ok
Jan 29 09:08:17 minikube localkube[3773]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 29 09:08:17 minikube localkube[3773]: [+]autoregister-completion ok
Jan 29 09:08:17 minikube localkube[3773]: healthz check failed
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.033294    3773 ready.go:30] Performing healthcheck on https://localhost:8443/healthz
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.043081    3773 ready.go:49] Got healthcheck response: ok
Jan 29 09:08:18 minikube localkube[3773]: apiserver is ready!
Jan 29 09:08:18 minikube localkube[3773]: Starting controller-manager...
Jan 29 09:08:18 minikube localkube[3773]: Waiting for controller-manager to be healthy...
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.043547    3773 controllermanager.go:109] Version: v1.8.0
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.046921    3773 leaderelection.go:174] attempting to acquire leader lease...
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.056570    3773 leaderelection.go:184] successfully acquired lease kube-system/kube-controller-manager
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.056880    3773 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"f21028ce-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"35", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.071177    3773 plugins.go:101] No cloud provider specified.
Jan 29 09:08:18 minikube localkube[3773]: W0129 09:08:18.072728    3773 controllermanager.go:471] "tokencleaner" is disabled
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.072890    3773 controller_utils.go:1041] Waiting for caches to sync for tokens controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.076696    3773 controllermanager.go:487] Started "endpoint"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.077742    3773 controllermanager.go:487] Started "serviceaccount"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.077973    3773 serviceaccounts_controller.go:113] Starting service account controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.078069    3773 controller_utils.go:1041] Waiting for caches to sync for service account controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.076891    3773 endpoints_controller.go:153] Starting endpoint controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.078282    3773 controller_utils.go:1041] Waiting for caches to sync for endpoint controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.078718    3773 controllermanager.go:487] Started "daemonset"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.078976    3773 daemon_controller.go:230] Starting daemon sets controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.079087    3773 controller_utils.go:1041] Waiting for caches to sync for daemon sets controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.079786    3773 controllermanager.go:487] Started "replicaset"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.080000    3773 replica_set.go:156] Starting replica set controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.080100    3773 controller_utils.go:1041] Waiting for caches to sync for replica set controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.082593    3773 controllermanager.go:487] Started "horizontalpodautoscaling"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.083289    3773 horizontal.go:145] Starting HPA controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.083406    3773 controller_utils.go:1041] Waiting for caches to sync for HPA controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.083374    3773 controllermanager.go:487] Started "cronjob"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.083383    3773 cronjob_controller.go:98] Starting CronJob Manager
Jan 29 09:08:18 minikube localkube[3773]: E0129 09:08:18.084538    3773 certificates.go:48] Failed to start certificate controller: error reading CA cert file "/etc/kubernetes/ca/ca.pem": open /etc/kubernetes/ca/ca.pem: no such file or directory
Jan 29 09:08:18 minikube localkube[3773]: W0129 09:08:18.084709    3773 controllermanager.go:484] Skipping "csrsigning"
Jan 29 09:08:18 minikube localkube[3773]: E0129 09:08:18.086900    3773 core.go:70] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail.
Jan 29 09:08:18 minikube localkube[3773]: W0129 09:08:18.087014    3773 controllermanager.go:484] Skipping "service"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.087833    3773 node_controller.go:249] Sending events to api server.
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.088023    3773 taint_controller.go:158] Sending events to api server.
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.088160    3773 controllermanager.go:487] Started "node"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.088264    3773 node_controller.go:516] Starting node controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.088285    3773 controller_utils.go:1041] Waiting for caches to sync for node controller
Jan 29 09:08:18 minikube localkube[3773]: W0129 09:08:18.089123    3773 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.089957    3773 controllermanager.go:487] Started "attachdetach"
Jan 29 09:08:18 minikube localkube[3773]: W0129 09:08:18.090227    3773 controllermanager.go:484] Skipping "persistentvolume-expander"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.091309    3773 controllermanager.go:487] Started "replicationcontroller"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.091441    3773 attach_detach_controller.go:255] Starting attach detach controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.091607    3773 controller_utils.go:1041] Waiting for caches to sync for attach detach controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.091499    3773 replication_controller.go:151] Starting RC controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.091627    3773 controller_utils.go:1041] Waiting for caches to sync for RC controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.104070    3773 controllermanager.go:487] Started "namespace"
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.104295    3773 namespace_controller.go:186] Starting namespace controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.106385    3773 controller_utils.go:1041] Waiting for caches to sync for namespace controller
Jan 29 09:08:18 minikube localkube[3773]: I0129 09:08:18.175787    3773 controller_utils.go:1048] Caches are synced for tokens controller
Jan 29 09:08:19 minikube localkube[3773]: controller-manager is ready!
Jan 29 09:08:19 minikube localkube[3773]: Starting scheduler...
Jan 29 09:08:19 minikube localkube[3773]: Waiting for scheduler to be healthy...
Jan 29 09:08:19 minikube localkube[3773]: E0129 09:08:19.047921    3773 server.go:173] unable to register configz: register config "componentconfig" twice
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.308798    3773 controllermanager.go:487] Started "garbagecollector"
Jan 29 09:08:19 minikube localkube[3773]: W0129 09:08:19.309134    3773 controllermanager.go:471] "bootstrapsigner" is disabled
Jan 29 09:08:19 minikube localkube[3773]: W0129 09:08:19.309260    3773 core.go:128] Unsuccessful parsing of cluster CIDR : invalid CIDR address:
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.309380    3773 core.go:131] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
Jan 29 09:08:19 minikube localkube[3773]: W0129 09:08:19.309535    3773 controllermanager.go:484] Skipping "route"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.309112    3773 garbagecollector.go:136] Starting garbage collector controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.310515    3773 controller_utils.go:1041] Waiting for caches to sync for garbage collector controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.310652    3773 graph_builder.go:321] GraphBuilder running
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.311258    3773 controllermanager.go:487] Started "ttl"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.311849    3773 ttl_controller.go:116] Starting TTL controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.312020    3773 controller_utils.go:1041] Waiting for caches to sync for TTL controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.312542    3773 controllermanager.go:487] Started "podgc"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.313039    3773 gc_controller.go:76] Starting GC controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.313185    3773 controller_utils.go:1041] Waiting for caches to sync for GC controller
Jan 29 09:08:19 minikube localkube[3773]: W0129 09:08:19.314018    3773 shared_informer.go:304] resyncPeriod 52362576512748 is smaller than resyncCheckPeriod 84105571366640 and the informer has already started. Changing it to 84105571366640
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.314225    3773 controllermanager.go:487] Started "resourcequota"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.315165    3773 controllermanager.go:487] Started "job"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.315449    3773 job_controller.go:138] Starting job controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.315542    3773 controller_utils.go:1041] Waiting for caches to sync for job controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.316326    3773 controllermanager.go:487] Started "deployment"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.315359    3773 resource_quota_controller.go:238] Starting resource quota controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.317454    3773 controllermanager.go:487] Started "disruption"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.317462    3773 disruption.go:288] Starting disruption controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.318515    3773 controller_utils.go:1041] Waiting for caches to sync for disruption controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.317468    3773 deployment_controller.go:151] Starting deployment controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.318686    3773 controller_utils.go:1041] Waiting for caches to sync for deployment controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.317740    3773 controller_utils.go:1041] Waiting for caches to sync for resource quota controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.319280    3773 controllermanager.go:487] Started "statefulset"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.320113    3773 stateful_set.go:146] Starting stateful set controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.320236    3773 controller_utils.go:1041] Waiting for caches to sync for stateful set controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.320658    3773 controllermanager.go:487] Started "csrapproving"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.321458    3773 certificate_controller.go:109] Starting certificate controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.321560    3773 controller_utils.go:1041] Waiting for caches to sync for certificate controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.322112    3773 controllermanager.go:487] Started "persistentvolume-binder"
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.322602    3773 pv_controller_base.go:259] Starting persistent volume controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.322740    3773 controller_utils.go:1041] Waiting for caches to sync for persistent volume controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.378385    3773 controller_utils.go:1048] Caches are synced for service account controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.378468    3773 controller_utils.go:1048] Caches are synced for endpoint controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.386817    3773 controller_utils.go:1048] Caches are synced for replica set controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.387055    3773 controller_utils.go:1048] Caches are synced for HPA controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.391651    3773 controller_utils.go:1048] Caches are synced for node controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.391744    3773 taint_controller.go:181] Starting NoExecuteTaintManager
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.407114    3773 controller_utils.go:1048] Caches are synced for namespace controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.412230    3773 controller_utils.go:1048] Caches are synced for TTL controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.414516    3773 controller_utils.go:1048] Caches are synced for GC controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.416001    3773 controller_utils.go:1048] Caches are synced for job controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.419896    3773 controller_utils.go:1048] Caches are synced for deployment controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.420559    3773 controller_utils.go:1048] Caches are synced for stateful set controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.423178    3773 controller_utils.go:1048] Caches are synced for certificate controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.479615    3773 controller_utils.go:1048] Caches are synced for daemon sets controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.523030    3773 controller_utils.go:1048] Caches are synced for persistent volume controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.592082    3773 controller_utils.go:1048] Caches are synced for attach detach controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.592042    3773 controller_utils.go:1048] Caches are synced for RC controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.611192    3773 controller_utils.go:1048] Caches are synced for garbage collector controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.611236    3773 garbagecollector.go:145] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.619631    3773 controller_utils.go:1048] Caches are synced for resource quota controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.619651    3773 controller_utils.go:1048] Caches are synced for disruption controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.620073    3773 disruption.go:296] Sending events to api server.
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.749731    3773 controller_utils.go:1041] Waiting for caches to sync for scheduler controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.849953    3773 controller_utils.go:1048] Caches are synced for scheduler controller
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.850296    3773 leaderelection.go:174] attempting to acquire leader lease...
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.853390    3773 leaderelection.go:184] successfully acquired lease kube-system/kube-scheduler
Jan 29 09:08:19 minikube localkube[3773]: I0129 09:08:19.853626    3773 event.go:218] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-scheduler", UID:"f3224fec-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"46", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube became leader
Jan 29 09:08:20 minikube localkube[3773]: scheduler is ready!
Jan 29 09:08:20 minikube localkube[3773]: Starting kubelet...
Jan 29 09:08:20 minikube localkube[3773]: Waiting for kubelet to be healthy...
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.046275    3773 feature_gate.go:156] feature gates: map[]
Jan 29 09:08:20 minikube localkube[3773]: W0129 09:08:20.046432    3773 server.go:276] --require-kubeconfig is deprecated. Set --kubeconfig without using --require-kubeconfig.
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.303646    3773 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.308172    3773 client.go:95] Start docker client with request timeout=2m0s
Jan 29 09:08:20 minikube localkube[3773]: W0129 09:08:20.314416    3773 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.351918    3773 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/localkube.service"
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.381329    3773 fs.go:139] Filesystem UUIDs: map[2017-10-19-17-24-41-00:/dev/sr0 3b61907c-194d-4541-8c5b-667d34e23354:/dev/sda1 a5cf8477-6dfe-431e-9fbe-512d1194ada3:/dev/sda2]
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.381361    3773 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/dev/shm major:0 minor:17 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/mnt/sda1 major:8 minor:1 fsType:ext4 blockSize:0}]
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.383228    3773 manager.go:216] Machine: {NumCores:2 CpuFrequency:2793532 MemoryCapacity:2097229824 HugePages:[{PageSize:2048 NumPages:0}] MachineID:39880d9b733c4e4c964c545e997182c6 SystemUUID:6768D26C-938A-485E-B1DA-89E941867204 BootID:10381384-cd43-43be-8ac5-6a8b497f8c56 Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:17 Capacity:1048612864 Type:vfs Inodes:256009 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:17293533184 Type:vfs Inodes:9732096 HasInodes:true} {Device:rootfs DeviceMajor:0 DeviceMinor:1 Capacity:0 Type:vfs Inodes:0 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:20971520000 Scheduler:cfq}] NetworkDevices:[{Name:eth0 MacAddress:08:00:27:4e:32:d3 Speed:-1 Mtu:1500} {Name:eth1 MacAddress:08:00:27:aa:1f:80 Speed:-1 Mtu:1500} {Name:sit0 MacAddress:00:00:00:00 Speed:0 Mtu:1480}] Topology:[{Id:0 Memory:2097229824 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:6291456 Type:Unified Level:3} {Size:134217728 Type:Unified Level:4}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.383919    3773 manager.go:222] Version: {KernelVersion:4.9.13 ContainerOsVersion:Buildroot 2017.02 DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.384307    3773 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.387280    3773 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.387309    3773 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.387392    3773 container_manager_linux.go:288] Creating device plugin handler: false
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.387518    3773 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.387537    3773 kubelet.go:283] Watching apiserver
Jan 29 09:08:20 minikube localkube[3773]: W0129 09:08:20.393439    3773 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.393620    3773 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.397592    3773 docker_service.go:207] Docker cri networking managed by kubernetes.io/no-op
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.403568    3773 docker_service.go:224] Setting cgroupDriver to cgroupfs
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.412034    3773 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.413149    3773 kuberuntime_manager.go:174] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.413474    3773 kuberuntime_manager.go:898] updating runtime config through cri with podcidr 10.180.1.0/24
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.413721    3773 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.180.1.0/24,},}
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.413906    3773 kubelet_network.go:276] Setting Pod CIDR:  -> 10.180.1.0/24
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.415992    3773 server.go:718] Started kubelet v1.8.0
Jan 29 09:08:20 minikube localkube[3773]: E0129 09:08:20.416789    3773 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.417343    3773 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.417662    3773 server.go:128] Starting to listen on 0.0.0.0:10250
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.419044    3773 server.go:296] Adding debug handlers to kubelet server.
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.442676    3773 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.442906    3773 status_manager.go:140] Starting to sync pod status with apiserver
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.443007    3773 kubelet.go:1768] Starting kubelet main sync loop.
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.443102    3773 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Jan 29 09:08:20 minikube localkube[3773]: E0129 09:08:20.443430    3773 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.443621    3773 volume_manager.go:246] Starting Kubelet Volume Manager
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.462507    3773 factory.go:355] Registering Docker factory
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.463324    3773 factory.go:89] Registering Rkt factory
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.464681    3773 factory.go:157] Registering CRI-O factory
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.464891    3773 factory.go:54] Registering systemd factory
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.465421    3773 factory.go:86] Registering Raw factory
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.465644    3773 manager.go:1140] Started watching for new ooms in manager
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.466143    3773 manager.go:311] Starting recovery of all containers
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.489849    3773 manager.go:316] Recovery completed
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.491692    3773 rkt.go:56] starting detectRktContainers thread
Jan 29 09:08:20 minikube localkube[3773]: E0129 09:08:20.524146    3773 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'minikube' not found
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.546006    3773 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.548282    3773 kubelet_node_status.go:83] Attempting to register node minikube
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.552436    3773 kubelet_node_status.go:86] Successfully registered node minikube
Jan 29 09:08:20 minikube localkube[3773]: E0129 09:08:20.552717    3773 actual_state_of_world.go:483] Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Jan 29 09:08:20 minikube localkube[3773]: E0129 09:08:20.552736    3773 actual_state_of_world.go:497] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true because nodeName="minikube"  does not exist
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.556088    3773 kuberuntime_manager.go:898] updating runtime config through cri with podcidr
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.556595    3773 docker_service.go:306] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}
Jan 29 09:08:20 minikube localkube[3773]: I0129 09:08:20.556706    3773 kubelet_network.go:276] Setting Pod CIDR: 10.180.1.0/24 ->
Jan 29 09:08:21 minikube localkube[3773]: kubelet is ready!
Jan 29 09:08:21 minikube localkube[3773]: Starting proxy...
Jan 29 09:08:21 minikube localkube[3773]: Waiting for proxy to be healthy...
Jan 29 09:08:21 minikube localkube[3773]: W0129 09:08:21.047490    3773 server_others.go:63] unable to register configz: register config "componentconfig" twice
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.055883    3773 server_others.go:117] Using iptables Proxier.
Jan 29 09:08:21 minikube localkube[3773]: W0129 09:08:21.063185    3773 proxier.go:473] clusterCIDR not specified, unable to distinguish between internal and external traffic
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.063494    3773 server_others.go:152] Tearing down inactive rules.
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.079798    3773 config.go:202] Starting service config controller
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.080204    3773 controller_utils.go:1041] Waiting for caches to sync for service config controller
Jan 29 09:08:21 minikube localkube[3773]: E0129 09:08:21.080338    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.080251    3773 config.go:102] Starting endpoints config controller
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.080525    3773 controller_utils.go:1041] Waiting for caches to sync for endpoints config controller
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.181052    3773 controller_utils.go:1048] Caches are synced for endpoints config controller
Jan 29 09:08:21 minikube localkube[3773]: I0129 09:08:21.181075    3773 controller_utils.go:1048] Caches are synced for service config controller
Jan 29 09:08:22 minikube localkube[3773]: proxy is ready!
Jan 29 09:08:24 minikube localkube[3773]: I0129 09:08:24.392631    3773 node_controller.go:563] Initializing eviction metric for zone:
Jan 29 09:08:24 minikube localkube[3773]: W0129 09:08:24.392794    3773 node_controller.go:916] Missing timestamp for Node minikube. Assuming now as a timestamp.
Jan 29 09:08:24 minikube localkube[3773]: I0129 09:08:24.392820    3773 node_controller.go:832] Controller detected that zone  is now in state Normal.
Jan 29 09:08:24 minikube localkube[3773]: I0129 09:08:24.392850    3773 event.go:218] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"f38cbdcd-04d3-11e8-b451-0800274e32d3", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
Jan 29 09:08:25 minikube localkube[3773]: I0129 09:08:25.545215    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-addons") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Jan 29 09:08:25 minikube localkube[3773]: I0129 09:08:25.545285    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/7b19c3ba446df5355649563d32723e4f-kubeconfig") pod "kube-addon-manager-minikube" (UID: "7b19c3ba446df5355649563d32723e4f")
Jan 29 09:08:27 minikube localkube[3773]: I0129 09:08:27.272518    3773 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"storage-provisioner", UID:"f78df45f-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"77", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned storage-provisioner to minikube
Jan 29 09:08:27 minikube localkube[3773]: E0129 09:08:27.273565    3773 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Jan 29 09:08:27 minikube localkube[3773]: I0129 09:08:27.352915    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-h8pm8" (UniqueName: "kubernetes.io/secret/f78df45f-04d3-11e8-b451-0800274e32d3-default-token-h8pm8") pod "storage-provisioner" (UID: "f78df45f-04d3-11e8-b451-0800274e32d3")
Jan 29 09:08:27 minikube localkube[3773]: I0129 09:08:27.876855    3773 event.go:218] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"f7e96120-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"83", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-hc2b2
Jan 29 09:08:27 minikube localkube[3773]: I0129 09:08:27.894072    3773 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-hc2b2", UID:"f7e9b03a-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"84", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-hc2b2 to minikube
Jan 29 09:08:27 minikube localkube[3773]: I0129 09:08:27.954862    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-h8pm8" (UniqueName: "kubernetes.io/secret/f7e9b03a-04d3-11e8-b451-0800274e32d3-default-token-h8pm8") pod "kubernetes-dashboard-hc2b2" (UID: "f7e9b03a-04d3-11e8-b451-0800274e32d3")
Jan 29 09:08:28 minikube localkube[3773]: I0129 09:08:28.017368    3773 event.go:218] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"f7ff6172-04d3-11e8-b451-0800274e32d3", APIVersion:"extensions", ResourceVersion:"96", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-86f6f55dd5 to 1
Jan 29 09:08:28 minikube localkube[3773]: I0129 09:08:28.035856    3773 event.go:218] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5", UID:"f7ffe907-04d3-11e8-b451-0800274e32d3", APIVersion:"extensions", ResourceVersion:"97", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-86f6f55dd5-tn87q
Jan 29 09:08:28 minikube localkube[3773]: I0129 09:08:28.045243    3773 event.go:218] Event(v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-86f6f55dd5-tn87q", UID:"f80056e1-04d3-11e8-b451-0800274e32d3", APIVersion:"v1", ResourceVersion:"98", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-86f6f55dd5-tn87q to minikube
Jan 29 09:08:28 minikube localkube[3773]: E0129 09:08:28.049753    3773 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Jan 29 09:08:28 minikube localkube[3773]: I0129 09:08:28.157741    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/f80056e1-04d3-11e8-b451-0800274e32d3-kube-dns-config") pod "kube-dns-86f6f55dd5-tn87q" (UID: "f80056e1-04d3-11e8-b451-0800274e32d3")
Jan 29 09:08:28 minikube localkube[3773]: I0129 09:08:28.157813    3773 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-h8pm8" (UniqueName: "kubernetes.io/secret/f80056e1-04d3-11e8-b451-0800274e32d3-default-token-h8pm8") pod "kube-dns-86f6f55dd5-tn87q" (UID: "f80056e1-04d3-11e8-b451-0800274e32d3")
Jan 29 09:08:28 minikube localkube[3773]: E0129 09:08:28.273871    3773 helpers.go:135] readString: Failed to read "/sys/fs/cgroup/memory/system.slice/run-r8f05327dfba84d2898317cd03b9d204a.scope/memory.limit_in_bytes": read /sys/fs/cgroup/memory/system.slice/run-r8f05327dfba84d2898317cd03b9d204a.scope/memory.limit_in_bytes: no such device
Jan 29 09:08:28 minikube localkube[3773]: E0129 09:08:28.276463    3773 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Jan 29 09:08:28 minikube localkube[3773]: W0129 09:08:28.822663    3773 kuberuntime_container.go:191] Non-root verification doesn't support non-numeric user (nobody)
Jan 29 09:08:36 minikube localkube[3773]: E0129 09:08:36.127011    3773 proxier.go:1621] Failed to delete stale service IP 10.96.0.10 connections, error: error deleting connection tracking state for UDP service IP: 10.96.0.10, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH
Jan 29 09:08:50 minikube localkube[3773]: W0129 09:08:50.552655    3773 conversion.go:110] Could not get instant cpu stats: different number of cpus
Jan 29 09:08:50 minikube localkube[3773]: W0129 09:08:50.556370    3773 conversion.go:110] Could not get instant cpu stats: different number of cpus
Jan 29 09:09:21 minikube localkube[3773]: E0129 09:09:21.080646    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:10:21 minikube localkube[3773]: E0129 09:10:21.081000    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:11:21 minikube localkube[3773]: E0129 09:11:21.082557    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:12:21 minikube localkube[3773]: E0129 09:12:21.083545    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:13:21 minikube localkube[3773]: E0129 09:13:21.084914    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:14:21 minikube localkube[3773]: E0129 09:14:21.085674    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:15:21 minikube localkube[3773]: E0129 09:15:21.086086    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:16:21 minikube localkube[3773]: E0129 09:16:21.087236    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address
Jan 29 09:17:21 minikube localkube[3773]: E0129 09:17:21.087492    3773 healthcheck.go:317] Failed to start node healthz on 0: listen tcp: address 0: missing port in address

Anything else we need to know:

I've also tried stopping, deleting, and removing the ~/.minikube/ directory without any luck.

Copy of my kubectl config:

apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/toby/.minikube/ca.crt
    server: https://192.168.99.100:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/toby/.minikube/client.crt
    client-key: /Users/toby/.minikube/client.key
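A quick sanity check that kubectl is actually aimed at the endpoint in the error message is to read the `server:` field straight out of this config. A sketch using only grep/sed, so it works even while kubectl itself is failing (`kubectl config view` shows the same information when kubectl is usable):

```shell
# Print the first `server:` value in a kubeconfig file given as $1.
# Plain grep/sed so it needs no working kubectl.
extract_server() {
  grep -m1 'server:' "$1" | sed 's/.*server:[[:space:]]*//'
}
# extract_server "${KUBECONFIG:-$HOME/.kube/config}"
# For the config above this prints: https://192.168.99.100:8443
```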
toby-griffiths commented 6 years ago

I've also upgraded to v0.25.0 this morning using brew cask upgrade, but I'm still getting the same error.

mhaddon commented 6 years ago

I have this too

kotenyi commented 6 years ago

me too

toby-griffiths commented 6 years ago

Anyone from the project team able to help with this?

tuzla0autopilot4 commented 6 years ago

+1

WarnerHooh commented 6 years ago

+1

testphys commented 6 years ago

+1

nehagup commented 6 years ago

I hit the same problem and worked through it.

First, update kubectl. If you installed it with brew:

brew upgrade kubectl

Otherwise, check the kubectl installation docs here: https://kubernetes.io/docs/tasks/tools/install-kubectl

Then make sure you are targeting your minikube:

kubectl config use-context minikube

If that still fails, stop and delete minikube and re-install by downloading the release from the GitHub release page:

https://github.com/kubernetes/minikube/releases
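The steps above can be sketched as a single shell function so they can be re-run as a unit. A sketch only: it assumes a Homebrew-installed kubectl (swap the first line for your package manager otherwise) and the context name minikube, as in the configs in this thread:

```shell
# Recovery sequence from the comment above, wrapped in one function.
reset_minikube_context() {
  brew upgrade kubectl                 # 1. make sure kubectl is current
  kubectl config use-context minikube  # 2. target the minikube cluster
  kubectl config current-context       # 3. should print: minikube
}
# If the connection is still refused afterwards, recreate the VM:
#   minikube stop && minikube delete && minikube start
```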

toby-griffiths commented 6 years ago

For anyone finding this and still having problems: the new Docker for Mac Edge version includes Kubernetes, and works great.

dlorenc commented 6 years ago

To debug this we'll need the output of "minikube logs" after a "minikube delete" and "minikube start".

toby-griffiths commented 6 years ago

I'm no longer using Minikube, as Docker for Mac (Edge) includes Kubernetes, so I'm afraid I can't provide this info any longer.

Happy to close the issue, if you like.

aperrot42 commented 6 years ago

Same problem here: minikube stops being accessible after a while, although minikube status says it is still running.

inliquid commented 6 years ago

Same here. Clean install, Linux, --vm-driver=none

root@xxxxxx:~# kubectl cluster-info
Kubernetes master is running at https://10.xx.xx.xx:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.xx.xx.xx:8443 was refused - did you specify the right host or port?
root@xxxxxx:~# minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 10.xx.xx.xx
root@xxxxxx:~# minikube version
minikube version: v0.28.1
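A mismatch like this one (status says Running, kubectl refused) can be narrowed down by probing the apiserver port directly, bypassing kubectl entirely: "refused" from a raw probe means the problem is the apiserver/VM rather than the client config. A sketch; probe_apiserver is a hypothetical helper, and the IP is whatever appears in your error message:

```shell
# Probe host $1 on port $2 with nc; prints "open" or "refused".
probe_apiserver() {
  if nc -z -w 2 "$1" "$2" 2>/dev/null; then
    echo "open"
  else
    echo "refused"
  fi
}
# probe_apiserver 192.168.99.100 8443
```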
kennytrytek-wf commented 6 years ago

Same problem on Ubuntu 16.04 running in AWS:

sudo minikube --vm-driver=none start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0911 21:59:10.273616    2035 start.go:300] Error starting cluster:  kubeadm init error 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

 output: [init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
    [WARNING Hostname]: hostname "minikube" could not be reached
    [WARNING Hostname]: hostname "minikube" lookup minikube on 10.0.0.2:53: no such host
    [WARNING Port-10250]: Port 10250 is in use
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [WARNING FileExisting-ebtables]: ebtables not found in system path
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
Flag --admission-control has been deprecated, Use --enable-admission-plugins or --disable-admission-plugins instead. Will be removed in a future version.
    [WARNING DirAvailable--data-minikube]: /data/minikube is not empty
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/var/lib/localkube/certs/"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 4.568168 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
error uploading configuration: unable to create configmap: Post https://localhost:8443/api/v1/namespaces/kube-system/configmaps: dial tcp 127.0.0.1:8443: getsockopt: connection refused
: running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: exit status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
    minikube config set WantReportErrorPrompt false
================================================================================

Occurs unpredictably, maybe once every three attempts.
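When kubeadm init dies at the configmap upload with connection refused on 8443, the apiserver container itself usually never came up (or crashed early). A small sketch for checking that from `docker ps` output; the `k8s_kube-apiserver` naming pattern is an assumption based on how kubeadm names its static-pod containers:

```shell
# Succeeds when `docker ps`-style output on stdin contains an apiserver
# container line.
apiserver_running() {
  grep -q 'kube-apiserver'
}
# Usage (on the node):
#   sudo docker ps | apiserver_running \
#     && echo "apiserver container up" \
#     || echo "apiserver container missing - check 'sudo docker ps -a' and its logs"
```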

tstromberg commented 6 years ago

One interesting message to me is:

"Failed to start node healthz on 0:"

This bug is probably obsolete now, but what does "minikube ssh" do in this environment?

anuj53360 commented 6 years ago

I am getting this error when I try to start minikube on macOS.

EXIMR-IM-806:~ anuj.kp$ minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
EXIMR-IM-806:~ anuj.kp$ kubectl get nodes
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

thomascraig commented 6 years ago

I encountered the same issue on macOS High Sierra (10.13.6).

The working solution for me was:

1. minikube stop
2. kubectl config use-context minikube
3. minikube start

Those three steps resolved the error: The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

tstromberg commented 6 years ago

Closing open localkube issues, as localkube was long deprecated and has been removed from the last two minikube releases. I hope you were able to find another solution that worked for you - if not, please open a new issue.

sahinci commented 5 years ago

Thanks @thomascraig, that solved my problem.

tahsin352 commented 5 years ago

minikube start

@thomascraig solved my problem.

aembar commented 5 years ago

Try running minikube stop followed by minikube start.

udaybambal commented 8 months ago

I am facing the same issue, but with kubeadm. How do I solve it?