kubernetes / minikube

Run Kubernetes locally
https://minikube.sigs.k8s.io/
Apache License 2.0

Failure when running minikube (hyperkit) with cleanup #4888

Closed: brunowego closed this issue 5 years ago

brunowego commented 5 years ago

The exact command to reproduce the issue:

minikube tunnel -c

The full output of the command that failed:

E0727 14:40:19.597685   45180 tunnel.go:50] error cleaning up: conflicting rule in routing table: 10.96/12           192.168.64.16      UGSc            1        0 bridge1
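The cleanup error reports a conflicting route for the cluster's service CIDR left in the macOS routing table. A minimal sketch of inspecting that route by hand, using the exact line from the error above; the `route delete` and `minikube tunnel --cleanup` steps at the end are a suggested manual workaround, not a fix confirmed in this issue:

```shell
# The conflicting route exactly as `minikube tunnel` reports it
# (destination, gateway, flags, refs, use, interface):
route_line="10.96/12           192.168.64.16      UGSc            1        0 bridge1"

# Extract the destination CIDR (first column) so it can be removed:
dest=$(echo "$route_line" | awk '{print $1}')
echo "$dest"   # 10.96/12

# One would then delete the stale route on macOS and retry the cleanup:
#   sudo route -n delete "$dest"
#   minikube tunnel --cleanup
```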

The output of the minikube logs command:

==> coredns <==
.:53
2019-07-27T17:22:50.808Z [INFO] CoreDNS-1.3.1
2019-07-27T17:22:50.808Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-07-27T17:22:50.808Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
2019-07-27T17:22:56.811Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:33200->192.168.64.1:53: i/o timeout
2019-07-27T17:22:59.813Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:60856->192.168.64.1:53: i/o timeout
2019-07-27T17:23:00.814Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:46660->192.168.64.1:53: i/o timeout
2019-07-27T17:23:01.815Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:49371->192.168.64.1:53: i/o timeout
2019-07-27T17:23:04.817Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:57607->192.168.64.1:53: i/o timeout
2019-07-27T17:23:07.817Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:34173->192.168.64.1:53: i/o timeout
2019-07-27T17:23:10.819Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:51606->192.168.64.1:53: i/o timeout
2019-07-27T17:23:13.820Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:42507->192.168.64.1:53: i/o timeout
2019-07-27T17:23:16.824Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:59444->192.168.64.1:53: i/o timeout
2019-07-27T17:23:19.827Z [ERROR] plugin/errors: 2 4048621488476750276.7843096528797689661. HINFO: read udp 172.17.0.2:36359->192.168.64.1:53: i/o timeout
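Every CoreDNS failure above times out against the same upstream, 192.168.64.1:53 (the host-side resolver on the hyperkit bridge). A small sketch that pulls that upstream address out of one of the log lines; the `minikube ssh` command in the comment is a suggested manual check, not something the issue confirms was run:

```shell
# One of the repeated CoreDNS errors from the log above (shortened):
log='[ERROR] plugin/errors: 2 HINFO: read udp 172.17.0.2:33200->192.168.64.1:53: i/o timeout'

# The upstream resolver is the only address on port 53 in the line:
upstream=$(echo "$log" | grep -oE '[0-9.]+:53' | tail -n 1)
echo "$upstream"   # 192.168.64.1:53

# To test that resolver from inside the VM, one could run:
#   minikube ssh -- nslookup gcr.io 192.168.64.1
```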

==> dmesg <==
[Jul27 17:20] ERROR: earlyprintk= earlyser already used
[  +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20170831/tbprint-211)
[  +0.000000]  #2
[  +0.067060]  #3
[ +17.889199] ACPI Error: Could not enable RealTimeClock event (20170831/evxfevnt-218)
[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20170831/evxface-654)
[  +0.013756] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.339805] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
[  +0.004895] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000003] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.578054] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +1.930229] vboxguest: loading out-of-tree module taints kernel.
[  +0.003731] vboxguest: PCI device not found, probably running on physical hardware.
[  +1.035510] systemd-fstab-generator[1859]: Ignoring "noauto" for root device
[Jul27 17:21] systemd-fstab-generator[2608]: Ignoring "noauto" for root device
[Jul27 17:22] systemd-fstab-generator[2922]: Ignoring "noauto" for root device
[ +18.654623] kauditd_printk_skb: 68 callbacks suppressed
[ +11.370640] tee (3648): /proc/3369/oom_adj is deprecated, please use /proc/3369/oom_score_adj instead.
[  +3.207572] NFSD: Unable to end grace period: -110
[  +3.821706] kauditd_printk_skb: 20 callbacks suppressed
[  +5.911157] kauditd_printk_skb: 47 callbacks suppressed
[Jul27 17:26] kauditd_printk_skb: 5 callbacks suppressed
[Jul27 17:36] kauditd_printk_skb: 14 callbacks suppressed

==> kernel <==
 17:43:09 up 22 min,  0 users,  load average: 0.14, 0.43, 0.35
Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:35:50+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:36:48+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:36:50+00:00 ==
error: no objects passed to apply
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:37:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:37:50+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:38:48+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:38:50+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:39:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:39:49+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:40:49+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:40:50+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:41:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:41:50+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-07-27T17:42:48+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-07-27T17:42:50+00:00 ==

==> kube-apiserver <==
I0727 17:22:35.642941       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0727 17:22:35.643669       1 client.go:354] parsed scheme: ""
I0727 17:22:35.643704       1 client.go:354] scheme "" not registered, fallback to default scheme
I0727 17:22:35.643728       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0727 17:22:35.643863       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0727 17:22:35.652561       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0727 17:22:37.507529       1 secure_serving.go:116] Serving securely on [::]:8443
I0727 17:22:37.507617       1 available_controller.go:374] Starting AvailableConditionController
I0727 17:22:37.507634       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0727 17:22:37.507730       1 crd_finalizer.go:255] Starting CRDFinalizer
I0727 17:22:37.508078       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0727 17:22:37.508275       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0727 17:22:37.508356       1 naming_controller.go:288] Starting NamingConditionController
I0727 17:22:37.508449       1 establishing_controller.go:73] Starting EstablishingController
I0727 17:22:37.508528       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0727 17:22:37.508276       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0727 17:22:37.508450       1 controller.go:81] Starting OpenAPI AggregationController
I0727 17:22:37.508623       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0727 17:22:37.509108       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0727 17:22:37.508302       1 autoregister_controller.go:140] Starting autoregister controller
I0727 17:22:37.509257       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0727 17:22:37.508329       1 controller.go:83] Starting OpenAPI controller
E0727 17:22:37.541319       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.16, ResourceVersion: 0, AdditionalErrorMsg:
I0727 17:22:37.608367       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0727 17:22:37.608640       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0727 17:22:37.609434       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0727 17:22:37.609468       1 cache.go:39] Caches are synced for autoregister controller
I0727 17:22:37.802012       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0727 17:22:38.507798       1 controller.go:107] OpenAPI AggregationController: Processing item
I0727 17:22:38.508000       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0727 17:22:38.508172       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0727 17:22:38.519189       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0727 17:22:38.534305       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0727 17:22:38.534412       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0727 17:22:40.290460       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0727 17:22:40.472229       1 controller.go:606] quota admission added evaluator for: endpoints
I0727 17:22:40.570261       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0727 17:22:40.906599       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.64.16]
I0727 17:22:41.897641       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0727 17:22:42.163958       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0727 17:22:42.516221       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0727 17:22:42.946153       1 controller.go:606] quota admission added evaluator for: namespaces
I0727 17:22:48.801327       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0727 17:22:49.198142       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0727 17:25:48.320182       1 controller.go:606] quota admission added evaluator for: deployments.extensions
E0727 17:36:49.614333       1 upgradeaware.go:343] Error proxying data from client to backend: read tcp 192.168.64.16:8443->192.168.64.1:65162: read: connection reset by peer
I0727 17:38:39.019696       1 trace.go:81] Trace[125562297]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-07-27 17:38:38.106734936 +0000 UTC m=+965.505486203) (total time: 912.882273ms):
Trace[125562297]: [912.852921ms] [912.758365ms] About to write a response
I0727 17:38:39.023201       1 trace.go:81] Trace[1814430934]: "Get /api/v1/namespaces/kube-system" (started: 2019-07-27 17:38:37.919363502 +0000 UTC m=+965.318114766) (total time: 1.10380744s):
Trace[1814430934]: [1.103763629s] [1.103752046s] About to write a response

==> kube-proxy <==
W0727 17:22:49.938988       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0727 17:22:49.948678       1 server_others.go:143] Using iptables Proxier.
W0727 17:22:49.948907       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0727 17:22:49.949238       1 server.go:534] Version: v1.15.0
I0727 17:22:49.962780       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0727 17:22:49.962895       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0727 17:22:49.963070       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0727 17:22:49.963204       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0727 17:22:49.963552       1 config.go:187] Starting service config controller
I0727 17:22:49.963594       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0727 17:22:49.963634       1 config.go:96] Starting endpoints config controller
I0727 17:22:49.963643       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0727 17:22:50.064336       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0727 17:22:50.064497       1 controller_utils.go:1036] Caches are synced for service config controller

==> kube-scheduler <==
I0727 17:22:33.610739       1 serving.go:319] Generated self-signed cert in-memory
W0727 17:22:34.155862       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0727 17:22:34.155909       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0727 17:22:34.155932       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0727 17:22:34.158639       1 server.go:142] Version: v1.15.0
I0727 17:22:34.158704       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0727 17:22:34.159621       1 authorization.go:47] Authorization is disabled
W0727 17:22:34.159655       1 authentication.go:55] Authentication is disabled
I0727 17:22:34.159664       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0727 17:22:34.161609       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0727 17:22:37.536693       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0727 17:22:37.547464       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0727 17:22:37.550385       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0727 17:22:37.557347       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0727 17:22:37.560822       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0727 17:22:37.560964       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0727 17:22:37.561137       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0727 17:22:37.564420       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0727 17:22:37.578562       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0727 17:22:37.578845       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0727 17:22:38.537984       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0727 17:22:38.548981       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0727 17:22:38.552929       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0727 17:22:38.559472       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0727 17:22:38.562199       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0727 17:22:38.563308       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0727 17:22:38.565735       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0727 17:22:38.567205       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0727 17:22:38.580084       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0727 17:22:38.581552       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0727 17:22:40.466998       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0727 17:22:40.475349       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2019-07-27 17:20:42 UTC, end at Sat 2019-07-27 17:43:09 UTC. --
Jul 27 17:22:49 minikube kubelet[2945]: I0727 17:22:49.308282    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-ts8qx" (UniqueName: "kubernetes.io/secret/f6ae559f-0bf9-45f7-a11c-35f4923d22eb-coredns-token-ts8qx") pod "coredns-5c98db65d4-djxcd" (UID: "f6ae559f-0bf9-45f7-a11c-35f4923d22eb")
Jul 27 17:22:49 minikube kubelet[2945]: I0727 17:22:49.810401    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/f4e49d61-db67-4f49-ad66-7280a802b3c4-tmp") pod "storage-provisioner" (UID: "f4e49d61-db67-4f49-ad66-7280a802b3c4")
Jul 27 17:22:49 minikube kubelet[2945]: I0727 17:22:49.810511    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-x2xm2" (UniqueName: "kubernetes.io/secret/f4e49d61-db67-4f49-ad66-7280a802b3c4-storage-provisioner-token-x2xm2") pod "storage-provisioner" (UID: "f4e49d61-db67-4f49-ad66-7280a802b3c4")
Jul 27 17:22:50 minikube kubelet[2945]: W0727 17:22:50.359171    2945 pod_container_deletor.go:75] Container "653bcb59c7d6e0512c68dd98df372f466337b55cc4f048eb44888a20ba1d2ca9" not found in pod's containers
Jul 27 17:22:50 minikube kubelet[2945]: W0727 17:22:50.442473    2945 pod_container_deletor.go:75] Container "8c235296cccb8b839e014d8550c8112ab15fe2017cef908a09ed04c8b288c543" not found in pod's containers
Jul 27 17:25:48 minikube kubelet[2945]: I0727 17:25:48.388912    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tiller-token-xcmtr" (UniqueName: "kubernetes.io/secret/ef093916-1f66-4e98-95fe-bc954320e74e-tiller-token-xcmtr") pod "tiller-deploy-767d9b9584-ckgsv" (UID: "ef093916-1f66-4e98-95fe-bc954320e74e")
Jul 27 17:25:49 minikube kubelet[2945]: W0727 17:25:49.193636    2945 pod_container_deletor.go:75] Container "7f4aaf027658a730701df51994ca839bd95bdba86bc48740266debde02212190" not found in pod's containers
Jul 27 17:26:29 minikube kubelet[2945]: E0727 17:26:29.204713    2945 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.14.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:57069->192.168.64.1:53: i/o timeout
Jul 27 17:26:29 minikube kubelet[2945]: E0727 17:26:29.204834    2945 kuberuntime_image.go:51] Pull image "gcr.io/kubernetes-helm/tiller:v2.14.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:57069->192.168.64.1:53: i/o timeout
Jul 27 17:26:29 minikube kubelet[2945]: E0727 17:26:29.204900    2945 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:57069->192.168.64.1:53: i/o timeout
Jul 27 17:26:29 minikube kubelet[2945]: E0727 17:26:29.204957    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:57069->192.168.64.1:53: i/o timeout"
Jul 27 17:26:29 minikube kubelet[2945]: E0727 17:26:29.584775    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:27:24 minikube kubelet[2945]: E0727 17:27:24.219756    2945 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.14.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:43295->192.168.64.1:53: i/o timeout
Jul 27 17:27:24 minikube kubelet[2945]: E0727 17:27:24.219803    2945 kuberuntime_image.go:51] Pull image "gcr.io/kubernetes-helm/tiller:v2.14.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:43295->192.168.64.1:53: i/o timeout
Jul 27 17:27:24 minikube kubelet[2945]: E0727 17:27:24.219871    2945 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:43295->192.168.64.1:53: i/o timeout
Jul 27 17:27:24 minikube kubelet[2945]: E0727 17:27:24.219909    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:43295->192.168.64.1:53: i/o timeout"
Jul 27 17:27:40 minikube kubelet[2945]: E0727 17:27:40.212321    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:28:35 minikube kubelet[2945]: E0727 17:28:35.220291    2945 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.14.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:44603->192.168.64.1:53: i/o timeout
Jul 27 17:28:35 minikube kubelet[2945]: E0727 17:28:35.220499    2945 kuberuntime_image.go:51] Pull image "gcr.io/kubernetes-helm/tiller:v2.14.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:44603->192.168.64.1:53: i/o timeout
Jul 27 17:28:35 minikube kubelet[2945]: E0727 17:28:35.220554    2945 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:44603->192.168.64.1:53: i/o timeout
Jul 27 17:28:35 minikube kubelet[2945]: E0727 17:28:35.220645    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:44603->192.168.64.1:53: i/o timeout"
Jul 27 17:28:47 minikube kubelet[2945]: E0727 17:28:47.211877    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:28:59 minikube kubelet[2945]: E0727 17:28:59.211509    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:29:10 minikube kubelet[2945]: E0727 17:29:10.211288    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:30:05 minikube kubelet[2945]: E0727 17:30:05.220010    2945 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.14.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:51422->192.168.64.1:53: i/o timeout
Jul 27 17:30:05 minikube kubelet[2945]: E0727 17:30:05.220122    2945 kuberuntime_image.go:51] Pull image "gcr.io/kubernetes-helm/tiller:v2.14.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:51422->192.168.64.1:53: i/o timeout
Jul 27 17:30:05 minikube kubelet[2945]: E0727 17:30:05.220185    2945 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:51422->192.168.64.1:53: i/o timeout
Jul 27 17:30:05 minikube kubelet[2945]: E0727 17:30:05.220216    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:51422->192.168.64.1:53: i/o timeout"
Jul 27 17:30:20 minikube kubelet[2945]: E0727 17:30:20.211970    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:30:31 minikube kubelet[2945]: E0727 17:30:31.211857    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:30:46 minikube kubelet[2945]: E0727 17:30:46.211643    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:30:58 minikube kubelet[2945]: E0727 17:30:58.211519    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:31:13 minikube kubelet[2945]: E0727 17:31:13.210659    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:31:24 minikube kubelet[2945]: E0727 17:31:24.211794    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:32:15 minikube kubelet[2945]: E0727 17:32:15.220147    2945 remote_image.go:113] PullImage "gcr.io/kubernetes-helm/tiller:v2.14.2" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:36808->192.168.64.1:53: i/o timeout
Jul 27 17:32:15 minikube kubelet[2945]: E0727 17:32:15.220383    2945 kuberuntime_image.go:51] Pull image "gcr.io/kubernetes-helm/tiller:v2.14.2" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:36808->192.168.64.1:53: i/o timeout
Jul 27 17:32:15 minikube kubelet[2945]: E0727 17:32:15.220605    2945 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:36808->192.168.64.1:53: i/o timeout
Jul 27 17:32:15 minikube kubelet[2945]: E0727 17:32:15.220637    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.16:36808->192.168.64.1:53: i/o timeout"
Jul 27 17:32:27 minikube kubelet[2945]: E0727 17:32:27.211513    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:32:39 minikube kubelet[2945]: E0727 17:32:39.212428    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:32:52 minikube kubelet[2945]: E0727 17:32:52.210313    2945 pod_workers.go:190] Error syncing pod ef093916-1f66-4e98-95fe-bc954320e74e ("tiller-deploy-767d9b9584-ckgsv_kube-system(ef093916-1f66-4e98-95fe-bc954320e74e)"), skipping: failed to "StartContainer" for "tiller" with ImagePullBackOff: "Back-off pulling image \"gcr.io/kubernetes-helm/tiller:v2.14.2\""
Jul 27 17:33:02 minikube kubelet[2945]: I0727 17:33:02.020131    2945 reconciler.go:177] operationExecutor.UnmountVolume started for volume "tiller-token-xcmtr" (UniqueName: "kubernetes.io/secret/ef093916-1f66-4e98-95fe-bc954320e74e-tiller-token-xcmtr") pod "ef093916-1f66-4e98-95fe-bc954320e74e" (UID: "ef093916-1f66-4e98-95fe-bc954320e74e")
Jul 27 17:33:02 minikube kubelet[2945]: I0727 17:33:02.020905    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tiller-token-xcmtr" (UniqueName: "kubernetes.io/secret/a94e806f-ac90-4a7d-ba63-7c31df1bd90e-tiller-token-xcmtr") pod "tiller-deploy-767d9b9584-vz7b2" (UID: "a94e806f-ac90-4a7d-ba63-7c31df1bd90e")
Jul 27 17:33:02 minikube kubelet[2945]: I0727 17:33:02.031681    2945 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef093916-1f66-4e98-95fe-bc954320e74e-tiller-token-xcmtr" (OuterVolumeSpecName: "tiller-token-xcmtr") pod "ef093916-1f66-4e98-95fe-bc954320e74e" (UID: "ef093916-1f66-4e98-95fe-bc954320e74e"). InnerVolumeSpecName "tiller-token-xcmtr". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 27 17:33:02 minikube kubelet[2945]: I0727 17:33:02.122134    2945 reconciler.go:297] Volume detached for volume "tiller-token-xcmtr" (UniqueName: "kubernetes.io/secret/ef093916-1f66-4e98-95fe-bc954320e74e-tiller-token-xcmtr") on node "minikube" DevicePath ""
Jul 27 17:33:02 minikube kubelet[2945]: W0727 17:33:02.619453    2945 pod_container_deletor.go:75] Container "87829b0dd58a24670bd1b4fcb1a760ea4f083e9fb767d43270fa576c3c89bd47" not found in pod's containers
Jul 27 17:36:49 minikube kubelet[2945]: E0727 17:36:49.420761    2945 reflector.go:125] object-"nginx-ingress"/"nginx-ingress-token-xrkln": Failed to list *v1.Secret: secrets "nginx-ingress-token-xrkln" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "nginx-ingress": no relationship found between node "minikube" and this object
Jul 27 17:36:49 minikube kubelet[2945]: I0727 17:36:49.433715    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "nginx-ingress-token-xrkln" (UniqueName: "kubernetes.io/secret/2d3735b0-64cc-46d2-a5c8-c4a6d3f75a2e-nginx-ingress-token-xrkln") pod "nginx-ingress-controller-6dd85d89cb-c5jxm" (UID: "2d3735b0-64cc-46d2-a5c8-c4a6d3f75a2e")
Jul 27 17:36:49 minikube kubelet[2945]: I0727 17:36:49.635148    2945 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-lb8gb" (UniqueName: "kubernetes.io/secret/ac9df9e0-62cb-4e00-9b48-1fa6d202ff00-default-token-lb8gb") pod "nginx-ingress-default-backend-59944969d4-vpph8" (UID: "ac9df9e0-62cb-4e00-9b48-1fa6d202ff00")
Jul 27 17:36:50 minikube kubelet[2945]: W0727 17:36:50.324442    2945 pod_container_deletor.go:75] Container "2dccebeb7b59ac618b8aa4eb8078f001251b5612ec3e71a9617dd5fa9a3201a9" not found in pod's containers

==> storage-provisioner <==

The operating system version:

macOS 10.14.5
Kubernetes v1.15.1
minikube v1.2.0

brunowego commented 5 years ago

After testing and research, these steps solved my issue:

route -n get 10.96/12
sudo route delete -net 10.96/12
minikube tunnel -c
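
For context on why this works: `minikube tunnel` adds a route for the Kubernetes service CIDR (`10.96.0.0/12` by default) pointing at the VM's IP, and the `error cleaning up: conflicting rule in routing table` message appears when a stale route from an earlier tunnel (here via `bridge1` to `192.168.64.16`) is still present. Deleting the stale route lets the tunnel recreate it cleanly. A minimal sketch of the cleanup as one script, assuming macOS's BSD `route` command and the default service CIDR (adjust `CIDR` if your cluster uses a different service range):

```shell
#!/bin/sh
set -e

# Kubernetes default service CIDR; change this if your cluster differs.
CIDR="10.96/12"

# Inspect the current routing-table entry for the service CIDR, if any.
# `|| true` keeps the script going when no route exists yet.
route -n get "$CIDR" || true

# Remove the conflicting route so `minikube tunnel` can recreate it.
# Requires root, hence sudo.
sudo route delete -net "$CIDR" || true

# Re-run the tunnel; -c cleans up any remaining stale routes first.
minikube tunnel -c
```

Note that the route will become stale again whenever the minikube VM is recreated with a new IP, so this cleanup may need repeating after `minikube delete`/`minikube start`.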