Closed. pnisbettmtc closed this issue 4 years ago.
My apologies for the poor experience here. It sounds like either a VPN or firewall is interfering, or the apiserver is crashing.
Do you mind sharing what version of minikube you are on, along with the output of minikube logs and kubectl describe node when this happens?
That should help us figure out the root cause. Thanks!
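For reference, the commands would be roughly these, run from the same shell you use for kubectl:
# report the minikube version
minikube version
# collect cluster logs
minikube logs
# show node status and events
kubectl describe node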
OK. Thanks.
I've tried it on a Windows 10 desktop in a company network and a Windows 10 laptop that is outside the company network. Neither is behind a VPN.
kubectl describe node currently shows the TLS timeout message from above.
Here is the log output:
C:\Users\pnisbett\_cloudnative_book\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Unable to connect to the server: net/http: TLS handshake timeout
C:\Users\pnisbett\_cloudnative_book\cloudnative-abundantsunshine\cloudnative-statelessness>minikube logs
Update: I tried to deploy the same set of apps on a Mac and the same thing happens without my doing anything. It just crashes on its own after a short, indeterminate amount of time.
I start up minikube and run kubectl get all. I see 3 pods: 1 is MySQL and 2 are very small Java apps that try to connect to MySQL; all are running. Under services there are 4 services: 3 NodePorts and the Kubernetes ClusterIP. There are 3 deployments, all ready. I walk away from the computer for 10 minutes. When I return and run kubectl get all again, I get "tls handshake timeout". minikube status then shows host: Running, kubelet: Running, apiserver: Error, kubeconfig: Configured.
This is three different environments in which this technology is failing the same way, and it is failing without any action on my part. It just runs for a while, then crashes on its own.
On the Apple machine I installed it via brew: Kubernetes v1.16.2 on Docker 18.09.9.
This is unusable at this point. Maybe I'll try it again after the bugs are fixed in a few months or a year, or not at all. Frustrating, because it looked like it had potential.
Thanks! It appears that your VM keeps running out of memory:
[Nov17 06:10] dockerd invoked oom-killer: gfp_mask=0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=-999
also:
[ +0.000161] Out of memory: Kill process 4963 (java) score 1138 or sacrifice child
[ +0.000024] Killed process 4963 (java) total-vm:2008440kB, anon-rss:275312kB, file-rss:0kB, shmem-rss:0kB
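If you want to confirm this yourself, something like the following should work. This is only a sketch; it assumes dmesg and free are available inside the minikube VM via minikube ssh:
# look for OOM-killer activity inside the minikube VM
minikube ssh "dmesg | grep -iE 'oom|out of memory'"
# check how much memory the VM actually has available
minikube ssh "free -m"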
The default VM is only 2GB, which this pod is likely pushing:
24fa5be67c6ac cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 7 minutes ago Running connectionsposts 4 2ca616fcaca0f
You will need a bigger VM to use minikube with this application. Try removing your old VM using minikube delete, and then either persistently tell minikube to use more memory with minikube config set memory 8192, or pass --memory 8192 to minikube start. Please let me know if this helps!
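Put together, the full sequence would look roughly like this:
# remove the old, undersized VM
minikube delete
# either persist the larger memory setting...
minikube config set memory 8192
minikube start
# ...or pass it for a single start
minikube start --memory 8192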
OK. Thanks for looking at it. Two of the machines are laptops and only have 8GB of memory. Even increasing the VM to 4GB on those machines is a push. One of the pods is a MySQL server with a small amount of data on it. The other two are really small Java apps that pull a handful of rows from that DB. This seems like a pretty light workload.
Do you know what the memory footprint is for an empty pod with nothing running on it?
Thanks.
Hi. I tried it on a computer with 32 GB and am getting the same error(s). I did minikube delete, then ran minikube start --vm-driver "virtualbox" --memory 8192. One of the services, cookbook-deployment-posts, worked initially, then after a restart would not start. Another service, cookbook-deployment-connections, timed out after 15 minutes.
This is the output from kubectl describe node. I'll paste the output from minikube logs below that.
C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 20 Nov 2019 12:35:11 -0800
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 20 Nov 2019 13:33:45 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 20 Nov 2019 13:33:45 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 20 Nov 2019 13:33:45 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 20 Nov 2019 13:33:45 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.99.105
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 17784772Ki
hugepages-2Mi: 0
memory: 8163932Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784772Ki
hugepages-2Mi: 0
memory: 8163932Ki
pods: 110
System Info:
Machine ID: 1132e5770f3f4c868d59effa0accbd3f
System UUID: 1ffcb2be-6765-40e2-a476-052b5a2c9a76
Boot ID: 33290324-0476-4197-b7f9-41ea17252987
Kernel Version: 4.19.76
OS Image: Buildroot 2019.02.6
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.9
Kubelet Version: v1.16.2
Kube-Proxy Version: v1.16.2
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default connections-589dff75fc-l9zsc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m
default connectionsposts-7689845fd8-2whvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 24m
default mysql-5988544dd4-vb7xz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57m
default posts-ddd9f5767-d7b26 0 (0%) 0 (0%) 0 (0%) 0 (0%) 36m
kube-system coredns-5644d7b6d9-4q8zd 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 59m
kube-system coredns-5644d7b6d9-gnlhk 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 59m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 58m
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%) 59m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 58m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 26m
kube-system kube-proxy-52fps 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 58m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-pgds7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
kubernetes-dashboard kubernetes-dashboard-57f4cb4545-pd7fs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 755m (37%) 0 (0%)
memory 190Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 59m (x8 over 59m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 59m (x8 over 59m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 59m (x7 over 59m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 59m kube-proxy, minikube Starting kube-proxy.
Normal Starting 26m kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 26m (x8 over 26m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 26m (x7 over 26m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 26m (x8 over 26m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 26m kubelet, minikube Updated Node Allocatable limit across pods
Normal Starting 26m kube-proxy, minikube Starting kube-proxy.
Normal Starting 21m kubelet, minikube Starting kubelet.
Normal NodeAllocatableEnforced 21m kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 21m (x8 over 21m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 21m (x7 over 21m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 21m (x8 over 21m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 20m kube-proxy, minikube Starting kube-proxy.
C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl get all
NAME READY STATUS RESTARTS AGE
pod/connections-589dff75fc-l9zsc 0/1 CrashLoopBackOff 17 38m
pod/connectionsposts-7689845fd8-2whvf 1/1 Running 1 26m
pod/mysql-5988544dd4-vb7xz 1/1 Running 2 58m
pod/posts-ddd9f5767-d7b26 0/1 CrashLoopBackOff 17 38m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/connections-svc NodePort 10.110.175.205 <none> 80:30133/TCP 38m
service/connectionsposts-svc NodePort 10.100.61.194 <none> 80:32017/TCP 26m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 61m
service/mysql-svc NodePort 10.110.134.75 <none> 3306:31014/TCP 58m
service/posts-svc NodePort 10.111.28.164 <none> 80:30470/TCP 38m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/connections 0/1 1 0 38m
deployment.apps/connectionsposts 1/1 1 1 26m
deployment.apps/mysql 1/1 1 1 58m
deployment.apps/posts 0/1 1 0 38m
NAME DESIRED CURRENT READY AGE
replicaset.apps/connections-589dff75fc 1 1 0 38m
replicaset.apps/connectionsposts-7689845fd8 1 1 1 26m
replicaset.apps/mysql-5988544dd4 1 1 1 58m
replicaset.apps/posts-ddd9f5767 1 1 0 38m
C:\Users\pnisbe\cloudnative\cloudnative-abundantsunshine\cloudnative-statelessness>kubectl describe node
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 20 Nov 2019 12:35:11 -0800
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 20 Nov 2019 13:36:46 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 20 Nov 2019 13:36:46 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 20 Nov 2019 13:36:46 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 20 Nov 2019 13:36:46 -0800 Wed, 20 Nov 2019 12:35:06 -0800 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.99.105
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 17784772Ki
hugepages-2Mi: 0
memory: 8163932Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784772Ki
hugepages-2Mi: 0
memory: 8163932Ki
pods: 110
System Info:
Machine ID: 1132e5770f3f4c868d59effa0accbd3f
System UUID: 1ffcb2be-6765-40e2-a476-052b5a2c9a76
Boot ID: 33290324-0476-4197-b7f9-41ea17252987
Kernel Version: 4.19.76
OS Image: Buildroot 2019.02.6
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.9.9
Kubelet Version: v1.16.2
Kube-Proxy Version: v1.16.2
Non-terminated Pods: (15 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
default connections-589dff75fc-l9zsc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 39m
default connectionsposts-7689845fd8-2whvf 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26m
default mysql-5988544dd4-vb7xz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 59m
default posts-ddd9f5767-d7b26 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38m
kube-system coredns-5644d7b6d9-4q8zd 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 61m
kube-system coredns-5644d7b6d9-gnlhk 100m (5%) 0 (0%) 70Mi (0%) 170Mi (2%) 61m
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60m
kube-system kube-addon-manager-minikube 5m (0%) 0 (0%) 50Mi (0%) 0 (0%) 61m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 60m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 28m
kube-system kube-proxy-52fps 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 60m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61m
kubernetes-dashboard dashboard-metrics-scraper-76585494d8-pgds7 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61m
kubernetes-dashboard kubernetes-dashboard-57f4cb4545-pd7fs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 61m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 755m (37%) 0 (0%)
memory 190Mi (2%) 340Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 61m (x8 over 61m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 61m (x8 over 61m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 61m (x7 over 61m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 61m kube-proxy, minikube Starting kube-proxy.
Normal Starting 28m kubelet, minikube Starting kubelet.
Normal NodeHasSufficientMemory 28m (x8 over 28m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 28m (x7 over 28m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 28m (x8 over 28m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 28m kubelet, minikube Updated Node Allocatable limit across pods
Normal Starting 28m kube-proxy, minikube Starting kube-proxy.
Normal Starting 23m kubelet, minikube Starting kubelet.
Normal NodeAllocatableEnforced 23m kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 23m (x8 over 23m) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 23m (x7 over 23m) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 23m (x8 over 23m) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal Starting 23m kube-proxy, minikube Starting kube-proxy.
minikube logs output:
...
...
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* 1c86b7b52d806 cdavisafc/cloudnative-statelessness-connections@sha256:9405807d18ad427c636a26138b78f9195c1920558f391fdd53b12b62b2f27771 56 seconds ago Exited connections 16 90ccceb1da205
* 82bbdcbf3add2 cdavisafc/cloudnative-statelessness-posts@sha256:351fba985427c6722475bd6a930d83d0fcb9965c5d1bf43e2aac898c9e6821cb About a minute ago Exited posts 16 761287e7b4bb2
* c3767dac48559 6802d83967b99 12 minutes ago Running kubernetes-dashboard 3 c51d2e1006324
* c63a55785d2ee 4689081edb103 12 minutes ago Running storage-provisioner 3 2e65db2cd9e90
* 783b14f62c021 709901356c115 13 minutes ago Running dashboard-metrics-scraper 2 886058575db0b
* 4ed40de4e832a 6802d83967b99 13 minutes ago Exited kubernetes-dashboard 2 c51d2e1006324
* c418726dd39d9 cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 13 minutes ago Running connectionsposts 1 57aa4bdca25c4
* 9225fe47e042c 8454cbe08dc9f 13 minutes ago Running kube-proxy 2 b605ec221306e
* 1d7c16c38ca62 bf261d1579144 13 minutes ago Running coredns 2 285bcc596080a
* 9b6df92a23814 bf261d1579144 13 minutes ago Running coredns 2 c96860b8e2e5b
* 294b2df0fb4ce 4689081edb103 13 minutes ago Exited storage-provisioner 2 2e65db2cd9e90
* 4fbd5f082acaf 6bb891430fb6e 13 minutes ago Running mysql 2 53adc8a6ba2fd
* b6405881ffba6 c2c9a0406787c 13 minutes ago Running kube-apiserver 2 458bc70122cbe
* abac6ec681c11 6e4bffa46d70b 13 minutes ago Running kube-controller-manager 1 3ed6fbade5a76
* 2fe023ea0144b ebac1ae204a2c 13 minutes ago Running kube-scheduler 2 a51e0f33567f8
* 4a0fbb8d2d69d bd12a212f9dcb 13 minutes ago Running kube-addon-manager 2 e472c627287ef
* 8bbb81cc0696a b2756210eeabf 13 minutes ago Running etcd 2 4edfdffcca98b
* 7e3707fe9a243 cdavisafc/cloudnative-statelessness-connectionsposts-stateful@sha256:1eb63116c784ffd30a3bb4c77ba3bebdf177abde23971fd6f06314ec78c9ce79 16 minutes ago Exited connectionsposts 0 8588b27fa7084
* 896a98ce2f9d4 709901356c115 18 minutes ago Exited dashboard-metrics-scraper 1 3d9d78abc8ad1
* 924f6dc580888 bf261d1579144 18 minutes ago Exited coredns 1 4e5542272180d
* 041a9a3573935 bf261d1579144 18 minutes ago Exited coredns 1 b9ba1d1adfa03
* ec769c83ee93c 8454cbe08dc9f 18 minutes ago Exited kube-proxy 1 1c2f36d5df094
* 8c58ce625906d 6bb891430fb6e 18 minutes ago Exited mysql 1 12660b0677675
* 7f2bec43f2724 bd12a212f9dcb 18 minutes ago Exited kube-addon-manager 1 38ede4257e3b5
* 5f7852a377871 ebac1ae204a2c 18 minutes ago Exited kube-scheduler 1 b9812f3433dd8
* 2b81d9d77e22f b2756210eeabf 18 minutes ago Exited etcd 1 a4f68377b6477
* 419be3aece395 c2c9a0406787c 18 minutes ago Exited kube-apiserver 1 0b06af7d8c269
* 138ded634d4d7 6e4bffa46d70b 18 minutes ago Exited kube-controller-manager 0 6dbd36efb2e17
*
* ==> coredns ["041a9a357393"] <==
...
* E1120 21:10:24.250816 1 reflector.go:270] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=3312&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 10.96.0.1:443: connect: connection refused
* [INFO] SIGTERM: Shutting down servers then terminating
*
* ==> coredns ["1d7c16c38ca6"] <==
...
* E1120 21:14:18.206011 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> coredns ["924f6dc58088"] <==
..
* [INFO] SIGTERM: Shutting down servers then terminating
*
* ==> coredns ["9b6df92a2381"] <==
...
* E1120 21:14:18.123389 1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> dmesg <==
...
* [ +15.392989] kauditd_printk_skb: 2 callbacks suppressed
* [Nov20 21:26] kauditd_printk_skb: 2 callbacks suppressed
*
* ==> kernel <==
* 21:27:00 up 14 min, 0 users, load average: 1.19, 0.91, 0.81
* Linux minikube 4.19.76 #1 SMP Tue Oct 29 14:56:42 PDT 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.6"
*
* ==> kube-addon-manager ["4a0fbb8d2d69"] <==
...
* ==> kube-addon-manager ["7f2bec43f272"] <==
...
*
* ==> kube-apiserver ["419be3aece39"] <==
...
* I1120 21:10:02.807148 1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
* I1120 21:10:24.245603 1 controller.go:182] Shutting down kubernetes service endpoint reconciler
* I1120 21:10:24.245817 1 controller.go:87] Shutting down OpenAPI AggregationController
* I1120 21:10:24.245870 1 controller.go:122] Shutting down OpenAPI controller
* I1120 21:10:24.245883 1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController
* I1120 21:10:24.245893 1 establishing_controller.go:84] Shutting down EstablishingController
* I1120 21:10:24.245945 1 naming_controller.go:299] Shutting down NamingConditionController
* I1120 21:10:24.245956 1 customresource_discovery_controller.go:219] Shutting down DiscoveryController
* I1120 21:10:24.245981 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
* I1120 21:10:24.245993 1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
* I1120 21:10:24.246001 1 available_controller.go:395] Shutting down AvailableConditionController
* I1120 21:10:24.246012 1 autoregister_controller.go:164] Shutting down autoregister controller
* I1120 21:10:24.246020 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
* I1120 21:10:24.246035 1 crd_finalizer.go:286] Shutting down CRDFinalizer
* I1120 21:10:24.248961 1 secure_serving.go:167] Stopped listening on [::]:8443
* E1120 21:10:24.258184 1 controller.go:185] Get https://[::1]:8443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp [::1]:8443: connect: connection refused
*
* ==> kube-apiserver ["b6405881ffba"] <==
...
* I1120 21:14:00.038950 1 controller.go:606] quota admission added evaluator for: endpoints
*
* ==> kube-controller-manager ["138ded634d4d"] <==
...
https://localhost:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=2968&timeout=7m37s&timeoutSeconds=457&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
* E1120 21:10:24.254646 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=2968&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kube-controller-manager ["abac6ec681c1"] <==
...
* I1120 21:14:10.159860 1 shared_informer.go:204] Caches are synced for resource quota
* I1120 21:14:10.162696 1 shared_informer.go:204] Caches are synced for endpoint
* I1120 21:14:10.175845 1 shared_informer.go:204] Caches are synced for resource quota
* I1120 21:14:10.197452 1 shared_informer.go:204] Caches are synced for garbage collector
* I1120 21:14:10.197475 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
* I1120 21:14:10.245420 1 shared_informer.go:204] Caches are synced for garbage collector
* I1120 21:14:10.247799 1 shared_informer.go:204] Caches are synced for attach detach
*
* ==> kube-proxy ["9225fe47e042"] <==
...
* I1120 21:13:48.951738 1 shared_informer.go:204] Caches are synced for endpoints config
* I1120 21:13:48.951825 1 shared_informer.go:204] Caches are synced for service config
*
* ==> kube-proxy ["ec769c83ee93"] <==
...
* I1120 21:08:38.017169 1 shared_informer.go:204] Caches are synced for service config
* I1120 21:08:38.017216 1 shared_informer.go:204] Caches are synced for endpoints config
*
* ==> kube-scheduler ["2fe023ea0144"] <==
* I1120 21:13:38.580544 1 serving.go:319] Generated self-signed cert in-memory
...
* I1120 21:14:00.043102 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
*
* ==> kube-scheduler ["5f7852a37787"] <==
* I1120 21:08:29.533878 1 serving.go:319] Generated self-signed cert in-memory
...
* E1120 21:10:24.251921 1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=2968&timeout=5m42s&timeoutSeconds=342&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
*
* ==> kubelet <==
* -- Logs begin at Wed 2019-11-20 21:13:03 UTC, end at Wed 2019-11-20 21:27:02 UTC. --
* Nov 20 21:21:10 minikube kubelet[2992]: E1120 21:21:10.721251 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:18 minikube kubelet[2992]: E1120 21:21:18.725538 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:23 minikube kubelet[2992]: E1120 21:21:23.722863 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:30 minikube kubelet[2992]: E1120 21:21:30.723743 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:34 minikube kubelet[2992]: E1120 21:21:34.721210 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:41 minikube kubelet[2992]: E1120 21:21:41.720600 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:21:49 minikube kubelet[2992]: E1120 21:21:49.722852 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:21:53 minikube kubelet[2992]: E1120 21:21:53.722862 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:04 minikube kubelet[2992]: E1120 21:22:04.722878 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:04 minikube kubelet[2992]: E1120 21:22:04.723227 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:15 minikube kubelet[2992]: E1120 21:22:15.722691 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:15 minikube kubelet[2992]: E1120 21:22:15.724430 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:26 minikube kubelet[2992]: E1120 21:22:26.721238 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:27 minikube kubelet[2992]: E1120 21:22:27.728381 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:40 minikube kubelet[2992]: E1120 21:22:40.722019 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:41 minikube kubelet[2992]: E1120 21:22:41.720589 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:22:54 minikube kubelet[2992]: E1120 21:22:54.721649 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:22:56 minikube kubelet[2992]: E1120 21:22:56.723021 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:07 minikube kubelet[2992]: E1120 21:23:07.725189 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:10 minikube kubelet[2992]: E1120 21:23:10.721639 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:19 minikube kubelet[2992]: E1120 21:23:19.722636 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:25 minikube kubelet[2992]: E1120 21:23:25.723072 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:31 minikube kubelet[2992]: E1120 21:23:31.721563 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:40 minikube kubelet[2992]: E1120 21:23:40.721843 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:46 minikube kubelet[2992]: E1120 21:23:46.722027 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:23:52 minikube kubelet[2992]: E1120 21:23:52.724589 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:23:58 minikube kubelet[2992]: E1120 21:23:58.721728 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:03 minikube kubelet[2992]: E1120 21:24:03.722293 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:10 minikube kubelet[2992]: E1120 21:24:10.721321 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:18 minikube kubelet[2992]: E1120 21:24:18.721816 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:23 minikube kubelet[2992]: E1120 21:24:23.724293 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:29 minikube kubelet[2992]: E1120 21:24:29.721567 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:34 minikube kubelet[2992]: E1120 21:24:34.722722 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:43 minikube kubelet[2992]: E1120 21:24:43.722199 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:24:49 minikube kubelet[2992]: E1120 21:24:49.723104 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:24:55 minikube kubelet[2992]: E1120 21:24:55.723681 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:00 minikube kubelet[2992]: E1120 21:25:00.723777 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:07 minikube kubelet[2992]: E1120 21:25:07.723381 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:11 minikube kubelet[2992]: E1120 21:25:11.723051 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:20 minikube kubelet[2992]: E1120 21:25:20.722108 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:24 minikube kubelet[2992]: E1120 21:25:24.724102 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:35 minikube kubelet[2992]: E1120 21:25:35.723806 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:25:36 minikube kubelet[2992]: E1120 21:25:36.723952 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:48 minikube kubelet[2992]: E1120 21:25:48.722551 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:25:51 minikube kubelet[2992]: W1120 21:25:51.882627 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:25:53 minikube kubelet[2992]: W1120 21:25:53.009014 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:01 minikube kubelet[2992]: W1120 21:26:01.173437 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:01 minikube kubelet[2992]: E1120 21:26:01.201865 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:02 minikube kubelet[2992]: W1120 21:26:02.220494 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/posts-ddd9f5767-d7b26 through plugin: invalid network status for
* Nov 20 21:26:04 minikube kubelet[2992]: W1120 21:26:04.270541 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:11 minikube kubelet[2992]: E1120 21:26:11.721745 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:17 minikube kubelet[2992]: W1120 21:26:17.583909 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:17 minikube kubelet[2992]: E1120 21:26:17.591916 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:18 minikube kubelet[2992]: W1120 21:26:18.633288 2992 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for default/connections-589dff75fc-l9zsc through plugin: invalid network status for
* Nov 20 21:26:22 minikube kubelet[2992]: E1120 21:26:22.726827 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:32 minikube kubelet[2992]: E1120 21:26:32.723039 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:37 minikube kubelet[2992]: E1120 21:26:37.723683 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:46 minikube kubelet[2992]: E1120 21:26:46.721733 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
* Nov 20 21:26:51 minikube kubelet[2992]: E1120 21:26:51.726672 2992 pod_workers.go:191] Error syncing pod 1e886b83-484e-4651-a433-877310cc63c1 ("posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"), skipping: failed to "StartContainer" for "posts" with CrashLoopBackOff: "back-off 5m0s restarting failed container=posts pod=posts-ddd9f5767-d7b26_default(1e886b83-484e-4651-a433-877310cc63c1)"
* Nov 20 21:26:58 minikube kubelet[2992]: E1120 21:26:58.722865 2992 pod_workers.go:191] Error syncing pod 21e77031-afa2-4910-9e59-92dd4843ccba ("connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"), skipping: failed to "StartContainer" for "connections" with CrashLoopBackOff: "back-off 5m0s restarting failed container=connections pod=connections-589dff75fc-l9zsc_default(21e77031-afa2-4910-9e59-92dd4843ccba)"
*
* ==> kubernetes-dashboard ["4ed40de4e832"] <==
* 2019/11/20 21:13:48 Using namespace: kubernetes-dashboard
* 2019/11/20 21:13:48 Using in-cluster config to connect to apiserver
* 2019/11/20 21:13:48 Using secret token for csrf signing
* 2019/11/20 21:13:48 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2019/11/20 21:13:48 Starting overwatch
* panic: Get https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf: dial tcp 10.96.0.1:443: i/o timeout
*
* goroutine 1 [running]:
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004d2000)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:40 +0x3b4
* github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/csrf/manager.go:65
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc000343700)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:479 +0xc7
* github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc000343700)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:447 +0x47
* github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
* /home/travis/build/kubernetes/dashboard/src/app/backend/client/manager.go:528
* main.main()
* /home/travis/build/kubernetes/dashboard/src/app/backend/dashboard.go:105 +0x212
*
* ==> kubernetes-dashboard ["c3767dac4855"] <==
* 2019/11/20 21:14:37 Starting overwatch
* 2019/11/20 21:14:37 Using namespace: kubernetes-dashboard
* 2019/11/20 21:14:37 Using in-cluster config to connect to apiserver
* 2019/11/20 21:14:37 Using secret token for csrf signing
* 2019/11/20 21:14:37 Initializing csrf token from kubernetes-dashboard-csrf secret
* 2019/11/20 21:14:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
* 2019/11/20 21:14:37 Successful initial request to the apiserver, version: v1.16.2
* 2019/11/20 21:14:37 Generating JWE encryption key
* 2019/11/20 21:14:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
* 2019/11/20 21:14:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
* 2019/11/20 21:14:38 Initializing JWE encryption key from synchronized object
* 2019/11/20 21:14:38 Creating in-cluster Sidecar client
* 2019/11/20 21:14:38 Successful request to sidecar
* 2019/11/20 21:14:38 Serving insecurely on HTTP port: 9090
*
* ==> storage-provisioner ["294b2df0fb4c"] <==
* F1120 21:14:18.463717 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout
*
* ==> storage-provisioner ["c63a55785d2e"] <==
Hi. I tried it on a computer with 32 GB and am getting the same error(s).
Do you mind sharing the actual error you received?
Did minikube delete. Ran minikube start --vm-driver "virtualbox" --memory 8192. One of the services, cookbook-deployment-posts, worked initially, then after a restart would not start.
After a restart of what: the host running minikube, or do you mean minikube start?
I went through your logs, and it looks like minikube start was run multiple times, which is fine. I also see that your posts container is in CrashLoopBackOff. Do you mind running:
kubectl get po -A
kubectl describe pod <name of your posts pod>
As far as I can tell, the apiserver should be serving data now.
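If describe shows repeated restarts or an OOMKilled reason, the previous container's logs usually say why. A sketch, using the posts pod name from your kubectl get all output above (yours may differ):
# dump the output of the last terminated container in the crashing pod
kubectl logs posts-ddd9f5767-d7b26 --previous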
Do you know what the memory footprint is for an empty pod with nothing running on it?
No, but I would wager it isn't more than 1MB. The main issue that I saw previously was that the minikube VM only had 2GB allocated to it, but the Java process you were running was asking for 2GB, which doesn't leave any room for Kubernetes.
Hope this helps. Please let me know what you find out!
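If the Java heap is the culprit, one option is to cap it and give the pods explicit memory limits. This is only a sketch; it assumes your apps honor JAVA_TOOL_OPTIONS and uses the deployment names from your output:
# cap the JVM heap so each app stays well below the VM's memory
kubectl set env deployment/posts JAVA_TOOL_OPTIONS="-Xmx256m"
kubectl set env deployment/connections JAVA_TOOL_OPTIONS="-Xmx256m"
# add explicit requests/limits so Kubernetes can schedule and reclaim memory predictably
kubectl set resources deployment/posts --requests=memory=256Mi --limits=memory=512Mi
kubectl set resources deployment/connections --requests=memory=256Mi --limits=memory=512Mi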
@pnisbettmtc Do you still need help here?
Thanks, but I've pretty much written off minikube as unusable for the time being.
I spent way too much time just to get three small pods to run, only to have one or two pods run but a third not start because the API server crashed. On different runs, different pods wouldn't start.
Minikube did not work as it was intended to.
@pnisbettmtc I apologize for the bad experience you faced on Windows :( Our Windows integration tests have been broken, and we need to fix them to keep a better eye on the Windows user experience.
In the meantime, do you mind trying our new docker vm-driver?
If you have Docker on your Windows machine, you could try:
minikube start --vm-driver=docker
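If memory turns out to be the limiting factor again, the memory flag from earlier can be combined with the docker driver; a sketch:
# docker driver with a larger memory allocation
minikube start --vm-driver=docker --memory 8192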
I will if I get the time. Thanks. Paul
Unable to connect to the server: net/http: TLS handshake timeout
I get "Unable to connect to the server: net/http: TLS handshake timeout" after using minikube for a while . It works for a while then stops working with with above message.
This happens on windows 10 using both hyper-v and virtualbox as the vm host. After working with this technology for a few weeks, I have come to the conclusion it's flaky as hell . In terms of using kubernetes ,my experience with minikube really discourages me from recommending Kubernetes to my company as viable solution. The number of times minikube crashes or responds with a stupid message recommending I delete the cluster that I have spent hours creating is a joke.