DickChesterwood / k8s-fleetman


Networking crash between cluster pods after deploying position-tracker #29

Closed. gmacc00 closed this issue 5 years ago.

gmacc00 commented 5 years ago

When applying services.yaml and workload.yaml, cloned from _course_files/Chapter 11 Microservices/, I get the following errors:

$ kubectl get all
Unable to connect to the server: unexpected EOF
Unable to connect to the server: read tcp 192.168.99.1:7842->192.168.99.100:8443: wsarecv: An existing connection was forcibly closed by the remote host.
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
Unable to connect to the server: net/http: TLS handshake timeout
Unable to connect to the server: net/http: TLS handshake timeout
Unable to connect to the server: net/http: TLS handshake timeout
Unable to connect to the server: net/http: TLS handshake timeout

...and submitting the command once more after a while:

$ kubectl get all
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: No connection could be made because the target machine actively refused it.
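The apply step that triggers this was presumably just the standard one from the course, using the file names above (something like):

kubectl apply -f services.yaml
kubectl apply -f workload.yaml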

DickChesterwood commented 5 years ago

Most likely it's due to a lack of resources on your minikube instance. If you overload minikube, it locks up and you lose access to kubectl. It's annoying.

Fleetman needs 4GB for comfort...

minikube delete
minikube start --memory 4096

This assumes you've enough host RAM.
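Once minikube is back up, you can sanity-check that the node actually got the extra memory (a quick sketch; "minikube" is the default node name) by looking at the Capacity/Allocatable section of the node description, and optionally make the setting stick for future starts:

kubectl describe node minikube        # check the Capacity / Allocatable memory figures
minikube config set memory 4096       # optional: persist the memory setting for future "minikube start"s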

(Underlying this is that, stupidly, I forgot to set correct heap sizes on the microservices, so they consume FAR more RAM than they actually need. Even if you take the time to shrink the heaps, once Mongo comes into the cluster things go astray again. It's easier to throw more RAM at it since it's only a demo.)
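For anyone who would rather cap the heaps than add RAM, a minimal sketch of the idea (the env var, values and image tag below are illustrative assumptions, not the course's actual settings, and the JAVA_OPTS approach only works if the image's entrypoint passes it to the JVM) is to set JVM flags and a memory limit on each Java microservice's container in workload.yaml:

# Illustrative fragment of one Deployment's pod spec (names/values are assumptions)
spec:
  containers:
  - name: position-tracker
    image: richardchesterwood/k8s-fleetman-position-tracker:release1   # tag is an assumption
    env:
    - name: JAVA_OPTS                 # assumes the image entrypoint forwards JAVA_OPTS to the JVM
      value: "-Xms64m -Xmx128m"       # cap the heap well below the container limit
    resources:
      limits:
        memory: 256Mi                 # hard ceiling so one runaway pod can't lock up minikube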

If you're following the course on Udemy, VPP or Manning, don't forget there are support forums there also!