Closed: Mierdin closed this issue 4 years ago.
This is due to the Vagrant provisioners not running on a "vagrant up" if they have already been run; Vagrant does not seem to have a way to differentiate between the first "up" and subsequent ones. A temporary workaround is to run "vagrant ssh" and then "./selfmedicate.sh resume" inside the VM. Alternatively, pause the VM instead of halting it: "vagrant suspend" and "vagrant resume" preserve the running state.
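A rough sketch of those workarounds (the location of selfmedicate.sh inside the VM is an assumption here; adjust to wherever the repo is synced):

```
# Workaround 1: after a second "vagrant up", kick the Antidote services manually
vagrant ssh -c "./selfmedicate.sh resume"

# Workaround 2: suspend instead of halting, so the provisioners never need to re-run
vagrant suspend
vagrant resume

# It is also possible to force the provisioners to run again on an existing VM
vagrant up --provision
```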
Looking at the Vagrantfile, it seems you need to run "vagrant reload" after a halt.
After investigating: the resume is being run, but it also runs the first time the provisioner runs, which is not what I wanted. The resume does work, but it takes a few minutes for all the pods to come back up; the kube-system pods start first, and then the Antidote pods. I will be putting some messages into the Vagrant output to make this visible.
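Something along these lines could go into the shell provisioning step. This is just a sketch of the kind of messages I mean, not the actual provisioner logic; the "prod" namespace and the readiness check are assumptions:

```
#!/usr/bin/env bash
# Print progress while the cluster comes back up after a resume.
echo "Waiting for kube-system pods to come up..."
until kubectl get pods -n kube-system 2>/dev/null | grep -q "Running"; do
  sleep 5
done

echo "kube-system pods are running; waiting for Antidote pods..."
# The "prod" namespace is a placeholder; use whatever namespace selfmedicate deploys into.
until kubectl get pods -n prod 2>/dev/null | grep -q "Running"; do
  sleep 5
done

echo "Antidote pods are running; the environment should be reachable shortly."
```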
Things seem to work fine on the first start, but if we halt the VM and then run vagrant up later, none of the Antidote services are running inside of k8s.
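To reproduce (a minimal sequence, all standard Vagrant commands):

```
vagrant up                                          # first start: everything comes up fine
vagrant halt
vagrant up                                          # second start: provisioners are skipped
vagrant ssh -c "kubectl get pods --all-namespaces"  # the Antidote pods are not there
```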
Looks like sub_resume runs "minikube start", which in theory should just re-start what was running before, but it doesn't seem like the cluster remembers previously deployed artifacts. Maybe this is a byproduct of using the "none" driver, since I don't remember this being a problem when we were using the minikube VM.
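A quick sanity check inside the VM (via "vagrant ssh") after the second "vagrant up" might help confirm that; just a sketch:

```
minikube status                            # is the cluster itself back up?
kubectl get deployments --all-namespaces   # did any previously deployed Antidote objects survive?
```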