Closed by yuvipanda 7 years ago
Alternate way to deploy with zero downtime:

1. `kubectl --namespace=datahub edit deployment hub-deployment`
2. In the editor that pops up, change the value of `SINGLEUSER_IMAGE` to point to the new tag.
3. Save and exit.
4. If the hub pod hasn't restarted, delete it manually.
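A non-interactive sketch of the same image bump, using `kubectl set image` instead of the editor. The deployment name and namespace come from the command above, but the container name `hub-container` is an assumption here; check the real one in the deployment spec first. The helper only prints the command so it can be reviewed before running:

```shell
# Sketch: bump the singleuser image without an interactive editor.
# NOTE: the container name "hub-container" is hypothetical -- look up the
# real container name with:
#   kubectl --namespace=datahub get deployment hub-deployment -o yaml
set_hub_image() {
  local namespace=$1 image=$2
  # Print the command (rather than running it) so it can be reviewed,
  # then piped to sh once it looks right.
  echo kubectl --namespace="${namespace}" \
    set image deployment/hub-deployment hub-container="${image}"
}

# Review the command first, then append "| sh" to actually run it:
set_hub_image datahub example-registry/hub:7e0c38f
```

Piping the reviewed command to `sh` keeps the deploy step explicit while avoiding an editor pop-up mid-deploy.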
Additional helpful commands:

See which cluster you're in:

```
kubectl config current-context
```

Switch between dev and prod (access needs to be enabled):

```
kubectl config use-context gke_data-8_us-central1-a_prod
kubectl config use-context gke_data-8_us-central1-a_dev
```

The command for deployment to dev is slightly different from the one above; use the `datahub-dev` namespace:

```
kubectl --namespace=datahub-dev edit deployment hub-deployment
```
I just tried doing a helm upgrade in dev and the hub restarted and the proxy did not!
Update: This is really strange then. Why did the proxy restart the last time?!
Mysteriously, I did a helm upgrade in prod too and it just worked. proxy pod didn't restart, users weren't interrupted.
wat.
Yuvi, did you run the helm upgrades after editing the deployment, or just by themselves as normal?
Gunjan, I've got the following in my `.bashrc` so that I can run `cxt` to view my current context, or `cxt <env>` to set it:
```bash
function cxt () {
  case $1 in
    prod|dev|playground)
      kubectl config use-context "gke_data-8_us-central1-a_${1}"
      ;;
    *)
      # Print the short name (dev/prod/playground) of the current context.
      kubectl config get-contexts --no-headers | grep '^\*' | \
        awk '{ print $2 }' | sed -e 's/.*_//'
      ;;
  esac
}
```
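A companion helper in the same spirit (a sketch): map an environment name to the namespace used with it, so the right `--namespace` flag always goes with the right context. Only `dev` and `prod` are mapped, since those are the only namespaces named in this thread; `playground`'s namespace isn't stated, so it's left out rather than guessed:

```shell
# Companion to cxt: print the namespace for a given environment.
# Mappings come from the commands earlier in this thread:
#   dev  -> datahub-dev
#   prod -> datahub
function cns () {
  case $1 in
    dev)  echo datahub-dev ;;
    prod) echo datahub ;;
    *)    echo "unknown environment: $1" >&2; return 1 ;;
  esac
}

# e.g.: kubectl --namespace="$(cns dev)" edit deployment hub-deployment
```

This keeps the dev/prod namespace difference from the commands above out of muscle memory and in one place.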
@ryanlovett I updated the hub image and ran the deployment with `./deploy.py --debug prod data8 7e0c38f`.
It did not restart for dev and prod deploys yesterday.
Ryan
@yuvipanda @ryanlovett Since this has not been happening, and does not seem like a persistent issue, would this be OK to close?
I think Yuvi fixed this. We can always reopen if it happens again.
OK, awesome. I'll close this then.
This causes problems for users, since they can't access their notebooks while the proxy is down, and it takes a few minutes to come back up. We should find a way to make sure the proxy doesn't restart.
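One way to verify whether a deploy restarted the proxy (a sketch): compare the `RESTARTS` column for the proxy pods before and after the deploy. The parsing is plain text processing over `kubectl get pods` output; the assumption here is that the proxy pod names start with `proxy`:

```shell
# Sketch: print "<pod-name> <restart-count>" for proxy pods from the
# default `kubectl get pods` output (columns: NAME READY STATUS RESTARTS AGE).
# Assumes proxy pod names begin with "proxy".
proxy_restarts() {
  awk '$1 ~ /^proxy/ { print $1, $4 }'
}

# Usage: kubectl --namespace=datahub get pods | proxy_restarts
# Run it before and after a helm upgrade; if the count rose (or the pod
# name changed), the proxy restarted and users were interrupted.
```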