Right now, if we move any configuration out of the Deployment into ConfigMaps and run kubectl apply -f wp-Deployment.yaml, the rolling update ends badly and fails with a mount error. This is a problem because changes to things like the NGINX web server configuration require a reload or restart of the NGINX container, so running kubectl apply results in downtime.
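For reference, a minimal sketch of what moving the NGINX configuration into a ConfigMap might look like; the ConfigMap name, container name, mount path, and file key here are placeholders, not the names used in the real manifests:

```sh
# Create a ConfigMap from a local nginx.conf (names are placeholders):
kubectl create configmap nginx-conf --from-file=nginx.conf

# The NGINX container in wp-Deployment.yaml then mounts the ConfigMap instead
# of baking the config into the image, roughly:
#
#   volumes:
#     - name: nginx-conf
#       configMap:
#         name: nginx-conf
#   containers:
#     - name: nginx
#       volumeMounts:
#         - name: nginx-conf
#           mountPath: /etc/nginx/nginx.conf
#           subPath: nginx.conf
```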
This was fixed by separating the PVC from the Deployment YAML, so now when you kubectl apply -f wp-Deployment.yaml it behaves as expected and only touches the Deployment, not the PVC.
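Roughly, the separation looks like the sketch below; the claim name, size, and volume name are placeholders and not necessarily what the real manifests use:

```sh
# The PVC now lives outside wp-Deployment.yaml and is applied once, on its own
# (claim name and size are placeholders):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF

# wp-Deployment.yaml only references the existing claim by name, e.g.:
#
#   volumes:
#     - name: wordpress-data
#       persistentVolumeClaim:
#         claimName: wp-pvc
#
# so re-applying it only rolls the Deployment and never touches the PVC:
kubectl apply -f wp-Deployment.yaml
```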
This issue is discussed at "Rolling update of replication controller with GCE persistent disk fails due to ImageNotReady" and "Reload nginx conf in kubernetes pods".
There's also the possibility of getting a shell to the NGINX container and running nginx -s reload, as shown here.
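For completeness, a sketch of that approach; the label selector and container name are guesses and will differ depending on how the Deployment is labelled:

```sh
# Grab the name of the running WordPress pod (label selector is a guess):
POD=$(kubectl get pods -l app=wordpress -o jsonpath='{.items[0].metadata.name}')

# Ask NGINX to re-read its configuration without restarting the container
# (container name "nginx" is a guess for a multi-container pod); alternatively,
# use `kubectl exec -it "$POD" -c nginx -- /bin/sh` for an interactive shell first:
kubectl exec "$POD" -c nginx -- nginx -s reload
```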