Closed: eldada closed this issue 5 years ago
Not really sure what to do about this. The main issue is that I don't know of a way to block a deployment, or otherwise intercept it before it completes. The closest thing I know of is what KubeXray does right now: receive a trigger after a deployment completes, and then modify the deployment so it no longer runs. This happens after Helm is done and thinks everything went well, so there's no way to tell Helm that something broke and that it should keep the old version up. KubeXray can't do that on its own because it doesn't have the context to recognize things like `helm upgrade`.
I understand. A solution should be found to allow service continuity. I would not want KubeXray running in production and shutting down my service when an issue is found; I want it to block the new version and keep the existing one. Maybe save the currently running version (if one exists) before any changes are made, so it can compare the new state to the old one? This would require KubeXray to record and "remember" state for every running container.
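The "remember the last known-good version" idea above could be sketched roughly as follows. This is a hypothetical illustration, not KubeXray code: the names `KNOWN_GOOD`, `record_good_version`, and `rollback_target` are invented for the example, and a real implementation would persist this state and actually patch the workload rather than just return a value.

```python
# Hypothetical sketch of recording the currently running image per workload
# before a change, so an unsafe new version can be rolled back to it.

# In-memory state: workload name -> image that last ran successfully.
# (A real version would need durable storage, e.g. a ConfigMap or CRD.)
KNOWN_GOOD: dict = {}

def record_good_version(workload: str, image: str) -> None:
    """Save the currently running image before any changes are made."""
    KNOWN_GOOD[workload] = image

def rollback_target(workload: str, new_image: str, is_safe: bool):
    """Compare new state to old: if the new image is safe, promote it to
    known-good and return None; if not, return the image to restore
    (or None when no previous version exists)."""
    if is_safe:
        KNOWN_GOOD[workload] = new_image
        return None
    return KNOWN_GOOD.get(workload)

record_good_version("my-app", "my-app:1.1")
print(rollback_target("my-app", "my-app:1.2", is_safe=False))  # my-app:1.1
```

This only decides *what* to roll back to; driving the actual rollback still needs the deployment context (e.g. Helm release history) that KubeXray lacks today.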
@DarthFennec - one way to intercept requests before the actual deployment is to use admission controllers. Let's discuss more and provide a revised spec for the next version of kubexray.
IMHO - to be used in production, we must implement a rollback option, or adoption of this will be very limited.
In my opinion, we cannot rely on Helm lifecycle event hooks, as not everyone uses Helm. Kubernetes container hooks only support postStart and preStop, so that is not a good solution either. An admission controller could do something like PodSecurityPolicy does: provide an admit mechanism to allow or reject a pod run, and users can choose whether they need it.
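A minimal sketch of the admission-controller idea discussed above: a validating webhook that rejects a Pod whose image fails a scan. The `image_is_safe` check here is a placeholder for a real Xray lookup, and `review` is an invented helper name; the request/response shape follows the Kubernetes `admission.k8s.io/v1` AdmissionReview API.

```python
# Hedged sketch: validating-webhook logic that allows or rejects a Pod
# before it runs, instead of shutting it down after deployment.

def image_is_safe(image: str) -> bool:
    # Placeholder policy; a real implementation would query JFrog Xray.
    return "bad" not in image

def review(admission_review: dict) -> dict:
    """Build an admission.k8s.io/v1 AdmissionReview response for a Pod."""
    request = admission_review["request"]
    containers = request["object"]["spec"]["containers"]
    unsafe = [c["image"] for c in containers if not image_is_safe(c["image"])]
    response = {"uid": request["uid"], "allowed": not unsafe}
    if unsafe:
        # Denial message is surfaced to the user by the API server,
        # so the failed deploy is visible at rollout time.
        response["status"] = {"message": f"blocked unsafe images: {unsafe}"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Because the API server rejects the Pod synchronously, `helm upgrade` (with `--wait`) would see the rollout fail while the old ReplicaSet keeps serving, which addresses the service-continuity concern without KubeXray needing Helm context.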
BUG
What happened:
helm upgrade .....
to bad version 1.2

What should happen: