Closed webnotesweb closed 10 months ago
Hello,
The tmp volumes should not be an issue here, as they are ephemeral, which means a new one is created for each pod. So ReadWriteOnce should be fine.
Could you please describe in more detail what goes wrong during the rolling update? Perhaps describe the new pod that is supposed to come up? I have just tested it with 1 replica and tmp volumes, and it worked as expected. That said, I only tested it on minikube.
Hello,
Thank you very much for your reply!
Our current environment is on k8s v1.24.17. The storage class supports RWX, and we are using RWX wherever the provided Helm chart allows it. We are setting up OpenProject with ArgoCD, and what we are trying to achieve is that when we make a change, for example to the OPENPROJECT_ATTACHMENT__MAX__SIZE env var, that change is reloaded in the app itself. It seems that we need to restart the app (delete the pod) for the changes to take effect, and that does work (with a bit of downtime).
It led me to think that RWO/RWX has an influence in this situation, since over here, when develop is set to true, this part of the YAML seems to be skipped. I was putting together a huge reply about the RWO/RWX situation, but in the meanwhile, just as you said, it led me to think that this might be a slightly different situation.
Right now, as soon as a change is made to values.yaml, the related secret containing the environment values is of course updated in k8s right away, but a rollout (app restart) does not happen, so the changes do not take effect. A manual restart is fine, of course, but I was wondering how this can be achieved without downtime.
In our case we want to watch this specific secret and, on update, restart the openproject-staging-web deployment to reload the app.
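(A common pattern for this, assuming one controls the chart templates, which may not be the case with an upstream chart, is a checksum annotation on the pod template, so that any change to the rendered secret produces a new pod template and triggers a rolling update. Template path and secret name below are illustrative, not taken from the actual OpenProject chart:)

```yaml
# Sketch of the common Helm "checksum annotation" pattern.
# Hashing the rendered secret into a pod-template annotation forces
# a new pod template, and therefore a rolling update, whenever the
# secret's contents change.
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}
```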
> kubectl describe secrets openproject-staging -n openproject
...
...
...
Type: Opaque
Data
====
OPENPROJECT_HTTPS: 4 bytes
OPENPROJECT_SEED_ADMIN_USER_MAIL: 27 bytes
OPENPROJECT_CACHE__MEMCACHE__SERVER: 35 bytes
OPENPROJECT_HOST__NAME: 38 bytes
OPENPROJECT_HSTS: 4 bytes
OPENPROJECT_RAILS__RELATIVE__URL__ROOT: 0 bytes
OPENPROJECT_SEED_ADMIN_USER_NAME: 17 bytes
DATABASE_HOST: 60 bytes
OPENPROJECT_APP__TITLE: 20 bytes
OPENPROJECT_ATTACHMENT__MAX__SIZE: 4 bytes
OPENPROJECT_SEED_LOCALE: 2 bytes
DATABASE_PORT: 4 bytes
OPENPROJECT_SEED_ADMIN_USER_PASSWORD_RESET: 4 bytes
POSTGRES_STATEMENT_TIMEOUT: 4 bytes
DATABASE_URL: 85 bytes
OPENPROJECT_RAILS__CACHE__STORE: 8 bytes
OPENPROJECT_SEED_ADMIN_USER_PASSWORD: 15 bytes
I have actually just found the following information, and it seems to explain exactly what we are facing now and what we would like to achieve:
We would like to watch if some change happens in ConfigMap and/or Secret; then perform a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset, Statefulset and Rollout
Ref: https://github.com/stakater/Reloader#problem
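(For reference, Reloader works via annotations on the workload. A minimal sketch for our case, with the annotation value assumed to match our secret name, would be:)

```yaml
# Illustrative sketch: annotate the Deployment so Reloader performs a
# rolling upgrade whenever the named secret changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: openproject-staging-web
  annotations:
    secret.reloader.stakater.com/reload: "openproject-staging"
```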
So it seems that we need to look in another direction to sort this out, as it does not appear to be related to the ReadWriteOnce/ReadWriteMany setting in the ephemeral tmp volume specs that we tried to tweak.
Please do let me know if you have any insight or advice on the best approach to achieve this with OpenProject; it would be highly appreciated! What we are really after is a way to change OpenProject environment variables and gracefully reload them on k8s.
Thank you again very much for your time.
Hello,
I would just like to provide an update in case anyone else is in a similar situation.
This was the solution for our situation, and it does what we expected (a graceful reload of env vars), even with a single replica of the *-web deployment.
# kubectl rollout restart deployment openproject-staging-web -n openproject
deployment.apps/openproject-staging-web restarted
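(For anyone wondering about the single-replica case: whether the restart is downtime-free depends on the Deployment's rolling update parameters. A sketch of the relevant strategy, with values assumed rather than taken from the chart:)

```yaml
# Illustrative Deployment strategy: with maxUnavailable: 0 and
# maxSurge: 1, the new pod must become Ready before the old one is
# terminated, so even a single-replica rollout avoids downtime.
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
    maxSurge: 1
```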
Thank you again for your help and support.
Thank you for posting the solution, @webnotesweb! 🙏
Hello,
We are currently testing the configuration of environment vars via values.yaml, and on the initial run everything works as expected.
Changes do take effect when the initial deployment is made, and they also take effect when "openproject-staging-web" is redeployed (the actual pod deleted).
These are, for example, the super simple changes that I am trying to make just as a test:
accessModes is set to "ReadWriteMany" and strategy is set to "RollingUpdate".
I was able to achieve this without downtime by setting the number of OpenProject web process replicas to 2 and deleting one pod, then the second one afterwards.
Is it possible to achieve an environment vars configuration reload without downtime for a single openproject-staging-web replica?
Might it be related to the *web-*-tmp and *web-*-app-tmp volumeClaimTemplate accessModes, which we are currently not able to override to ReadWriteMany because they are set by default to ReadWriteOnce? Any insight is really appreciated! Thank you very much for your time!