gapodo closed this issue 1 year ago
and just to add: having a defined config format could also allow us to make the config source pluggable, so we could cater to different ways of providing the config (e.g. the Kubernetes API, a file, or even a git repo).
Once an "interface" is properly spec'd, this would allow users to extend config loading without it having to live in Woodpecker's core functionality, letting interest groups adapt Woodpecker to their specific needs; for example, the config could become K8s-native for Kubernetes users.
if we have the option of a proper config file, it could also be better packaged by normal linux distros ...
what format(s) would you propose, ini, yml, ...?
> if we have the option of a proper config file, it could also be better packaged by normal linux distros ...
I fully agree... env config is a pain when packaging software (esp. if the env vars also change and basically need migration from any version to any version on update, e.g. in rolling distros like Arch and Manjaro, or upgradable ones like Debian and Ubuntu).
> what format(s) would you propose, ini, yml, ...?
I'm not too opinionated about formats, though YAML has somewhat taken over (and would also be a nice fit with automation/orchestration systems like Kubernetes and Ansible). IIRC YAML support in Go is fairly good, while ini and the like are basically up to you or a third-party module. "Well structured" configs also make automation and validation easier.
I've mostly written this up while I was at it, without thinking too deeply about the implementation yet. The only thing I have a somewhat strong opinion on: don't create a new config format, and please use something "well structured", common, and ideally both human- and machine-readable. (Key-value also matches this, but formats like ini have the problem of key reuse across sections, which makes parsing more difficult, as most languages/frameworks have only limited support for it, and misunderstandings happen when humans talk about keys, since the keys are section-dependent.)
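To make the "well structured" point concrete, a hypothetical YAML config could mirror today's WOODPECKER_* env vars. All keys below are invented for illustration; Woodpecker defines no such file today, and secrets would still come from a separate mount:

```yaml
# Hypothetical layout only, not an actual Woodpecker config file.
server:
  host: https://ci.example.com     # roughly WOODPECKER_HOST
  admins: [alice, bob]             # roughly WOODPECKER_ADMIN
forge:
  github:
    enabled: true                  # the client secret would come from a secret mount, not this file
database:
  driver: sqlite3                  # roughly WOODPECKER_DATABASE_DRIVER
```

A nested layout like this is what makes upfront schema validation and tooling (e.g. editor completion via a JSON schema) straightforward, compared to a flat env namespace.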
I would like to not add a config file. Changing those in deployments is most of the time an absolute pain. Instead we should stick to env vars for fixed values and database settings changeable via UI for "dynamic" values IMO. If you want your variables to come from a file you could use a good old .env file with appropriate values.
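For reference, such a .env file is just flat KEY=VALUE pairs. The variable names below follow the WOODPECKER_* convention and the values are placeholders; treat the whole fragment as illustrative:

```ini
# .env -- flat key/value pairs sourced at startup; no schema, no reload mechanism
WOODPECKER_HOST=https://ci.example.com
WOODPECKER_OPEN=true
WOODPECKER_ADMIN=alice,bob
WOODPECKER_AGENT_SECRET=change-me
```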
While I respect your opinion, I tend to disagree severely... env vars are a giant pain when running on Kubernetes, not really nice when running on "bare metal" (the OS), and not optimal in docker; a .env file doesn't solve this (while it is a file, it is "just a file": no verification, no hot reloading, ...).
When running GitOps, you change the file in the repo and it gets applied automatically; this is easy with a config file (and works even better if there is a spec for the file, so it can be verified upfront). But no matter what, using envs requires a full redeployment/restart (as that is the only way to get them into the container, the systemd unit, ...).
Moving the settings to the GUI and database is IMHO even worse without a programmatic way to change them (spinning up a container just to change settings would be a huge pain, and also not declarative). Though if they applied in real time, that would take away one of the problems I am facing (having to restart the CI just to add another admin or permitted org or repo owner).
Live-reloading well-defined (and therefore verifiable) files, where possible, would be the nicest way to achieve this. An API that lets you configure the changeable settings would be bearable, though it would require yet another sidecar just to set config options declaratively, and would also be potentially hard to automate, as some kind of token would need to be exchanged.
Clear and concise description of the problem
When improving the helm chart I've tried to stick to K8s best practices, but since Woodpecker loads almost all settings only from env vars or CLI flags, this becomes almost impossible.
Suggested solution
It may be a good idea to define a config format which can be used as an alternative to the env setup. A well-known/defined config format would also allow us to change the env names without disrupting existing installations (and the helm chart), and to create an easy upgrade path if we restructure the config.
The container (or the server and agent binaries directly) could include a mechanism to verify the config (and hand out matching warnings) and either hot reload or restart on changes.
The possibility to load secrets from files should not be replaced by this, as secrets are handled differently by Kubernetes and docker-secrets and should be loaded from another source (the secret mount point).
Alternative
An alternative would be to implement a sidecar container, which prepares the config for use by woodpecker and stores this file in a shared mountpoint, so woodpecker can source it.
This would require the woodpecker container images to contain an entrypoint which, if WOODPECKER_RELOAD_CONFIG (just an example) is set to true, waits for the file to be present, then sources it and re-/starts the server or agent. The sidecar could then watch for changes to the configMaps and secrets (either via a file watch or directly via the K8s API), or on docker just for file changes to the configs, and update the shared file, which would then trigger the server/agent to restart with the new config.

Additional context
The current use cases I can think of mostly revolve around the server, as its config is bigger, has more to verify, and has a bigger impact on overall availability, but having it for both would be nice.
Changing configMaps and secrets on the fly is pretty common (esp. in GitOps scenarios), and if the app does not reload by itself, users often miss restarting the containers, leaving the old config active while they think the new one is in effect (this also applies when updating configMaps or secrets via helm, as changes to configMaps and secrets don't trigger pods to restart).
This feature would mostly help in containerized environments, but if the first approach is used, this would help in all setups by having a declarative setup, which is well suited to GitOps / versioning.
Being able to verify the config would also reduce the risk of misconfigurations and potential downtime caused by applying new/changed settings which are invalid (e.g. activating multiple forges at the same time, which is currently not possible).
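Upfront verification of a parsed config could look like the following. Everything here is invented for the example (the Config fields are not Woodpecker's actual settings); the one rule it encodes, "only one forge active at a time", comes from the constraint mentioned above:

```go
// Illustrative-only sketch of upfront config verification.
package main

import (
	"errors"
	"fmt"
)

// Config models a tiny, hypothetical slice of a well-defined config file.
type Config struct {
	Host         string
	GithubEnable bool
	GiteaEnable  bool
	GitlabEnable bool
}

// Validate collects every problem instead of stopping at the first, so a
// deploy pipeline can surface all warnings before triggering a restart.
func (c Config) Validate() error {
	var errs []error
	if c.Host == "" {
		errs = append(errs, errors.New("host must be set"))
	}
	active := 0
	for _, on := range []bool{c.GithubEnable, c.GiteaEnable, c.GitlabEnable} {
		if on {
			active++
		}
	}
	if active != 1 {
		errs = append(errs, fmt.Errorf("exactly one forge must be active, got %d", active))
	}
	return errors.Join(errs...)
}

func main() {
	bad := Config{GithubEnable: true, GiteaEnable: true}
	fmt.Println(bad.Validate()) // reports both problems at once

	good := Config{Host: "https://ci.example.com", GithubEnable: true}
	fmt.Println(good.Validate()) // prints <nil>
}
```

Run before applying a change (or as a CI step on the GitOps repo), this turns "invalid settings cause downtime after restart" into "invalid settings are rejected before the restart ever happens".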