jnordberg opened this issue 3 years ago
Ensuring files have the right access is an ongoing challenge for CouchDB, because it expects to be able to write to the last config file in the chain.
If Docker Swarm can't mount a file in a way that the couchdb process running inside the container (as a non-root user) can write to it, this is a WONTFIX.
Am I misunderstanding the issue?
How about if the docker entrypoint script copies the config files given in e.g. /opt/couchdb/etc/copy.d/* to /opt/couchdb/etc/local.d/? That would allow Swarm users to provide config files without workarounds.
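Roughly, a sketch of the idea; /opt/couchdb/etc/copy.d is a hypothetical directory, not something the image supports today:

```sh
# Hypothetical entrypoint addition: copy read-only mounted config
# fragments into the writable local.d directory at startup.
if [ -d /opt/couchdb/etc/copy.d ]; then
  for f in /opt/couchdb/etc/copy.d/*.ini; do
    [ -e "$f" ] || continue        # skip the unexpanded glob
    cp "$f" /opt/couchdb/etc/local.d/
  done
  chown -R couchdb:couchdb /opt/couchdb/etc/local.d
fi
```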
The docker container is mature at this point, and used in many environments other than Swarm, so changing the functionality that dramatically is a non-starter.
The whole point of the file being external (in our recommended approach, where you externalize the entire etc/local.d directory) is so that it is persisted after the container exits, too. This is important in non-Swarm scenarios. Copying it in loses that advantage.
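For reference, that recommended approach looks roughly like this (the host path and image tag are illustrative):

```sh
# Bind-mount a host directory over etc/local.d so config written by
# CouchDB at runtime survives container restarts and removals.
docker run -d --name couchdb \
  -v /srv/couchdb/local.d:/opt/couchdb/etc/local.d \
  couchdb:3
```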
Adding what I suggested won't change functionality for anyone not explicitly mounting config files into etc/copy.d.
We'll take it under consideration, but I would not expect a change soon.
Hi @wohali,
I also tried the same thing, updating the configuration by putting a new file in /opt/couchdb/etc/local.d/ through a ConfigMap in k8s. The container was crashing without any error message at all, and I could not figure out why it failed. I checked the Helm chart to see how this is handled there: it copies the file to the target path via an init container, and that works.
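For the record, the copy step such an init container performs boils down to something like this (the mount paths here are assumptions, not taken from the actual chart):

```sh
# Run in an init container that shares a writable volume with the
# CouchDB container: copy the read-only ConfigMap mount into the
# volume that will be mounted at /opt/couchdb/etc/local.d.
cp /tmp/configmap/*.ini /opt/couchdb/etc/local.d/
chown -R 5984:5984 /opt/couchdb/etc/local.d   # uid/gid of the couchdb user in the official image
```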
From your comment, only the last config file loaded has to be writable by non-root. Is the last file decided based on alphabetical order? In that case, would mounting a file whose name sorts before 10-xxxx solve the issue?
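If that is how the chain works, the layout being proposed would look like this (the file names are hypothetical):

```sh
# CouchDB reads local.d/*.ini in sorted order and persists runtime
# config changes to the last file in the chain, so the read-only
# mount should sort early and a writable file should sort last.
ls -1 /opt/couchdb/etc/local.d/
# 05-admins.ini    <- read-only Swarm secret, sorts first
# 99-writable.ini  <- writable file that stays last in the chain
```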
Can we add some error output to indicate the reason in this case?
Would a PR adding trace mode to the entrypoint scripts be accepted? This would at least allow folks to figure out where the entrypoint is bailing out. Something like the following is what I'm thinking:
[ -n "$TRACE" ] && set -x
Users can then set the TRACE env var to figure out why it's not starting. This pattern is used across many Heroku buildpacks for debugging purposes.
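Usage would then be along these lines, assuming the change landed as sketched:

```sh
# Re-run the failing container with tracing enabled so every
# entrypoint command is echoed before it executes.
docker run -e TRACE=1 couchdb:3
```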
Expected Behavior
The container should start using the config files provided.
Current Behavior
The container exits with code 1 and no log messages.
Possible Solution
Make the configs in /opt/couchdb/etc/local.d/* readable before running couchdb in the docker entrypoint.
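A minimal sketch of that, assuming the entrypoint is still running as root at this point (it would only help for mounts whose permissions can actually be changed; truly read-only mounts would still need a copy step):

```sh
# Hypothetical entrypoint addition: make the config chain accessible
# to the couchdb user before dropping privileges and starting couchdb.
for f in /opt/couchdb/etc/local.d/*.ini; do
  [ -e "$f" ] || continue
  chown couchdb:couchdb "$f" || true   # may fail on read-only mounts
  chmod u+rw "$f" || true
done
```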
Steps to Reproduce (for bugs)
stack.yml
admins.ini
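The original files are not reproduced above; the gist of the setup, as an illustrative sketch (service layout, secret name, and credentials are hypothetical, not the reporter's actual files):

```sh
# stack.yml: mount a Swarm secret as a CouchDB config fragment.
cat > stack.yml <<'EOF'
version: "3.7"
services:
  couchdb:
    image: couchdb:3
    secrets:
      - source: admins_ini
        target: /opt/couchdb/etc/local.d/admins.ini
secrets:
  admins_ini:
    file: ./admins.ini
EOF

# admins.ini: the config fragment carrying the admin credentials.
cat > admins.ini <<'EOF'
[admins]
admin = s3cret-password
EOF

docker stack deploy -c stack.yml couchdb
```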
Context
This bug prevents me from launching a cluster of CouchDB instances in my Docker Swarm without setting the password as plaintext in the environment variables.
A workaround can be found here: https://github.com/apache/couchdb-docker/issues/73#issuecomment-766179802
Your Environment
Docker 20.10.8