sclorg / postgresql-container

PostgreSQL container images based on Red Hat Software Collections and intended for OpenShift and general usage. Users can choose between Red Hat Enterprise Linux, Fedora, and CentOS based images.
http://softwarecollections.org
Apache License 2.0

set passwords for a set of accounts #101

Open ibotty opened 8 years ago

ibotty commented 8 years ago

It would be great to be able to create roles and set passwords for many accounts when starting the pod.

For many databases you have fine-grained permissions. It would be great to manage credentials in one place (one secret per database role), instead of having to set them once in the database and once in the application.

I propose that instead of hardcoding /etc/credentials/pgmaster, /etc/credentials/pguser, and /etc/credentials/pgadmin, we walk every directory within /etc/credentials, create the role if it does not exist yet, and set its password.
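A minimal sketch of what that startup loop could look like, assuming one directory per role under /etc/credentials with a password file inside (the layout, the psql invocation, and the pg_roles check are illustrative, not the image's actual scripts):

```bash
# Hypothetical startup hook: one directory per role, each containing a
# "password" file (this layout is an assumption).
for dir in /etc/credentials/*/; do
    role=$(basename "$dir")
    password=$(<"${dir}password")

    # Create the role only if it does not exist yet, then (re)set its password.
    # NOTE: --set puts the password into psql's argument list (visible via ps);
    # a real implementation would want to pass it some other way.
    psql -v ON_ERROR_STOP=1 --set=role="$role" --set=pass="$password" <<'EOSQL'
SELECT 'CREATE ROLE ' || quote_ident(:'role') || ' LOGIN'
WHERE NOT EXISTS (SELECT 1 FROM pg_roles WHERE rolname = :'role')
\gexec
ALTER ROLE :"role" PASSWORD :'pass';
EOSQL
done
```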

If/when a solution to the downward API for secrets (kubernetes/kubernetes#18372) lands, we could/should set role passwords automatically. That still leaves the question of what to do about additions. That can only be fixed properly once one can mount new volumes (or at least secrets) into running Kubernetes pods, I suppose.

ibotty commented 7 years ago

Well, now that secrets get updated automatically, I'd like to take a run at this issue.

My plan of action is the following. I do welcome comments on the approach.

What do you think?

bparees commented 7 years ago

The additional container feels heavyweight; why can't you watch in the background of the existing container?
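For illustration only, such an in-container arrangement might look roughly like this (the inotifywait trigger and the reapply_credentials helper are assumptions, not existing image code):

```bash
#!/bin/bash
# Sketch: run the credential watcher in the background of the existing
# container instead of adding a second container.
watch_credentials() {
    while true; do
        # Block until anything under the mounted secret directory changes,
        # then re-apply roles and passwords from the files.
        inotifywait -qq -r -e create -e close_write -e moved_to /etc/credentials \
            && reapply_credentials   # hypothetical helper from the startup path
    done
}

watch_credentials &   # runs alongside the server
exec postgres "$@"    # the server stays the container's main process
```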

ibotty commented 7 years ago

Should be possible, but is harder to monitor. Fine with me either way.

praiskup commented 7 years ago

In the beginning, this issue seemed to be about creating additional roles, but only during container startup. Now it looks like we need some asynchronous actions for changing passwords, which all looks like a wrong idea to me (this could be done by a client tool; it doesn't have to be done by the container itself).

I'm not sure I follow; can you please elaborate on what is really needed and how it should be done?

ibotty commented 7 years ago

I guess you can say that's a meta-issue. For my part, there are three things I'd like:

I think the first two parts are the most important (for my use case), but the third is a very nice thing when updating credentials. The fact that secret mounts are updated automatically when the secret changes exists precisely to enable that kind of thing. Is there a technical reason for not wanting it?

I also don't see what part of an application deployment should be in charge of changing credentials if not the containers: when rolling PostgreSQL user credentials, are you expected to either redeploy (with downtime) or require the user to change the secret (to persist the change) and then run psql by hand?

praiskup commented 7 years ago

Such a change sounds like an easy way to end up with race conditions.. it sounds like there should be some container API through which the environment (OpenShift) can request a credential change, with clear error reporting.

praiskup commented 7 years ago

I'm still interested in the "no credentials in logs" point. How can it be guaranteed that the password is never going to appear in the logs? That still requires the maintainer to keep the script sane.

ibotty commented 7 years ago

Well, right now every value passed as an environment variable will be visible in the host's docker logs as well as in OpenShift's node and master logs. With secrets, that's not the case.

Of course I assume the container scripts don't log the password by accident, and that can't be guaranteed. I frankly don't see the point of your remark.

ibotty commented 7 years ago

I missed your comment re race conditions above.

Volume mounts are eventually consistent. Can you clarify what race conditions you expect? I can not think of a scenario in which it won't converge to the correct outcome.

praiskup commented 7 years ago

Well, right now every value passed as an environment variable will be visible in the host's docker logs as well as in OpenShift's node and master logs. With secrets, that's not the case.

That's not a remark; a question mark was missing.. :) I'm rather curious.

Docker supports docker run -e ENVVAR (taking the value from the caller's environment), which can be used instead of docker run -e ENVVAR=value. So, if the maintainer really pays attention (and it is not yet clear whether that is actually needed), there is no real need to write the environment variable's content anywhere. Maybe that's something to be fixed anyway, regardless of the secrets feature.

The problem is that we need to read the password from the secrets somehow, and since we (by default) put everything onto stderr (via set -x), it is very likely the credentials will end up on stderr too... so perhaps something like a "policy" for working with credentials should be written?
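One possible shape for such a policy, purely as a sketch (the helper and the xtrace toggling are assumptions, not existing image code), is to suspend tracing around any line that touches a credential:

```bash
# Sketch of a "no credentials on stderr" convention: keep xtrace off while a
# password appears in any expanded word.
read_password() {
    local file=$1
    { set +x; } 2>/dev/null   # silence the trace of "set +x" itself
    PGPASSWORD=$(<"$file")
    export PGPASSWORD
    set -x                    # a fuller version would restore the previous state
}
```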

praiskup commented 7 years ago

Volume mounts are eventually consistent. Can you clarify what race conditions you expect? I can not think of a scenario in which it won't converge to the correct outcome.

What if the password-change request hits within a very short period, while the password change is being done in some infinite loop in the background?

ibotty commented 7 years ago

Oh, I get what you mean. I was worried not about the container's log but about the cluster's log. You are right that the other thing should be fixed as well, but are you sure tracing is enabled? I can only see set -x in the test script, not in the production logs of a few PostgreSQL containers.

ibotty commented 7 years ago

Re the password-change race: do you mean that the application might have the wrong password for a small amount of time? I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working state. Am I making sense?

praiskup commented 7 years ago

I can only see set -x in the test script, not in the production logs of a few PostgreSQL containers.

Correct, sorry for the mystification. set -x is not enabled, and should never be :)

So what exactly exposes the credentials? Would doing this:

export PASS=value
docker run -e PASS ...

... instead of:

docker run -e PASS=value

help?

ibotty commented 7 years ago

I am only concerned about openshift/kubernetes usage, so I have not thought deeply about a docker run case.

In OpenShift, specifying environment variables directly records them in the pod's annotations as well as in the master's and node's logs (I am not sure about env variables set through valueFrom).

Any use of environment variables (Docker or OpenShift) will expose the values on the host; see ps e. I think using volume mounts for credentials has far less potential for exposure.
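As a concrete, hypothetical example of that approach with the oc client (the secret name, key, and mount path are assumptions):

```bash
# Sketch: deliver the password as a mounted file instead of an env variable.
oc create secret generic pguser-credentials --from-literal=password='s3cret'

oc set volume dc/postgresql --add \
    --name=pguser-credentials \
    --type=secret \
    --secret-name=pguser-credentials \
    --mount-path=/etc/credentials/pguser
```

The password then only exists as file content inside the pod, rather than in the pod spec, in ps e output, or in node/master logs.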

praiskup commented 7 years ago

On Tuesday, September 27, 2016 8:20:09 AM CEST Tobias Florek wrote:

Re the password-change race: do you mean that the application might have the wrong password for a small amount of time?

Exactly. If the container got into such a situation, recovering from it would be a nontrivial task for the user.

I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working state. Am I making sense?

IMO, there just needs to be a "protocol" between the environment (Kubernetes) and the container. As I'm not familiar with Kubernetes, I'm not sure whether this is a realistic dream, but the environment should be able to wait until the request is really finished inside the container cluster (containers should be able to confirm it, somehow).

Putting something into a file and blindly expecting that the request is going to be executed (immediately) sounds like really bad design that would definitely bite us soon. From the container - after executing the request - can I write something back into the "secrets" to simply let Kubernetes know about the "exit status"?

ibotty commented 7 years ago

I am not sure I am following.

ConfigMaps are eventually consistent (and additionally, updates to them are atomic), so the latest inotify event will carry the latest change. If the inotify events get handled serially (that is, not concurrently), I still don't see how you could end up in a state that does not correct itself.
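To make the "handled serially" part concrete, the kind of loop meant here might look like this sketch (the event selection and the reapply_credentials helper are assumptions):

```bash
# Sketch: process mount updates one at a time. Every event triggers a full
# re-read of the mounted files, so the last iteration always applies the
# newest content and the state converges even if events arrive in bursts.
inotifywait -m -q -r -e create -e close_write -e moved_to /etc/credentials |
while read -r _path _events _file; do
    reapply_credentials   # hypothetical: re-reads all files, re-runs ALTER ROLE
done
```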

ibotty commented 7 years ago

And (sorry, I forgot): kube does not have any notion of confirmation for ConfigMap changes.

ibotty commented 7 years ago

Care to comment on the WIP pull request #145?

praiskup commented 7 years ago

I still don't see what happens if the request isn't handled correctly but the app thinks it has been handled. That can happen because of a low-memory issue, a queue problem, or whatever other reason. When we are talking about a transactional database, this looks like a really weak, best-effort approach.

ibotty commented 7 years ago

What request? The password change? If so, when it is not handled correctly, the app won't be able to connect until the request is handled. That's the same thing that happens when resetting passwords in traditional deployments, so I don't quite see what you are after.

I have no problem leaving that part out, btw, so we can also agree to disagree here ;).