Closed by simonferquel 3 years ago
There's a security consideration here in that all users with pod create
permissions in the same namespace as the created stack will be able to retrieve the user's registry auth credentials and impersonate the user. This may significantly impact our multi-tenancy story.
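To make the risk concrete, here is a minimal sketch (all names are hypothetical, not from this PR) of how a co-tenant with pod-create rights could read the stack's pull secret: any pod in the namespace can mount the secret as a volume and dump its contents.

```yaml
# Hypothetical pod created by ANOTHER user in the same namespace.
# It mounts the stack's pull secret and prints the owning user's
# registry credentials, which can then be used to impersonate them.
apiVersion: v1
kind: Pod
metadata:
  name: dump-pull-secret            # hypothetical name
spec:
  restartPolicy: Never
  containers:
    - name: dump
      image: busybox
      command: ["cat", "/creds/.dockerconfigjson"]
      volumeMounts:
        - name: pull-secret
          mountPath: /creds
          readOnly: true
  volumes:
    - name: pull-secret
      secret:
        secretName: mystack.pull-secret   # assumed name of the created secret
```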
Consider the following scenario (shared `appteam` namespace):
AFAIU, the flaw is not with this PR but with the very notion of Kubernetes pull secrets, and the fact that there is no standard way for the Docker CLI to generate a long-lived pull token for a specific repo (one that can be revoked by the owning user). As a result, pull secrets contain credentials with far more access rights than necessary.
Within the set of constraints we have: @alexmavr @justincormack, can you think of potential alternatives? I am a bit short on ideas.
I think there is no issue on the API server / controller side; it is more a user-experience problem.
I don't think that this workflow should use the user's credentials. I know that is how the pull secrets docs suggest you use it, but as Alex points out, this secret then belongs to the service account, so it should be a service account registry account (i.e. read-only access, not attached to a specific user, scoped to the repos that this service account needs) that is used. For dev use it is kind of OK to use your own creds, but this is never the right thing in production.
The ideal workflow for EE would probably have UCP set up DTR service accounts that could be populated easily as read only accounts for the team that is using that service account. That would be out of band from the compose controller though. For desktop a simpler scheme that just uses your creds would be ok (until we get better read only token support).
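As a sketch of that direction (all names here are assumptions, not part of this PR): the team's ServiceAccount would reference a read-only registry credential via `imagePullSecrets`, so no user credentials are ever copied into the cluster.

```yaml
# Sketch only: a service-account-owned, read-only registry credential.
# "appteam-sa" and "dtr-readonly-creds" are hypothetical names.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: appteam-sa
imagePullSecrets:
  - name: dtr-readonly-creds   # a kubernetes.io/dockerconfigjson secret
                               # holding a read-only, revocable token
```

Pods running under `appteam-sa` would then pull private images without any user-specific credential existing in the namespace.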
Hmm, IIRC swarm's implementation of `--with-registry-auth` seems to have the exact same flaw (it copies the user credentials to the service definition). So we have to decide on the approach to take here for the docker CLI:

1. [not preserved]
2a. support `--with-registry-auth` with the same semantics as the swarm implementation (transparently creating pull secrets with the user's credentials)
2b. add a `create pull-secret` subcommand to populate a pull secret with provided credentials
3. [not preserved]
In any case, I think that on the compose-on-kubernetes side, these different approaches have no impact.
WDYT?
I definitely don't think we should do 1. or 3., and would generally be in favour of removing that from Swarm, ideally to match Kube behaviour once we have a usable path there. Option 2, probably with something like 2b, sounds good to me.
Reopening for tracking CLI side
The CLI side is merged, but we also want to have private image support through using a Service Account already associated with pull secrets (keeping it open, then).
Any docs for how to use this?
Hello @cjancsar, there is no documentation right now, as these features are still experimental. But you will find more explanation in this PR.
(Experimental) When targeting Kubernetes, add support for `x-pull-secret: some-pull-secret` in compose-file service configs.
(Experimental) When targeting Kubernetes, add support for `x-pull-policy: <Never|Always|IfNotPresent>` in compose-file service configs.
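For example, a minimal compose file using both extension fields might look like this (the image and secret names are placeholders, not from the PR):

```yaml
version: '3.7'
services:
  web:
    image: myregistry.example.com/team/web:1.0   # private image (placeholder)
    x-pull-secret: regcred          # name of an existing k8s pull secret
    x-pull-policy: IfNotPresent     # Never | Always | IfNotPresent
```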
I've added `x-pull-secret: regcred` to my compose file in the service(s) that needed it, but that doesn't seem to have any effect. I created the `regcred` secret with the following command:
```
kubectl create secret generic regcred --from-file=.dockerconfigjson=~/.docker/config.json --type=kubernetes.io/dockerconfigjson
```
It definitely exists; I can `kubectl describe` the secret, but my services fail to deploy because they aren't authenticated to pull the image. Is there something that needs to be done on this project to get this working?
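For what it's worth, a working pull secret generally decodes to something like the following (registry and auth values are placeholders): the type must be `kubernetes.io/dockerconfigjson` and the data key must be `.dockerconfigjson`, otherwise kubelet won't use it.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded JSON of the form:
  # {"auths":{"myregistry.example.com":{"auth":"<base64 user:password>"}}}
  .dockerconfigjson: eyJhdXRocyI6ey4uLn19   # placeholder value
```

Checking `kubectl get secret regcred -o yaml` against this shape is a quick way to rule out a malformed secret.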
Alternatively, are there any workarounds that one can do right now to get their private images into a k8s cluster using a compose file?
I have specified

```yaml
version: '3.7'
services:
  myservice:
    x-pull-secret: mysecret
    [...]
secrets:
  mysecret:
    external: true
```
and checked client and server versions:
```
$ docker version
Client: Docker Engine - Community
 Version:           19.03.8
 API version:       1.40
 [...]
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.8
  API version:      1.40 (minimum version 1.12)
  [...]
  Experimental:     true
 Kubernetes:
  Version:          v1.15.10
  StackAPI:         v1alpha3
```
yet, when running `docker stack deploy -c docker-compose.yml myapp`, no `imagePullSecrets` specification seems to end up in the AKS (v1.15.10) Kubernetes pod specifications. The `compose-api-XXX` pod has the label `com.docker.image-tag: v0.5.0-alpha1`, and `mysecret` already exists in the `default` namespace where the app is deployed.
What more do I need to do, and how can I debug the problem?
Depends on #26. We need to add a field on ServiceConfig for referencing a PullSecret, and convert it on reconciliation. The flow would be:

```
$ docker stack deploy --with-registry-auth
```

- docker cli: get the registry auth info and populate a k8s secret named `<service>.pull-secret`
- docker cli: populate the PullSecret field in the service config with the secret name
- docker cli: post the stack
- compose controller: convert the service config PullSecret field into the PullSecret field of a ContainerSpec
- kubernetes: is capable of using the pull secret to make kubelet pull the image
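Under the steps above, the pods generated by the controller would end up looking roughly like this (names are illustrative; in stock Kubernetes the relevant field is `imagePullSecrets` on the pod spec):

```yaml
# Roughly what the controller's output would contain for a service
# named "web" (illustrative names only, not actual controller output).
apiVersion: v1
kind: Pod
spec:
  imagePullSecrets:
    - name: web.pull-secret   # secret created by the CLI in the first step
  containers:
    - name: web
      image: myregistry.example.com/team/web:1.0
```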