Closed — mvladev closed this 2 years ago
@mvladev what's the plan here?
We still need the `{{shoot-name}}.kubeconfig`, as it's used by admins to grant privileges to the end-users.

The plan is the following: if OIDC is enabled, add an additional context and user with an `-oidc` suffix to the existing `{{shoot-name}}.kubeconfig`:
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://api.foo.bar:443
  name: shoot--default--preset
contexts:
- name: shoot--default--preset
  context:
    cluster: shoot--default--preset
    user: shoot--default--preset
- name: shoot--default--preset-oidc
  context:
    cluster: shoot--default--preset
    user: shoot--default--preset-oidc
current-context: shoot--default--preset
users:
- name: shoot--default--preset
  user:
    client-certificate-data: abcd1234
    client-key-data: abcd1234
- name: shoot--default--preset-basic-auth
  user:
    username: admin
    password: abcd1234
- name: shoot--default--preset-oidc
  user:
    auth-provider:
      name: oidc
      config:
        client-id: cluster-preset
        client-secret: oidc-client-secret
        idp-issuer-url: https://auth.foo.bar
        extra-scopes: email,offline_access,profile
        foo: bar
```
If OIDC is enabled, then a `{{shoot-name}}.oidc-kubeconfig` secret is created containing:
```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://api.foo.bar:443
  name: shoot--default--preset
contexts:
- context:
    cluster: shoot--default--preset
    user: shoot--default--preset
  name: shoot--default--preset
current-context: shoot--default--preset
users:
- name: shoot--default--preset
  user:
    auth-provider:
      name: oidc
      config:
        client-id: cluster-preset
        client-secret: oidc-client-secret
        idp-issuer-url: https://auth.foo.bar
        extra-scopes: email,offline_access,profile
        foo: bar
```
This additional secret is then copied to the garden cluster by the `SyncShootCredentialsToGarden` function. I think that should be sufficient?
/touch
/touch @mvladev do you still plan to implement this?
The more I think about this, my gut feeling is that we should have this as a `Shoot` subresource - e.g. `shoots/kubeconfig` - that accepts a request for a kubeconfig:
```yaml
apiVersion: authentication.gardener.cloud/v1alpha1
kind: KubeConfigRequest
spec:
  type: OpenIDConnect
```
and the API server then responds with:

```yaml
apiVersion: authentication.gardener.cloud/v1alpha1
kind: KubeConfigRequest
spec:
  type: OpenIDConnect
status:
  kubeconfig: | # this is base64-encoded, but decoded for the sample
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: base64-encoded-certificate
        server: https://api.my-shoot-cluster
      name: shoot-1234
    contexts:
    - context:
        cluster: shoot-1234
        user: shoot-1234
      name: shoot-1234
    current-context: shoot-1234
    users:
    - name: shoot-1234
      user:
        exec:
          apiVersion: client.authentication.k8s.io/v1beta1
          command: kubectl
          args:
          - oidc-login
          - get-token
          - --oidc-issuer-url=https://my-issuer.example.com
          - --oidc-client-id=some-client
          - --oidc-client-secret=some-secret
          - --oidc-extra-scope=email
          - --oidc-extra-scope=profile
          - --oidc-extra-scope=groups
          - --grant-type=auto
```
Or, if the cluster-admin kubeconfig is needed:

```yaml
apiVersion: authentication.gardener.cloud/v1alpha1
kind: KubeConfigRequest
spec:
  type: ClusterAdmin
status:
  kubeconfig: | # this is base64-encoded, but decoded for the sample
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority-data: base64-encoded-certificate
        server: https://api.my-shoot-cluster
      name: shoot-1234
    contexts:
    - context:
        cluster: shoot-1234
        user: shoot-1234
      name: shoot-1234
    current-context: shoot-1234
    users:
    - name: shoot-1234
      user:
        token: fkjj12 # cluster-admin token
```
These resources won't be persisted in the system, but fetched and constructed from the `ShootState` resource by the API server. Permissions to create a `KubeConfigRequest` of type `ClusterAdmin` should be granted only to users that are allowed to perform the `administrate` verb on `shoots/kubeconfig`.
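If authorization were wired up this way, it could look roughly like the RBAC sketch below. Note that the role name and the custom `administrate` verb on the proposed `shoots/kubeconfig` subresource are assumptions for illustration, not an existing Gardener API:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: shoot-kubeconfig-administrator   # hypothetical role name
rules:
- apiGroups: ["core.gardener.cloud"]
  resources: ["shoots/kubeconfig"]       # the proposed subresource
  verbs: ["administrate"]                # custom verb gating ClusterAdmin requests
```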
@vlerenc can you have a look into https://github.com/gardener/gardener/issues/1433#issuecomment-790654980 ? With the subresource approach we do not duplicate the information already stored in the `ShootState` and can easily apply authorization based on the user making the request.
@mvladev I guess you asked me specifically because of:

> I am digressing, but there are many topics that are discussed now in parallel and they are all (more or less) related.

Let's focus on your proposal at hand:

- `OpenIDConnect`s for the Gardener OpenID provider and the user-defined cluster OpenID provider; project admins can download `kubeconfig`s for either and all other roles only for the latter (ugly, because, contrary to the user's wish, we would always "inject" the web hook authenticator, which is an availability- and security-relevant component and possibly a danger/threat)
- `ClusterRoleBinding` for `cluster-admin`s as specified in the `shoot` spec (ugly, because it feels clumsy as project members would not be authorised by default, but then again this may be intentional)
- `ClusterRoleBinding` that authorises all project admins (ugly, because subjects in Gardener's OpenID provider may not be the same as in the cluster's user-defined OpenID provider)
- Where is the `oidc-client-secret` in the generated `kubeconfig` coming from? We don't have/didn't need to have it yet in https://github.com/gardener/gardener/blob/master/example/90-shoot.yaml#L140
- I do not like the static token that much, because we said we would like to offer clusters where users can opt out of basic auth and static tokens, and now this one would require static tokens. Then again, we plan to implement on-demand rotation and we could/would then include the static tokens into rotation. However, it's still surprising to end users if they didn't explicitly ask for it, especially if they implemented their own rotation and want to explicitly forbid it.
The static token is the current implementation, which can be changed in the future when those other ways to authenticate can be used.
> Where is the `oidc-client-secret` in the generated `kubeconfig` coming from? We don't have/didn't need to have it yet in https://github.com/gardener/gardener/blob/master/example/90-shoot.yaml#L140

I'm also thinking that it would be possible to send this information when making the `KubeConfigRequest`.
I haven't updated the example documentation, but there is additional information that can be passed when the cluster is created - https://gardener.cloud/documentation/references/core/#core.gardener.cloud/v1beta1.OpenIDConnectClientAuthentication - the secret is stored there.
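For reference, the linked `OpenIDConnectClientAuthentication` section lives under the shoot's `kubeAPIServer.oidcConfig`. A sketch of the relevant snippet, reusing the example values from earlier in this thread:

```yaml
spec:
  kubernetes:
    kubeAPIServer:
      oidcConfig:
        issuerURL: https://auth.foo.bar
        clientID: cluster-preset
        clientAuthentication:
          secret: oidc-client-secret  # the client secret later embedded in the kubeconfig
          extraConfig:
            foo: bar
```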
> The static token is the current implementation, which can be changed in the future when those other ways to authenticate can be used.

Maybe, but then again, implementing it this way and throwing it away soon after would be odd/a waste.
> I'm also thinking that it would be possible to send this information when making the `KubeConfigRequest`.

@mvladev The "I'm also thinking" part is your thought/should not be in quotes citing me.
Yes, if we go down that path, that's a nice idea. Then it's not our responsibility. We just take and insert it. Whether it's correct is not our concern.
Then again, it makes the interactions more clumsy in the dashboard ( :-/ ), so taking it in as configuration seems more reasonable, hopefully.
Personally, I do not like very much the idea of sending a custom object to the subresource. Can we simplify it to do just one of the following?

1. Have 2 different subresources - one for the OIDC kubeconfig, `shoot/oidcKubeconfig`, and one for the static kubeconfig, `shoot/staticKubeconfig`.
2. Not persist the `KubeConfigRequest`s, but create them on the fly, similar to the `SelfSubjectRulesReview`:
```shell
kubectl create -f - -o yaml << EOF
apiVersion: authentication.gardener.cloud/v1alpha1
kind: KubeConfigRequest
spec:
  type: ClusterAdmin
  shootRef:
    name: crazy-botany
    namespace: garden-dev
EOF
```
I lean toward the first option, as the role(binding)s can be defined per subresource. In the other case, additional authorization logic is required to consider the `kubeConfigRequest.spec.type` if we want to provide fine-grained access control to the kubeconfigs.
> Have 2 different subresources - one for the OIDC kubeconfig, `shoot/oidcKubeconfig`, and one for the static kubeconfig, `shoot/staticKubeconfig`.

I think this might be the better approach, as we might need different semantics for both (see below).
> Maybe, but then again, implementing it this way and throwing it away soon after would be odd/a waste.

Now that the API server can read the Shoot state, it should be quite easy to generate a short-lived certificate - e.g. for 30 minutes:

```yaml
apiVersion: authentication.gardener.cloud/v1alpha1
kind: AdminKubeConfigRequest
spec:
  expirationSeconds: 1800
```

The API server can now generate a client certificate, sign it with the client CA, and return it in the kubeconfig. The certificate would be valid only for 30 minutes.
> The API server can now generate a client certificate, sign it with the client CA, and return it in the kubeconfig. The certificate would be valid only for 30 minutes.

@mvladev Ha, I like that. No static tokens, just a short-lived cert? Very nice!
@mvladev What's the plan with this issue now that we have merged the `shoots/adminkubeconfig` subresource feature? Do you still plan to generate an OIDC kubeconfig if the `Shoot` has OIDC settings configured?

friendly reminder @mvladev
OIDC configuration in Gardener shoot clusters is owned by the shoot cluster owner. Gardener should not mess with a provider that is not controlled by it.
What would you like to be added:
When OIDC is specified in the Shoot cluster, an OIDC kubeconfig should be generated. Given:

it should produce the following kubeconfig

This has a dependency on #1394.