OSC / ondemand

Supercomputing. Seamlessly. Open, Interactive HPC Via the Web
https://openondemand.org/
MIT License

Kubernetes integration with OIDC #3741

Open yhal-nesi opened 3 weeks ago

yhal-nesi commented 3 weeks ago

So, I am not sure whether this is a bug or I just don't understand how the whole thing is supposed to work.

The credential-setting code passes the OnDemand refresh token and access token into the user's kubeconfig. When the access token expires, new pods cannot be launched until it is refreshed, but refreshing the token also requires the OOD client_id and client secret. That makes the secret readable by any user in their own $HOME/.kube/config.

This looks like a security issue to me?

...
users:
- name: kand204
  user:
    auth-provider:
      config:
        client-id: ondemand-client
        client-secret: *****
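
For context, kubectl's (now-deprecated) oidc auth-provider has to present both the client ID and the client secret whenever it refreshes the token, which is why both end up in the kubeconfig above. A sketch of the refresh request body it builds (all values here are hypothetical):

```shell
# Hypothetical values; the real ones come from the kubeconfig entry above.
IDP_ISSUER_URL="https://keycloak.example.org/realms/ood"
CLIENT_ID="ondemand-client"
CLIENT_SECRET="redacted-secret"
REFRESH_TOKEN="redacted-refresh-token"

# Form body of the refresh_token grant; a confidential client must
# include its client_secret or the IdP rejects the refresh request.
DATA="client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}"
DATA="${DATA}&grant_type=refresh_token&refresh_token=${REFRESH_TOKEN}"
echo "$DATA"
# curl -s -X POST -d "$DATA" "$IDP_ISSUER_URL/protocol/openid-connect/token"
```

So anything that can read the kubeconfig can replay this request and mint fresh tokens for the OOD client.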

We modified that script for our use case to perform a token exchange at the beginning and request a token for a dedicated Kubernetes client. This is specific to Keycloak; I don't think all IdPs support the token-exchange grant.

#!/bin/bash
...

# here we get the kubernetes client refresh token instead of the OOD one
KUBERNETES_RESPONSE=$(curl -s -X POST \
        -d "client_id=$CLIENT_ID" \
        -d "client_secret=$CLIENT_SECRET" \
        --data-urlencode \
        "grant_type=urn:ietf:params:oauth:grant-type:token-exchange" \
        -d "subject_token=${OOD_OIDC_ACCESS_TOKEN}" \
        --data-urlencode \
        "subject_token_type=urn:ietf:params:oauth:token-type:access_token" \
        "$IDP_ISSUER_URL/protocol/openid-connect/token")

K8S_ACCESS_TOKEN=$(echo "$KUBERNETES_RESPONSE" | jq -r .access_token)
K8S_REFRESH_TOKEN=$(echo "$KUBERNETES_RESPONSE" | jq -r .refresh_token)

# we pass the ACCESS_TOKEN into the id-token arg; that's OK, it works and refreshes.
sudo -u "$ONDEMAND_USERNAME" kubectl config set-credentials "$K8S_USERNAME" \
   --auth-provider=oidc \
   --auth-provider-arg=idp-issuer-url="$IDP_ISSUER_URL" \
   --auth-provider-arg=client-id="$CLIENT_ID" \
   --auth-provider-arg=client-secret="$CLIENT_SECRET" \
   --auth-provider-arg=refresh-token="$K8S_REFRESH_TOKEN" \
   --auth-provider-arg=id-token="$K8S_ACCESS_TOKEN"

This way a user still ends up with a secret in their kubeconfig, but its scope is limited to the Kubernetes client.

treydock commented 3 weeks ago

It looks like you're still passing CLIENT_SECRET into the client-secret arg, so it will end up in .kube/config. If the issue is the exposure of the client secret, I don't think there is a way to avoid that: you either expose the client secret of the OnDemand instance or the client secret of the Kubernetes cluster. At OSC we use a client audience to allow the OnDemand tokens to be valid for the Kubernetes client.

yhal-nesi commented 2 weeks ago

Yep, I tried using a public client for Kubernetes, but Keycloak token exchange doesn't work with public clients. At least a dedicated Kubernetes client has a more limited scope.

Ideally the refresh would be handled by ondemand itself.

treydock commented 2 weeks ago

Not sure what a public client refers to. Could you elaborate? Is that some special kind of OIDC client in Keycloak?

I'm not sure how OnDemand refreshing tokens would prevent the requirement to expose the client secret. The client secret is needed to get the initial token so whether OnDemand does the refresh or not, that wouldn't change the need for the client secret.

OnDemand does not run a dedicated daemon of any kind; all processes run as the user and are terminated once the user stops using OnDemand. Any kind of background token refresh wouldn't really make sense, since OnDemand's PUN processes aren't long-running daemons.

yhal-nesi commented 2 weeks ago

A public client is an OIDC client without a secret: https://datatracker.ietf.org/doc/html/rfc6749#section-2.1. The Kubernetes maintainers seem to favor using a public client: https://github.com/kubernetes/kubernetes/issues/37822#issuecomment-264198821. Unfortunately, Keycloak doesn't allow performing a token exchange on a public client using a token from a confidential client, so the OOD client would also need to be public, and I don't know whether that makes sense or would even work.
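
For illustration, a refresh against a public client would carry no secret at all; only the client_id goes over the wire, which is safe to store in a kubeconfig. A sketch with hypothetical values:

```shell
# Public client: registered at the IdP without a secret (RFC 6749 section 2.1).
CLIENT_ID="kubernetes"                 # hypothetical public client name
REFRESH_TOKEN="redacted-refresh-token"

# The form body omits client_secret entirely; the client is identified
# only by its client_id, so nothing sensitive lands in ~/.kube/config.
DATA="client_id=${CLIENT_ID}&grant_type=refresh_token&refresh_token=${REFRESH_TOKEN}"
echo "$DATA"
# curl -s -X POST -d "$DATA" "$IDP_ISSUER_URL/protocol/openid-connect/token"
```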

> The client secret is needed to get the initial token so whether OnDemand does the refresh or not, that wouldn't change the need for the client secret.

Sure, but the issue is about having that secret available in the user's home directory. If token refresh were handled by the user running Apache, users would not have access to the client secret. I don't think set-k8s-creds.sh is run by the same user running the PUN, or is it?

treydock commented 2 weeks ago

The hook executes set-k8s-creds.sh as Apache and uses sudo to run the commands as root before the PUN launches.

The issue with refresh for PUN is that PUN isn't guaranteed to be running. If a user logs in and goes idle their processes will be killed within an hour or 2 and then there is no process available to refresh the token so the PUN launch hook has to be used to generate a new token.

OSC uses OIDC "Audience" so that the OnDemand tokens are valid for Kubernetes. I haven't tested it but that might mean you could setup an audience and expose Kubernetes secret and use the OnDemand token. Right now what we do is use the OnDemand secret and token to access Kubernetes which uses a different client and secret that isn't exposed.
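
As an illustration (not OSC's actual setup), you can check whether an audience mapper took effect by decoding the token's payload and looking at the aud claim. This sketch fabricates a JWT-shaped token with made-up claims; note real JWTs use unpadded base64url, so a real decoder has to re-pad before `base64 -d` will accept the segment:

```shell
# Fabricate a JWT-shaped token with a known payload (hypothetical claims).
PAYLOAD_JSON='{"aud":["kubernetes"],"azp":"ondemand-client"}'
PAYLOAD_B64=$(printf '%s' "$PAYLOAD_JSON" | base64 | tr -d '\n')
TOKEN="eyJhbGciOiJSUzI1NiJ9.${PAYLOAD_B64}.signature"

# Extract the middle segment and decode it; with an audience mapper
# configured, the Kubernetes client should appear in the aud claim.
CLAIMS=$(printf '%s' "$TOKEN" | cut -d. -f2 | base64 -d)
echo "$CLAIMS"
```

If "kubernetes" shows up in aud, the API server will accept the token even though it was issued to the OnDemand client.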

In your original example I still see you passing CLIENT_SECRET to the kubectl command, so I'm not sure how that token exchange avoids the exposure of client secret, or is the issue which client secret is exposed?

yhal-nesi commented 2 weeks ago

> The issue with refresh for PUN is that PUN isn't guaranteed to be running. If a user logs in and goes idle their processes will be killed within an hour or 2 and then there is no process available to refresh the token so the PUN launch hook has to be used to generate a new token.

The token isn't needed unless the PUN is running, though?

> OSC uses OIDC "Audience" so that the OnDemand tokens are valid for Kubernetes. I haven't tested it but that might mean you could setup an audience and expose Kubernetes secret and use the OnDemand token. Right now what we do is use the OnDemand secret and token to access Kubernetes which uses a different client and secret that isn't exposed.

I don't think this is possible; you need the client secret of the client that issued a token in order to refresh it.

> In your original example I still see you passing CLIENT_SECRET to the kubectl command, so I'm not sure how that token exchange avoids the exposure of client secret, or is the issue which client secret is exposed?

Yep, ideally we would make the Kubernetes client public, but since that isn't possible with token exchange, the best we can do is expose only the Kubernetes client secret. OnDemand tokens can potentially be used for other things besides Kubernetes access, such as Globus, so I think this is the lesser risk.

treydock commented 2 weeks ago

The hooks distributed with OnDemand can be modified as needed for a site's particular needs so you should be able to use your current changes to files like /opt/ood/hooks/k8s-bootstrap/set-k8s-creds.sh without changing OnDemand itself. The current method utilizes the ID token and refresh token that are part of the OOD session. I don't think we'd be able to use different behavior that utilizes things like token exchange since that may be too specific to Keycloak's capabilities.

This might be a good documentation update, offering sites the alternative solution you've come up with. That would likely belong here: https://github.com/OSC/ood-documentation/blob/latest/source/installation/resource-manager/kubernetes.rst