cameronclaero opened 1 year ago
@cameronclaero Did you ever get clarity on this? For k8s, this is the best description I have found.
It is a bit of a concern for me as I am using k3s. There is not much information that I can find besides replacing the self-signed CA / certs with your own. I personally would rather leave k3s to do its own cert management and just leverage this additional certificate for the purposes of OIDC with Azure.
Digging into the default behavior of k3s, those flags are already set and use certificates the installer creates and rotates automatically. I am not clear whether manually setting these flags will override the default certs or use both the default certs and the one we create manually (the latter is obviously preferred).

For example, `service-account-signing-key-file` is already set to `service.currentkey`, which is generated during install.

This might also be something specific to k3s, although it is a distribution rather than a fork, so I would expect the behavior of setting these flags to be the same. The only difference I see from the default behavior is the file locations: upstream Kubernetes uses `/etc/kubernetes/pki/` whereas k3s uses `/var/lib/rancher/k3s/server/tls/`.
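For what it's worth, k3s can pass extra apiserver arguments through its config file, which should let you add the OIDC flags without touching its managed certs. A minimal sketch, assuming a stock k3s install; the issuer URL is a placeholder for your own storage account endpoint, and whether this overrides or coexists with the k3s defaults is exactly the open question above:

```shell
# /etc/rancher/k3s/config.yaml (sketch): k3s passes each kube-apiserver-arg
# entry straight through to the apiserver, so this should behave the same
# as setting the flags on upstream Kubernetes.
cat <<'EOF' | sudo tee /etc/rancher/k3s/config.yaml
kube-apiserver-arg:
  - "service-account-issuer=https://<storage-account>.blob.core.windows.net/<container>/"
  - "service-account-jwks-uri=https://<storage-account>.blob.core.windows.net/<container>/openid/v1/jwks"
EOF
sudo systemctl restart k3s
```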
After several days hacking away, today I successfully installed it on minikube. For minikube I only needed two apiserver flags: `service-account-issuer` and `service-account-jwks-uri`. I kept the certs minikube generates instead of replacing them. To get the JWKS I used `kubectl get --raw`. For anyone interested in more detail, it's here as a Pulumi program.
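In case it helps others, the steps above can be sketched as plain CLI commands; the issuer URL is a placeholder for your own storage account endpoint:

```shell
# Start minikube with only the two extra apiserver flags; minikube's own
# cert management is left untouched.
minikube start \
  --extra-config=apiserver.service-account-issuer=https://<storage-account>.blob.core.windows.net/<container>/ \
  --extra-config=apiserver.service-account-jwks-uri=https://<storage-account>.blob.core.windows.net/<container>/openid/v1/jwks

# Dump the JWKS (the public half of minikube's signing key) so it can be
# uploaded to the storage account serving as the issuer.
kubectl get --raw /openid/v1/jwks > jwks.json
```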
Agree this documentation is not well thought out and needs a lot of work. For example, the azwi CLI flags are plain wrong in the self-managed part; `--output-file` is the correct flag. It also seems to be missing a lot of context, and it's not quite clear what to do: when using the endpoint as in the example, I assume the workloads just straight up crash.
I found this example in the help pages on how to do it using Kind. Maybe that will help?
> After several days hacking away, today I successfully installed it on minikube. For minikube I only needed two apiserver flags: `service-account-issuer` and `service-account-jwks-uri`. I kept the certs minikube generates instead of replacing them. To get the JWKS I used `kubectl get --raw`. For anyone interested in more detail, it's here as a Pulumi program.
But then, what is the public key you put into the list in the JWKS file on the storage account? Azure AD will fail to validate the k8s-issued tokens as those will be signed by a different private key.
> But then, what is the public key you put into the list in the JWKS file on the storage account?

The public key you get from `kubectl get --raw`.

> Azure AD will fail to validate the k8s-issued tokens as those will be signed by a different private key.

It validates perfectly, as the tokens are signed by the private key matching the public key.
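Concretely, the public key material never has to be assembled by hand: the JWKS the apiserver serves already contains the public half of whatever signing key the cluster actually uses. A sketch of publishing it, assuming the Azure CLI is available; the account and container names are placeholders:

```shell
# Grab the discovery document and key set straight from the apiserver;
# these already match the private key the cluster signs tokens with.
kubectl get --raw /.well-known/openid-configuration > openid-configuration
kubectl get --raw /openid/v1/jwks > jwks

# Upload both to the storage account that serves as the issuer, at the
# paths Azure AD will resolve from the issuer URL.
az storage blob upload --account-name <storage-account> --container-name <container> \
  --name .well-known/openid-configuration --file openid-configuration
az storage blob upload --account-name <storage-account> --container-name <container> \
  --name openid/v1/jwks --file jwks
```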
I did an investigation on this a while back and managed to get it to work in Rancher Desktop; this gist is a PowerShell 7 script that automates the process of enabling it for Rancher Desktop. The script is for Windows, but it should not be difficult to translate to other shells.

Since Rancher Desktop runs K3s, I think the script reflects the changes needed for K3s as well (on top of the default K3s configuration).
I am attempting to get this set up on a private Kubernetes cluster, and have got to this part in the docs:
https://azure.github.io/azure-workload-identity/docs/installation/self-managed-clusters/configurations.html
It lists the configuration flags, but does not indicate what they should be set to (I am assuming the private/public keys that were generated previously). It would be helpful to show example values, to make sure the right values are being set.
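As a rough illustration of what the docs could show, here is the shape of the values; this is a sketch, where the file paths and the issuer URL are placeholders for the keys and URL generated in the earlier steps:

```shell
# kube-apiserver flags (sketch): the signing key is the private key generated
# earlier, and the key file is its public counterpart; the issuer must match
# the URL where the discovery document and JWKS were uploaded.
kube-apiserver \
  --service-account-issuer=https://<storage-account>.blob.core.windows.net/<container>/ \
  --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \
  --service-account-key-file=/etc/kubernetes/pki/sa.pub
```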