sky-uk / osprey

Kubernetes OIDC CLI login
BSD 3-Clause "New" or "Revised" License

Allow osprey client to read CA cert directly from API server #69

Closed · howardburgess closed this 2 years ago

howardburgess commented 2 years ago

Add an optional api-server: field to the Osprey client config file. If this field is set, the API server CA certificate is fetched directly from the API server itself rather than relying on a deployment of Osprey server to serve the CA.
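
For illustration, a target entry using the new field might look roughly like this; apart from api-server: itself, the field names and nesting are placeholders rather than the exact Osprey config schema:

targets:
  my-cluster:
    api-server: https://APISERVER
    aliases: [dev]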

To use this, the cluster must have a ConfigMap called kube-root-ca.crt in the kube-public namespace that is readable by the system:anonymous user. The following should work:

curl -k https://APISERVER/api/v1/namespaces/kube-public/configmaps/kube-root-ca.crt
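
The certificate itself lives under the ca.crt key of that ConfigMap, so extracting it comes down to something like the following (the jq pipeline is just an illustration of the decoding step, not what the client literally runs):

curl -sk https://APISERVER/api/v1/namespaces/kube-public/configmaps/kube-root-ca.crt | jq -r '.data["ca.crt"]' > ca.crt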

This ConfigMap is created automatically by the RootCAConfigMap feature gate, which was introduced in Kubernetes 1.13 and became enabled by default in v1.20 (see the v1.20 changelog).
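
Not every cluster grants anonymous read access to this ConfigMap, so it is worth checking on a given cluster; assuming your own credentials are allowed to impersonate, something like this should confirm it:

kubectl auth can-i get configmap/kube-root-ca.crt -n kube-public --as=system:anonymous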

Note to reviewers: there are some tidy-ups in separate commits, so it might be best to review them one-by-one.

aecay commented 2 years ago

There's a competing "standard" which places the cluster CA in a ConfigMap called cluster-info in the kube-public namespace. This is used by the kubeadm tool (and was the subject of #67). Is it worth trying to support both? I think we would like to migrate our existing clusters over to some method like this rather than needing to stash the CA info in a ConfigMap for osprey to read -- but we can easily use the new standard instead (e.g. by enabling the feature gate in our deploys). But for completeness, I thought I'd ask the question...
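
(For reference, that kubeadm ConfigMap can be fetched the same way, but the CA is embedded as base64 certificate-authority-data inside a kubeconfig stored under its kubeconfig data key, so a client would need an extra decoding step. Roughly:)

curl -sk https://APISERVER/api/v1/namespaces/kube-public/configmaps/cluster-info | jq -r '.data.kubeconfig'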

howardburgess commented 2 years ago

Thanks @aecay for the comments and review. I actually saw your PR #67 during our investigation of OIDC on GKE, but then I noticed the CA ConfigMap in the v1.20 clusters we are using, where the RootCAConfigMap feature is enabled by default. I implemented this to check it was feasible, but raised it before chatting to you about how it compares with #67, so apologies for that.

I don't know whether GKE uses kubeadm or custom provisioning, but it doesn't have the cluster-info ConfigMap, so I guess not.

❯ kubectl -n kube-public get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      18h

What do you think about the two solutions? We could support both, but given that the RootCAConfigMap feature gate has been available since Kubernetes 1.13, I'm thinking maybe we should try enabling it in our AWS clusters to check. It'd be good if we could avoid running osprey server simply to serve the CA cert.
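
If it helps, my understanding is that on pre-1.20 control planes this is roughly a matter of adding the gate to the kube-controller-manager flags (it drives the root-ca-cert-publisher controller); the exact mechanism obviously depends on how the control plane is provisioned:

kube-controller-manager --feature-gates=RootCAConfigMap=true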

I see you have some nice refactorings in #67 so whatever we decide it'd be good not to lose those.

aecay commented 2 years ago

What do you think about the two solutions?

My feeling is that for our own internal use, we can probably just use the feature gate and forget about the kubeadm convention. I guess the question is whether we see supporting kubeadm as a nice feature to have for open source (and whether it's feasible for us to add it if we don't have any kubeadm clusters...). I see it as nice to have, but not really feasible for us to add, test, and support. [When I was working on my PR, I didn't know about the feature gate and had gotten the impression from the core-aws team that we wanted to move towards kubeadm compatibility. I dunno if that's still a goal now that the alternative feature gate is on the table.]

It'd be good if we could avoid running osprey server simply to serve the CA cert.

Agree!

I see you have some nice refactorings in #67 so whatever we decide it'd be good not to lose those.

Thanks :slightly_smiling_face: I think it would be best to land this PR since it ties directly into your GKE work. Then I can redo the refactorings (minus the cluster-info stuff) on top of master. [I have a bad habit of rolling refactorings into feature branches...maybe this will finally learn me not to do that :stuck_out_tongue_closed_eyes:]

howardburgess commented 2 years ago

The release phase failed due to Bintray closing down. I've raised #70 to get the build passing so we can release v2.5.0.