jessica-hofmeister opened 1 year ago
@jessica-hofmeister I think this is a limitation of the default Magnum policy: it disallows users in the same project from pulling the KUBECONFIG file.
It is a bit silly. Do you think we should make the change to allow users in the same project to view project configs?
Hi @mnaser, thanks for looking into this! Ideally, any user within a project would be able to pull the kubeconfig file, especially since any user in a project can already take other actions on the cluster (resize, delete, etc.) from the GUI. We do have workarounds, though, so this is not the most urgent item in the world :)
@mnaser I personally think the current behavior is very consistent with VMs: you can take all actions with the OpenStack APIs but cannot retrieve the injected private key. So the owner, or rather the creator, should be the only one who can grant access to VMs or Kubernetes clusters. I think putting the kubeconfig in a safe place and sharing it with the users is a good approach, or, even better, using OIDC to authorize users to the cluster.
@fnpanic @jessica-hofmeister The mcapi driver already supports OIDC: https://github.com/vexxhost/magnum-cluster-api/blob/main/docs/user/labels.md#oidc. You can set the labels above with your IdP information and then access the kube-api of workload clusters using OIDC. If this can be the solution for this issue, I will be happy.
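For reference, a rough sketch of what this could look like at cluster-creation time. The `oidc_*` label names follow the magnum-cluster-api docs linked above; the template name, issuer URL, client ID, and claim names below are placeholder values that would need to match your own IdP:

```shell
# Sketch only: create a cluster with OIDC enabled on the workload
# kube-apiserver. All values here are hypothetical examples.
openstack coe cluster create oidc-demo \
  --cluster-template my-template \
  --merge-labels \
  --labels oidc_issuer_url=https://idp.example.com/realms/demo \
  --labels oidc_client_id=kubernetes \
  --labels oidc_username_claim=email \
  --labels oidc_groups_claim=groups
```

Each project member then authenticates to the workload cluster with their own OIDC token (e.g. via a kubectl OIDC auth plugin) rather than sharing a single admin kubeconfig.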
@okozachenko1203 I totally agree. This is the cleanest and best way to do this.
Upon further discussion, the best way to handle this would be the following approach: enable `keystone-auth` on clusters moving forward, and set up a mapping where `reader` in the OpenStack project maps to `view` in Kubernetes, and `member` and `admin` both map to `cluster-admin`. With that, it gives control to the creator of the cluster to manage things, and access can be revoked simply by removing a user from the project.
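As a rough illustration of the mapping half of that plan, assuming the keystone-auth webhook surfaces OpenStack roles as Kubernetes groups (an assumption here, not something confirmed in this thread), the bindings could be created with hypothetical names like:

```shell
# Sketch only: bind OpenStack roles (assumed to be exposed as groups
# by the keystone-auth webhook) to built-in Kubernetes ClusterRoles.
kubectl create clusterrolebinding keystone-reader-view \
  --clusterrole=view --group=reader
kubectl create clusterrolebinding keystone-member-admin \
  --clusterrole=cluster-admin --group=member
kubectl create clusterrolebinding keystone-admin-admin \
  --clusterrole=cluster-admin --group=admin
```

With this shape, revoking someone's OpenStack role in the project immediately changes what they can do in the cluster, without rotating any shared credentials.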
When I try to get a context file for a cluster that my peer has made, I get the below error instead. Is there a way I should be able to get a context file for any cluster within a project?