Closed: pmorie closed this pull request 5 years ago
For the record, and in case it's not clear: there will be an accompanying controller that updates the status of ClusterConnection. Once we agree on the API I will add that to this PR.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: pmorie, quinton-hoole-2
The full list of commands accepted by this bot can be found here.
The pull request process is described here
@quinton-hoole-2: changing LGTM is restricted to assignees, and only kubernetes/cluster-registry repo collaborators may be assigned issues.
It would be useful to have requirements or use-cases for this feature, to know the scope of what it is trying to solve. Since I'm not clear what this API is attempting to model or accomplish, it's hard for me to give feedback on how well it's succeeding.
> It would be useful to have requirements or use-cases for this feature, to know the scope of what it is trying to solve. Since I'm not clear what this API is attempting to model or accomplish, it's hard for me to give feedback on how well it's succeeding.
Good feedback @jonmoter . Here is an excerpt from previous meeting notes where it was discussed, to provide further context:
[perotinus] Moving clusterregistry.Cluster to beta; minor update to cluster registry.
Quinton: How do people use cluster registry without auth-info (which seems to currently be an unused field in cluster-registry).
Paul: The field is just to indicate what/how the auth should be done. Personally does not feel that the cluster object needs to have the auth info field.
Quinton: How does cluster registry become useful without the authentication information for a user who discovers the clusters in cluster registry?
Jonathan: The initial idea of the auth field in the cluster registry was to provide enough info for a user to figure out how to get the auth info for a cluster.
Quinton: Ideally we should be able to do away with the FederatedCluster resource in federation and enable cluster registry to provide more useful information rather than just the API endpoint of the cluster.
Need to have more discussions on this before further moving it to beta.
What wasn't captured in the notes, but from my memory, is that perotinus and others made points that:
Hence it was decided to explore what would need to be added to the cluster registry w.r.t. cluster credentials to obviate the need for the addon in Federation-v2, hopefully to pave the way for other users.
Sorry for not taking a look at this sooner!
Some overall thoughts:
- Using a `kubeconfig` like this concerns me: AFAICT it's meant to be for one user on one machine. I understand the appeal of making it easier to bootstrap with an existing file, but the number of fields that are not needed in this object, some of which are duplicated with the Cluster object itself, give me pause.
- A `kubeconfig` file can farm out to custom commands to get tokens, which may not work from within a cluster.

@jonmoter @perotinus @quinton-hoole I pushed changes to address your review comments.
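To illustrate the exec-plugin concern, here is a hedged sketch (all names hypothetical): a kubeconfig like the one below delegates token retrieval to a local command such as `aws`, which is usually present on a developer's machine but not inside a pod, so a controller consuming this file in-cluster would fail to authenticate.

```yaml
apiVersion: v1
kind: Config
clusters:
- name: prod                      # hypothetical cluster name
  cluster:
    server: https://prod.example.com
users:
- name: prod-user
  user:
    exec:                         # token retrieval is farmed out to a local command
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws                # must exist on the caller's PATH; typically absent in-cluster
      args: ["eks", "get-token", "--cluster-name", "prod"]
contexts:
- name: prod
  context:
    cluster: prod
    user: prod-user
current-context: prod
```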
One thing we are asking ourselves at Red Hat is whether it would be better to descope this API in name to something more specific like HealthCheckedCluster; that way we at least can iterate on a more purposeful API than one that is more general (which is always harder).
WDYT?
@pmorie As discussed in person, I think that particular name is a bad idea. We're not talking about health checks here, we're talking about authenticating against a cluster. Personally I don't mind "ClusterCredentials" because that's exactly what they are. But feel free to find another name that's not confusing or misleading (which adding "HealthChecked" would be).
@sohankunkerkar is going to pick this up and open a new PR. Apologies for the latency!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I'm honestly not sure whether this PR should remain open at this point. Since I opened it, it's come to light that the way we were using the cluster-registry API in the kubefed project allowed escalation: if you have write access to the Cluster resource, you can effectively steal the secrets associated with a ClusterCredentials (or KubeFedCluster in the case of kubefed). I'm not sure I want to continue this PR since it would build on that usage pattern.
@pmorie It's not immediately clear how having write access to Cluster allows stealing of Secrets referred to by ClusterCredentials. Could you clarify, or provide a link to an explanation?
Intuitively it seems that RBAC rules on the Secret should control read access to the secrets. But presumably you've discovered a hole in that?
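For context, one plausible shape of such an escalation (a hedged, hypothetical sketch; the thread itself does not spell out the mechanism): if a controller dials the endpoint recorded in the Cluster resource using the secret referenced by a ClusterCredentials, then anyone who can write Cluster can redirect those credentials to a server they control, without ever needing read access to the Secret.

```yaml
# Hypothetical original Cluster (cluster-registry v1alpha1 shape):
apiVersion: clusterregistry.k8s.io/v1alpha1
kind: Cluster
metadata:
  name: member-1
spec:
  kubernetesApiEndpoints:
    serverEndpoints:
    - serverAddress: https://member-1.example.com
---
# An attacker with write access to Cluster changes only the endpoint:
#
#   serverAddress: https://attacker.example.com
#
# A controller that trusts this field would then present the bearer
# token (or client cert) from the ClusterCredentials secret to the
# attacker's server, leaking it despite RBAC on the Secret itself.
```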
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closed this PR.
Adds a ClusterCredentials API type that provides a kubeconfig to communicate with a cluster in the cluster registry. Submitting for comments per our recent discussion in the SIG.
/sig multicluster
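As a rough sketch of what that description implies (every field name below is an assumption for illustration, not the API proposed in this PR), such an object might pair a reference to a registry Cluster with a Secret holding the kubeconfig payload:

```yaml
apiVersion: clusterregistry.k8s.io/v1alpha1   # hypothetical group/version
kind: ClusterCredentials
metadata:
  name: member-1-creds
  namespace: cluster-registry
spec:
  clusterRef:                 # the cluster-registry Cluster these credentials target
    name: member-1
  secretRef:                  # Secret containing the kubeconfig used to reach it
    name: member-1-kubeconfig
    namespace: cluster-registry
```

Keeping the kubeconfig in a Secret (rather than inline) would let normal RBAC on Secrets gate who can read the credentials, which is exactly the property the escalation discussion above calls into question.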