kcp-dev / kcp

Kubernetes-like control planes for form-factors and use-cases beyond Kubernetes and container workloads.
https://kcp.io
Apache License 2.0

bug: cache-server misses the scope client #1829

Closed p0lyn0mial closed 4 months ago

p0lyn0mial commented 2 years ago

Describe the bug

The `ApiExtensionsClusterClient` in the cache server should use scoped clients instead of `NewClusterForConfig`.

Steps To Reproduce

I tried to wire in a context-based client as advised in https://github.com/kcp-dev/kcp/pull/1815#discussion_r954151311 by using `SetMultiClusterRoundTripper` along with `SetCluster`.

So basically I did `apiextensionsclient.NewForConfig(kcpclienthelper.SetMultiClusterRoundTripper(kcpclienthelper.SetCluster(rest.CopyConfig(cfg), logicalcluster.Wildcard)))`.

Then, when I tried to create a CRD with a ctx that had a logical cluster name set, it failed because the path was incorrect. The path was set to `/clusters/*/clusters/system:system-crds/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apiresourceschemas.apis.kcp.dev`.
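
For illustration, here is a minimal sketch of the wiring described above (import paths, aliases, and the context helper name are assumptions based on this thread, not an excerpt of the actual cache-server code):

```go
package repro

import (
	"context"

	kcpclienthelper "github.com/kcp-dev/apimachinery/pkg/client"
	"github.com/kcp-dev/logicalcluster"
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/rest"
)

// reproduce shows the combination that produced the doubled path: the config is
// scoped to the wildcard cluster AND wrapped with the cluster-aware round tripper,
// and a cluster is also injected via the request context.
func reproduce(ctx context.Context, baseConfig *rest.Config, cluster logicalcluster.Name, crd *apiextensionsv1.CustomResourceDefinition) error {
	cfg := rest.CopyConfig(baseConfig)
	cfg = kcpclienthelper.SetCluster(cfg, logicalcluster.Wildcard) // bakes /clusters/* into the config
	cfg = kcpclienthelper.SetMultiClusterRoundTripper(cfg)         // also prepends /clusters/<cluster-from-context>

	client, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		return err
	}

	// With a cluster set in the context, the request path comes out doubled, e.g.
	// /clusters/*/clusters/system:system-crds/apis/apiextensions.k8s.io/v1/customresourcedefinitions/...
	scopedCtx := logicalcluster.WithContext(ctx, cluster) // helper name as cited later in this thread; an assumption
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Create(scopedCtx, crd, metav1.CreateOptions{})
	return err
}
```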

Expected Behaviour

Either something is broken or I don't know how to use a context-based client. I'd expect the path to be correctly set to the cluster from the context.

Additional Context

No response

p0lyn0mial commented 2 years ago

/cc @varshaprasad96

I'm happy to prepare a PR if you could provide some input here. Thanks!

varshaprasad96 commented 2 years ago

@p0lyn0mial The process I usually follow to scope clients is one of the following.

Option 1:

  1. Wrap an existing config with the cluster-aware round tripper, i.e. using https://pkg.go.dev/github.com/kcp-dev/apimachinery/pkg/client#SetMultiClusterRoundTripper
  2. Pass a scoped context while making client calls, e.g. `logicalcluster.WithContext(ctx, someCluster)` (sketched below)
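
A minimal sketch of Option 1, reusing the assumed aliases from the reproduction sketch earlier in this thread (the context helper name is taken from item 2 above and may differ from the actual API):

```go
// Option 1: only the cluster-aware round tripper is set on the config;
// the target cluster is supplied per call through the context.
cfg := kcpclienthelper.SetMultiClusterRoundTripper(rest.CopyConfig(baseConfig))

client, err := apiextensionsclient.NewForConfig(cfg)
if err != nil {
	return err
}

ctx = logicalcluster.WithContext(ctx, someCluster) // scoped context; helper name as in item 2, an assumption
_, err = client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
```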

Option 2:

  1. Set the cluster in the rest config directly, i.e. with https://github.com/kcp-dev/apimachinery/blob/dbb759406933f20051f134e5d2cd740bcda53900/pkg/client/cluster_config.go#L43 (sketched below). If we do this, we need not pass a scoped context or even use the cluster-aware round tripper.
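
A corresponding sketch of Option 2 (same assumed aliases); the cluster is pinned in the rest config, so no scoped context and no cluster-aware round tripper are needed:

```go
// Option 2: the cluster is baked into the rest config directly.
cfg := kcpclienthelper.SetCluster(rest.CopyConfig(baseConfig), someCluster)

client, err := apiextensionsclient.NewForConfig(cfg)
if err != nil {
	return err
}

// Plain calls; no cluster in the context.
_, err = client.ApiextensionsV1().CustomResourceDefinitions().List(ctx, metav1.ListOptions{})
```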

Based on the URL, it looks like both options are being performed at once. If the rest config is being modified directly, we shouldn't be passing a scoped context.

p0lyn0mial commented 2 years ago

@varshaprasad96 yes, it looks like I applied both options at the same time.

Would you accept a PR that would allow for overwriting the cluster set by Option 2 when a cluster is also set in the context (Option 1)?

/cc @stevekuznetsov

stevekuznetsov commented 2 years ago

Hm, I thought both at once should have worked. In any case, I think we are pausing this for now, so let's reconsider it when we have more clarity about how we're moving forward.

kcp-ci-bot commented 6 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot commented 5 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot commented 4 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot commented 4 months ago

@kcp-ci-bot: Closing this issue.

In response to [this](https://github.com/kcp-dev/kcp/issues/1829#issuecomment-2164995810):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.