kcp-dev / kcp

Kubernetes-like control planes for form-factors and use-cases beyond Kubernetes and container workloads.
https://kcp.io
Apache License 2.0

bug: Version endpoint for consumer workspaces via APIExport fails #2243

Closed rajivnathan closed 4 months ago

rajivnathan commented 2 years ago

Describe the bug

Getting the version of a workspace via an APIExport doesn't work:

k get --raw <kcpurl>/services/apiexport/root:rh-sso-15850190:rajiv:provider/workspace-resource-controller-core.appstudio.redhat.com/version

Getting configmaps from the workspace via the APIExport works:

k get --raw <kcpurl>/services/apiexport/root:rh-sso-15850190:rajiv:provider/workspace-resource-controller-core.appstudio.redhat.com/clusters/root:rh-sso-15850190:rajiv:consumer/api/v1/namespaces/default/configmaps

Getting the version of the workspace directly works:

k get --raw <kcpurl>/clusters/root:rh-sso-15850190:rajiv:consumer/version

Steps To Reproduce

  1. Set up Workspace A with an APIExport and Workspace B with an APIBinding that successfully binds to it
  2. Attempt to get the version via the APIExport, e.g. `k get --raw https://<kcp-stable>/services/apiexport/<workspace path>/<apiexport>/version`
  3. Observe that it fails with `403 Forbidden`

Expected Behaviour

It should return the server version info, e.g.:

{
  "major": "1",
  "minor": "24",
  "gitVersion": "v1.24.3+kcp-v0.9.1-6-gebb3478a9a1df6",
  "gitCommit": "ebb3478a",
  "gitTreeState": "clean",
  "buildDate": "2022-10-18T15:29:27Z",
  "goVersion": "go1.18",
  "compiler": "gc",
  "platform": "linux/amd64"
}

Additional Context

I'm trying to use the Helm SDK from within a service-provider controller to apply resources in a consumer workspace, but it gets stuck because Helm fetches the server version to check whether the cluster is reachable: https://github.com/helm/helm/blob/main/pkg/kube/client.go#L120

kcp-ci-bot commented 6 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot commented 5 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot commented 4 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot commented 4 months ago

@kcp-ci-bot: Closing this issue.

In response to [this](https://github.com/kcp-dev/kcp/issues/2243#issuecomment-2166713826):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.