kcp-dev / controller-runtime

Repo for the controller-runtime subproject of kubebuilder (sig-apimachinery)
Apache License 2.0

Unable to use controller-runtime for ClusterWorkspaceType initializer #16

Closed MatousJobanek closed 4 months ago

MatousJobanek commented 2 years ago

Consider that I have a ClusterWorkspaceType and an initializer that is supposed to create an APIBinding. The initializer is watching a VirtualWorkspace of the given CWT; however, when it tries to create the APIBinding, it fails with:

1.65719959317604e+09    ERROR   controller.clusterworkspace Reconciler error    {"reconciler group": "tenancy.kcp.dev", "reconciler kind": "ClusterWorkspace", "name": "appp", "namespace": "", "error": "no matches for kind \"APIBinding\" in version \"apis.kcp.dev/v1alpha1\""}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /home/mjobanek/go-workspace/src/github.com/kcp-dev/controller-runtime/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /home/mjobanek/go-workspace/src/github.com/kcp-dev/controller-runtime/pkg/internal/controller/controller.go:227

The reason is that the rest mapper performs a discovery call before issuing the POST for the APIBinding, but it does so against the VirtualWorkspace endpoint with the wildcard /clusters/* at the end of the URL, instead of the name of the workspace that is being initialized, /clusters/root:plane:usersignup:foo/:

I0707 13:13:13.510869       1 round_trippers.go:463] GET https://192.168.1.133:6443/services/initializingworkspaces/root:plane:usersignup:Appstudio/clusters/%2A/api/v1
I0707 13:13:13.510875       1 round_trippers.go:469] Request Headers:
I0707 13:13:13.510882       1 round_trippers.go:473]     Authorization: Bearer <masked>
I0707 13:13:13.510888       1 round_trippers.go:473]     Accept: application/json, */*
I0707 13:13:13.521749       1 round_trippers.go:574] Response Status: 404 Not Found in 10 milliseconds
I0707 13:13:13.521845       1 request.go:1372] body was not decodable (unable to check for Status): couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
I0707 13:13:13.524075       1 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
1.6571995935242307e+09  ERROR   controller.clusterworkspace Reconciler error    {"reconciler group": "tenancy.kcp.dev", "reconciler kind": "ClusterWorkspace", "name": "appp", "namespace": "", "error": "no matches for kind \"APIBinding\" in version \"apis.kcp.dev/v1alpha1\""}

There are two problems in the rest mapper code:

  1. The rest mapper uses neither the cluster-aware client nor the cluster-aware round-tripper. But even if I modify the code so that it uses the cluster-aware round-tripper, I face the second problem:
  2. The context passed with the discovery call is context.TODO() (https://github.com/kubernetes/client-go/blob/release-1.23/discovery/discovery_client.go#L172), not the one provided by the controller. This means the context doesn't carry the cluster name value, so the cluster-aware round-tripper cannot do its job.
MatousJobanek commented 2 years ago

I created a reproducer for this issue: https://github.com/MatousJobanek/controller-runtime-example/tree/cwt-reproducer

  1. build & deploy the controller
  2. create workspace of the type Widget
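For reference, the steps above might look roughly like the following. The Makefile target and the kcp plugin syntax are assumptions on my part and may differ from what the reproducer repo actually uses:

```shell
# Hypothetical commands -- adjust to the reproducer repo's Makefile and your kcp setup.
git clone -b cwt-reproducer https://github.com/MatousJobanek/controller-runtime-example
cd controller-runtime-example
make deploy   # 1. build & deploy the controller (assumed Makefile target)

# 2. create a workspace of the type Widget (kcp kubectl plugin; flags assumed)
kubectl kcp workspace create my-widget --type widget
```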
stevekuznetsov commented 2 years ago

> But it does it for the VirtualWorkspace endpoint, not for the actual workspace that is being initialized:

This is actually fine - the virtual workspace will proxy the request through to the correct cluster. The proximal issue here is that the request is not being made to the correct cluster.

MatousJobanek commented 2 years ago

Yeah, I used the wrong wording - what I meant was that it uses the URL with the wildcard at the end instead of the name of the workspace that is being initialized. I fixed it in the description.

kcp-ci-bot commented 6 months ago

Issues go stale after 90d of inactivity. After a further 30 days, they will turn rotten. Mark the issue as fresh with /remove-lifecycle stale.

If this issue is safe to close now please do so with /close.

/lifecycle stale

kcp-ci-bot commented 5 months ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle rotten

kcp-ci-bot commented 4 months ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

/close

kcp-ci-bot commented 4 months ago

@kcp-ci-bot: Closing this issue.

In response to [this](https://github.com/kcp-dev/controller-runtime/issues/16#issuecomment-2160117528):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.