kubernetes-retired / kubefed

Kubernetes Cluster Federation
Apache License 2.0
2.5k stars · 529 forks

Created federatedcustomresourcedefinitions, but found a `Failed to watch apiextensions.k8s.io/v1` error; all CRs are created only in the host cluster #1350

Closed · CharlesQQ closed 3 years ago

CharlesQQ commented 3 years ago

I want to federate a CRD and its CRs, but I get the error below. The host cluster version is Kubernetes 1.18 and the member cluster version is Kubernetes 1.10. Here is the error log:

```
E0128 16:18:15.823707 39 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.3/tools/cache/reflector.go:156: Failed to watch apiextensions.k8s.io/v1, Kind=CustomResourceDefinition: failed to list apiextensions.k8s.io/v1, Kind=CustomResourceDefinition: the server could not find the requested resource
E0128 16:18:38.518458 39 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.3/tools/cache/reflector.go:156: Failed to watch apps.eagle.io/v1alpha1, Kind=PhpSidecarSet: failed to list apps.eagle.io/v1alpha1, Kind=PhpSidecarSet: the server could not find the requested resource
```
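The `the server could not find the requested resource` message usually means the target API server does not serve the group/version being listed or watched. A minimal way to confirm this, assuming one kubectl context per cluster (the context names `host` and `member` below are placeholders):

```sh
# Which apiextensions versions does the host (1.18) serve?
kubectl --context=host api-versions | grep apiextensions.k8s.io

# Which does the member (1.10) serve? Kubernetes 1.10 predates
# apiextensions.k8s.io/v1 (introduced in 1.16), so only v1beta1 shows up.
kubectl --context=member api-versions | grep apiextensions.k8s.io

# Is the custom resource group present on the member cluster at all?
kubectl --context=member api-resources --api-group=apps.eagle.io
```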

hectorj2f commented 3 years ago

@CharlesQQ I don't know where these errors are logged, but I assume the problem comes from the use of deprecated or newer API versions that are not available on 1.10.

We have not used kubefed with clusters whose Kubernetes versions differ so much. Many of the federated types in the kubefed control plane cluster won't work on the managed clusters because they use newer API versions that are not present in 1.10.
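For context, `apiextensions.k8s.io/v1` only exists from Kubernetes 1.16 onward, so a 1.10 member cluster cannot serve the version the sync controller is watching. A hedged way to see which API version KubeFed targets when propagating CRDs, assuming CRDs were enabled with `kubefedctl enable customresourcedefinitions` and the default `kube-federation-system` namespace (adjust both if your setup differs):

```sh
# Inspect the FederatedTypeConfig created for CRDs; spec.targetType.version
# is the API version KubeFed uses against member clusters.
kubectl -n kube-federation-system get federatedtypeconfig \
  customresourcedefinitions.apiextensions.k8s.io -o yaml
```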

swiftslee commented 3 years ago

I guess you need to upgrade your member cluster version to v1.18 (the same as the host cluster).
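A quick sanity check before and after the upgrade, using the same placeholder context names as above:

```sh
# Print client and server versions for each cluster; the server versions
# should match (or at least both serve apiextensions.k8s.io/v1) before
# federating CRDs again.
kubectl --context=host version
kubectl --context=member version
```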

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/kubefed/issues/1350#issuecomment-869462993):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.