kubernetes-retired / cluster-registry

[EOL] Cluster Registry API
https://kubernetes.github.io/cluster-registry/
Apache License 2.0

CRD definition is broken on kube 1.11.1 cluster #255

Closed · patrickshan closed this issue 5 years ago

patrickshan commented 6 years ago

/sig multicluster

When applying the CRD config to a kube 1.11.1 cluster, it errors out with this message:

$ kubectl apply -f cluster-registry-crd.yaml
error: error validating "cluster-registry-crd.yaml": error validating data: [ValidationError(CustomResourceDefinition.status): missing required field "conditions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus, ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus]; if you choose to ignore these errors, turn validation off with --validate=false

Currently we work around the issue by updating the CRD with this change, although we are not quite sure that's the right way to fix it:

-  conditions: null
+  conditions: []
+  storedVersions: []
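
For reference, the patched status stanza would look roughly like the sketch below. Only the conditions and storedVersions lines come from the workaround above; the acceptedNames block and its empty values are assumptions based on typical kubebuilder-generated manifests of that era, not copied from the actual cluster-registry-crd.yaml:

# Assumed layout of the status block in cluster-registry-crd.yaml after the edit.
status:
  acceptedNames:        # assumed to be present in the generated manifest
    kind: ""
    plural: ""
  conditions: []        # was: conditions: null
  storedVersions: []    # added so both required CustomResourceDefinitionStatus fields validate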
perotinus commented 6 years ago

Thanks for filing the issue! Apologies, I missed the GitHub notification.

I believe this is an issue that will have to be fixed in kubebuilder, by upgrading it to use the kube 1.11 binaries: https://github.com/kubernetes-sigs/kubebuilder/issues/339

In the meantime, you can create the CRD with --validate=false.
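
In other words, something like the following should create the CRD despite the missing status fields (the file name matches the one used above; the flag simply skips client-side schema validation):

$ kubectl apply -f cluster-registry-crd.yaml --validate=false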

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

embik commented 5 years ago

Hey @perotinus, any update here? It's still broken on Kubernetes 1.11.4, so the issue going stale isn't exactly correct. We're looking into cluster-registry as a "datastore" for cluster information, and I'm wondering what the current state of the project is.

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 5 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 5 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/cluster-registry/issues/255#issuecomment-455571552):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.