kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

kubectl rollout status -f fails with "unable to decode" for file containing CRD reference #690

Closed · Ghazgkull closed this issue 3 years ago

Ghazgkull commented 5 years ago

I'm finding that kubectl rollout status fails when I point it at either a file (-f) or a kustomize directory (-k) whose manifests reference CRDs. An example would be manifests that include an Istio VirtualService.

Here's an example of what I see. I've got some custom resources (a DestinationRule and a VirtualService) in my setup along with deployments, services, etc. Nothing fancy:

$ kubectl kustomize overlays/sandbox > out.yaml
$ kubectl rollout status -f out.yaml
error: unable to decode "out.yaml": no kind "DestinationRule" is registered for version "networking.istio.io/v1alpha3" in scheme "k8s.io/kubernetes/pkg/kubectl/scheme/scheme.go:28"
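
A possible workaround sketch, not a fix: since the Istio CRDs are already installed in the cluster, kubectl get can resolve the custom resources server-side, so you can let it enumerate the rendered objects and then track rollout status only for the Deployments. The overlay path is the same one used above; everything else is illustrative:

$ kubectl kustomize overlays/sandbox > out.yaml
# List the rendered objects as TYPE/NAME, keep only the Deployments,
# and check rollout status for each one individually.
$ kubectl get -f out.yaml -o name | grep '^deployment' | xargs -n 1 kubectl rollout status

This sidesteps the client-side decode of the custom resources, but it only reports on the Deployments, not on any other rollout-capable kinds in the file.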

bhcleek commented 5 years ago

I'm seeing a similar error when running kubectl auth reconcile -f on a file that contains any custom resource or APIServer resources.
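
For illustration, a minimal file of that shape would look something like the following; the ClusterRole name and the custom resource kind are made up:

$ cat > reconcile-repro.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: crds.example.com/v1
kind: Foo
metadata:
  name: example
EOF
$ kubectl auth reconcile -f reconcile-repro.yaml

With a file like this, the RBAC object reconciles, but the custom resource presumably hits the same class of "no kind is registered" decode error shown later in this thread.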

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

ixdy commented 5 years ago

I encountered the same issue as @bhcleek - when trying to run kubectl auth reconcile -f on a file containing custom resource definitions, kubectl aborts.

I created a small PR (over in kubernetes/kubernetes) which reproduces this failure, too: https://github.com/kubernetes/kubernetes/pull/85708

When I use this example YAML file directly, outside of the test, kubectl apply works fine but kubectl auth reconcile fails:

$ kubectl auth reconcile -f rbac-resource-plus.yaml
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled
        reconciliation required create
        missing rules added:
                {Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled
        reconciliation required create
        missing subjects added:
                {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled
        reconciliation required create
        missing subjects added:
                {Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
role.rbac.authorization.k8s.io/testing-R reconciled
        reconciliation required create
        missing rules added:
                {Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
unable to get type info from the object "*runtime.Unknown": no kind is registered for the type runtime.Unknown in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"

$ kubectl apply -f rbac-resource-plus.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/testing-CR configured
pod/valid-pod created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
rolebinding.rbac.authorization.k8s.io/testing-RB configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
role.rbac.authorization.k8s.io/testing-R configured
customresourcedefinition.apiextensions.k8s.io/foos.crds.example.com created
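
Since kubectl apply handles the custom resources fine, one possible workaround sketch (assuming yq v4 is available; the kind filter is illustrative) is to reconcile only the RBAC documents and apply everything else:

# Feed kubectl auth reconcile only the RBAC kinds from the multi-document file,
# then apply the full file as usual.
$ yq eval 'select(.kind == "ClusterRole" or .kind == "ClusterRoleBinding" or .kind == "Role" or .kind == "RoleBinding")' rbac-resource-plus.yaml | kubectl auth reconcile -f -
$ kubectl apply -f rbac-resource-plus.yaml

This is only a sidestep, though; it doesn't change the underlying decode behavior.
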
fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-578565297):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

ixdy commented 4 years ago

/reopen

k8s-ci-robot commented 4 years ago

@ixdy: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-578992580):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-591697357):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

ixdy commented 4 years ago

/reopen

ixdy commented 4 years ago

/kind bug

k8s-ci-robot commented 4 years ago

@ixdy: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-624348617):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

seans3 commented 4 years ago

/remove-lifecycle rotten
/sig cli
/area kubectl
/kind bug
/priority P2

seans3 commented 4 years ago

This is probably obvious, but I'll ask anyway: was the CRD applied successfully before the attempt to create the custom resource?

Also, could you report the version you're using (e.g. the output of kubectl version)?

Ghazgkull commented 4 years ago

@seans3 Yes. The use-case here is deploying a pretty vanilla microservice to a cluster with Istio deployed. The Istio CRDs were created and in use long before attempting this rollout. My kubectl version at the time of opening this issue was 1.11.3.

ixdy commented 4 years ago

I'm not sure if it's the same root cause (it's a similar error message), but I wrote a simple repro for kubectl auth reconcile -f failing when applied to CRDs in https://github.com/kubernetes/kubernetes/pull/85708.

ixdy commented 4 years ago

Also, kubectl auth reconcile -f fails even if the CRD has already been applied successfully. Using the example I linked:

$ kind create cluster
...
$ kubectl create ns some-other-random
namespace/some-other-random created
$ kubectl apply -f rbac-resource-plus.yaml 
clusterrole.rbac.authorization.k8s.io/testing-CR created
pod/valid-pod created
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB created
rolebinding.rbac.authorization.k8s.io/testing-RB created
role.rbac.authorization.k8s.io/testing-R created
customresourcedefinition.apiextensions.k8s.io/foos.crds.example.com created
$ kubectl auth reconcile -f rbac-resource-plus.yaml 
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled
role.rbac.authorization.k8s.io/testing-R reconciled
unable to get type info from the object "*runtime.Unknown": no kind is registered for the type runtime.Unknown in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
seans3 commented 4 years ago

/assign

fejta-bot commented 4 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-705826489):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

ixdy commented 4 years ago

/reopen
/remove-lifecycle rotten

k8s-ci-robot commented 4 years ago

@ixdy: Reopened this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-705831490):

> /reopen
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

xcq1 commented 3 years ago

/remove-lifecycle stale

Any progress with this? Just encountered this issue with PrometheusRules CRDs in kubectl 1.18.10.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-contributor-experience at kubernetes/community. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-864415624):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

ChristianCiach commented 3 years ago

Please reopen.

k8s-ci-robot commented 3 years ago

@ChristianCiach: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-874664780):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

emirot commented 1 year ago

Any updates on this issue? I think this is still an ongoing problem.

Shanky2304 commented 1 year ago

/reopen
/remove-lifecycle rotten

k8s-ci-robot commented 1 year ago

@Shanky2304: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes/kubectl/issues/690#issuecomment-1735946777):

> /reopen
> /remove-lifecycle rotten

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.