Closed. Ghazgkull closed this issue 3 years ago.
I'm seeing a similar error when running kubectl auth reconcile -f on a file that contains any custom resource or APIServer resources.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
I encountered the same issue as @bhcleek - when trying to run kubectl auth reconcile -f on a file containing custom resource definitions, kubectl aborts.
I created a small PR (over in kubernetes/kubernetes) which reproduces this failure, too: https://github.com/kubernetes/kubernetes/pull/85708
Using this example YAML file outside of the test, kubectl apply works fine, but kubectl auth reconcile fails.
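For context, the failure doesn't seem to depend on anything exotic in the file; any manifest that mixes RBAC objects with a non-RBAC kind appears to trigger it. The snippet below is only my rough approximation of such a file (the testing-CR and foos.crds.example.com names are taken from the output below; the CRD spec details are hypothetical), not the actual rbac-resource-plus.yaml from the PR:
$ cat > repro-sketch.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: testing-CR
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "delete", "deletecollection", "get", "list", "patch", "update", "watch"]
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.crds.example.com
spec:
  group: crds.example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
EOF
$ kubectl apply -f repro-sketch.yaml          # expected to succeed
$ kubectl auth reconcile -f repro-sketch.yaml # expected to abort on the CRD document
With the actual rbac-resource-plus.yaml, the two commands behave as follows: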
$ kubectl auth reconcile -f rbac-resource-plus.yaml
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled
reconciliation required create
missing rules added:
{Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled
reconciliation required create
missing subjects added:
{Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled
reconciliation required create
missing subjects added:
{Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
role.rbac.authorization.k8s.io/testing-R reconciled
reconciliation required create
missing rules added:
{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
unable to get type info from the object "*runtime.Unknown": no kind is registered for the type runtime.Unknown in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
$ kubectl apply -f rbac-resource-plus.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrole.rbac.authorization.k8s.io/testing-CR configured
pod/valid-pod created
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
rolebinding.rbac.authorization.k8s.io/testing-RB configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
role.rbac.authorization.k8s.io/testing-R configured
customresourcedefinition.apiextensions.k8s.io/foos.crds.example.com created
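A workaround that appears to work for this class of failure is to strip the non-RBAC documents out of the file before handing it to kubectl auth reconcile, and apply the full file separately. A sketch, assuming mikefarah's yq v4 is available and that auth reconcile accepts the filtered stream on stdin:
$ yq eval 'select(.kind == "Role" or .kind == "ClusterRole" or .kind == "RoleBinding" or .kind == "ClusterRoleBinding")' \
    rbac-resource-plus.yaml | kubectl auth reconcile -f -
$ kubectl apply -f rbac-resource-plus.yaml
This keeps the reconcile semantics (union of rules and subjects) for the RBAC objects while everything else goes through plain apply.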
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
@ixdy: Reopened this issue.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
/kind bug
@ixdy: Reopened this issue.
/remove-lifecycle rotten /sig cli /area kubectl /kind bug /priority P2
This is probably obvious, but I'll ask anyway: was the CRD applied successfully before the attempt to create the custom resource?
Also, could you report the version you're using (e.g. the output of kubectl version)?
@seans3 Yes. The use-case here is deploying a pretty vanilla microservice to a cluster with Istio deployed. The Istio CRDs were created and in use long before attempting this rollout. My kubectl version at the time of opening this issue was 1.11.3.
I'm not sure if it's the same root cause (it's a similar error message), but I wrote a simple repro for kubectl auth reconcile -f failing when applied to CRDs in https://github.com/kubernetes/kubernetes/pull/85708.
Also, kubectl auth reconcile -f fails even if the CRD has already been applied successfully. Using the example I linked:
$ kind create cluster
...
$ kubectl create ns some-other-random
namespace/some-other-random created
$ kubectl apply -f rbac-resource-plus.yaml
clusterrole.rbac.authorization.k8s.io/testing-CR created
pod/valid-pod created
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB created
rolebinding.rbac.authorization.k8s.io/testing-RB created
role.rbac.authorization.k8s.io/testing-R created
customresourcedefinition.apiextensions.k8s.io/foos.crds.example.com created
$ kubectl auth reconcile -f rbac-resource-plus.yaml
clusterrole.rbac.authorization.k8s.io/testing-CR reconciled
clusterrolebinding.rbac.authorization.k8s.io/testing-CRB reconciled
rolebinding.rbac.authorization.k8s.io/testing-RB reconciled
role.rbac.authorization.k8s.io/testing-R reconciled
unable to get type info from the object "*runtime.Unknown": no kind is registered for the type runtime.Unknown in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
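Two observations from the output above: the four RBAC objects still report "reconciled" before kubectl bails out, so the RBAC portion of the file does appear to be processed, and the command presumably exits non-zero, which is what hurts in scripts and CI. A quick way to double-check (hypothetical follow-up commands; the -n flag assumes the namespaced objects live in the some-other-random namespace created above):
$ kubectl get clusterrole/testing-CR clusterrolebinding/testing-CRB
$ kubectl get role/testing-R rolebinding/testing-RB -n some-other-random
$ kubectl auth reconcile -f rbac-resource-plus.yaml; echo "exit code: $?"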
/assign
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen /remove-lifecycle rotten
@ixdy: Reopened this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Any progress with this? I just encountered this issue with PrometheusRules CRDs in kubectl 1.18.10.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
Please reopen.
@ChristianCiach: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Any updates on this issue? I think this is still an ongoing issue.
/reopen /remove-lifecycle rotten
@Shanky2304: You can't reopen an issue/PR unless you authored it or you are a collaborator.
I’m finding that kubectl rollout status fails when I point it at either a file (-f) or a kustomize directory (-k) with manifests that reference CRDs. An example would be if the manifests include an Istio VirtualService.
Here’s an example of what I see. I’ve got some CRDs (DestinationRule and VirtualService) in my setup along with deployments, services, etc. Nothing fancy: