Closed rgalexey closed 4 years ago
That seems to be related to the helm 2.16 release; downgrading to 2.15 fixed the issue. I was able to install prometheus-operator 8.1.2.
Follow-up: helm 2.16, k8s 1.16.2
helm install --name prom --namespace prometheus -f vv.yml stable/prometheus-operator
Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"
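The workaround reported above can be sketched as follows. This assumes a Helm 2 client plus Tiller in the cluster; the exact 2.15.x patch version and download path are illustrative, not taken from the thread:

```shell
# Replace the helm 2.16 client binary with 2.15.2
# (version and install location are illustrative assumptions)
curl -sSL https://get.helm.sh/helm-v2.15.2-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/helm

# Roll the in-cluster Tiller back to match the client;
# --force-upgrade lets helm init replace a newer Tiller with an older one
helm init --upgrade --force-upgrade

# Confirm client and server both report v2.15.2 before retrying
helm version

# Retry the install that failed under 2.16
helm install --name prom --namespace prometheus -f vv.yml stable/prometheus-operator
```

These commands require access to the affected cluster, so they are shown as a sketch rather than something runnable in isolation.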
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Description: I am trying to install prometheus-operator in AWS EKS, but am not able to do so for chart versions higher than 6.4.0. Tried 7.0.0 as well as the latest, 8.1.2.
Version of Helm and Kubernetes:
Helm and Tiller:
Client: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.16.0", GitCommit:"e13bc94621d4ef666270cfbe734aaabf342a49bb", GitTreeState:"clean"}
kube:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.10", GitCommit:"37d169313237cb4ceb2cc4bef300f2ae3053c1a2", GitTreeState:"clean", BuildDate:"2019-08-19T10:52:43Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.10-eks-5ac0f1", GitCommit:"5ac0f1d9ab2c254ea2b0ce3534fd72932094c6e1", GitTreeState:"clean", BuildDate:"2019-08-20T22:39:46Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Which chart: stable/prometheus-operator
What happened: Error during install:
helm install --name mon --namespace monitoring stable/prometheus-operator
Error: no kind "Job" is registered for version "batch/v1" in scheme "k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:30"
What you expected to happen: The install completes successfully.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know: When I try to create a Job object manually with kubectl from a YAML file, I am able to create it without any issues.
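For reference, the manual check described above can be reproduced with a minimal batch/v1 Job applied directly via kubectl, bypassing helm/Tiller entirely. The Job name and image below are illustrative assumptions, not the chart's actual admission Job:

```shell
# Create a minimal batch/v1 Job directly with kubectl
# (name, namespace, and image are illustrative assumptions)
kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: manual-test-job
  namespace: monitoring
spec:
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["echo", "hello"]
      restartPolicy: Never
EOF
```

If this succeeds while the helm install fails, the "no kind Job is registered" error points at the helm client's manifest validation rather than at the cluster's API server.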
After the mentioned error I see only one pod in the monitoring namespace:

[progman@fantom k8s-cluster]$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                             READY   STATUS      RESTARTS   AGE
kube-system   aws-node-2ch8n                                   1/1     Running     0          14m
kube-system   aws-node-st5pb                                   1/1     Running     0          14m
kube-system   coredns-677ff99477-4wzgc                         1/1     Running     0          19m
kube-system   coredns-677ff99477-xsjlf                         1/1     Running     0          19m
kube-system   kube-proxy-bgm7v                                 1/1     Running     0          14m
kube-system   kube-proxy-zg2q6                                 1/1     Running     0          14m
kube-system   tiller-deploy-649db8f75b-qgf6v                   1/1     Running     0          7m37s
monitoring    mon-prometheus-operator-admission-create-4vv98   0/1     Completed   0          3m52s