Open tdcox opened 3 years ago
Facing the same issue. What's the fix/solution?
Same here (minikube):
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: '' make: *** [versionStream/src/Makefile.mk:242: regen-check] Error 1
Same problem. Can anyone help?
I faced the same issue with minikube, and pinning a specific Kubernetes version solved it in my case:
minikube start --cpus 4 --memory 8048 --disk-size=100g --addons=ingress --vm=true --kubernetes-version v1.21.8
I'm having the same issue on both Minikube and On-premise Cluster. Did anyone figure out a fix for this?
Which version of Kubernetes do you have? We do not support Kubernetes 1.22 yet (it's coming soon), so if you downgrade to 1.21, it should work.
I'm having the same issue on my first JX3 install, on EKS 1.21. Below are the logs I am getting from `jx admin logs`:
....................
make[1]: [versionStream/src/Makefile.mk:351: push] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-vj82t has Failed
tailing boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-zljdd
make[1]: Leaving directory '/workspace/source'
make[1]: [versionStream/src/Makefile.mk:325: commit] Error 1 (ignored)
remote: [Pre-receive Hook]: Illegal email: jenkins-x@googlegroups.com
make[1]: [versionStream/src/Makefile.mk:351: push] Error 1
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make[1]: Leaving directory '/workspace/source'
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-zljdd has Failed
error: boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-6crrn has Failed
remote: reject_external_email.sh: failed with exit status 1
remote: [Pre-receive Hook]: Illegal email: jenkins-x@googlegroups.com
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://github.XXXXXX/XXXXX/jx3-eks-asm1'
Is there any update on support for Kubernetes 1.22 to solve this issue?
I'm facing a similar issue: the job.batch/jx-boot-* job is incomplete. I'm using Kubernetes 1.25.1 on Ubuntu 22.04. This is frustrating! Could someone kindly help? What are the supported Kubernetes versions? I cannot find the list anywhere.
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: *** [versionStream/src/Makefile.mk:301: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:255: regen-check] Error 1
Same error even with 1.22.14
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.14", GitCommit:"bccf857df03c5a99a35e34020b3b63055f0c12ec", GitTreeState:"clean", BuildDate:"2022-09-14T22:41:51Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.14", GitCommit:"bccf857df03c5a99a35e34020b3b63055f0c12ec", GitTreeState:"clean", BuildDate:"2022-09-14T22:36:04Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Works with 1.21.14, which is EOL'd 🤕
@ankitm123 Any update on this? Does cert-manager still not work on k8s 1.22?
Same/similar issue with on-premises k8s; it fails on both v1.24 and v1.22, see below.
Would appreciate any workaround suggestions. Going back to v1.21 is not really an option...
$ jx admin operator --username xxx --token yyy
. . .
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: [versionStream/src/Makefile.mk:289: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job jx-boot-4e526053-fa76-4b3e-82f6-c0a898e2d049 has Failed
error: failed to tail the Jenkins X boot Job pods: job jx-boot-4e526053-fa76-4b3e-82f6-c0a898e2d049 failed
Removing kuberhealthy will fix this issue. Remove https://github.com/jx3-gitops-repositories/jx3-kubernetes/blob/e92160a1fe90573a461ae53d4c1d4f659defb8ea/helmfile.yaml#L4 and then push your changes.
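To sketch that edit: `helmfile.yaml` in the cluster repo lists the sub-helmfiles to apply, and deleting the kuberhealthy entry stops the boot job from applying its `apiextensions.k8s.io/v1beta1` CRDs, which Kubernetes 1.22+ no longer serves. The file below is a hypothetical stand-in shaped like the jx3-kubernetes template; your real cluster repo will list more sub-helmfiles.

```shell
# Hypothetical helmfile.yaml resembling the jx3-kubernetes template
# (illustration only; edit the real file in your cluster repo).
cat > helmfile.yaml <<'EOF'
helmfiles:
- path: helmfiles/kuberhealthy/helmfile.yaml
- path: helmfiles/jx/helmfile.yaml
EOF

# Drop the kuberhealthy entry (GNU sed; on macOS use `sed -i ''`).
sed -i '/kuberhealthy\/helmfile.yaml/d' helmfile.yaml
cat helmfile.yaml
```

After the edit, commit and push `helmfile.yaml` to your cluster repo so the boot job picks up the change on its next run.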
Thank you. I tried it against bare-metal k8s 1.24.7 and it now gets much further, past the place where it failed last time:
. . .
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured   <=== failed after this
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=cluster -R -f config-root/cluster
namespace/jx-production configured
. . .
But it still fails (from the attached jx_admin_log_k8s_1_24_7.txt). Would appreciate any suggestion on how to proceed:
. . .
serviceaccount/tekton-pipelines-webhook unchanged
service/tekton-pipelines-webhook unchanged
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
secret/webhook-certs unchanged
make[1]: [versionStream/src/Makefile.mk:291: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-dcfc3986-5bd2-428e-8153-bef81fa58977-ksq7m has Failed
...and the same jx-boot failure occurs with kuberhealthy disabled when using bare-metal k8s v1.22.16, see attached:
secret/webhook-certs created
[unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"]
Error from server (InternalError): error when creating "config-root/namespaces/jx/jx-pipelines-visualizer/jx-pipelines-visualizer-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/bucketrepo-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/hook-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
make[1]: [versionStream/src/Makefile.mk:322: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:269: regen-check] Error 1
boot Job pod jx-boot-05d64476-2761-4edb-9247-71a504fdeab8-c5qz7 has Failed
make[1]: [versionStream/src/Makefile.mk:320: kubectl-apply] Error 1
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:269: regen-check] Error 1
boot Job jx-boot-3f90f428-8efa-456b-a16b-ac96382c14d5 has Failed
error: failed to tail the Jenkins X boot Job pods: job jx-boot-3f90f428-8efa-456b-a16b-ac96382c14d5 failed
jx version
version: 3.10.19
shaCommit: 6638c0ef41cf8ccdd08765227258c42804e1011c
buildDate: Wed Dec 7 14:42:35 UTC 2022
goVersion: 1.19.3
branch: main
gitTreeState: clean
% kubectl get ns
NAME              STATUS   AGE
default           Active   12h
ingress-nginx     Active   12h
jx-git-operator   Active   5h7m
kube-node-lease   Active   12h
kube-public       Active   12h
kube-system       Active   12h
@moorthi07 What version of Kubernetes are you using?
Exactly the same here: k8s 1.24 running on colima (v3.10.58). How can I recover from this?
Same on k8s 1.23.10.
using kubectl to apply resources
if [ -d config-root/customresourcedefinitions ]; then \
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=customresourcedefinitions -R -f config-root/customresourcedefinitions; \
fi
customresourcedefinition.apiextensions.k8s.io/environments.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/pipelineactivities.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/releases.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/sourcerepositories.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/previews.preview.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousebreakpoints.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousejobs.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/externalsecrets.kubernetes-client.io unchanged
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/conditions.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineresources.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/resolutionrequests.resolution.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/runs.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: *** [versionStream/src/Makefile.mk:320: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:269: regen-check] Error 1
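For anyone hitting the `apiextensions.k8s.io/v1beta1` errors above: that API version was removed in Kubernetes 1.22, so any CRD manifest still pinned to it will be rejected by `kubectl apply`. A minimal sketch for locating such manifests in a cluster repo checkout (the sample directory and file below are illustrative stand-ins for the kuberhealthy CRDs, not real paths):

```shell
# Create a throwaway config-root containing one manifest that still uses
# the removed v1beta1 CRD API (illustrative stand-in only).
mkdir -p config-root/customresourcedefinitions/demo
cat > config-root/customresourcedefinitions/demo/old-crd.yaml <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
EOF

# List every manifest still on the v1beta1 CRD API; these are the files
# `kubectl apply` will reject on Kubernetes 1.22 and newer.
grep -rl 'apiextensions.k8s.io/v1beta1' config-root
```

Running the same `grep` over a real `config-root` checkout shows exactly which charts (here, kuberhealthy) still ship outdated CRDs.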
Same error with kuberhealthy disabled on k8s version 1.24.10
serviceaccount/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
secret/webhook-certs created
[unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"]
Error from server (InternalError): error when creating "config-root/namespaces/jx/jx-pipelines-visualizer/jx-pipelines-visualizer-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/bucketrepo-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/hook-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
make[1]: Leaving directory '/workspace/source'
make[1]: *** [versionStream/src/Makefile.mk:322: kubectl-apply] Error 1
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:269: regen-check] Error 1
You can get the exact cause of the error by manually running `make regen-phase-1 NEW_CLUSTER=true` in the cluster repo directory. For me, it surfaced the following errors:
- `helm` wasn't installed
- `helmfile` wasn't installed
- `kubectl` version was 1.26 (up to 1.24 is supported)

I just wanted to let you all know that we are looking at this.
Ted Gelpi wrote the following write-up in the Kubernetes Slack #jenkins-x-user channel (thanks Ted!). See https://kubernetes.slack.com/archives/C9MBGQJRH/p1679344807123509?thread_ts=1679325968.633559&cid=C9MBGQJRH for a better-formatted version.
Example of building a GKE/DNS/TLS/v1.24 config.

Create a service account with appropriate permissions. You will be using a service account to access Google and build the necessary resources. Follow the link below for instructions on creating the service account. Once complete, you will have a JSON file, and its location will be set in the variable GOOGLE_APPLICATION_CREDENTIALS: https://jenkins-x.io/v3/admin/platforms/google/svc_acct/

NOTE: If your DNS zone is under a separate Google project, make sure you include those additional steps. For DNS you only need to define the apex domain (i.e. jx3rocks.com); the subdomain will be defined later on under the infra repo.

The environment will consist of two repos: infra and cluster. For the cluster you have a choice of either Google Secret Manager (GSM) or Vault. The preferred cluster type is GSM, and it will be used in the demonstration below.
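A minimal sketch of the credentials wiring that write-up assumes. The key file path is hypothetical; substitute the JSON key you downloaded when creating the service account:

```shell
# Point Google client libraries (and tooling that honours Application
# Default Credentials) at the downloaded service-account key.
# The path below is a placeholder, not a real file.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/jx3-sa-key.json"
echo "using credentials: $GOOGLE_APPLICATION_CREDENTIALS"
```

Set this in the shell where you run the infra/cluster provisioning steps so they can authenticate as the service account.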
I am having the same issue on a bare-metal cluster with the following versions:
jx:
version: 3.10.154
shaCommit: b99bd1e38ef65efb04665bce6bdfcf0ae7982188
buildDate: Wed Jul 10 15:11:20 UTC 2024
goVersion: 1.19.3
branch: main
gitTreeState: clean
kubernetes v1.30
Any tips that could help me?
During installation, one of the jx-boot pods fails with the following error:
This pod does not seem to be garbage collected, and it is not clear whether the cluster deployed correctly or not. A jx-install pod runs every 2 minutes and reports:
It is not clear whether this is due to an incomplete install or whether it is intended as an ongoing health check. If the latter, it should probably be renamed, and its log wording changed to indicate its purpose. These regularly created pods, and their associated webhook pods, obscure other system activity with background noise.