jenkins-x / jx

Jenkins X provides automated CI+CD for Kubernetes with Preview Environments on Pull Requests using Cloud Native pipelines from Tekton
https://jenkins-x.io/
Apache License 2.0

JX3 boot pod fails #7705

Open tdcox opened 3 years ago

tdcox commented 3 years ago

During installation, one of the jx-boot pods fails with the following error:

Error from server (InternalError): error when creating "config-root/namespaces/jx/acme-jx/tls-kill-9-uk-p-certificate.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": Post https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s: no endpoints available for service "cert-manager-webhook"
make[1]: *** [versionStream/src/Makefile.mk:280: kubectl-apply] Error 1
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:240: regen-check] Error 1

This pod does not seem to be garbage collected and it is not clear if the cluster is deployed correctly or not. A jx-install pod runs every 2 minutes and reports:

time="2021-04-22T15:18:05Z" level=info msg="Found instance namespace: jx-git-operator"
time="2021-04-22T15:18:05Z" level=info msg="Kuberhealthy is located in the jx-git-operator namespace."
starting jx-install health checks
successfully reported

It is not clear whether this is due to an incomplete install or whether it is intended as an ongoing health check. If the latter, it should probably be renamed and its log wording adjusted to indicate its purpose. These regularly created pods and their associated webhook pods obscure other system activity with background noise.
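
A hedged first diagnostic step (my addition, not part of the original report): the 'no endpoints available for service "cert-manager-webhook"' error usually means the cert-manager webhook pod was not yet Ready when the boot job tried to create the Certificate, so it is worth checking whether the webhook eventually came up before judging the install broken:

    kubectl get pods -n cert-manager
    kubectl get endpoints cert-manager-webhook -n cert-manager
    # if the webhook is Ready, follow the next boot run with:
    jx admin log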

shgattu commented 2 years ago

Facing the same issue. What's the fix/solution?

oren-sava commented 2 years ago

Same here (minikube):

error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:242: regen-check] Error 1

necromashka commented 2 years ago

Same problem. Can anyone help?

dullest commented 2 years ago

I faced the same issue with minikube, and pinning a specific Kubernetes version solved it in my case.

minikube start --cpus 4 --memory 8048 --disk-size=100g --addons=ingress --vm=true --kubernetes-version v1.21.8

Chabouchakour commented 2 years ago

I'm having the same issue on both Minikube and On-premise Cluster. Did anyone figure out a fix for this?

ankitm123 commented 2 years ago

> I'm having the same issue on both Minikube and On-premise Cluster. Did anyone figure out a fix for this?

Which version of Kubernetes do you have? We do not support Kubernetes 1.22 yet (it's coming soon), so if you downgrade to 1.21, it should work.
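
A quick way to answer that (my addition, a minimal sketch):

    kubectl version      # prints client and server versions
    kubectl get nodes    # the VERSION column shows the kubelet version per node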

vpvmohan commented 2 years ago

I'm having the same issue on my first JX3 install, on EKS 1.21. Below are the logs I am getting from jx admin log:

...
make[1]: [versionStream/src/Makefile.mk:351: push] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-vj82t has Failed

tailing boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-zljdd

make[1]: Leaving directory '/workspace/source'
make[1]: [versionStream/src/Makefile.mk:325: commit] Error 1 (ignored)
remote: [Pre-receive Hook]: Illegal email: jenkins-x@googlegroups.com
make[1]: [versionStream/src/Makefile.mk:351: push] Error 1
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make[1]: Leaving directory '/workspace/source'
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-zljdd has Failed
error: boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-6crrn has Failed

remote: reject_external_email.sh: failed with exit status 1
remote: [Pre-receive Hook]: Illegal email: jenkins-x@googlegroups.com
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://github.XXXXXX/XXXXX/jx3-eks-asm1'

make[1]: [versionStream/src/Makefile.mk:351: push] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate phase 3: failed to run 'make regen-phase-3' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-1ce59d48-7746-4a23-8d3c-d6cb484997fa-vj82t has Failed
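
A hedged note on the failure above (my addition): the push is rejected by a server-side pre-receive hook that refuses the boot job's default committer email (jenkins-x@googlegroups.com), so phase 3 can never push its regenerated changes. One way to confirm which identity the boot job committed with, assuming you have the cluster repo from the failed run checked out, is:

    git log -1 --format='%an <%ae>'   # shows the author/email the pre-receive hook is rejecting

The fix is then either to allow that address in the Git server's hook or to configure the boot job's Git identity to an address your server accepts; the exact setting depends on your setup, so treat this as a pointer rather than a recipe.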

phoerselmann commented 2 years ago

Is there any update on Kubernetes 1.22 support to resolve this issue?

spaily commented 2 years ago

Facing a similar issue; the job.batch/jx-boot-* job never completes. I'm using Kubernetes 1.25.1 on Ubuntu 22.04. This is frustrating! Could someone please help? What are the supported Kubernetes versions? I cannot find the list anywhere.

customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: *** [versionStream/src/Makefile.mk:301: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:255: regen-check] Error 1
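
A hedged way to confirm the root cause (my addition, not from the comment above): apiextensions.k8s.io/v1beta1 was removed in Kubernetes 1.22, so on 1.22 and newer only the v1 API is served and the kuberhealthy CRD manifests in the version stream can no longer be applied:

    kubectl api-versions | grep apiextensions
    # on 1.22+ this prints only apiextensions.k8s.io/v1, which is why the
    # khchecks/khjobs/khstates CRD files above fail with "no matches for kind"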

spaily commented 2 years ago

Same error even with 1.22.14

Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.14", GitCommit:"bccf857df03c5a99a35e34020b3b63055f0c12ec", GitTreeState:"clean", BuildDate:"2022-09-14T22:41:51Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.14", GitCommit:"bccf857df03c5a99a35e34020b3b63055f0c12ec", GitTreeState:"clean", BuildDate:"2022-09-14T22:36:04Z", GoVersion:"go1.16.15", Compiler:"gc", Platform:"linux/amd64"}

spaily commented 2 years ago

Works with 1.21.14 which is EOL'ed 🤕

nishantn3 commented 2 years ago

@ankitm123 Any update on this? Does cert-manager still not work on k8s 1.22?

ykantoni commented 2 years ago

Same/similar issue with on-premises k8s; it fails on both v1.24 and v1.22, see below.

Would appreciate any medicine/workaround suggestion. Going back to v1.21 is not really an option...

$ jx admin operator --username xxx --token yyy
. . .
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: [versionStream/src/Makefile.mk:289: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job jx-boot-4e526053-fa76-4b3e-82f6-c0a898e2d049 has Failed
error: failed to tail the Jenkins X boot Job pods: job jx-boot-4e526053-fa76-4b3e-82f6-c0a898e2d049 failed

ankitm123 commented 2 years ago

Removing kuberhealthy will fix this issue. Remove https://github.com/jx3-gitops-repositories/jx3-kubernetes/blob/e92160a1fe90573a461ae53d4c1d4f659defb8ea/helmfile.yaml#L4 and then push your changes.
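
A minimal sketch of that change (my addition; the entry text is an assumption based on the jx3 gitops layout linked above, so check your own helmfile.yaml):

    cd <your-cluster-repo>   # hypothetical placeholder for your cluster GitOps repo
    # edit helmfile.yaml and delete the entry that points at helmfiles/kuberhealthy/helmfile.yaml
    git add helmfile.yaml
    git commit -m "chore: remove kuberhealthy (its CRDs target apiextensions.k8s.io/v1beta1, removed in k8s 1.22)"
    git push                 # the git operator re-runs the boot job on push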

ykantoni commented 1 year ago

> Removing kuberhealthy will fix this issue. Remove https://github.com/jx3-gitops-repositories/jx3-kubernetes/blob/e92160a1fe90573a461ae53d4c1d4f659defb8ea/helmfile.yaml#L4 and then push your changes.

Thank you, tried it against bare metal k8s 1.24.7; it now goes much further, passing the place where it failed last time:

. . .
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured   <=== failed after this
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=cluster -R -f config-root/cluster
namespace/jx-production configured
. . .

But it still fails (see the attached jx_admin_log_k8s_1_24_7.txt). Would appreciate any suggestion on how to proceed:

. . .
serviceaccount/tekton-pipelines-webhook unchanged
service/tekton-pipelines-webhook unchanged
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"
secret/webhook-certs unchanged
make[1]: [versionStream/src/Makefile.mk:291: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:243: regen-check] Error 1
boot Job pod jx-boot-dcfc3986-5bd2-428e-8153-bef81fa58977-ksq7m has Failed

jx_admin_log_k8s_1_24_7.txt

ykantoni commented 1 year ago

...and the same jx-boot failure with kuberhealthy disabled when bare metal k8s v1.22.16 is used, see attached:

secret/webhook-certs created
[unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"]
Error from server (InternalError): error when creating "config-root/namespaces/jx/jx-pipelines-visualizer/jx-pipelines-visualizer-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/bucketrepo-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/hook-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
make[1]: [versionStream/src/Makefile.mk:322: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:269: regen-check] Error 1
boot Job pod jx-boot-05d64476-2761-4edb-9247-71a504fdeab8-c5qz7 has Failed

jx_admin_log_k8s_1_22_16.txt
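
A hedged reading of the two logs above (my note, not from the thread): even with the kuberhealthy helmfile removed, the jx-kh-check (health-checks-jx) chart still renders KuberhealthyCheck resources into config-root, and those have no backing CRD. Ted Gelpi's write-up further down removes that chart as well; a sketch of that step, assuming the standard jx3 cluster-repo layout:

    cd <your-cluster-repo>   # hypothetical placeholder
    # in helmfiles/jx/helmfile.yaml, delete the release named health-checks-jx (chart jxgh/jx-kh-check)
    git add helmfiles/jx/helmfile.yaml
    git commit -m "chore: remove health-checks-jx, its KuberhealthyCheck resources have no CRD"
    git push                 # the next boot run regenerates config-root without them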

moorthi07 commented 1 year ago

make[1]: [versionStream/src/Makefile.mk:320: kubectl-apply] Error 1
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: [versionStream/src/Makefile.mk:269: regen-check] Error 1
boot Job jx-boot-3f90f428-8efa-456b-a16b-ac96382c14d5 has Failed
error: failed to tail the Jenkins X boot Job pods: job jx-boot-3f90f428-8efa-456b-a16b-ac96382c14d5 failed

jx version
version: 3.10.19
shaCommit: 6638c0ef41cf8ccdd08765227258c42804e1011c
buildDate: Wed Dec 7 14:42:35 UTC 2022
goVersion: 1.19.3
branch: main
gitTreeState: clean

% kubectl get ns
NAME              STATUS   AGE
default           Active   12h
ingress-nginx     Active   12h
jx-git-operator   Active   5h7m
kube-node-lease   Active   12h
kube-public       Active   12h
kube-system       Active   12h

i-am-yuvi commented 1 year ago

@moorthi07 What version of Kubernetes are you using?

hjstorch commented 1 year ago

Exactly the same here: k8s 1.24 running on colima (v3.10.58). How can I recover from this?

tankilo commented 1 year ago

Same on k8s 1.23.10.

using kubectl to apply resources
if [ -d config-root/customresourcedefinitions ]; then \
  kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=customresourcedefinitions -R -f config-root/customresourcedefinitions; \
fi
customresourcedefinition.apiextensions.k8s.io/environments.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/pipelineactivities.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/releases.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/sourcerepositories.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/previews.preview.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousebreakpoints.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousejobs.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/externalsecrets.kubernetes-client.io unchanged
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/conditions.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineresources.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/resolutionrequests.resolution.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/runs.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khchecks.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khjobs.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "config-root/customresourcedefinitions/kuberhealthy/kuberhealthy/khstates.comcast.github.io-crd.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
make[1]: *** [versionStream/src/Makefile.mk:320: kubectl-apply] Error 1
make[1]: Leaving directory '/workspace/source'
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:269: regen-check] Error 1

CPinhoK commented 1 year ago

Same error with kuberhealthy disabled on k8s version 1.24.10

serviceaccount/tekton-pipelines-webhook created
service/tekton-pipelines-webhook created
secret/webhook-certs created
[unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-bot-token-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-events-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1", unable to recognize "config-root/namespaces/jx/jx-kh-check-health-checks-jx/jx-webhook-kuberhealthycheck.yaml": no matches for kind "KuberhealthyCheck" in version "comcast.github.io/v1"]
Error from server (InternalError): error when creating "config-root/namespaces/jx/jx-pipelines-visualizer/jx-pipelines-visualizer-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/bucketrepo-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
Error from server (InternalError): error when creating "config-root/namespaces/jx/jxboot-helmfile-resources/hook-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.nginx.svc:443/networking/v1/ingresses?timeout=10s": service "ingress-nginx-controller-admission" not found
make[1]: Leaving directory '/workspace/source'
make[1]: *** [versionStream/src/Makefile.mk:322: kubectl-apply] Error 1
error: failed to regenerate: failed to regenerate phase 1: failed to run 'make regen-phase-1 NEW_CLUSTER=true' command in directory '.', output: ''
make: *** [versionStream/src/Makefile.mk:269: regen-check] Error 1

omics42 commented 1 year ago

You can get the exact cause of the error by manually running make regen-phase-1 NEW_CLUSTER=true in the cluster repo directory; that is how I surfaced the underlying errors in my case.
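
A minimal sketch of that manual run (my addition; the repository URL is a placeholder and it assumes your kubeconfig already points at the affected cluster):

    git clone <your-cluster-repo-url>    # hypothetical placeholder for your cluster GitOps repo
    cd <your-cluster-repo>
    make regen-phase-1 NEW_CLUSTER=true  # prints the full helmfile/kubectl output that the boot job hides behind output: ''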

tomhobson commented 1 year ago

I just wanted to let you all know that we are looking at this.

Ted Gelpi posted the following write-up in the Kubernetes Slack #jenkins-x-user channel (thanks Ted!).

See https://kubernetes.slack.com/archives/C9MBGQJRH/p1679344807123509?thread_ts=1679325968.633559&cid=C9MBGQJRH for a better-formatted version of the example.

Example of building a GKE/DNS/TLS/v1.24 configuration.

Create a service account with appropriate permissions. You will be using a service account to access Google and build the necessary resources. Follow the link below for instructions to create the service account. Once complete you will have a JSON file, and its location will be set in the variable GOOGLE_APPLICATION_CREDENTIALS: https://jenkins-x.io/v3/admin/platforms/google/svc_acct/
NOTE: If your DNS zone is under a separate Google project, make sure you include those additional steps. For DNS you only need to define the apex domain (e.g. jx3rocks.com); the subdomain will be defined later on under the infra repo.

The environment will consist of two repos, Infra and Cluster. For the Cluster you have a choice of either Google Secret Manager (GSM) or Vault. The preferred Cluster type is GSM and is used in the demonstration below.

  1. Create two Git repos.
    • Create a new Infra repo from jx3-terraform-gke: https://github.com/jx3-gitops-repositories/jx3-terraform-gke/generate
    • Create a new Cluster repo from jx3-gke-gsm (Cluster repo, Google Secret Manager, PREFERRED): https://github.com/jx3-gitops-repositories/jx3-gke-gsm/generate
      or from jx3-gke-vault (Cluster repo, Vault): https://github.com/jx3-gitops-repositories/jx3-gke-vault/generate
    At this point you should have two repos.
  2. Configure the Cluster repo. Change to the cluster repo and remove the health-checks-jx chart entry from helmfiles/jx/helmfile.yaml:

       - chart: jxgh/jx-kh-check
         version: 0.0.78
         condition: jxRequirementsKuberhealthy.enabled
         name: health-checks-jx
         values:
         - ../../versionStream/charts/jxgh/health-checks-jx/values.yaml.gotmpl
         - jx-values.yaml

     In the same file (helmfile.yaml), add two statements to the acme-jx chart to disable the issuer:

       - chart: jxgh/acme
         version: 0.0.24
         condition: jxRequirementsIngressTLS.enabled
         name: acme-jx
         values:
         - ../../versionStream/charts/jxgh/acme-jx/values.yaml.gotmpl
         - jx-values.yaml
         - issuer:            ## Include this line
             enabled: false   ## Include this line

     The next step is optional. To make your staging environment URLs differ from your non-staging URLs, modify the 'environments:' and 'ingress:' sections of jx-requirements.yml as follows:

       environments:
       - key: dev
       - key: staging
         ingress:
           namespaceSubDomain: -stg.
       - key: production
         ingress:
           domain: ""
           externalDNS: false
           namespaceSubDomain: .
           tls:
             email: ""
             enabled: false
             production: false

     As stated, this step is optional, but I find it generates a better URL naming convention.
  3. Push the Cluster repo changes to Git:

       git commit -a -m "my init"
       git push
  4. Modify the Infra repo. NOTE: you should already have the Google service account created and your GOOGLE_APPLICATION_CREDENTIALS set. In the Infra repo root directory, modify main.tf to disable kuberhealthy (it defaults to true):

       kuberhealthy = false

     Create a new values.auto.tfvars file with the following:

       jx_git_url              = "https://github.com/xxx/jx3-gke-gsm.git"
       gcp_project             = "jx3rocks-project-sub"
       apex_domain_gcp_project = "jx3rocks-project-apex"
       apex_domain             = "jx3rocks.com"
       subdomain               = "tst"
       tls_email               = "xxx@gmail.com"
       cluster_location        = "us-east1-b"
       cluster_name            = "jx3tst"
       gsm                     = true
       force_destroy           = true

     The above file should be modified to your specs. This example uses tst.jx3rocks.com with two separate Google projects and names the cluster jx3tst. If you don't have a separate project for the DNS apex, just remove the apex_domain_gcp_project line from this file.
  5. Run the Terraform commands:

       terraform init
       terraform apply

     Once this completes (hopefully without error), try viewing the cluster nodes with kubectl get nodes. If you don't have cluster access, try submitting the connect string (terraform output connect). Switch to the cluster repo directory and follow the status with jx admin log. It can take some time (about 20 minutes) to complete. Keep using jx admin log to monitor when the job has completed. When the run completes, do the following:
    • Pull down the latest changes to your local cluster repo (git pull).
    • Check for certs with kubectl get certs -n jx. They should be there with READY = false.
    • Go back to the helmfile.yaml that was edited in step 2 to disable the issuer and remove those lines.
    • Push the recent changes to Git (git commit -a -m "blah"; git push).
    • Follow the status of the change with jx admin log.
    • Once the changes have been made, pull them down to your local repo (git pull).
    • Check the certs again (kubectl get certs -n jx); it might take some time (5-10 minutes) but eventually READY should turn to true.
    • Check your ingresses with kubectl get ing -n jx. You should see something like the following:

       kubectl get ing
       NAME                      CLASS   HOSTS                          ADDRESS        PORTS     AGE
       chartmuseum                       chartmuseum.smp.jx3rocks.org   34.75.211.42   80, 443   164m
       hook                              hook.smp.jx3rocks.org          34.75.211.42   80, 443   164m
       jx-pipelines-visualizer           dashboard.smp.jx3rocks.org     34.75.211.42   80, 443   164m
       nexus                             nexus.smp.jx3rocks.org         34.75.211.42   80, 443   164m

     In your browser, access your chartmuseum URL (HOST). You should get a secure welcome page.
  6. EXTRA CREDIT. If you want to launch an app, position yourself in a suitable location (not your repo root directories) and run the following type of command:

       jx project quickstart --git-token <token> \
         --git-username <username> \
         --org <org> \
         --name <name> \
         --filter node-http \
         --batch-mode

     It shouldn't prompt you, but if it does, respond and it will launch your first app! Hope this helps. Good luck.

Vargaf commented 2 months ago

I am having the same issue on a baremetal cluster with the following versions:

Any tip that could help me?