jenkins-x / jx

Jenkins X provides automated CI+CD for Kubernetes with Preview Environments on Pull Requests using Cloud Native pipelines from Tekton
https://jenkins-x.io/
Apache License 2.0

jx project quickcreate fails to trigger lighthouse, "lighthouse-hmac-token" not found, similar to issue #7589 different error #7941

Closed michaelerobertsjr closed 2 years ago

michaelerobertsjr commented 3 years ago

jx version: 3.2.188, eks-jx version: 1.15.41


Steps to reproduce the error:

When I create the quickstart project, everything works fine until it waits to find the trigger for my git repo

using:

jx project quickstart

The error I am receiving is:

error: failed to find hmac token from secret: could not find lighthouse hmac token lighthouse-hmac-token in namespace jx: secrets "lighthouse-hmac-token" not found
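For context on why this secret matters: Lighthouse uses the token stored in lighthouse-hmac-token as the shared secret for verifying webhook payload signatures, so without it jx cannot register or validate webhooks for the repository. A minimal, illustrative sketch of the signature scheme (values are stand-ins, not from the cluster):

```shell
# Illustrative only: webhook providers sign the payload with an HMAC (GitHub
# sends an X-Hub-Signature-256 header) and the receiver recomputes the digest
# with the shared token to verify the request.
TOKEN=supersecret            # stand-in for the lighthouse-hmac-token value
BODY='{"action":"opened"}'   # stand-in webhook payload
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$TOKEN" | awk '{print $2}')"
echo "$SIG"
```

If the token secret is missing, this verification can never succeed, which is why the boot job refuses to set up the webhooks.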


rawlingsj commented 3 years ago

What installation are you using? It sounds like the git operator didn't succeed in creating the required secrets. Can you check the git operator job logs with:

jx admin logs
michaelerobertsjr commented 3 years ago
tailing boot Job pod jx-boot-67b58e05-922e-49ef-8bb8-16c08111c174-wsxn2

jx gitops git setup
found git user.name ccc-jenkins from requirements
found git user.email  from requirements
setup git user  email jenkins-x@googlegroups.com
generated Git credentials file: /workspace/xdg_config/git/credentials with username: ccc-jenkins email:
jx gitops apply
found last commit message: Merge pull request #1 from SanDiegoCodeSchool/pr-27d114b6-da20-455c-a51f-1213a0ec8112

chore: import repository https://github.com/ccc-jenkins/jenkins-x-node-demo-simple.git
last commit was a merge pull request without changing an ExternalSecret so not regenerating
make regen-none
make[1]: Entering directory '/workspace/source'
make[1]: Nothing to be done for 'regen-none'.
make[1]: Leaving directory '/workspace/source'
using kubectl to apply resources
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=customresourcedefinitions -R -f config-root/customresourcedefinitions
customresourcedefinition.apiextensions.k8s.io/environments.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/pipelineactivities.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/releases.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/sourcerepositories.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/previews.preview.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousebreakpoints.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousejobs.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/externalsecrets.kubernetes-client.io unchanged
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/conditions.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineresources.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/runs.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=cluster                   -R -f config-root/cluster
namespace/jx-production configured
namespace/jx-staging configured
namespace/jx unchanged
namespace/nginx configured
namespace/secret-infra configured
clusterrole.rbac.authorization.k8s.io/jx-build-controller-jx unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-build-controller-jx unchanged
clusterrole.rbac.authorization.k8s.io/jx-pipelines-visualizer unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-pipelines-visualizer unchanged
clusterrole.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
clusterrole.rbac.authorization.k8s.io/jenkinsx-aggregate-view unchanged
clusterrole.rbac.authorization.k8s.io/tekton-bot unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-bot-jx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-external-secrets-auth unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-external-secrets unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-external-secrets unchanged
clusterrole.rbac.authorization.k8s.io/pusher-wave-pusher-wave unchanged
clusterrolebinding.rbac.authorization.k8s.io/pusher-wave-pusher-wave unchanged
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-edit unchanged
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-view unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access unchanged
namespace/tekton-pipelines unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access unchanged
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=namespaces                -R -f config-root/namespaces
deployment.apps/jenkins-x-chartmuseum configured
persistentvolumeclaim/jenkins-x-chartmuseum unchanged
externalsecret.kubernetes-client.io/jenkins-x-chartmuseum unchanged
service/jenkins-x-chartmuseum unchanged
serviceaccount/jenkins-x-controllerbuild unchanged
deployment.apps/jx-build-controller configured
rolebinding.rbac.authorization.k8s.io/jx-build-controller unchanged
role.rbac.authorization.k8s.io/jx-build-controller unchanged
kuberhealthycheck.comcast.github.io/jx-bot-token unchanged
rolebinding.rbac.authorization.k8s.io/jx-webhook-check-rb unchanged
rolebinding.rbac.authorization.k8s.io/jx-webhook-events-check-rb unchanged
kuberhealthycheck.comcast.github.io/jx-webhook-events unchanged
serviceaccount/jx-webhook-events-sa unchanged
role.rbac.authorization.k8s.io/jx-webhook-events-service-role unchanged
kuberhealthycheck.comcast.github.io/jx-webhook unchanged
serviceaccount/jx-webhook-sa unchanged
role.rbac.authorization.k8s.io/jx-webhook-service-role unchanged
deployment.apps/jx-pipelines-visualizer configured
ingress.networking.k8s.io/jx-pipelines-visualizer unchanged
serviceaccount/jx-pipelines-visualizer unchanged
service/jx-pipelines-visualizer unchanged
cronjob.batch/jx-preview-gc-jobs unchanged
rolebinding.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
role.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
serviceaccount/jx-preview-gc-jobs unchanged
ingress.networking.k8s.io/chartmuseum unchanged
role.rbac.authorization.k8s.io/committer unchanged
environment.jenkins.io/dev unchanged
sourcerepository.jenkins.io/dev unchanged
rolebinding.rbac.authorization.k8s.io/gcactivities unchanged
role.rbac.authorization.k8s.io/gcactivities unchanged
rolebinding.rbac.authorization.k8s.io/gcpods unchanged
role.rbac.authorization.k8s.io/gcpods unchanged
ingress.networking.k8s.io/hook unchanged
configmap/ingress-config unchanged
externalsecret.kubernetes-client.io/jenkins-maven-settings unchanged
configmap/jenkins-x-docker-registry unchanged
configmap/jenkins-x-extensions unchanged
externalsecret.kubernetes-client.io/jx-basic-auth-htpasswd unchanged
externalsecret.kubernetes-client.io/jx-basic-auth-user-password unchanged
cronjob.batch/jx-gcactivities unchanged
serviceaccount/jx-gcactivities unchanged
cronjob.batch/jx-gcpods unchanged
serviceaccount/jx-gcpods unchanged
role.rbac.authorization.k8s.io/jx-pipeline-activity-updater unchanged
role.rbac.authorization.k8s.io/jx-view unchanged
configmap/kapp-config unchanged
ingress.networking.k8s.io/nexus unchanged
role.rbac.authorization.k8s.io/owner unchanged
environment.jenkins.io/production unchanged
environment.jenkins.io/staging unchanged
rolebinding.rbac.authorization.k8s.io/tekton-bot unchanged
role.rbac.authorization.k8s.io/tekton-bot unchanged
serviceaccount/tekton-bot configured
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
externalsecret.kubernetes-client.io/tekton-git unchanged
role.rbac.authorization.k8s.io/viewer unchanged
service/hook unchanged
configmap/lighthouse-external-plugins unchanged
deployment.apps/lighthouse-foghorn configured
rolebinding.rbac.authorization.k8s.io/lighthouse-foghorn unchanged
role.rbac.authorization.k8s.io/lighthouse-foghorn unchanged
serviceaccount/lighthouse-foghorn unchanged
cronjob.batch/lighthouse-gc-jobs unchanged
rolebinding.rbac.authorization.k8s.io/lighthouse-gc-jobs unchanged
role.rbac.authorization.k8s.io/lighthouse-gc-jobs unchanged
serviceaccount/lighthouse-gc-jobs unchanged
externalsecret.kubernetes-client.io/lighthouse-hmac-token unchanged
deployment.apps/lighthouse-keeper configured
rolebinding.rbac.authorization.k8s.io/lighthouse-keeper unchanged
role.rbac.authorization.k8s.io/lighthouse-keeper unchanged
serviceaccount/lighthouse-keeper unchanged
service/lighthouse-keeper unchanged
externalsecret.kubernetes-client.io/lighthouse-oauth-token unchanged
deployment.apps/lighthouse-tekton-controller configured
rolebinding.rbac.authorization.k8s.io/lighthouse-tekton-controller unchanged
role.rbac.authorization.k8s.io/lighthouse-tekton-controller unchanged
serviceaccount/lighthouse-tekton-controller unchanged
service/lighthouse-tekton-controller unchanged
deployment.apps/lighthouse-webhooks configured
rolebinding.rbac.authorization.k8s.io/lighthouse-webhooks unchanged
role.rbac.authorization.k8s.io/lighthouse-webhooks unchanged
serviceaccount/lighthouse-webhooks unchanged
configmap/config configured
configmap/jx-install-config unchanged
configmap/plugins configured
configmap/nexus unchanged
deployment.apps/nexus-nexus configured
persistentvolumeclaim/nexus-nexus unchanged
externalsecret.kubernetes-client.io/nexus unchanged
service/nexus unchanged
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
service/ingress-nginx-controller-admission unchanged
configmap/ingress-nginx-controller configured
deployment.apps/ingress-nginx-controller configured
service/ingress-nginx-controller-metrics unchanged
poddisruptionbudget.policy/ingress-nginx-controller configured
service/ingress-nginx-controller unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
serviceaccount/ingress-nginx-admission unchanged
deployment.apps/kubernetes-external-secrets unchanged
serviceaccount/kubernetes-external-secrets unchanged
service/kubernetes-external-secrets unchanged
deployment.apps/pusher-wave-pusher-wave configured
serviceaccount/pusher-wave-pusher-wave unchanged
configmap/config-artifact-bucket unchanged
configmap/config-artifact-pvc unchanged
configmap/config-defaults unchanged
configmap/config-leader-election unchanged
configmap/config-logging unchanged
configmap/config-observability unchanged
configmap/config-registry-cert unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.pipeline.tekton.dev unchanged
configmap/feature-flags unchanged
configmap/pipelines-info unchanged
serviceaccount/tekton-bot configured
deployment.apps/tekton-pipelines-controller unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-controller unchanged
serviceaccount/tekton-pipelines-controller unchanged
service/tekton-pipelines-controller unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-info unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-info unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-leader-election unchanged
podsecuritypolicy.policy/tekton-pipelines configured
deployment.apps/tekton-pipelines-webhook unchanged
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-webhook unchanged
serviceaccount/tekton-pipelines-webhook unchanged
service/tekton-pipelines-webhook unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pipeline.tekton.dev unchanged
secret/webhook-certs unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.pipeline.tekton.dev unchanged
jx gitops postprocess
there is no post processing Secret jx-post-process in namespace default so not performing any additional post processing steps
changing to the jx namespace to verify
jx ns jx --quiet
Now using namespace 'jx' on server ''.
jx verify ingress --ingress-service ingress-nginx-controller
now verifying docker registry ingress setup
jx gitops webhook update --warn-on-fail
Error: failed to find hmac token from secret: could not find lighthouse hmac token lighthouse-hmac-token in namespace jx: secrets "lighthouse-hmac-token" not found
Usage:
  update [flags]

Examples:
  # update all the webhooks for all SourceRepository and Environment resource:
  jx-gitops update

  # only update the webhooks for a given owner
  jx-gitops update --org=mycorp

  # use a custom hook webhook endpoint (e.g. if you are on premise using node ports or something)
  jx-gitops update --endpoint http://mything.com

Flags:
  -b, --batch-mode                 Runs in batch mode without prompting for user input
      --endpoint string            Don't use the endpoint from the cluster, use the provided endpoint
      --exact-hook-url-match       Whether to exactly match the hook based on the URL (default true)
      --git-kind string            the kind of git server to connect to
      --git-server string          the git server URL to create the scm client
      --git-token string           the git token used to operate on the git repository. If not specified it's loaded from the git credentials file
      --git-username string        the git username used to operate on the git repository. If not specified it's loaded from the git credentials file
  -h, --help                       help for update
      --hmac string                Don't use the HMAC token from the cluster, use the provided token
      --log-level string           Sets the logging level. If not specified defaults to $JX_LOG_LEVEL
  -o, --owner string               The name of the git organisation or user to filter on
      --previous-hook-url string   Whether to match based on an another URL
  -r, --repo string                The name of the repository to filter on
      --verbose                    Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warn, info, debug, trace
      --warn-on-fail               If enabled lets just log a warning that we could not update the webhook

error: failed to find hmac token from secret: could not find lighthouse hmac token lighthouse-hmac-token in namespace jx: secrets "lighthouse-hmac-token" not found
make: *** [versionStream/src/Makefile.mk:211: gitops-webhook-update] Error 1
boot Job pod jx-boot-67b58e05-922e-49ef-8bb8-16c08111c174-wsxn2 has Failed
error: boot Job pod jx-boot-67b58e05-922e-49ef-8bb8-16c08111c174-2w5mx has Failed
ERROR: exit status 1


ankitm123 commented 3 years ago

Can you post the output of kubectl get secrets -n jx? As James mentioned and the logs show, your hmac token secret is missing.

ankitm123 commented 3 years ago

Also, I think the issue may have been fixed by the latest Terraform module. Change the version to 1.15.44 (or the latest at that time), do a terraform apply, and let us know if that fixes your problem. https://github.com/jenkins-x/terraform-aws-eks-jx/releases/
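A sketch of that upgrade path, assuming the module is consumed as jenkins-x/eks-jx/aws in your Terraform root (the module name and version pin below are illustrative):

```shell
# Sketch of the suggested upgrade, run from the Terraform root.
# First pin the newer module version in main.tf, e.g.:
#   module "eks-jx" {
#     source  = "jenkins-x/eks-jx/aws"
#     version = "1.15.44"
#     ...
#   }
MODULE_VERSION="1.15.44"
terraform init -upgrade || true   # fetch the newer module (no-op without a Terraform root)
# terraform apply                 # then re-apply; this re-runs the boot job
echo "pinned eks-jx module version: $MODULE_VERSION"
```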

michaelerobertsjr commented 3 years ago

I performed a terraform init -upgrade and a terraform apply.

Here is the output of kubectl get secrets -n jx:

NAME                                       TYPE                                  DATA   AGE
default-token-g89tn                        kubernetes.io/service-account-token   3      17m
jenkins-x-controllerbuild-token-g4lj4      kubernetes.io/service-account-token   3      17m
jx-gcactivities-token-b8dzx                kubernetes.io/service-account-token   3      17m
jx-gcpods-token-4lvxh                      kubernetes.io/service-account-token   3      17m
jx-pipelines-visualizer-token-7fxvc        kubernetes.io/service-account-token   3      17m
jx-preview-gc-jobs-token-kpdkv             kubernetes.io/service-account-token   3      17m
jx-webhook-events-sa-token-l7dnf           kubernetes.io/service-account-token   3      17m
jx-webhook-sa-token-hg6r7                  kubernetes.io/service-account-token   3      17m
lighthouse-foghorn-token-hbxzh             kubernetes.io/service-account-token   3      17m
lighthouse-gc-jobs-token-5q45q             kubernetes.io/service-account-token   3      17m
lighthouse-keeper-token-pscb9              kubernetes.io/service-account-token   3      17m
lighthouse-tekton-controller-token-kgdz6   kubernetes.io/service-account-token   3      17m
lighthouse-webhooks-token-x6zxw            kubernetes.io/service-account-token   3      17m
tekton-bot-token-d2k6j                     kubernetes.io/service-account-token   3      17m
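Note that everything in that list is a kubernetes.io/service-account-token; none of the ExternalSecret-backed secrets (including lighthouse-hmac-token) were materialized. A diagnostic sketch (namespace names taken from the boot logs above; the kubectl calls need access to the affected cluster and are guarded so they are harmless no-ops otherwise):

```shell
# Requires access to the affected cluster; guarded to be a no-op otherwise.
NS=jx
# The ExternalSecret resources exist (the boot log applied them) but never synced:
kubectl get externalsecrets -n "$NS" || true
# The controller that materializes them runs in secret-infra; its logs usually
# show why the backend lookup failed:
kubectl logs -n secret-infra deploy/kubernetes-external-secrets --tail=50 || true
```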


michaelerobertsjr commented 3 years ago

I made a commit to my cluster repo and it started a regenerate chore, but when I check the logs I see:

jx gitops scheduler
jx gitops hash --pod-spec --kind Deployment -s config-root/namespaces/jx/lighthouse-config/config-cm.yaml -s config-root/namespaces/jx/lighthouse-config/plugins-cm.yaml -d config-root/namespaces/jx/lighthouse
jx gitops label --dir config-root/cluster                   gitops.jenkins-x.io/pipeline=cluster
jx gitops label --dir config-root/customresourcedefinitions gitops.jenkins-x.io/pipeline=customresourcedefinitions
jx gitops label --dir config-root/namespaces                gitops.jenkins-x.io/pipeline=namespaces
jx gitops annotate --dir config-root --selector app=pusher-wave kapp.k14s.io/change-group=apps.jenkins-x.io/pusher-wave
jx gitops annotate --dir config-root --selector app.kubernetes.io/name=ingress-nginx kapp.k14s.io/change-group=apps.jenkins-x.io/ingress-nginx
jx gitops label --dir config-root/cluster --kind=Namespace team=jx
jx gitops annotate --dir  config-root/namespaces --kind Deployment --selector app=pusher-wave --invert-selector wave.pusher.com/update-on-config-change=true
jx gitops git setup
found git user.name ccc-jenkins from requirements
found git user.email  from requirements
setup git user  email jenkins-x@googlegroups.com
generated Git credentials file: /workspace/xdg_config/git/credentials with username: ccc-jenkins email:
git add --all
git commit -m "chore: regenerated" -m "/pipeline cancel"
On branch main
Your branch is ahead of 'origin/main' by 1 commit.
  (use "git push" to publish your local commits)

nothing to commit, working tree clean
make[1]: [versionStream/src/Makefile.mk:323: commit] Error 1 (ignored)
make[1]: Leaving directory '/workspace/source'
make regen-phase-3
make[1]: Entering directory '/workspace/source'
Already up to date.
To https://github.com/SanDiegoCodeSchool/jenkins-x-cluster
   6b69845..c586462  main -> main
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault jx secret populate --secret-namespace jx-vault
waiting for vault pod vault-0 in namespace jx-vault to be ready...
pod vault-0 in namespace jx-vault is ready
verifying we have vault installed
about to run: /root/.jx/plugins/bin/vault-1.6.1 version
Vault v1.6.1 (6d2db3f033e02e70202bef9ec896360062b88b03)
verifying we can connect to vault...
about to run: /root/.jx/plugins/bin/vault-1.6.1 kv list secret
Keys
----
accounts/
dockerrepo
mysql
vault is setup correctly!

managed to verify we can connect to vault
VAULT_ADDR=https://vault.jx-vault:8200 jx secret wait -n jx
waiting for the mandatory Secrets to be populated from ExternalSecrets...
jenkins-x-chartmuseum: key secret/data/jx/adminUser missing properties: password, username
jx-basic-auth-user-password: key secret/data/jx/basic/auth/user missing properties: password, key secret/data/jx/basic/auth/user/password missing properties: username
lighthouse-hmac-token: key secret/data/lighthouse/hmac missing properties: token
lighthouse-oauth-token: key secret/data/lighthouse/oauth missing properties: token
nexus: key secret/data/nexus missing properties: password
tekton-container-registry-auth: key secret/data/tekton/container/registry/auth missing properties: .dockerconfigjson
tekton-git: key secret/data/jx/pipelineUser missing properties: token, username
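The jx secret wait output above pinpoints the problem: the backing Vault keys were never populated. A hedged sketch of populating the Lighthouse HMAC key by hand, using the path reported above (jx secret edit is the interactive alternative; the vault write is commented out because it needs access to the jx-vault instance):

```shell
# Generate a random webhook token locally; any sufficiently long random string
# works as an HMAC shared secret (42 hex chars here).
HMAC_TOKEN=$(openssl rand -hex 21)
echo "token length: ${#HMAC_TOKEN}"

# Then write it to the path `jx secret wait` reported as missing. Note the KV v2
# mapping: secret/data/lighthouse/hmac in the API corresponds to
# secret/lighthouse/hmac on the CLI.
# Requires VAULT_ADDR/VAULT_TOKEN for the jx-vault instance:
# vault kv put secret/lighthouse/hmac token="$HMAC_TOKEN"
```

The remaining missing keys (oauth token, registry auth, pipeline user, etc.) would need the same treatment, or a re-run of jx secret populate once the backend is reachable.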


ag0783 commented 2 years ago

Hi,

I'm setting up Jenkins X for the first time and I've also run into this issue. This is a brand new EKS cluster being created from scratch following the Quickstart guide.

jx version = 3.2.216, eks-jx version = 1.18.1, kubernetes version = 1.20

▶ jx admin log                       
waiting for the Git Operator to be ready in namespace jx-git-operator...
pod jx-git-operator-7bc44fc4c-vl4zl has status Ready
the Git Operator is running in pod jx-git-operator-7bc44fc4c-vl4zl

waiting for boot Job pod with selector app=jx-boot in namespace jx-git-operator...
waiting for Job jx-boot-f221e719-496e-4c62-95e0-ed62c3b59396 to complete...
pod jx-boot-f221e719-496e-4c62-95e0-ed62c3b59396-264gt has status Ready

tailing boot Job pod jx-boot-f221e719-496e-4c62-95e0-ed62c3b59396-264gt

jx gitops git setup
found git user.name ag from requirements
found git user.email  from requirements
setup git user  email jenkins-x@googlegroups.com
generated Git credentials file: /workspace/xdg_config/git/credentials with username: ag email: 
jx gitops apply
found last commit message: chore: regenerated

/pipeline cancel
last commit disabled further processing
using kubectl to apply resources
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=customresourcedefinitions -R -f config-root/customresourcedefinitions
customresourcedefinition.apiextensions.k8s.io/environments.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/pipelineactivities.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/releases.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/sourcerepositories.jenkins.io configured
customresourcedefinition.apiextensions.k8s.io/previews.preview.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousebreakpoints.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/lighthousejobs.lighthouse.jenkins.io unchanged
customresourcedefinition.apiextensions.k8s.io/externalsecrets.kubernetes-client.io unchanged
customresourcedefinition.apiextensions.k8s.io/clustertasks.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/conditions.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineresources.tekton.dev unchanged
customresourcedefinition.apiextensions.k8s.io/pipelineruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/pipelines.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/runs.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/taskruns.tekton.dev configured
customresourcedefinition.apiextensions.k8s.io/tasks.tekton.dev configured
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=cluster                   -R -f config-root/cluster
namespace/jx-production configured
namespace/jx-staging configured
namespace/jx unchanged
namespace/nginx configured
namespace/secret-infra configured
clusterrole.rbac.authorization.k8s.io/jx-build-controller-jx unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-build-controller-jx unchanged
clusterrole.rbac.authorization.k8s.io/jx-pipelines-visualizer unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-pipelines-visualizer unchanged
clusterrole.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
clusterrolebinding.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
clusterrole.rbac.authorization.k8s.io/jenkinsx-aggregate-view unchanged
clusterrole.rbac.authorization.k8s.io/tekton-bot unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-bot-jx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-external-secrets-auth unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-external-secrets unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-external-secrets unchanged
clusterrole.rbac.authorization.k8s.io/pusher-wave-pusher-wave unchanged
clusterrolebinding.rbac.authorization.k8s.io/pusher-wave-pusher-wave unchanged
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-edit unchanged
clusterrole.rbac.authorization.k8s.io/tekton-aggregate-view unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access unchanged
namespace/tekton-pipelines unchanged
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access unchanged
clusterrolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access unchanged
kubectl apply --force --prune -l=gitops.jenkins-x.io/pipeline=namespaces                -R -f config-root/namespaces
deployment.apps/jenkins-x-chartmuseum configured
persistentvolumeclaim/jenkins-x-chartmuseum unchanged
externalsecret.kubernetes-client.io/jenkins-x-chartmuseum unchanged
service/jenkins-x-chartmuseum unchanged
serviceaccount/jenkins-x-controllerbuild unchanged
deployment.apps/jx-build-controller configured
rolebinding.rbac.authorization.k8s.io/jx-build-controller unchanged
role.rbac.authorization.k8s.io/jx-build-controller unchanged
kuberhealthycheck.comcast.github.io/jx-bot-token unchanged
rolebinding.rbac.authorization.k8s.io/jx-webhook-check-rb unchanged
rolebinding.rbac.authorization.k8s.io/jx-webhook-events-check-rb unchanged
kuberhealthycheck.comcast.github.io/jx-webhook-events unchanged
serviceaccount/jx-webhook-events-sa unchanged
role.rbac.authorization.k8s.io/jx-webhook-events-service-role unchanged
kuberhealthycheck.comcast.github.io/jx-webhook unchanged
serviceaccount/jx-webhook-sa unchanged
role.rbac.authorization.k8s.io/jx-webhook-service-role unchanged
deployment.apps/jx-pipelines-visualizer configured
ingress.networking.k8s.io/jx-pipelines-visualizer unchanged
serviceaccount/jx-pipelines-visualizer unchanged
service/jx-pipelines-visualizer unchanged
cronjob.batch/jx-preview-gc-jobs unchanged
rolebinding.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
role.rbac.authorization.k8s.io/jx-preview-gc-jobs unchanged
serviceaccount/jx-preview-gc-jobs unchanged
ingress.networking.k8s.io/chartmuseum unchanged
role.rbac.authorization.k8s.io/committer unchanged
environment.jenkins.io/dev unchanged
sourcerepository.jenkins.io/dev unchanged
rolebinding.rbac.authorization.k8s.io/gcactivities unchanged
role.rbac.authorization.k8s.io/gcactivities unchanged
rolebinding.rbac.authorization.k8s.io/gcpods unchanged
role.rbac.authorization.k8s.io/gcpods unchanged
ingress.networking.k8s.io/hook unchanged
configmap/ingress-config unchanged
externalsecret.kubernetes-client.io/jenkins-maven-settings unchanged
configmap/jenkins-x-docker-registry unchanged
configmap/jenkins-x-extensions unchanged
externalsecret.kubernetes-client.io/jx-basic-auth-htpasswd unchanged
externalsecret.kubernetes-client.io/jx-basic-auth-user-password unchanged
cronjob.batch/jx-gcactivities unchanged
serviceaccount/jx-gcactivities unchanged
cronjob.batch/jx-gcpods unchanged
serviceaccount/jx-gcpods unchanged
role.rbac.authorization.k8s.io/jx-pipeline-activity-updater unchanged
role.rbac.authorization.k8s.io/jx-view unchanged
configmap/kapp-config unchanged
ingress.networking.k8s.io/nexus unchanged
role.rbac.authorization.k8s.io/owner unchanged
environment.jenkins.io/production unchanged
environment.jenkins.io/staging unchanged
rolebinding.rbac.authorization.k8s.io/tekton-bot unchanged
role.rbac.authorization.k8s.io/tekton-bot unchanged
serviceaccount/tekton-bot configured
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
externalsecret.kubernetes-client.io/tekton-git unchanged
role.rbac.authorization.k8s.io/viewer unchanged
service/hook unchanged
configmap/lighthouse-external-plugins unchanged
deployment.apps/lighthouse-foghorn configured
rolebinding.rbac.authorization.k8s.io/lighthouse-foghorn unchanged
role.rbac.authorization.k8s.io/lighthouse-foghorn unchanged
serviceaccount/lighthouse-foghorn unchanged
cronjob.batch/lighthouse-gc-jobs unchanged
rolebinding.rbac.authorization.k8s.io/lighthouse-gc-jobs unchanged
role.rbac.authorization.k8s.io/lighthouse-gc-jobs unchanged
serviceaccount/lighthouse-gc-jobs unchanged
externalsecret.kubernetes-client.io/lighthouse-hmac-token unchanged
deployment.apps/lighthouse-keeper configured
rolebinding.rbac.authorization.k8s.io/lighthouse-keeper unchanged
role.rbac.authorization.k8s.io/lighthouse-keeper unchanged
serviceaccount/lighthouse-keeper unchanged
service/lighthouse-keeper unchanged
externalsecret.kubernetes-client.io/lighthouse-oauth-token unchanged
deployment.apps/lighthouse-tekton-controller configured
rolebinding.rbac.authorization.k8s.io/lighthouse-tekton-controller unchanged
role.rbac.authorization.k8s.io/lighthouse-tekton-controller unchanged
serviceaccount/lighthouse-tekton-controller unchanged
service/lighthouse-tekton-controller unchanged
deployment.apps/lighthouse-webhooks configured
rolebinding.rbac.authorization.k8s.io/lighthouse-webhooks unchanged
role.rbac.authorization.k8s.io/lighthouse-webhooks unchanged
serviceaccount/lighthouse-webhooks unchanged
configmap/config configured
configmap/jx-install-config unchanged
configmap/plugins configured
configmap/nexus unchanged
deployment.apps/nexus-nexus configured
persistentvolumeclaim/nexus-nexus unchanged
externalsecret.kubernetes-client.io/nexus unchanged
service/nexus unchanged
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
externalsecret.kubernetes-client.io/tekton-container-registry-auth unchanged
service/ingress-nginx-controller-admission unchanged
configmap/ingress-nginx-controller configured
deployment.apps/ingress-nginx-controller configured
service/ingress-nginx-controller-metrics unchanged
poddisruptionbudget.policy/ingress-nginx-controller unchanged
service/ingress-nginx-controller unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
serviceaccount/ingress-nginx-admission unchanged
deployment.apps/kubernetes-external-secrets unchanged
serviceaccount/kubernetes-external-secrets unchanged
service/kubernetes-external-secrets unchanged
deployment.apps/pusher-wave-pusher-wave configured
serviceaccount/pusher-wave-pusher-wave unchanged
configmap/config-artifact-bucket unchanged
configmap/config-artifact-pvc unchanged
configmap/config-defaults unchanged
configmap/config-leader-election unchanged
configmap/config-logging unchanged
configmap/config-observability unchanged
configmap/config-registry-cert unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/config.webhook.pipeline.tekton.dev unchanged
configmap/feature-flags unchanged
configmap/pipelines-info unchanged
serviceaccount/tekton-bot configured
deployment.apps/tekton-pipelines-controller unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-controller unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-controller unchanged
serviceaccount/tekton-pipelines-controller unchanged
service/tekton-pipelines-controller unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-info unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-info unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-leader-election unchanged
podsecuritypolicy.policy/tekton-pipelines configured
deployment.apps/tekton-pipelines-webhook unchanged
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook-leaderelection unchanged
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-webhook unchanged
role.rbac.authorization.k8s.io/tekton-pipelines-webhook unchanged
serviceaccount/tekton-pipelines-webhook unchanged
service/tekton-pipelines-webhook unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.pipeline.tekton.dev unchanged
secret/webhook-certs unchanged
mutatingwebhookconfiguration.admissionregistration.k8s.io/webhook.pipeline.tekton.dev unchanged
jx gitops postprocess
there is no post processing Secret jx-post-process in namespace default so not performing any additional post processing steps
changing to the jx namespace to verify
jx ns jx --quiet
Now using namespace 'jx' on server ''.
jx verify ingress --ingress-service ingress-nginx-controller
now verifying docker registry ingress setup
jx gitops webhook update --warn-on-fail
Error: failed to find hmac token from secret: could not find lighthouse hmac token lighthouse-hmac-token in namespace jx: secrets "lighthouse-hmac-token" not found
Usage:
  update [flags]

Examples:
  # update all the webhooks for all SourceRepository and Environment resource:
  jx-gitops update

  # only update the webhooks for a given owner
  jx-gitops update --org=mycorp

  # use a custom hook webhook endpoint (e.g. if you are on premise using node ports or something)
  jx-gitops update --endpoint http://mything.com

Flags:
  -b, --batch-mode                 Runs in batch mode without prompting for user input
      --endpoint string            Don't use the endpoint from the cluster, use the provided endpoint
      --exact-hook-url-match       Whether to exactly match the hook based on the URL (default true)
      --git-kind string            the kind of git server to connect to
      --git-server string          the git server URL to create the scm client
      --git-token string           the git token used to operate on the git repository. If not specified it's loaded from the git credentials file
      --git-username string        the git username used to operate on the git repository. If not specified it's loaded from the git credentials file
  -h, --help                       help for update
      --hmac string                Don't use the HMAC token from the cluster, use the provided token
      --log-level string           Sets the logging level. If not specified defaults to $JX_LOG_LEVEL
  -o, --owner string               The name of the git organisation or user to filter on
      --previous-hook-url string   Whether to match based on an another URL
  -r, --repo string                The name of the repository to filter on
      --verbose                    Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warn, info, debug, trace
      --warn-on-fail               If enabled lets just log a warning that we could not update the webhook

error: failed to find hmac token from secret: could not find lighthouse hmac token lighthouse-hmac-token in namespace jx: secrets "lighthouse-hmac-token" not found
make: *** [versionStream/src/Makefile.mk:212: gitops-webhook-update] Error 1
boot Job pod jx-boot-f221e719-496e-4c62-95e0-ed62c3b59396-264gt has Failed
...
...

and

▶ kubectl get secrets -n jx
NAME                                       TYPE                                  DATA   AGE
default-token-8hp2r                        kubernetes.io/service-account-token   3      21m
jenkins-x-controllerbuild-token-n67kg      kubernetes.io/service-account-token   3      21m
jx-gcactivities-token-b9sbb                kubernetes.io/service-account-token   3      21m
jx-gcpods-token-5kj2d                      kubernetes.io/service-account-token   3      21m
jx-pipelines-visualizer-token-qzfc8        kubernetes.io/service-account-token   3      21m
jx-preview-gc-jobs-token-zlbmt             kubernetes.io/service-account-token   3      21m
jx-webhook-events-sa-token-nm2f6           kubernetes.io/service-account-token   3      21m
jx-webhook-sa-token-sg6lr                  kubernetes.io/service-account-token   3      21m
lighthouse-foghorn-token-qmsgp             kubernetes.io/service-account-token   3      21m
lighthouse-gc-jobs-token-kw8pr             kubernetes.io/service-account-token   3      21m
lighthouse-keeper-token-rr8tj              kubernetes.io/service-account-token   3      21m
lighthouse-tekton-controller-token-pj7gh   kubernetes.io/service-account-token   3      21m
lighthouse-webhooks-token-th4xb            kubernetes.io/service-account-token   3      21m
tekton-bot-token-92bgm                     kubernetes.io/service-account-token   3      21m

Is there a fix for this yet or do I need to change the configuration in either of the infrastructure or cluster repositories?

ankitm123 commented 2 years ago

Are you using vault (internal vs external)? What is the output of kubectl get es -A, jx secret verify and the output from the vault pods if jx created the vault pods for you?

If they are both working, just make a dummy commit directly to master, and check in the jx admin log that jx secret populate is working.

ag0783 commented 2 years ago

Hi,

I started off by using Vault (internal) but soon ran into issues because the AWS environment I need to use has a number of restrictions, and I wasn't able to create a new IAM user, so the Terraform failed. I tried providing my own user for Vault but that also failed, for reasons I can't quite remember. So rather than try to set up my own Vault instance, I thought (or hoped) it would be easier to use AWS Secrets Manager instead, since the Quickstart guide listed that as an option. So that's what I'm using now.

The output I get from the commands above isn't promising:

▶ kubectl get es -A
NAMESPACE       NAME                             LAST SYNC   STATUS                                                                                                                                                                                                                                                                                                            AGE
jx-production   tekton-container-registry-auth   0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: tekton-container-registry-auth because no identity-based policy allows the secretsmanager:GetSecretValue action   3m37s
jx-staging      tekton-container-registry-auth   0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: tekton-container-registry-auth because no identity-based policy allows the secretsmanager:GetSecretValue action   3m37s
jx              jenkins-maven-settings           0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-maven-settings because no identity-based policy allows the secretsmanager:GetSecretValue action                3m39s
jx              jenkins-x-chartmuseum            0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-admin-user because no identity-based policy allows the secretsmanager:GetSecretValue action                    3m41s
jx              jx-basic-auth-htpasswd           0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-basic-auth-htpasswd because no identity-based policy allows the secretsmanager:GetSecretValue action           3m39s
jx              jx-basic-auth-user-password      0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-basic-auth-user because no identity-based policy allows the secretsmanager:GetSecretValue action               3m39s
jx              lighthouse-hmac-token            0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: lighthouse-hmac because no identity-based policy allows the secretsmanager:GetSecretValue action                  3m39s
jx              lighthouse-oauth-token           0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: lighthouse-oauth because no identity-based policy allows the secretsmanager:GetSecretValue action                 3m38s
jx              nexus                            0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-admin-user because no identity-based policy allows the secretsmanager:GetSecretValue action                    3m38s
jx              tekton-container-registry-auth   0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: tekton-container-registry-auth because no identity-based policy allows the secretsmanager:GetSecretValue action   3m39s
jx              tekton-git                       0s          ERROR, User: < trimmed > is not authorized to perform: secretsmanager:GetSecretValue on resource: jx-pipeline-user because no identity-based policy allows the secretsmanager:GetSecretValue action                 3m39s

Which I guess explains why the next command provides this:

▶ jx secret verify
SECRET                                       STATUS
jx-production/tekton-container-registry-auth key tekton-container-registry-auth missing properties: 
jx-staging/tekton-container-registry-auth    key tekton-container-registry-auth missing properties: 
jx/jenkins-maven-settings                    key jx-maven-settings missing properties: settingsXml, securityXml
jx/jenkins-x-chartmuseum                     key jx-admin-user missing properties: password, username
jx/jx-basic-auth-htpasswd                    key jx-basic-auth-htpasswd missing properties: 
jx/jx-basic-auth-user-password               key jx-basic-auth-user missing properties: password, username
jx/lighthouse-hmac-token                     key lighthouse-hmac missing properties: 
jx/lighthouse-oauth-token                    key lighthouse-oauth missing properties: 
jx/nexus                                     key jx-admin-user missing properties: 
jx/tekton-container-registry-auth            key tekton-container-registry-auth missing properties: 
jx/tekton-git                                key jx-pipeline-user missing properties: token, username

A lot of this is new to me but it feels like Jenkins X expects to be installed in an environment that has fewer restrictions. Is that the case here or is there something I can tweak in my configuration to work around this issue?

Thanks!

ankitm123 commented 2 years ago

The IAM role attached to the Kubernetes external secrets service account needs to allow at least the following actions: https://github.com/jenkins-x/terraform-aws-eks-jx/blob/master/modules/cluster/irsa.tf#L389-L398
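For reference, a minimal sketch of such a policy in Terraform, assuming the action list from the linked irsa.tf (resource names and ARN placeholders here are illustrative, not the module's actual resources):

```hcl
# Sketch only: the real resources/ARNs live in modules/cluster/irsa.tf
# of terraform-aws-eks-jx.
data "aws_iam_policy_document" "external_secrets_asm" {
  statement {
    effect = "Allow"
    actions = [
      "secretsmanager:CreateSecret",
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetResourcePolicy",
      "secretsmanager:GetSecretValue",
      "secretsmanager:ListSecretVersionIds",
      "secretsmanager:PutSecretValue",
      "secretsmanager:UpdateSecret",
    ]
    resources = ["arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:*"]
  }

  # ListSecrets does not support resource-level permissions, so it
  # needs a wildcard resource in its own statement.
  statement {
    effect    = "Allow"
    actions   = ["secretsmanager:ListSecrets"]
    resources = ["*"]
  }
}
```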

ag0783 commented 2 years ago

Hi,

Thanks for pointing me to that code. I'm now getting my head around AWS roles (which is pretty new to me) - I had assumed that policies would be inherited from the AWS profile I used to trigger the Terraform but it seems this isn't the case.

If my understanding is incorrect, please do correct me... Jenkins X creates a number of IAM roles when executing the Terraform - these are applied to EC2 instances. E.g. tf-jx-huge-aphid2021112xxxxxxxxxxxxxxxxxxxx

The policies that this role is given are:

Between them they grant access to the services:

None of them grant access to 'Secrets Manager', which my role (the one I passed into Terraform) does have.

Is this perhaps an issue with the Terraform configuration? I would expect the roles to be created with the necessary policies.

Thanks again!

ankitm123 commented 2 years ago

Are you using the eks-jx module (https://github.com/jenkins-x/terraform-aws-eks-jx)? The https://github.com/jx3-gitops-repositories/jx3-terraform-eks quickstart is based on it. If so, the creation of the ASM role is managed by the create_asm_role variable. By default it is set to false, as we install vault by default in jx. If you add create_asm_role = true to your main.tf, it should create that role, and then you should not see that issue. See also: https://github.com/jenkins-x/terraform-aws-eks-jx#secrets-management
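A minimal sketch of the relevant inputs in main.tf when using the eks-jx module with AWS Secrets Manager (the module source line is illustrative; keep whatever source/version your quickstart already uses):

```hcl
module "eks-jx" {
  source = "jenkins-x/eks-jx/aws" # illustrative; match your quickstart

  # Use AWS Secrets Manager instead of the default Vault install
  use_vault = false
  use_asm   = true

  # Create the IRSA role that lets kubernetes-external-secrets
  # access Secrets Manager (defaults to false)
  create_asm_role = true
}
```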

So, there are different ways to give access to resources in EKS. The ones you pointed out are node IAM roles (IAM policies attached directly to the EKS worker nodes). For Secrets Manager we don't rely on the node IAM role. Instead we use IRSA (IAM Roles for Service Accounts), which basically means we attach an IAM role to the external secrets service account, giving it access to AWS Secrets Manager. See: https://github.com/jenkins-x/terraform-aws-eks-jx/blob/master/modules/cluster/irsa.tf#L385-L420
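Concretely, IRSA works by annotating the Kubernetes service account with an IAM role ARN; EKS then injects a web-identity token that pods using this service account exchange to assume that role. A sketch of what the annotated service account looks like (account ID and role name are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-external-secrets
  namespace: secret-infra
  annotations:
    # IRSA: pods using this service account assume this IAM role
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/CLUSTER-external-secrets-secrets-manager
```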

ag0783 commented 2 years ago

Hi,

Yes, I am using the eks-jx module based on the Quickstart you pointed to. Thanks for pointing out the create_asm_role variable! I had missed that one and had only set:

use_vault            = false
use_asm              = true

Re-running the Terraform still produces the same error messages as before, but this time I can at least see the new IAM role xxx-external-secrets-secrets-manager. The associated policy has the service "Secrets Manager", which lists only 7 of the 8 actions in the irsa.tf file you linked earlier. The action ListSecrets is annotated with "No access" and provides the following message:

This action does not support resource-level permissions. This requires a wildcard (*) for the resource.

and points to this AWS page.

The following snippet should fix this particular issue (it seems to work on my setup), but it doesn't fix my original issue with secretsmanager:GetSecretValue.

data "aws_iam_policy_document" "secrets-manager-policy" {
  count = var.create_asm_role ? 1 : 0
  statement {
    effect = "Allow"
    actions = [
      "secretsmanager:CreateSecret",
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetResourcePolicy",
      "secretsmanager:GetSecretValue",
      "secretsmanager:ListSecretVersionIds",
      "secretsmanager:PutSecretValue",
      "secretsmanager:UpdateSecret",
    ]
    resources = [
      "arn:${data.aws_partition.current.partition}:secretsmanager:${var.region}:${local.project}:secret:secret/data/lighthouse/*",
      "arn:${data.aws_partition.current.partition}:secretsmanager:${var.region}:${local.project}:secret:secret/data/jx/*",
      "arn:${data.aws_partition.current.partition}:secretsmanager:${var.region}:${local.project}:secret:secret/data/nexus/*"
    ]
  }
  statement {
    effect = "Allow"
    actions = ["secretsmanager:ListSecrets"]
    resources = ["*"]
  }
}

I don't know how to interrogate or debug the association of IRSA with a Kubernetes service account. Can you offer any pointers?

For clarity (in this longer-than-expected reply): even with the addition of create_asm_role = true, I still get the same errors as above.

Thanks again!

ankitm123 commented 2 years ago

A few things to check:

ag0783 commented 2 years ago

Thanks for the suggestions! Here's what I found...

I appear to have an OIDC issuer:

▶ aws eks describe-cluster --name tf-jx-on-swine --query "cluster.identity.oidc.issuer" --output text
https://oidc.eks.eu-west-2.amazonaws.com/id/76FE93544B26F23CD2C97732F7D37B38

The IAM role that was created after setting create_asm_role = true is (on this latest instance): tf-jx-on-swine-external-secrets-secrets-manager. The user that was previously < trimmed > (when running kubectl get es -A) is: arn:aws:sts::1172xxxxxxxx:assumed-role/tf-jx-on-swine20211126082652997200000013/i-06447ae16cac3e5b4. From my small understanding of AWS, that user is assuming permissions based on the role tf-jx-on-swine20211126082652997200000013. This felt odd to me, as I had expected it to be linked to tf-jx-on-swine-external-secrets-secrets-manager, but I assumed there was some AWS magic happening in the background, based on your explanation that there are multiple ways to give access to resources in EKS (i.e. this uses IRSA rather than node IAM roles). Is there something not quite right here?

TBH I'm not totally sure what the third bullet point is asking (sorry - this is all new to me!) but I've run this:

▶ kubectl describe -n secret-infra serviceaccounts                             
Name:                default
Namespace:           secret-infra
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-hrphq
Tokens:              default-token-hrphq
Events:              <none>

Name:                kubernetes-external-secrets
Namespace:           secret-infra
Labels:              app.kubernetes.io/instance=kubernetes-external-secrets
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=kubernetes-external-secrets
                     gitops.jenkins-x.io/pipeline=namespaces
                     helm.sh/chart=kubernetes-external-secrets-8.3.0
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1172xxxxxxxx:role/tf-jx-fluent-cheetah-external-secrets-secrets-manager
                     meta.helm.sh/release-name: kubernetes-external-secrets
Image pull secrets:  <none>
Mountable secrets:   kubernetes-external-secrets-token-jpjhg
Tokens:              kubernetes-external-secrets-token-jpjhg
Events:              <none>
...

There is an annotation but it seems to reference an IAM role that I can't find in the AWS console: tf-jx-fluent-cheetah-external-secrets-secrets-manager. Why would that be? I do feel like I've seen fluent-cheetah before on a previous instance.

As it happens I've been running terraform destroy at the end of each day to tidy up since Jenkins X wasn't functioning and I didn't want to rack up costs in AWS (hence my references to this/previous instances). Thus each time I rerun terraform apply I get resources with slightly different names/IDs.

ag0783 commented 2 years ago

I've just discovered that Jenkins X had successfully merged a couple of commits to the cluster repository (a week ago), which hard-coded a bunch of configuration, such as resources that include the fluent-cheetah reference. I feel these ought to be reverted so that Jenkins X can merge new commits that reference a new instance - hopefully one that will work.

As a side question... if Jenkins X had fully installed correctly and was subsequently uninstalled, would I find myself in a similar situation? Or would Jenkins X pull everything from the cluster repository such that it could be stood up exactly the same, and this type of naming conflict wouldn't occur?

ag0783 commented 2 years ago

Doing the above has (I believe) fixed one of the issues that confused me: the role seen in the annotation now matches the role in the error message. E.g.

▶ kubectl describe -n secret-infra serviceaccounts       
...
Name:                kubernetes-external-secrets
Namespace:           secret-infra
Labels:              app.kubernetes.io/instance=kubernetes-external-secrets
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=kubernetes-external-secrets
                     gitops.jenkins-x.io/pipeline=namespaces
                     helm.sh/chart=kubernetes-external-secrets-8.3.0
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::1172xxxxxxxx:role/tf-jx-hip-mastiff-external-secrets-secrets-manager
                     meta.helm.sh/release-name: kubernetes-external-secrets
Image pull secrets:  <none>
Mountable secrets:   kubernetes-external-secrets-token-88pkq
Tokens:              kubernetes-external-secrets-token-88pkq
Events:              <none>
...

and

NAMESPACE       NAME                             LAST SYNC   STATUS                                                                                                                                                                                                                                                                                                                    AGE
jx-production   tekton-container-registry-auth   15s         ERROR, User: arn:aws:sts::1172xxxxxxxx:assumed-role/tf-jx-hip-mastiff-external-secrets-secrets-manager/token-file-web-identity is not authorized to perform: secretsmanager:GetSecretValue on resource: tekton-container-registry-auth because no identity-based policy allows the secretsmanager:GetSecretValue action   4h2m
...

I feel that's progress, albeit only a little! I've also manually fixed the secretsmanager:ListSecrets issue in the AWS IAM Console and pushed an empty commit to the cluster repository to see if that would make anything jump into life, but sadly not.

ankitm123 commented 2 years ago

As a side question...if Jenkins X had fully installed correctly and was subsequently uninstalled, would I find myself in a similar situation

Don't think so.

Is that the only secret (tekton-container-registry-auth) which has issues? Are you still seeing the same error with kubectl get es -A and jx secret verify? You may want to try changing the Secrets Manager policy to allow access to resources "*". We can then remove the broad access and debug.

ag0783 commented 2 years ago

No, I'm having issues with a number of (maybe all?) secrets.

I've changed the list of resources to "*" as suggested and that has possibly improved things. I'm no longer getting an authorisation issue, but instead the secret doesn't exist.

Now I get:

▶ kubectl get es -A                               
NAMESPACE       NAME                             LAST SYNC   STATUS                                                    AGE
jx-production   tekton-container-registry-auth   7s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx-staging      tekton-container-registry-auth   7s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx              jenkins-maven-settings           7s          ERROR, Secrets Manager can't find the specified secret.   3m30s
jx              jenkins-x-chartmuseum            7s          ERROR, Secrets Manager can't find the specified secret.   3m32s
jx              jx-basic-auth-htpasswd           8s          ERROR, Secrets Manager can't find the specified secret.   3m30s
jx              jx-basic-auth-user-password      7s          ERROR, Secrets Manager can't find the specified secret.   3m30s
jx              lighthouse-hmac-token            7s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx              lighthouse-oauth-token           8s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx              nexus                            7s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx              tekton-container-registry-auth   7s          ERROR, Secrets Manager can't find the specified secret.   3m29s
jx              tekton-git                       7s          ERROR, Secrets Manager can't find the specified secret.   3m29s

So it seems I am now allowed to access the secrets store but there is nothing in there.

ankitm123 commented 2 years ago

You may have to generate the secrets, since it never got to the secret generation phase before. Make a direct push to the master branch of the cluster git repo (not via a PR merging to master), and check in your boot log (jx admin log) that it has the line make regen-phase-3 (it should do make regen-phase-1 and make regen-phase-2 first).

ag0783 commented 2 years ago

I pushed an empty commit to the cluster repository and it doesn't seem to have had any effect.

Your references to regen-phase-x are spot on - it has run both regen-phase-1 and regen-phase-2 successfully but failed to run regen-phase-3. Here is a portion of the logs from jx admin log:

...
make regen-phase-3
make[1]: Entering directory '/workspace/source'
Already up to date.
To https://bitbucket/scm/cicd/jx3-eks-asm.git
   6350ef9..b5ea3ac  master -> master
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault EXTERNAL_VAULT=false jx secret populate --secret-namespace jx-vault
Error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : MissingRegion: could not find region configuration
Usage:
  populate [flags]

Examples:
  jx-secret populate

Flags:
  -b, --batch-mode                     Runs in batch mode without prompting for user input
      --boot-secret-namespace string   the namespace to that contains the boot secret used to populate git secrets from
  -d, --dir string                     the directory to look for the .jx/secret/mapping/secret-mappings.yaml file (default ".")
  -f, --filter string                  the filter to filter on ExternalSecret names
      --helm-secrets-dir string        the directory where the helm secrets live with a folder per namespace and a file with a '.yaml' extension for each secret name. Defaults to $JX_HELM_SECRET_FOLDER
  -h, --help                           help for populate
      --log-level string               Sets the logging level. If not specified defaults to $JX_LOG_LEVEL
      --no-wait                        disables waiting for the secret store (e.g. vault) to be available
  -n, --ns string                      the namespace to filter the ExternalSecret resources
      --secret-namespace string        the namespace in which secret infrastructure resides such as Hashicorp Vault (default "jx-vault")
  -s, --source string                  the source location for the ExternalSecrets, valid values include filesystem or kubernetes (default "kubernetes")
      --verbose                        Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warn, info, debug, trace
  -w, --wait duration                  the maximum time period to wait for the vault pod to be ready if using the vault backendType (default 2h0m0s)

error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : MissingRegion: could not find region configuration
VAULT_ADDR=https://vault.jx-vault:8200 jx secret wait -n jx
make[1]: [versionStream/src/Makefile.mk:226: secrets-populate] Error 1 (ignored)
waiting for the mandatory Secrets to be populated from ExternalSecrets...
jenkins-x-chartmuseum: key jx-admin-user missing properties: password, username
jx-basic-auth-user-password: key jx-basic-auth-user missing properties: password, username
lighthouse-hmac-token: key lighthouse-hmac missing properties: 
lighthouse-oauth-token: key lighthouse-oauth missing properties: 
nexus: key jx-admin-user missing properties: 
tekton-container-registry-auth: key tekton-container-registry-auth missing properties: 
tekton-git: key jx-pipeline-user missing properties: token, username

I found references to secrets-populate in a couple of Makefiles, though I'm not sure which is the correct one. Even though I was able to trace it back to a Makefile, I couldn't find what calls make regen-phase-3, so I struggled to figure out why it "could not find region configuration". I have definitely provided a region here.

ankitm123 commented 2 years ago

Try setting the region in the defaults section, and it should not error out: https://github.com/jx3-gitops-repositories/jx3-eks-asm/blob/main/.jx/secret/mapping/secret-mappings.yaml#L5

defaults:
    backendType: secretsManager
    region: us-east-2  # or any AWS region
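For context, that defaults block sits inside the SecretMapping document at .jx/secret/mapping/secret-mappings.yaml; a sketch of the surrounding structure, assuming the shape of the jx3-eks-asm template linked above (values are placeholders):

```yaml
# Sketch of .jx/secret/mapping/secret-mappings.yaml; adapt to your repo.
apiVersion: gitops.jenkins-x.io/v1alpha1
kind: SecretMapping
metadata:
  name: secret-mappings
spec:
  defaults:
    backendType: secretsManager
    # Without an explicit region, jx secret populate fails with
    # "MissingRegion: could not find region configuration"
    region: eu-west-2
```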

And make another direct commit to the master branch. Another thing: there is a bug with the way the lighthouse token secret gets populated, and I would love to see how it ends up stored so that I can issue a fix. Once everything works, search for lighthouse-oauth-token in AWS Secrets Manager and post the format of the secret (redact the actual value). It needs to be in a format such that you can do git clone https://$GIT_USER:$GIT_TOKEN@github.com/, but I think GIT_TOKEN gets set in the pod as GIT_TOKEN={"oauth":"ghp_somestring"} instead of GIT_TOKEN=ghp_somestring. If you can paste it, that would be wonderful.
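To illustrate the suspected token-format bug (token and repository values here are fake): a bare token interpolates into a valid clone URL, while the JSON-wrapped value does not.

```shell
# Bare token: what `git clone https://$GIT_USER:$GIT_TOKEN@...` needs
GIT_USER=myuser
GIT_TOKEN=ghp_somestring
echo "https://$GIT_USER:$GIT_TOKEN@github.com/myorg/myrepo.git"
# -> https://myuser:ghp_somestring@github.com/myorg/myrepo.git

# Suspected stored value: the JSON wrapper leaks into the URL,
# producing an invalid clone URL
GIT_TOKEN='{"oauth":"ghp_somestring"}'
echo "https://$GIT_USER:$GIT_TOKEN@github.com/myorg/myrepo.git"
# -> https://myuser:{"oauth":"ghp_somestring"}@github.com/myorg/myrepo.git
```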

ag0783 commented 2 years ago

Following the steps above I get the following output when running jx admin log:

make regen-phase-3
make[1]: Entering directory '/workspace/source'
Already up to date.
To https://bitbucket/scm/cicd/jx3-eks-asm.git
   ff29be6..787a649  master -> master
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault EXTERNAL_VAULT=false jx secret populate --secret-namespace jx-vault
Error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : AccessDeniedException: User: arn:aws:sts::1172xxxxxxxx:assumed-role/jx-test20211201114109921600000015/i-0b6c4daa5d8fbfb64 is not authorized to perform: secretsmanager:CreateSecret on resource: jx-admin-user because no identity-based policy allows the secretsmanager:CreateSecret action
    status code: 400, request id: 425a4e97-793c-44a0-aa09-d96d20c43630

It looks like it's using the wrong IAM role again (jx-test20211201114109921600000015), since the role with the necessary policies is jx-test-external-secrets-secrets-manager. This was previously fixed by setting create_asm_role = true. It is still set to true, so I'm going to try destroying everything and re-creating again from scratch.

ag0783 commented 2 years ago

Perhaps unexpectedly, a rebuild from scratch produced the same result; fuller error message below:

make regen-phase-3
 17 files changed, 18 insertions(+), 18 deletions(-)
make[1]: Leaving directory '/workspace/source'
make[1]: Entering directory '/workspace/source'
Already up to date.
To https://bitbucket/scm/cicd/jx3-eks-asm.git
   53256ae..da47e50  master -> master
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault EXTERNAL_VAULT=false jx secret populate --secret-namespace jx-vault
Error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : AccessDeniedException: User: arn:aws:sts::1172xxxxxxxx:assumed-role/jx-test20211201171330382900000015/i-0d681a80dcbc149ea is not authorized to perform: secretsmanager:CreateSecret on resource: jx-admin-user because no identity-based policy allows the secretsmanager:CreateSecret action
    status code: 400, request id: 9e8697aa-9445-4297-aefa-db2cf5877743
Usage:
  populate [flags]

Examples:
  jx-secret populate

Flags:
  -b, --batch-mode                     Runs in batch mode without prompting for user input
      --boot-secret-namespace string   the namespace to that contains the boot secret used to populate git secrets from
  -d, --dir string                     the directory to look for the .jx/secret/mapping/secret-mappings.yaml file (default ".")
  -f, --filter string                  the filter to filter on ExternalSecret names
      --helm-secrets-dir string        the directory where the helm secrets live with a folder per namespace and a file with a '.yaml' extension for each secret name. Defaults to $JX_HELM_SECRET_FOLDER
  -h, --help                           help for populate
      --log-level string               Sets the logging level. If not specified defaults to $JX_LOG_LEVEL
      --no-wait                        disables waiting for the secret store (e.g. vault) to be available
  -n, --ns string                      the namespace to filter the ExternalSecret resources
      --secret-namespace string        the namespace in which secret infrastructure resides such as Hashicorp Vault (default "jx-vault")
  -s, --source string                  the source location for the ExternalSecrets, valid values include filesystem or kubernetes (default "kubernetes")
      --verbose                        Enables verbose output. The environment variable JX_LOG_LEVEL has precedence over this flag and allows setting the logging level to any value of: panic, fatal, error, warn, info, debug, trace
  -w, --wait duration                  the maximum time period to wait for the vault pod to be ready if using the vault backendType (default 2h0m0s)

error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : AccessDeniedException: User: arn:aws:sts::1172xxxxxxxx:assumed-role/jx-test20211201171330382900000015/i-0d681a80dcbc149ea is not authorized to perform: secretsmanager:CreateSecret on resource: jx-admin-user because no identity-based policy allows the secretsmanager:CreateSecret action
    status code: 400, request id: 9e8697aa-9445-4297-aefa-db2cf5877743
make[1]: [versionStream/src/Makefile.mk:226: secrets-populate] Error 1 (ignored)
VAULT_ADDR=https://vault.jx-vault:8200 jx secret wait -n jx
waiting for the mandatory Secrets to be populated from ExternalSecrets...
jenkins-x-chartmuseum: key jx-admin-user missing properties: password, username
jx-basic-auth-user-password: key jx-basic-auth-user missing properties: password, username
lighthouse-hmac-token: key lighthouse-hmac missing properties: 
lighthouse-oauth-token: key lighthouse-oauth missing properties: 
nexus: key jx-admin-user missing properties: 
tekton-container-registry-auth: key tekton-container-registry-auth missing properties: 
tekton-git: key jx-pipeline-user missing properties: token, username

Would you expect this role to be used or would you expect jx-test-external-secrets-secrets-manager (which is what I expected based on this)?

ankitm123 commented 2 years ago

It might be easier to debug this in slack, join the jx-user slack community: https://kubernetes.slack.com/messages/C9MBGQJRH

It should have an assumed role though, so I don't think that's an issue. Can you try the same thing you tried before and change the resources to "*"? After you have made the change, push a direct commit to master (remember to set the region in defaults); hopefully that will fix it ...

ankitm123 commented 2 years ago

It should have an assumed role though, so I don't think that's an issue

On second thought, is it matching the IAM role attached to your EC2 worker node by any chance?

ag0783 commented 2 years ago

I'm on corporate IT and will need to check policy on Slack...

I had already included that change, to use "*" for the resources:

...
    actions = [
      "secretsmanager:CreateSecret",
      "secretsmanager:DescribeSecret",
      "secretsmanager:GetResourcePolicy",
      "secretsmanager:GetSecretValue",
      "secretsmanager:ListSecretVersionIds",
      "secretsmanager:PutSecretValue",
      "secretsmanager:UpdateSecret",
    ]
    resources = [
      "*",
    ]
...

My intention was to debug that later once I was able to get Jenkins X installed.

Yes, the IAM role jx-test20211201114109921600000015 was associated with the EC2 instances.

ankitm123 commented 2 years ago

Yes, the IAM role jx-test20211201114109921600000015 was associated with the EC2 instances.

Well that explains the problem.

A bit confused why it won't pick up the IAM role attached to the external secrets service account. When you describe the external secrets service account in the secret infra namespace, do you not see the IAM role attached as an annotation? In jx3-eks-asm, I see that we do set it here: https://github.com/jx3-gitops-repositories/jx3-eks-asm/blob/main/versionStream/charts/external-secrets/kubernetes-external-secrets/values.yaml.gotmpl#L25, which then gets passed to the external secrets helm chart here: https://github.com/external-secrets/kubernetes-external-secrets/blob/master/charts/kubernetes-external-secrets/templates/serviceaccount.yaml#L7-L9

ankitm123 commented 2 years ago

Also check the trust relationship on the external secrets IAM role. Under Conditions, do you see anything weird? The value should be system:serviceaccount:secret-infra:kubernetes-external-secrets (system:serviceaccount:<ns where the service account is>:<service-account-name>)
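The expected condition value can be built mechanically and compared against the live trust policy; a sketch (the role name in the commented CLI call is the one from this thread and may differ in your setup):

```shell
# The IRSA trust-policy condition should reference exactly this subject:
NS=secret-infra
SA=kubernetes-external-secrets
EXPECTED="system:serviceaccount:${NS}:${SA}"
echo "$EXPECTED"

# Compare against the live trust policy, e.g.:
#   aws iam get-role --role-name jx-test-external-secrets-secrets-manager \
#     --query 'Role.AssumeRolePolicyDocument'
```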

ankitm123 commented 2 years ago

Perhaps unexpectedly, a rebuild from scratch produced the same result - fuller error message below:

Completely new cluster? Did you follow this guide for uninstalling? https://jenkins-x.io/v3/admin/uninstall/

ag0783 commented 2 years ago

Completely new cluster? Did you follow this guide for uninstalling? https://jenkins-x.io/v3/admin/uninstall/

Yes, specifically this page: https://jenkins-x.io/v3/admin/uninstall/delete-jx-cluster/

I've even gone further and created a new cluster repository to remove any state added by jx in case that caused any confusion.

Looking at the AWS console IAM page, it shows the following condition for the role jx-test-external-secrets-secrets-manager (just as you expected): system:serviceaccount:secret-infra:kubernetes-external-secrets

The annotation attached to the service account description looks good:

▶ kubectl describe -n secret-infra serviceaccounts
Name:                kubernetes-external-secrets
Namespace:           secret-infra
Labels:              app.kubernetes.io/instance=kubernetes-external-secrets
                     app.kubernetes.io/managed-by=Helm
                     app.kubernetes.io/name=kubernetes-external-secrets
                     gitops.jenkins-x.io/pipeline=namespaces
                     helm.sh/chart=kubernetes-external-secrets-8.3.0
Annotations:         eks.amazonaws.com/role-arn: arn:aws:iam::117248621883:role/jx-test-external-secrets-secrets-manager
                     meta.helm.sh/release-name: kubernetes-external-secrets
Image pull secrets:  <none>
Mountable secrets:   kubernetes-external-secrets-token-dfvt9
Tokens:              kubernetes-external-secrets-token-dfvt9
Events:              <none>

But jx admin log still reports this issue (note the assumed role is jx-test20211202104840341300000015):

...
make regen-phase-3
Already up to date.
To https://bitbucket/scm/cicd/jx3-eks-asm.git
   53256ae..746aa97  master -> master
VAULT_ADDR=https://vault.jx-vault:8200 VAULT_NAMESPACE=jx-vault EXTERNAL_VAULT=false jx secret populate --secret-namespace jx-vault
Error: failed to populate secrets: failed to save properties key: jx-admin-user properties: password, username on ExternalSecret jenkins-x-chartmuseum: error creating new secret for aws secret manager: : AccessDeniedException: User: arn:aws:sts::1172xxxxxxxx:assumed-role/jx-test20211202104840341300000015/i-0ba9e33f6c5e9ca4e is not authorized to perform: secretsmanager:CreateSecret on resource: jx-admin-user because no identity-based policy allows the secretsmanager:CreateSecret action
    status code: 400, request id: 2a31d917-0898-4ea3-a84d-211557157271
Usage:
  populate [flags]

Examples:
  jx-secret populate
...
ankitm123 commented 2 years ago

Ok, then it seems something weird is going on with the networking; maybe your pods are not able to hit the STS endpoint. This may be related: https://github.com/external-secrets/kubernetes-external-secrets/issues/597#issuecomment-760338671

So 2 things to try:

See this: https://github.com/external-secrets/kubernetes-external-secrets/blob/master/charts/kubernetes-external-secrets/values.yaml#L11

Edit the helmfile.yaml for external secrets and add that file here: https://github.com/jx3-gitops-repositories/jx3-eks-asm/blob/main/helmfiles/secret-infra/helmfile.yaml#L20

Commit these changes and push to master. After that, the external secret pods should be much more verbose. Maybe they will say what is going on 🤷

I am out of ideas; the only thing I can think of is the STS endpoint being unreachable because of some network policy ...
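Concretely, the suggested log-level change is a small values file wired into the external-secrets release. A sketch; the file name here is hypothetical, and LOG_LEVEL comes from the chart's values.yaml linked above:

```yaml
# Hypothetical file, e.g. helmfiles/secret-infra/external-secrets-debug-values.yaml,
# added to the `values:` list of the kubernetes-external-secrets release in
# helmfiles/secret-infra/helmfile.yaml.
env:
  LOG_LEVEL: debug   # chart default is "info"
```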

ankitm123 commented 2 years ago

If nothing weird comes out, then I would advise maybe attaching the secrets manager policy to the worker node, and seeing if that unblocks you for now.

I took a look at this again, and this is the right solution for now.

The issue is that jx secret populate is running under a different service account, not the external secrets one (not sure how I missed this detail), so I need to modify the eks-jx module to add an IAM role to the service account under which jx boot runs. But for now, adding an EC2 role with access to ASM is the fastest way to unblock debugging.

ag0783 commented 2 years ago

Magic! It sounds like you've found the source of the issue!

I did change the log level and give it another try but there wasn't anything new in the logs. They continued to point to the same problem.

I'll keep an eye out for an update and give it another try then. Thanks for your time and help to get to the bottom of this one!

ankitm123 commented 2 years ago

So, try the latest jx version (https://github.com/jenkins-x/jx/releases/tag/v3.2.236), and the latest eks-jx module version (https://github.com/jenkins-x/terraform-aws-eks-jx/releases/tag/v1.18.7), and I believe the issues with ASM should be gone.

Once you verify that the issues are fixed, I will go ahead and close this issue.

FYI, If you see this error:

secret already exists, and marked for deletion etc ...

Just run this (for all the secrets):

aws secretsmanager delete-secret --secret-id jx-basic-auth-user-password --force-delete-without-recovery --profile <insert-profile> --region <insert-region>

Don't omit the region.
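To clean them all up at once, the delete can be looped over every secret named in the `jx secret wait` output earlier. A sketch: the DRY_RUN guard is a convenience added here, the profile/region placeholders still need filling in, and the actual ASM secret IDs may differ in your setup:

```shell
# Force-delete every secret left behind by a failed boot. With DRY_RUN=1
# (the default) the commands are only printed, not executed.
PROFILE="${PROFILE:-<insert-profile>}"
REGION="${REGION:-<insert-region>}"
DRY_RUN="${DRY_RUN:-1}"

count=0
for s in jenkins-x-chartmuseum jx-basic-auth-user-password \
         lighthouse-hmac-token lighthouse-oauth-token nexus \
         tekton-container-registry-auth tekton-git; do
  count=$((count + 1))
  if [ "$DRY_RUN" = "1" ]; then
    echo "aws secretsmanager delete-secret --secret-id $s" \
         "--force-delete-without-recovery --profile $PROFILE --region $REGION"
  else
    aws secretsmanager delete-secret --secret-id "$s" \
      --force-delete-without-recovery --profile "$PROFILE" --region "$REGION"
  fi
done
```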

ag0783 commented 2 years ago

I can confirm that has fixed the issue.

Thank you!

ankitm123 commented 2 years ago

/close

closing this as the issue with ASM has been fixed for now.

jenkins-x-bot commented 2 years ago

@ankitm123: Closing this issue.

In response to [this](https://github.com/jenkins-x/jx/issues/7941#issuecomment-999027818):

> /close
> closing this as the issue with ASM has been fixed for now.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [jenkins-x/lighthouse](https://github.com/jenkins-x/lighthouse/issues/new?title=Command%20issue:) repository.