[Closed] maiakhoa closed this issue 3 years ago
So this issue is related to not having an OpenID Connect (OIDC) provider for your cluster. Normally when you create an EKS cluster, the provider gets created alongside the cluster. Once that OIDC provider is up, it's used by the pods to assume the IAM roles associated with their service account. See: https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html
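For reference, a quick way to check whether the cluster already has an OIDC provider wired up (a sketch assuming the aws CLI and eksctl are installed; `<cluster-name>` is a placeholder):

```shell
# Print the cluster's OIDC issuer URL (empty/None means nothing is configured):
#   aws eks describe-cluster --name <cluster-name> \
#     --query "cluster.identity.oidc.issuer" --output text
# Associate a provider if one is missing:
#   eksctl utils associate-iam-oidc-provider --cluster <cluster-name> --approve

# Locally runnable helper: the provider shown by
# `aws iam list-open-id-connect-providers` is the issuer URL minus its scheme.
issuer_to_provider_id() {
  printf '%s\n' "${1#https://}"
}

issuer_to_provider_id "https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890"
# → oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE1234567890
```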
thanks @ankitm123
I can get past that step, but it gets stuck at the build-image step for both the pipelines and the Jenkins server:
```
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the runes `abcdefghijklmnopqrstuvwxyz0123456789_-./`
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: checking push permission for "": unsupported status code 401; body: Not Authorized
```
Do you have any idea how to fix that?
Do you have push access to the ECR repo, and permission to create it? Does the repo exist? This may be related: https://github.com/tektoncd/pipeline/issues/992
It exists; I used it for a long time with Jenkins X 2.
Can you paste the pipeline step where you are building/pushing images to ECR?
```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  creationTimestamp: null
  name: release
spec:
  pipelineSpec:
    tasks:
    - name: from-build-pack
      resources: {}
      taskSpec:
        metadata: {}
        stepTemplate:
          env:
          - name: _JAVA_OPTIONS
            value: -XX:+UnlockExperimentalVMOptions -Dsun.zip.disableMemoryMapping=true
              -XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4
              -XX:AdaptiveSizePolicyWeight=90 -Xms10m -Xmx1024m
          - name: MAVEN_OPTS
            value: -Dhttp.keepAlive=false -Dmaven.wagon.http.pool=false -Dmaven.wagon.httpconnectionManager.ttlSeconds=25 -Dmaven.wagon.http.retryHandler.count=3
          image: uses:jenkins-x/jx3-pipeline-catalog/tasks/maven-java11/release.yaml@versionStream
          name: ""
          resources:
            requests:
              cpu: 400m
              memory: 1024Mi
          volumeMounts:
          - mountPath: /root/.m2/
            name: maven-settings
          - mountPath: /root/.gnupg
            name: release-gpg
          workingDir: /workspace/source
        steps:
        - image: uses:jenkins-x/jx3-pipeline-catalog/tasks/git-clone/git-clone.yaml@versionStream
          name: ""
          resources: {}
        - name: next-version
          resources: {}
        - name: jx-variables
          resources: {}
        - name: build-mvn-deploy
          resources: {}
        - name: check-registry
          resources: {}
        - name: build-container-build
          resources: {}
        volumes:
        - name: maven-settings
          secret:
            secretName: jenkins-maven-settings
        - name: release-gpg
          secret:
            optional: true
            secretName: jenkins-release-gpg
  podTemplate: {}
  serviceAccountName: tekton-bot
  timeout: 12h0m0s
status: {}
```
This is the one I copied from the quickstart template.
I'm also facing the same issue when using Kaniko to build images from the Jenkins server. Do you have any solution to enable Kaniko support in Jenkins X 3?
So I guess this is the step where your pipeline fails: `build-container-build`? Can you print out the environment variables at that step, specifically `PUSH_CONTAINER_REGISTRY` and `DOCKER_REGISTRY_ORG`? I believe they are not set correctly.
FYI, this is the place where this pipeline catalog comes from: https://github.com/jenkins-x/jx3-pipeline-catalog/blob/master/tasks/maven-java11/pullrequest.yaml#L61-L68
Do you have any solution to enable Kaniko support in Jenkins X 3?
There is no extra step to enable kaniko for JX3. It's just a binary inside a docker container. The build-container-build step already uses kaniko. I will make a PR to update the version of kaniko from 1.3.0 to 1.6.0.
I see `tekton-container-registry-auth` is empty. What should I do to fill it?
I got this message from secret-infra:
`"status update failed for externalsecret jx-staging/tekton-container-registry-auth, due to modification, new poller should start"`
EDIT: do we have any way to repopulate the secret?
do we have any way to repopulate secret?
You can make a dummy commit in the cluster git repository (the one generated from jx3-eks-vault), and it should regenerate the external secrets. It would be good to have a gist of the logs from the PR you open.
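The dummy-commit nudge can be as simple as an empty commit. A sketch, demonstrated in a throwaway repository so it runs anywhere; in practice you would do this in your cluster git repository and push:

```shell
# An empty commit is enough to re-trigger the boot job that syncs external
# secrets; shown here in a temporary repo instead of the real cluster repo.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "chore: nudge external secret regeneration"
git -C "$repo" rev-list --count HEAD   # → 1

# In the real cluster repo, follow the commit with:
#   git push origin main
```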
Regarding the message from secret-infra, I see this: https://kubernetes.slack.com/archives/C017BF84G2Y/p1597264180001500
I see tekton-container-registry-auth is empty
Again, I'm not sure why that is the case; something seems off with external secrets in your cluster ...
FWIW, it normally has:

```json
{
  "credHelpers": {
    "insert-aws-account-id": "ecr-login"
  }
}
```
This gets eventually used by kaniko as it has ecr cred helpers inside the kaniko image: https://github.com/GoogleContainerTools/kaniko/blob/master/deploy/Dockerfile#L39-L40
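To make the mechanism concrete: when a client (docker or kaniko) finds a `credHelpers` entry for the registry host, it execs a binary named `docker-credential-<value>`, so the `ecr-login` value resolves to the `docker-credential-ecr-login` helper shipped in the kaniko image. A minimal sketch (the registry host is an example):

```shell
# The credential helper protocol: the value from credHelpers[<registry>] is
# appended to "docker-credential-" to form the helper binary name, which is
# then invoked with the registry host on stdin and prints credentials on stdout.
registry="123456789012.dkr.ecr.us-east-1.amazonaws.com"
helper_suffix="ecr-login"                  # value looked up for $registry
echo "docker-credential-$helper_suffix"    # → docker-credential-ecr-login
```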
```json
{
  "credHelpers": {
    "225394301252.dkr.ecr.us-east-1.amazonaws.com": "ecr-login"
  },
  "auths": {
    "ghcr.io": {
      "auth": "<redacted>"
    }
  }
}
```
It is showing like the above now, but I still get the error:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the runes
Should I wait for the new Kaniko image?
It is showing like the above now
That's correct.
Should I wait for the new Kaniko image?
No, upgrading the kaniko image won't fix the problem; 1.3.0 will work for your use case.
But I still got the error
https://github.com/jx3-gitops-repositories/jx3-terraform-eks/issues/24#issuecomment-921633962
Could you please guide me on how to print that value? I tried a few ways, but got the error "The execution of the pipeline has stopped."
Which step does your pipeline fail at: `check-registry` or `build-container-build`? If it's `build-container-build`, then replace
```yaml
- name: build-container-build
  resources: {}
```
with
```yaml
- image: gcr.io/kaniko-project/executor:debug-v1.3.0
  name: build-container-build
  resources: {}
  script: |
    #!/busybox/sh
    source .jx/variables.sh
    echo $PUSH_CONTAINER_REGISTRY
    echo $DOCKER_REGISTRY_ORG
    cp /tekton/creds-secrets/tekton-container-registry-auth/.dockerconfigjson /kaniko/.docker/config.json
    /kaniko/executor $KANIKO_FLAGS --context=/workspace/source --dockerfile=${DOCKERFILE_PATH:-Dockerfile} --destination=$PUSH_CONTAINER_REGISTRY/$DOCKER_REGISTRY_ORG/$APP_NAME:$VERSION
```
Redact any sensitive information. Source: https://github.com/jenkins-x/jx3-pipeline-catalog/blob/master/tasks/maven-java11/pullrequest.yaml#L61-L68
thanks @ankitm123
```
225394301252.dkr.ecr.us-east-1.amazonaws.com
demo-repo
```
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: repository can only contain the runes `abcdefghijklmnopqrstuvwxyz0123456789_-./`
It is showing correctly.
Can you try a manual push to that ECR repository (`docker push`) and verify it works? This is not a JX issue; something seems off with the repository name ...
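A manual push to verify access could look like this (a sketch; the account id, region, and tag are examples, and the login/push lines are commented out because they need live AWS credentials):

```shell
# Authenticate docker to ECR and push (requires the aws CLI and docker):
#   aws ecr get-login-password --region us-east-1 | \
#     docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
#   docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-repo/backend-project:1.0.81

# Locally runnable: assemble the destination reference exactly as the
# kaniko step does, to eyeball it for invalid characters.
PUSH_CONTAINER_REGISTRY="123456789012.dkr.ecr.us-east-1.amazonaws.com"
DOCKER_REGISTRY_ORG="demo-repo"
APP_NAME="backend-project"
VERSION="1.0.81"
echo "$PUSH_CONTAINER_REGISTRY/$DOCKER_REGISTRY_ORG/$APP_NAME:$VERSION"
# → 123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-repo/backend-project:1.0.81
```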
Also, can you print out `$APP_NAME` and `$VERSION`?
Yes, currently I'm manually pushing the build after Jenkins dies, so I'm sure it's working. It is showing correctly. Should we worry about capitalization in the project name field?
```
225394301253.dkr.ecr.us-east-1.amazonaws.com
demo-repo
Backend-Project
1.0.81
```
So, try all lowercase for `Backend-Project`:

repository can only contain the runes `abcdefghijklmnopqrstuvwxyz0123456789_-./`

According to the error message above, I don't see capital letters being allowed by the Docker registry/Kaniko/ECR validation logic, so it should work, I think.
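The rejection can be reproduced locally; the check the error message describes is roughly the following (a sketch of the rune rule, not the registry library's actual implementation):

```shell
# Accept only the runes listed in the error message: lowercase letters,
# digits, '_', '-', '.', and '/'.
valid_repo_name() {
  case "$1" in
    *[!abcdefghijklmnopqrstuvwxyz0123456789_./-]*) return 1 ;;
    *) return 0 ;;
  esac
}

valid_repo_name "demo-repo/backend-project" && echo "accepted"
valid_repo_name "demo-repo/Backend-Project" || echo "rejected: contains capitals"
```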
Ah thanks, let me figure out how to make it lowercase. I just switched to Pipelines for the first time. Should I change it in the cluster git repository?
Is that the name of the git repository? To check if that works, you can manually set that value in your pipeline with `export APP_NAME="backend-project"` before kaniko pushes the image.
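A slightly more general version derives the value by lowercasing whatever the repository name is, instead of hard-coding it (a sketch using POSIX `tr`):

```shell
# Derive a registry-safe app name by lowercasing the git repository name.
REPO_NAME="Backend-Project"
APP_NAME=$(printf '%s' "$REPO_NAME" | tr '[:upper:]' '[:lower:]')
export APP_NAME
echo "$APP_NAME"   # → backend-project
```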
I just thought there was an official way to do that. Thank you so much! It is working perfectly. It was painful working as the manual Jenkins builder for my team.
I just thought there was an official way to do that
I'm not sure what you mean by an official way. Jenkins X just tries to take that value from the name of your git repository. So, if your repository is named `Backend-Project`, it uses that. As I mentioned, you can override it using `APP_NAME`.
This is the code where it sets those values: https://github.com/jenkins-x-plugins/jx-gitops/blob/main/pkg/cmd/variables/variables.go#L198-L200 which in turn gets its value from: https://github.com/jenkins-x/jx-helpers/blob/6eb9664b305cfe79adad52bd7ad65811347359ca/pkg/scmhelpers/discover.go#L161
Having said that, I think we could add validation around git repository names to help users debug such issues. Or we could just internally lowercase the git repository name here: https://github.com/jenkins-x-plugins/jx-gitops/blob/main/pkg/cmd/variables/variables.go#L200
I will open a PR for this tonight.
Btw, it would be awesome if you could add all this information to the FAQ section of the docs to help other users :)
Also, for the sake of completeness, could you try and see if this issue persists (without setting the `APP_NAME` environment variable) with the latest kaniko version (1.6.0)?
Yes @ankitm123, let me do that.
Sorry, I hit another issue when trying to build the new release: the pod is running out of disk space. I tried to increase it in the cluster git repository, but it keeps resetting to the 8Gi default, so I can't try kaniko version 1.6.0 yet.
Let me play around first
I tried with gcr.io/kaniko-project/executor:v1.6.0-debug but still got the same error when the environment variable is not set.
OK, that is good to know. I think JX can show a warning to end users or convert the repo name to all lowercase. I will open a PR addressing this issue soon. For now, we can close this issue as it has been resolved.
Does anyone face this issue while running the pipeline?