The problem seems to be limited to OpenShift. The pipeline works fine as-is on minikube: there, the secrets associated with the appsody-sa
service account are available throughout the pipeline steps. On OpenShift, the secrets are not made available to the build step (kaniko).
Using the debug image, I can see the credentials in /builder/home/.docker/config.json, but kaniko is not consuming them for some reason. If I cp /builder/home/.docker/config.json to /kaniko/.docker/config.json, it works:
image: gcr.io/kaniko-project/executor:debug
command: ['/busybox/sh']
args: ['-c', 'cp /builder/home/.docker/config.json /kaniko/.docker/config.json && /kaniko/executor --dockerfile=${inputs.params.pathToDockerFile} --destination=${outputs.resources.docker-image.url} --context=${inputs.params.pathToContext} --skip-tls-verify']
This appears to be a manifestation of https://github.com/GoogleContainerTools/kaniko/issues/507
To work around the issue, we can set the following in the pipeline task for the kaniko build-push-step container:
env:
- name: DOCKER_CONFIG
value: /builder/home/.docker
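For context, here is a sketch of how that lands in the Task's build-push step (the image, command, and parameter names are taken from the snippets earlier in this thread; surrounding fields may differ in your copy):

steps:
  - name: build-push-step
    image: gcr.io/kaniko-project/executor
    # workaround for kaniko issue 507: point kaniko at the credentials
    # that Tekton's credential initializer writes under /builder/home
    env:
      - name: DOCKER_CONFIG
        value: /builder/home/.docker
    command: ['/kaniko/executor']
    args:
      - --dockerfile=${inputs.params.pathToDockerFile}
      - --destination=${outputs.resources.docker-image.url}
      - --context=${inputs.params.pathToContext}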
@chilanti I have merged the PR to update the documentation to reflect this additional configuration.
@dacleyra Can you please confirm that the updated documentation is now accurate for making this work with OpenShift? Given that this is just an example to show integration with Tekton, I'd like to close this issue and just have the additional steps documented.
When we have proper integration with Tekton and other CI systems, we might consider providing a more out-of-the-box experience.
I'm closing this issue based on this comment. Please re-open if you believe we can do more on this.
I am seeing this exact same problem with the latest Kabanero foundation installation: https://kabanero.io/docs/ref/general/#scripted-kabanero-foundation-setup.html.
See output:
oc logs $(oc get pods -l tekton.dev/pipelineRun=appsody-manual-pipeline-run -n kabanero --output=jsonpath={.items[0].metadata.name}) -n kabanero --all-containers > ~/tmp/tekton-issue-6.log
tekton-issue-6.log
@dacleyra , I also verified that the task run already contained the potential workaround mentioned in: https://github.com/appsody/tekton-example/issues/6#issuecomment-507802937 appsody-build-task.json.txt
"env": [ { "name": "DOCKER_CONFIG", "value": "/builder/home/.docker" } ],
There are two problems here when trying to run the image on minishift, because minishift serves its registry over http instead of https.
kaniko has a known issue with pushes always assuming the https protocol; see https://github.com/GoogleContainerTools/kaniko/issues/702.
Once that fix is released, we still need the appsody sample to pass the --insecure flag so that kaniko will use http instead of https. Since kaniko cannot downgrade security automatically, we need a way to modify the task definition of appsody-build-task for usage with minishift. I am currently having some trouble coaxing OpenShift into modifying it on the fly, but that is a much smaller problem.
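If it helps, once a kaniko release includes that fix, the change would presumably be just an extra flag on the executor args in the step sketched above (an assumption about the eventual task definition, not the shipped one):

args:
  - --dockerfile=${inputs.params.pathToDockerFile}
  - --destination=${outputs.resources.docker-image.url}
  - --context=${inputs.params.pathToContext}
  - --insecure          # push over plain http
  - --skip-tls-verify   # or: tolerate a self-signed https registry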
Tekton is deployed from the https://github.com/openshift/tektoncd-pipeline-operator operator, version 0.4.0-1.
First, set --skip-tls-verify for the kaniko executor in the build-task's build-push-step.
The OpenShift container registry automatically associates service accounts with a secret for the registry.
The builder service account has push ability: https://docs.openshift.com/container-platform/3.11/dev_guide/service_accounts.html#default-service-accounts-and-roles
We can add the same role to the appsody-sa service account as well:
oc policy add-role-to-user system:image-builder system:serviceaccount:dacleyra:appsody-sa
Even with these credentials available, kaniko does not make use of them correctly.
Neither does completely elevating the privileges of the appsody-sa service account to cluster-admin:
oc adm policy add-cluster-role-to-user cluster-admin -z appsody-sa -n dacleyra
If I switch the pipeline-run service account to builder, the same error occurs; kaniko does not make use of the credentials.
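For reference, switching the service account is a single field on the PipelineRun; a sketch, noting that the field was spelled serviceAccount in the Tekton release shipped by the 0.4.0-1 operator above (newer releases call it serviceAccountName):

apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: appsody-manual-pipeline-run
spec:
  serviceAccount: builder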
If I take builder's token, create a new secret (regsecret, of type kubernetes.io/dockerconfigjson), and then force-mount it into kaniko, the push is successful.
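A sketch of that force-mount, assuming the default OpenShift 3.11 internal registry address (docker-registry.default.svc:5000) and the oc sa get-token helper; the volume item remaps the secret's .dockerconfigjson key to the config.json filename kaniko expects:

# create a dockerconfigjson secret from the builder token
oc create secret docker-registry regsecret \
  --docker-server=docker-registry.default.svc:5000 \
  --docker-username=serviceaccount \
  --docker-password="$(oc sa get-token builder)"

# then, in the build-push step:
volumeMounts:
  - name: regsecret
    mountPath: /kaniko/.docker
# and in the task's volumes:
volumes:
  - name: regsecret
    secret:
      secretName: regsecret
      items:
        - key: .dockerconfigjson
          path: config.json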
Trying Docker Hub instead gives the same result.
Secret and service account:
kubectl create secret docker-registry regcred --docker-server=docker.io --docker-username=dacleyra --docker-password=PASSWORD --docker-email=dacleyra@us.ibm.com
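Presumably the "service account" half of that is the usual follow-up of attaching the secret to the appsody-sa service account, so that Tekton's credential initializer picks it up; a sketch:

kubectl patch serviceaccount appsody-sa \
  -p '{"secrets": [{"name": "regcred"}]}'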