Open · bilalbokharee opened this issue 2 years ago
@bilalbokharee I stumbled on this error in a GitLab.com CI pipeline. In case anyone else runs into it, here is the workaround that made it work for me: I unset the TLS variables and then added a local hosts entry in the container to avoid the error `Cannot connect to the Docker daemon at tcp://docker:2375. Is the docker daemon running?`. You might have better luck tweaking it to suit your own needs:
```yaml
smoketest_pull_container_central_ecr_devops:
  image: jansauer/dockercli-plus-awscli
  variables:
    DOCKER_HOST: tcp://docker:2375
  stage: smoketest_pull_container_central_ecr
  services:
    - name: docker:stable-dind
      alias: docker
  before_script:
    - unset DOCKER_TLS_VERIFY DOCKER_TLS_CERTDIR DOCKER_CERT_PATH
    # make the "docker" alias resolve locally
    - echo '127.0.0.1 docker' >> /etc/hosts
  script:
    - docker pull ...
```
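An alternative the GitLab docs describe for disabling TLS entirely is to set `DOCKER_TLS_CERTDIR` to an empty string in the job's variables (the dind service inherits them), so the daemon never generates certificates and listens on plain 2375. A minimal sketch along those lines (the job name is made up, image and service copied from above):

```yaml
smoketest_no_tls:
  image: jansauer/dockercli-plus-awscli
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ''   # empty string disables TLS in docker:dind 19.03+
  services:
    - name: docker:stable-dind
      alias: docker
  script:
    - docker info   # should reach the daemon on plain tcp://docker:2375
```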
It's worth noting for anyone who comes across this that you can't do the following:
```yaml
before_script:
  - unset DOCKER_TLS_VERIFY DOCKER_TLS_CERTDIR DOCKER_CERT_PATH

stages:
  - build

build:
  stage: build
  before_script:
    - echo "before"
  script:
    - echo "docker_tls_verify = $DOCKER_TLS_VERIFY"
```
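What does work (for the reasons I describe below) is putting the unset in the job's own `before_script`, e.g.:

```yaml
stages:
  - build

build:
  stage: build
  before_script:
    # a job-level before_script replaces a top-level one entirely,
    # so the unset has to live here
    - unset DOCKER_TLS_VERIFY DOCKER_TLS_CERTDIR DOCKER_CERT_PATH
  script:
    - echo "docker_tls_verify = $DOCKER_TLS_VERIFY"   # prints empty now
```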
I'm not completely sure what the order of operations is, but I found that for the unset to work properly it had to be in each job's `before_script` block rather than in a top-level one by itself. It might also have been that my deploy stage had its own `before_script`, which was overriding my unset command. Anyway, that only took five days to figure out. 😅 Because I also need to run `aws sts` in the `before_script` of my deploy stage, I had to repeat the unset command in each stage.
Here's my working pipeline. Note that I have separate `build` and `deploy` stages; that may be overkill, and I may have created problems for myself in doing so, but it works and I'm planning to keep it split up like this. If you choose not to split it up, you could probably get away with not temporarily storing the image in the GitLab container registry and instead build, tag, and push it in a single stage.
```yaml
image:
  name: jansauer/dockercli-plus-awscli
  entrypoint:
    - /usr/bin/env

variables:
  AWS_DEFAULT_REGION: ''
  AWS_ACCOUNT_ID: ''
  IAM_ROLE: ''
  ECR_REPOSITORY: ''
  ROLE_ARN: arn:aws:iam::${AWS_ACCOUNT_ID}:role/${IAM_ROLE}
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME
  ECR_LOCATION: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com
  DOCKER_HOST: tcp://docker:2375

.docker_service_template: &docker_service
  services:
    - name: docker:stable-dind
      alias: docker

.unset_command_template: &unset_command
  - unset DOCKER_TLS_VERIFY DOCKER_TLS_CERTDIR DOCKER_CERT_PATH

.aws_command_template: &aws_command
  - >
    STS=($(aws sts assume-role-with-web-identity
    --role-arn ${ROLE_ARN}
    --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
    --web-identity-token ${GITLAB_OIDC_TOKEN}
    --duration-seconds 3600
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
    --output text))
  - export AWS_ACCESS_KEY_ID="${STS[0]}"
  - export AWS_SECRET_ACCESS_KEY="${STS[1]}"
  - export AWS_SESSION_TOKEN="${STS[2]}"

.gitlab_registry_login_template: &gitlab_registry_login
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY

stages:
  - build
  - deploy

build:
  stage: build
  <<: *docker_service
  before_script:
    - *unset_command
  script:
    - *gitlab_registry_login
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG

deploy:
  stage: deploy
  <<: *docker_service
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  before_script:
    - *unset_command
    - *aws_command
  script:
    - *gitlab_registry_login
    - docker pull $IMAGE_TAG
    - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_LOCATION
    - docker tag $IMAGE_TAG $ECR_LOCATION/$ECR_REPOSITORY:latest
    - docker push $ECR_LOCATION/$ECR_REPOSITORY:latest
  when: manual
```
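For the `id_tokens` / `aws sts assume-role-with-web-identity` part to work, the AWS account also needs an OIDC identity provider for gitlab.com and a role whose trust policy accepts the token's `aud`. Roughly, following GitLab's documented AWS OIDC setup (the account ID below is a placeholder, and in practice you would usually also add a `gitlab.com:sub` condition to limit which projects and refs can assume the role):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/gitlab.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": { "gitlab.com:aud": "https://gitlab.com" }
      }
    }
  ]
}
```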
The aws CLI works fine, but the docker CLI shows this error when any docker command is run:

`unable to resolve docker endpoint: open /certs/client/ca.pem: no such file or directory`
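That error usually means the docker CLI has been told to use TLS (`DOCKER_TLS_VERIFY` and `DOCKER_CERT_PATH` are set, pointing at `/certs/client`) but the certificates were never generated or shared into the job container, so the CLI fails before it even contacts the daemon. A quick diagnostic sketch, reusing the image and service from the examples above (the job name is made up):

```yaml
debug_docker_env:
  image: jansauer/dockercli-plus-awscli
  variables:
    DOCKER_HOST: tcp://docker:2375
  services:
    - name: docker:stable-dind
      alias: docker
  script:
    - env | grep '^DOCKER' || true   # shows which DOCKER_* / TLS variables are set
    - unset DOCKER_TLS_VERIFY DOCKER_TLS_CERTDIR DOCKER_CERT_PATH
    - docker version                 # works once the CLI stops looking for /certs/client/ca.pem
```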