rcollette opened this issue 4 years ago
I've run into this same issue, needing to extract the results of junit tests for reporting. This would be extremely useful.
Same thing here. Currently I have to resort to ugly hacks: building the project into a local tar file and then extracting artifacts from it, instead of simply telling kaniko to store artifacts in a directory that stays available after the build has finished.
All the artifacts built inside the Dockerfile end up in the / directory of the kaniko container. To reach your goal, you can configure the CI this way:
step:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  stage: build
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$DOCKER_REPO\":{\"username\":\"$DOCKER_USER\",\"password\":\"$DOCKER_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR/app --dockerfile $CI_PROJECT_DIR/app/Dockerfile --destination registry.local/image:latest --cache=true
    - cp -rf /app/* $CI_PROJECT_DIR/
  artifacts:
    when: always
    reports:
      junit: ./**/target/**/TEST-*.xml
@andrea-borraccetti - Thank you for the tip. I see that the WORKDIR specified in my Dockerfile is created inside the kaniko debug container. I was playing with it like this:
docker run -it --entrypoint sh gcr.io/kaniko-project/executor:debug-v1.3.0
(I'm used to running /bin/sh in most images; note the difference here.)

Using vi, create a Dockerfile with the contents:

FROM hello-world
WORKDIR myapp
COPY example.txt example.txt

Create a file named example.txt, then run the executor:

/kaniko/executor --context . --dockerfile ./Dockerfile --destination test.tar --cache=false --no-push
Observe that the directory myapp has been created in the root of the debug container and that example.txt has been copied into it.
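A condensed transcript of those steps (assuming a local Docker daemon; the / # prompts are the busybox shell inside the debug container):

$ docker run -it --entrypoint sh gcr.io/kaniko-project/executor:debug-v1.3.0
/ # printf 'FROM hello-world\nWORKDIR myapp\nCOPY example.txt example.txt\n' > Dockerfile
/ # touch example.txt
/ # /kaniko/executor --context . --dockerfile ./Dockerfile --destination test.tar --cache=false --no-push
/ # ls /myapp
example.txt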
@andrea-borraccetti
For some reason, when I run it in GitLab with the entrypoint set to either "" or "sh", I get this error:
/busybox/sh: eval: line 144: cp: not found
$ cp -rf /build/reports $CI_PROJECT_DIR/public
Running after_script
00:00
time="2021-02-05T22:49:57Z" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"sh\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"sh\": executable file not found in $PATH"
I also tried invoking cp with an explicit path but no luck with that either.
$ /busybox/cp -rf /build/reports $CI_PROJECT_DIR/public
/busybox/sh: eval: line 144: /busybox/cp: not found
Running after_script
00:01
time="2021-02-05T23:00:09Z" level=error msg="exec failed: container_linux.go:349: starting container process caused \"exec: \\\"sh\\\": executable file not found in $PATH\""
exec failed: container_linux.go:349: starting container process caused "exec: \"sh\": executable file not found in $PATH"
.gitlab-ci.yml contains:
image:
  name: gcr.io/kaniko-project/executor:debug-v1.3.0
  entrypoint: [""]
script:
  - echo "$DOCKER_AUTH_CONFIG" > /kaniko/.docker/config.json
  - /kaniko/executor
    --context $CI_PROJECT_DIR
    --no-push
    --skip-unused-stages=true
    --cache=true
    --cache-repo=${CI_REGISTRY_IMAGE}/cache
    --log-timestamp=true
    --log-format=text
    --target build-and-test
    --verbosity=debug
    --build-arg NPM_TOKEN=${NPM_TOKEN}
  # Executor will create the WORKDIRs of the Dockerfile in its own container.
  - /busybox/cp -rf /build/reports $CI_PROJECT_DIR/public
I tried executing cp and ls in my custom runner on our company GitLab instance, and on gitlab.com with the standard runner; in both cases everything works and I see the files outside the context with a configuration like this:
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  stage: build
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY_IMAGE\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE/image:latest --cache=true
    - ls /
INFO[0024] No files changed in this command, skipping snapshotting.
$ ls /
bin
boot
builds
busybox
certs
dev
etc
home
kaniko
lib
lib64
media
mnt
opt
proc
requirements.txt
root
run
sbin
srv
sys
tmp
usr
var
In your case, it looks more like a problem with the GitLab runner version or configuration.
Actually, running an ls command before running the kaniko executor works fine; running the ls command after the kaniko executor results in the error. So I'm not entirely sure this is something related to my environment.
variables:
  #CI_DEBUG_TRACE: "true"
  DOCKER_TAG_NAME: pdx-pdt
  DOCKER_IMAGE_VERSION: $CI_PIPELINE_ID.$CI_COMMIT_SHORT_SHA
  KANIKO_EXECUTOR_VERSION: debug-v1.3.0

stages:
  - build

# TEMPLATES
.runner_tags_template: &runners
  tags:
    - pdx
    - dind

.except_master_and_prodfix_template: &except_master_and_prodfix
  except:
    - /^prodfix\/.*$/
    - master
    - tags

# BUILD
build:
  stage: build
  <<: *runners
  <<: *except_master_and_prodfix
  variables:
    AWS_ACCESS_KEY_ID: $DEV_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $DEV_AWS_SECRET_ACCESS_KEY
  image:
    name: gcr.io/kaniko-project/executor:$KANIKO_EXECUTOR_VERSION
    entrypoint: ["sh"]
  script:
    - echo "$DOCKER_AUTH_CONFIG" > /kaniko/.docker/config.json
    - ls
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --no-push
      --skip-unused-stages=true
      --cache=true
      --cache-repo=${CI_REGISTRY_IMAGE}/cache
      --log-timestamp=true
      --log-format=text
      --target build-and-test
      --verbosity=debug
      --build-arg NPM_TOKEN=${NPM_TOKEN}
    - ls
Build log with executor debug output is attached. GitlabBuildLog.txt
I can cd to the /busybox directory before running the executor. After running the executor, if I try to cd to the directory, it says the directory is not found.
time="2021-02-06T19:23:59Z" level=info msg="Skipping push to container registry due to --no-push flag"
$ cd /busybox
/busybox/sh: cd: line 158: can't cd to /busybox: No such file or directory
@andrea-borraccetti
I am able to duplicate this issue with a minimal Dockerfile. It seems to be related to multi-stage builds. If I target the base stage with the executor, there are no issues. If I target a second stage that relies on the first stage, the busybox directory is "gone" after the executor runs.
Dockerfile
# This target builds the API server and runs unit tests
FROM node:14.15-alpine3.12 AS base
LABEL type="build"

FROM base AS build-and-test
LABEL type="build-and-test"
.gitlab-ci.yml
variables:
  #CI_DEBUG_TRACE: "true"
  DOCKER_TAG_NAME: pdx-pdt
  DOCKER_IMAGE_VERSION: $CI_PIPELINE_ID.$CI_COMMIT_SHORT_SHA
  KANIKO_EXECUTOR_VERSION: debug-v1.3.0

stages:
  - build

# TEMPLATES
.runner_tags_template: &runners
  tags:
    - pdx
    - dind

.except_master_and_prodfix_template: &except_master_and_prodfix
  except:
    - /^prodfix\/.*$/
    - master
    - tags

# BUILD
build:
  stage: build
  <<: *runners
  <<: *except_master_and_prodfix
  variables:
    AWS_ACCESS_KEY_ID: $DEV_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY: $DEV_AWS_SECRET_ACCESS_KEY
  image:
    name: gcr.io/kaniko-project/executor:$KANIKO_EXECUTOR_VERSION
    entrypoint: ["sh"]
  script:
    - echo "$DOCKER_AUTH_CONFIG" > /kaniko/.docker/config.json
    - pwd
    - cd /busybox
    - pwd
    - cd $CI_PROJECT_DIR
    - pwd
    - env
    - ls
    - /kaniko/executor
      --context $CI_PROJECT_DIR
      --no-push
      --target build-and-test
    # cd /busybox fails when the target is something after the first stage of the Dockerfile
    - cd /busybox
    - pwd
    - ./env
    - ./ls
When I execute it in the kaniko container directly (not via GitLab), the context directory (which is my current working directory in GitLab) gets deleted during a two-stage build, and that is why command pathing is not working.
/ # mkdir builds
/ # cd builds
/builds # vi Dockerfile
/builds # /kaniko/executor --context /builds --no-push --target build-and-test
INFO[0000] Resolved base name node:14.15-alpine3.12 to base
INFO[0000] Resolved base name base to build-and-test
INFO[0000] Retrieving image manifest node:14.15-alpine3.12
INFO[0000] Retrieving image node:14.15-alpine3.12
INFO[0000] Retrieving image manifest node:14.15-alpine3.12
INFO[0000] Retrieving image node:14.15-alpine3.12
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest node:14.15-alpine3.12
INFO[0001] Retrieving image node:14.15-alpine3.12
INFO[0002] Retrieving image manifest node:14.15-alpine3.12
INFO[0002] Retrieving image node:14.15-alpine3.12
INFO[0002] Executing 0 build triggers
INFO[0002] Skipping unpacking as no commands require it.
INFO[0002] LABEL type="build"
INFO[0002] Applying label type=build
INFO[0002] Storing source image from stage 0 at path /kaniko/stages/0
INFO[0006] Deleting filesystem...
INFO[0006] Base image from previous stage 0 found, using saved tar at path /kaniko/stages/0
INFO[0006] Executing 0 build triggers
INFO[0006] Skipping unpacking as no commands require it.
INFO[0006] LABEL type="build-and-test"
INFO[0006] Applying label type=build-and-test
INFO[0006] Skipping push to container registry due to --no-push flag
sh: getcwd: No such file or directory
(unknown) # ls
sh: getcwd: No such file or directory
(unknown) # /busybox/ls
sh: getcwd: No such file or directory
(unknown) #
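This matches the "Deleting filesystem..." line in the log: for a multi-stage build, kaniko wipes the container's filesystem between stages, taking /busybox and the current working directory with it. A possible mitigation (just a sketch; --ignore-path is a documented executor flag, but I have not verified that it preserves these exact paths through the multi-stage wipe) would be to ask kaniko to leave those paths alone:

/kaniko/executor --context /builds --no-push --target build-and-test \
  --ignore-path /busybox \
  --ignore-path /builds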
I had the same issue and developed a workaround:
I could not get caching to work, but at least artifacts work now.
Example
build:
  stage: build
  image:
    name: "gcr.io/kaniko-project/executor:debug"
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64)\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR
      --dockerfile $CI_PROJECT_DIR/Dockerfile
      --destination registry.local/image:latest
      --no-push
      --ignore-path /MY-APP-ARTIFACT-*
    - cp /MY-APP-ARTIFACT-**.tar.bz2 $CI_PROJECT_DIR
  artifacts:
    when: always
    paths:
      - "MY-APP-ARTIFACT-*.tar.bz2"
@alastairtree What's the line in your Dockerfile that results in the artifact being at the container root, and is there any restriction on putting it elsewhere? My understanding was that kaniko runs in a folder and that folder is considered the build environment's root folder, so I'm not sure how you would generate an artifact outside of it. Thanks
I just copy it there at the end of the docker build, but using scratch so as not to include the whole image.
FROM debian:10.9 AS base
# Install package dependencies
RUN apt update \
    && apt install -y build-essential wget m4 libc6-i386 zlib1g-dev lib32z1 pv
# do my build stuff... this outputs a tarball in the workdir root: MY-APP-ARTIFACT-*.tar.bz2

# export just the tarball
FROM scratch AS export-stage
COPY --from=base /MY-APP-ARTIFACT-*.tar.bz2 /
This feels like thread resurrection, but since this feature request is still open, I wanted to point out that when I went to look into this, I discovered the --no-push and --tarPath options, that I plan to use with the above two-stage method of copying the artifact from the build stage into a scratch container.
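For reference, a sketch of how those flags might combine with the export-stage pattern above (--no-push and --tarPath are documented executor flags; the destination name and the extraction detail are my assumptions):

/kaniko/executor --context $CI_PROJECT_DIR \
  --dockerfile $CI_PROJECT_DIR/Dockerfile \
  --target export-stage \
  --no-push \
  --destination artifact-export \
  --tarPath $CI_PROJECT_DIR/export.tar
# export.tar is an image tarball; the artifact sits inside its single scratch layer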
It took me a while to come back to it but I can confirm that the solution from @alastairtree does work.
1. From the Dockerfile, copy the file to a root-relative path.
2. From the GitLab script, copy the artifact to a CI_PROJECT_DIR-relative path.
3. Reference the artifact from gitlab-ci.yml using a CI_PROJECT_DIR-relative path.
It is "interesting" to me that the gitlab runner allows kaniko to create files that are outside of the CI_PROJECT_DIR at all, seems like it would potentially be a security concern except in the case where the runners are dynamically created pods (which they are in my case). This concern is what has me thinking this feature request is still valid. At some point the runner might lock access to files outside of the CI_PROJECT_DIR down and the workaround would fail.
One caveat to the solution from @alastairtree: it seems to work only when the RUN command succeeds. If I am running a coverage test and the exit status of the RUN command is non-zero, the move to root does not work.
For example, this does not work:

RUN ./coverage.sh || flag=1 ; \
    mv reports / ; \
    exit $flag
If anyone has suggestions... much appreciated.
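One workaround I am considering (an untested sketch): let the RUN always exit 0 so the layer with the moved reports is committed, record the real exit status in a file, and fail the job from the GitLab script instead:

RUN ./coverage.sh; echo $? > /coverage-status; mv reports /; true

and then in the .gitlab-ci.yml script, after the executor step:

- cp -r /reports $CI_PROJECT_DIR/public
- test "$(cat /coverage-status)" -eq 0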
After some trial and error, I got it working in a configurable way:
.build_template:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    DOCKERFILE: $CI_PROJECT_DIR/Dockerfile
    CONTEXT: $CI_PROJECT_DIR
    ARTIFACT_DIR: none
  script:
    - mkdir -p /kaniko/.docker && mkdir build-artifacts
    - echo $KANIKO_AUTH > /kaniko/.docker/config.json
    - /kaniko/executor
      --context $CONTEXT
      --dockerfile $DOCKERFILE
      --destination $BUILD_IMAGE_TAG
    - if [ "$ARTIFACT_DIR" != "none" ]; then cp -r $ARTIFACT_DIR build-artifacts/; fi
  artifacts:
    name: "$CI_JOB_NAME-$CI_COMMIT_REF_NAME"
    paths:
      - build-artifacts/**
    expire_in: 1 week
Everyone extending this job can override the variable ARTIFACT_DIR: /my/absolute/artifact/path/*
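A job extending the template then only needs to override the variables; for example (the job name, image tag, and artifact path here are illustrative):

build-my-app:
  extends: .build_template
  variables:
    BUILD_IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    ARTIFACT_DIR: /build/reports/*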
The approach that worked for me uses the fact that kaniko mounts ${CI_PROJECT_DIR} inside the container. So, if we know inside the image/container what ${CI_PROJECT_DIR} is, then we can easily copy/create/write files there! The simplest examples:

Dockerfile:
# We need to know what the shared/mounted ${CI_PROJECT_DIR} is!
ARG PROJECT_DIR
FROM ubuntu:latest
ARG PROJECT_DIR
WORKDIR ${PROJECT_DIR}
RUN touch file.txt
.gitlab-ci.yml:
stages:
  - build

.kaniko-image-build:
  # This is just some generic kaniko setup
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  when: manual
  before_script:
    - mkdir -p /kaniko/.docker
    - echo "${CI_JOB_NAME}"
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"auth\":\"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json

test_kaniko_artifacts:
  extends: .kaniko-image-build
  stage: build
  script:
    - >-
      /kaniko/executor
      --context "${CI_PROJECT_DIR}/"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}/test-image"
      --build-arg PROJECT_DIR=${CI_PROJECT_DIR}
  artifacts:
    paths:
      - file.txt
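One note on the Dockerfile above (standard Dockerfile semantics, not kaniko-specific): an ARG declared before the first FROM is only in scope for FROM instructions, so PROJECT_DIR must be redeclared inside the stage before WORKDIR can use it. That is why ARG PROJECT_DIR appears twice.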
Prior to using Kaniko, I was doing builds on a Gitlab Shell Executor on a dedicated VM. In that mode I was able to extract build artifacts from a "build and test" target prior to generating the final "application" target.
Now that I am building on OpenShift with Kaniko, I don't seem to have a way to get artifacts out of the built image. I can't mount a volume into the kaniko image because GitLab doesn't support running an image with parameters, so I have to run the debug image and script the executor, as in the examples earlier in this thread.
If there isn't currently a way to "mount" a file path into the executor, it would be helpful to be able to do so, so that generated artifacts could be copied to the mounted path (e.g. /artifacts) and written to the file system captured by GitLab.
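Something along these lines is what I have in mind (the --artifact-path flag is hypothetical; no such option exists in kaniko today):

/kaniko/executor \
  --context $CI_PROJECT_DIR \
  --destination $CI_REGISTRY_IMAGE:latest \
  --artifact-path /artifacts=$CI_PROJECT_DIR/artifacts   # hypothetical: copy /artifacts out of the built image into the job workspace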