Open james-crowley opened 3 years ago
From playing around with Kaniko, I can confirm that adding the binaries into another container does work, at least for a simple Docker build.
Here is the Dockerfile I am working with:
# Extend CircleCI's Runner with Kaniko
# https://github.com/GoogleContainerTools/kaniko/issues/1757
FROM circleci/runner:launch-agent
# Setting Environment Variables for Kaniko
ENV HOME /root
ENV USER root
ENV PATH="${PATH}:/kaniko"
ENV SSL_CERT_DIR=/kaniko/ssl/certs
ENV DOCKER_CONFIG /kaniko/.docker/
ENV DOCKER_CREDENTIAL_GCR_CONFIG /kaniko/.config/gcloud/docker_credential_gcr_config.json
# Copy Needed Files from Kaniko Image
COPY --from=gcr.io/kaniko-project/executor:v1.6.0 /kaniko/executor /kaniko/executor
COPY --from=gcr.io/kaniko-project/executor:v1.6.0 /kaniko/docker-credential-gcr /kaniko/docker-credential-gcr
COPY --from=gcr.io/kaniko-project/executor:v1.6.0 /kaniko/docker-credential-ecr-login /kaniko/docker-credential-ecr-login
COPY --from=gcr.io/kaniko-project/executor:v1.6.0 /kaniko/docker-credential-acr /kaniko/docker-credential-acr
COPY --from=gcr.io/kaniko-project/executor:v1.6.0 /kaniko/.docker /kaniko/.docker
# Generate latest ca-certificates
RUN apt-get update && \
apt-get install -y ca-certificates && \
mkdir -p /kaniko/ssl/certs/ && \
cat /etc/ssl/certs/* > /kaniko/ssl/certs/ca-certificates.crt
Using that Dockerfile, I was able to build and publish an image to DockerHub. @tejal29 Any idea on why this isn't a supported use case of Kaniko?
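As a rough illustration (the context path and destination below are placeholders, not taken from this thread), the executor baked into such an image can then be called directly inside the container with kaniko's standard flags:

# Hypothetical invocation from inside the extended runner container.
/kaniko/executor \
  --dockerfile /workspace/Dockerfile \
  --context dir:///workspace \
  --destination docker.io/myuser/myapp:latest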
Hey @james-crowley -- glad it worked for you. The reason we say that is because there are additional files needed for kaniko to work, so on its own the binary will not work. Since you copied those in, it seems fine.
There is also a slight risk that files in the new image may end up in the image you are trying to build, which aren't supposed to be there. kaniko knows to exclude adding volume mounts and anything in the /kaniko directory to the final image, but it might be adding in some files from the new base image you selected (I'm not 100% sure, though, as I haven't looked at this code in a while).
@priyawadhwa Thanks for the quick response. Seems like we got past the first blocker in terms of getting all the files kaniko needs to work.
There is also a slight risk that files in the new image may end up in the image you are trying to build, which aren't supposed to be there. kaniko knows to exclude adding volume mounts and anything in the /kaniko directory to the final image, but it might be adding in some files from the new base image you selected (I'm not 100% sure, though, as I haven't looked at this code in a while).
This seems concerning. Are there any open bugs/issues for this? If kaniko can ignore the /kaniko directory, is there a way to tell it to ignore other directories?
Are there any open bugs/issues for this? If kaniko can ignore the /kaniko directory, is there a way to tell it to ignore other directories?
I believe the answer to both these questions is no -- is there a reason you need to use the other base image? Would it be feasible for you to move the files you need from that image into the /kaniko directory to make sure they don't end up in the final image?
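As a rough sketch of that suggestion (the runner's install path used here is a guess, not confirmed anywhere in this thread), the layering could be inverted like this:

# Hypothetical: start from the kaniko image and pull the runner bits into /kaniko,
# so kaniko excludes them from any image it builds.
FROM gcr.io/kaniko-project/executor:v1.6.0
COPY --from=circleci/runner:launch-agent /opt/circleci /kaniko/circleci
ENV PATH="${PATH}:/kaniko/circleci/bin"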
CircleCI has a self-hosted runner, which you can run in a couple of different ways. Both the Docker and Kubernetes offerings need the ability to build Docker images.
I wanted to extend the runner image with Kaniko to add the functionality for users to build Docker images while having the runner installed on Kubernetes.
I could use Kaniko's base image, gcr.io/kaniko-project/executor:v1.6.0, and extend that image with the runner agent config. But I think we would end up in the same situation as me adding Kaniko to the runner base image.
I can shift the runner agent config files to be inside the /kaniko directory, but I can't control what users do with the self-hosted runner. They might create files or folders outside the /kaniko directory.
Why does kaniko not exclude files outside the build context of where the image is being built? Normally, if we use Docker to build an image we can define a build context and limit the scope of what the build can see/utilize.
My understanding is that this is not supported (running kaniko in another Docker container) because kaniko unpacks the base image into /, so files and directories in / of the image that runs kaniko get overwritten with the unpacked files and directories, and that can lead to unexpected results. I was trying something similar: running kaniko in another Docker image with chroot, but there were some issues (kaniko needs /dev, /proc and /sys to be mounted). Then I found proot (which does the same thing as chroot but is much easier to run) and managed to run kaniko in a chroot environment. The only downside of this approach is that the container needs to be run with the SYS_PTRACE capability (proot needs it).
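A minimal sketch of that proot approach, assuming a builder image in which the kaniko binaries and a bare root tree have been prepared under /build-root (all names and paths here are illustrative):

# proot needs the SYS_PTRACE capability; bind the host's /dev, /proc and /sys
# into the guest root instead of recreating them by hand.
docker run --rm --cap-add SYS_PTRACE -v "$PWD":/workspace my-builder:latest \
  proot -0 -r /build-root -b /dev -b /proc -b /sys -b /workspace \
    /kaniko/executor --dockerfile /workspace/Dockerfile --context dir:///workspace --no-push
# Pushing would additionally need resolv.conf and CA certificates inside the
# guest root, as the chroot script later in this thread sets up.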
@james-crowley Have you tried to build a multi-stage Dockerfile with your image?
I'm trying to implement a similar thing to you and I have faced some issues. After a successful build, the executor container breaks when trying to call simple commands such as git status (it works if they are called before the actual build).
@meskill I think that happens because, in multi-stage builds with kaniko, the filesystem is deleted after each stage (the kaniko output shows that).
@vladaurosh Thanks for pointing that out.
Is there any workaround for this? As I understand it, we can put every required tool in the protected /kaniko directory to prevent it from being deleted, but such tools have to be built as static executables and must not use any shared system libraries.
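A small sketch of that idea, using a statically linked busybox as a stand-in for such a tool (the image tag and target path are assumptions):

FROM gcr.io/kaniko-project/executor:debug
# busybox:musl ships a statically linked /bin/busybox; keeping it under /kaniko
# means kaniko neither wipes it between stages nor copies it into built images.
COPY --from=busybox:musl /bin/busybox /kaniko/busybox
# Call it explicitly (e.g. /kaniko/busybox wget), or create applet symlinks under /kaniko.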
@meskill I have not tried a multi-stage Dockerfile. Do you have an example you want me to try?
As far as files being included in the built Docker image go, I am not seeing that be the case. When I built my simple nginx test Dockerfile, none of the additional files I added to the base Docker container were added.
As @priyawadhwa mentioned, this might have been the case at some point, but my testing shows that, at least for my example, no additional files were added to the built Docker image.
@james-crowley A simple build like this:
FROM node:16 as build
FROM build
WORKDIR /app
RUN npm --version
CMD [ "node" ]
After a successful build, try to call any command inside your container: curl, git, etc.
@meskill For me, sometimes it worked and sometimes it didn't. I had a simple Dockerfile that uses CentOS as the base image, with a RUN directive that installs a couple of packages using yum. That was running inside an Alpine-based Docker container, and it failed on the unpacking step. After that error, I couldn't execute anything in the terminal (I had started kaniko from the terminal).
On the other hand, I was able to build a Docker image based on Alpine, but I guess that was because the running container was based on Alpine as well.
Personally, I think this is not reliable even with single-stage Dockerfiles.
There have been a couple of similar discussions here, and a suggestion to make kaniko do its work in some temp directory rather than /. But I guess that would require a lot of changes, and bearing in mind that development of kaniko has been slow lately, who knows when and if that will happen.
As for a workaround, for me it works when using proot (a chroot alternative) to create a chroot-ed environment where kaniko unpacks the base image(s). So far it works well, even with multi-stage Dockerfiles. But it requires the container to be started with the SYS_PTRACE capability.
I've used a custom Kaniko image for years, built off the debug tag after some trial and error about what could be added where (the /kaniko dir, from memory). Initially the modifications were just scripts and certificates, but the current version has a custom Go binary. I wouldn't want to do much more with the current design, and it works perfectly for GitLab, where the workflow is container native with the Kubernetes operator.
@priyawadhwa I'm interested in whether Kaniko could be updated to work correctly on CI/CD systems where the runners are containers expected to run the whole workflow (e.g. GitHub Actions) without DinD?
@stevehipwell You can already use kaniko in GitLab CI with the Docker executor without DinD. I'm not sure what you mean by "runners are containers expected to run the whole workflow"?
@dHannasch I know it works fine in GitLab; that was my example of a working cloud-native solution (I've been using it for years). What I want is to be able to use Kaniko in a GitHub Actions self-hosted runner running on Kubernetes. Unlike the GitLab runner, which is an orchestrator that creates containers to run stages, the GitHub self-hosted runner is a container that is expected to run a whole workflow; as it stands, without DinD this can't run Kaniko.
I want the same thing: add kaniko to another Docker image. For now I can successfully run multi-stage builds inside another Docker container using chroot plus the kaniko binaries.
tree kaniko -a
kaniko
├── .docker
│ └── config.json
├── docker-credential-ecr-login
├── docker-credential-gcr
└── executor
and created a bash script, kaniko-build:
# cat kaniko-build
#!/bin/bash
# dockerfile path relative to the current directory
dockerfile="$1"
# destination image reference (repo/name:tag)
destination="$2"
context="$(dirname ${dockerfile})"
# prepare the chroot tree
mkdir workdir
cp -r kaniko workdir
# assuming you have .docker/config.json inside the kaniko directory
export DOCKER_CONFIG=/kaniko/.docker/
mkdir -p workdir/kaniko/workspace
cd workdir
# minimal /dev: only null and zero are created here
mkdir dev
mknod -m 666 dev/null c 1 3
mknod -m 666 dev/zero c 1 5
# minimal /proc: kaniko reads /proc/self/mountinfo
mkdir -p proc/self
cp /proc/self/mountinfo proc/self/
# DNS and CA certificates from the host
mkdir etc
cp /etc/resolv.conf etc/
cp /etc/nsswitch.conf etc
mkdir -p etc/ssl/certs
cat /etc/ssl/certs/* > etc/ssl/certs/ca-certificates.crt
cp -r "../${context}/." kaniko/workspace
# or hard-link each file if on the same fs to speed this up (and likewise for the trees above)
chroot . ./kaniko/executor -f /kaniko/workspace/Dockerfile --context=/kaniko/workspace/ --force --destination="$destination" --cleanup
Usage: let's assume you have a directory containing a lot of projects with Dockerfiles inside it, with the kaniko binaries next to them. You have already built the projects using some external tool like sbt and just want to assemble the Docker containers:
# ls
project1 project2 project3 kaniko kaniko-build
build:
./kaniko-build project1/Dockerfile repo/name:0.0.1
This works for me, which brings up the question from #107: why not chroot? It looks like kaniko doesn't need the full contents of the /proc, /dev and /sys directories.
And of course the kaniko-build script is just my first naive attempt at building Docker images inside Docker.
Hey @james-crowley -- glad it worked for you. The reason we say that is because there are additional files needed for kaniko to work, so on its own the binary will not work. Since you copied those in, it seems fine.
There is also a slight risk that files in the new image may end up in the image you are trying to build, which aren't supposed to be there. kaniko knows to exclude adding volume mounts and anything in the /kaniko directory to the final image, but it might be adding in some files from the new base image you selected (I'm not 100% sure, though, as I haven't looked at this code in a while).
I can confirm this: it seems that a lot of files contained in my custom builder image also get copied into the target image, which is very concerning and shouldn't happen, in my opinion. However, I doubt we'll receive any official support on this because it's an edge case and not recommended.
FROM circleci/runner:launch-agent
...
RUN apt-get update && \
    apt-get install -y ca-certificates && \
    mkdir -p /kaniko/ssl/certs/ && \
    cat /etc/ssl/certs/* > /kaniko/ssl/certs/ca-certificates.crt
Can this be done when using Kaniko? That is, can you use Kaniko to build an image that contains Kaniko? I think not. I just tried it and it fails at the first COPY command with the following error:
INFO[0178] COPY --from=gcr.io/kaniko-project/executor:debug /kaniko/executor /kaniko/executor
error building image: error building stage: failed to execute command: copying file: creating file: open /kaniko/executor: text file busy
It doesn't matter which version of the image is used, but rather the name of the executable. This means that one needs to use something other than Kaniko to build an image that uses the Kaniko executor (the other COPYs probably fail too, with a similar error).
The reason I need a custom Kaniko image is the poor integration with GitLab. Currently I am unable to find a way to build an image with Kaniko from a repo that has submodules requiring credentials. For that I need to do the credentials setup as well as the cloning manually. All of this requires calling git as a command (e.g. git config for the credential store), which is not available in Kaniko. I am actually still surprised how large the Kaniko image is given the lack of any sort of common tools inside. I don't know why that is (the Go toolchain, perhaps?), but it would be nice to have a smaller image and a package manager (unless the base image is based on something like BusyBox).
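A rough sketch of the manual setup meant here, assuming a custom image that does ship git; apart from GitLab's built-in CI_JOB_TOKEN, CI_PROJECT_DIR and CI_REGISTRY_IMAGE variables, the host name and tag are placeholders:

script:
  # Hypothetical: let git authenticate against the GitLab instance for submodule clones.
  - git config --global credential.helper store
  - echo "https://gitlab-ci-token:${CI_JOB_TOKEN}@gitlab.example.com" > ~/.git-credentials
  - git submodule sync --recursive
  - git submodule update --init --recursive
  - /kaniko/executor --context "${CI_PROJECT_DIR}" --dockerfile "${CI_PROJECT_DIR}/Dockerfile" --destination "${CI_REGISTRY_IMAGE}:latest"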
Through trial and error, I found that the error occurs when I copy from the kaniko image to the same path, /kaniko/executor.
This is the failing version:
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/warmer kaniko/warmer
COPY --from=kaniko /kaniko/docker-credential-gcr kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env
COPY --from=kaniko /kaniko/.docker /kaniko/.docker
ENV PATH $PATH:/usr/local/bin:/kaniko
ENV DOCKER_CONFIG /kaniko/.docker/
ENV SSL_CERT_DIR /kaniko/ssl/certs
CMD ["/bin/bash"]
I got this error in GitLab:
#...snipped
INFO[0028] COPY --from=kaniko /kaniko/executor /kaniko/executor
error building image: error building stage: failed to execute command: copying file: creating file: open /kaniko/executor: text file busy
I tried to build the above Dockerfile with the kaniko image on my laptop, and after the second build I noticed that the /kaniko/executor binary had become zero bytes and the process was stuck.
I tried with this command and ran /kaniko/executor inside:
docker run --rm --entrypoint sh -v ./:/workspace -it gcr.io/kaniko-project/executor:debug
So my workaround is to copy the executor to another path.
FROM gcr.io/kaniko-project/executor:debug AS kaniko
FROM alpine:3
COPY --from=kaniko /kaniko/executor /opt/kaniko/executor
COPY --from=kaniko /kaniko/warmer /opt/kaniko/warmer
COPY --from=kaniko /kaniko/docker-credential-gcr /opt/kaniko/docker-credential-gcr
COPY --from=kaniko /kaniko/docker-credential-ecr-login /opt/kaniko/docker-credential-ecr-login
COPY --from=kaniko /kaniko/docker-credential-acr-env /opt/kaniko/docker-credential-acr-env
COPY --from=kaniko /kaniko/.docker /opt/kaniko/.docker
ENV PATH $PATH:/usr/local/bin:/opt/kaniko
ENV DOCKER_CONFIG /opt/kaniko/.docker/
ENV SSL_CERT_DIR /opt/kaniko/ssl/certs
CMD ["/bin/bash"]
The job in the .gitlab-ci.yml is:
build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /opt/kaniko/.docker
    # Write credentials to access the GitLab Container Registry within the runner/CI
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64 | tr -d '\n')\"}}}" > /opt/kaniko/.docker/config.json
    - |
      executor \
        --dockerfile "Dockerfile" \
        --destination "${CI_REGISTRY_IMAGE}/kaniko:latest"
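One hedged note on this job: kaniko looks for config.json in the directory named by DOCKER_CONFIG, which the official executor image sets to /kaniko/.docker, so credentials written to /opt/kaniko/.docker are only picked up if the variable is pointed there as well, for example:

build:
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    # Point kaniko at the non-default config location used by the script.
    DOCKER_CONFIG: /opt/kaniko/.docker
  script:
    - mkdir -p /opt/kaniko/.docker
    # ... write config.json and call the executor as above ...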
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64 | tr -d '\n')\"}}}" > /opt/kaniko/.docker/config.json
I'm using this same path for my config.json, but it is not picked up by the executor at /opt/kaniko/executor:
error pushing image: failed to push to destination *****.local/.....:latest: POST http://*****.local/..../blobs/uploads/: DENIED: Anonymous is not permitted to perform the Feeds_AddPackage task for the current scope.; []
Any suggestion on how to circumvent that issue would be much appreciated.
- echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(echo -n ${CI_REGISTRY_USER}:${CI_REGISTRY_PASSWORD} | base64 | tr -d '\n')\"}}}" > /opt/kaniko/.docker/config.json
I'm using this same path for my config.json, but it is not picked up by the executor at /opt/kaniko/executor:
error pushing image: failed to push to destination *****.local/.....:latest: POST http://*****.local/..../blobs/uploads/: DENIED: Anonymous is not permitted to perform the Feeds_AddPackage task for the current scope.; []
Any suggestion on how to circumvent that issue would be much appreciated.
All I can recommend is to write the {"auth": {"username" : ".....", "password" : "....."}, .... } config in a proper text editor that supports automatic JSON formatting. I have found numerous errors in my echo call. Once you get that working, you can move to the base64-encoded version. For security, just use a disposable access token. The error message says "anonymous", which means the credentials are either not being sent or not being processed properly once received. Doing things in plaintext at least gives you a chance to fix such issues. You can also upload the config.json as an artifact. Make sure you either point at the path where it is or simply add untracked: true. Also, when: on_failure is essential, since the upload of artifacts will not happen otherwise (because the job is failing).
One last thing to check is whether you are pushing to the right repo. Using a group or personal token, you can also push to other repos. But you cannot push to a group container registry, as far as I know; all the images there are accumulated from the projects in that group. I do believe you can pull from it.
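For illustration, a minimal plaintext config.json of the shape meant above, with a placeholder registry and credentials (the base64 "auth" form shown earlier in the thread is the equivalent once this works):

{
  "auths": {
    "registry.example.com": {
      "username": "ci-user",
      "password": "disposable-access-token"
    }
  }
}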
In the README it states: https://github.com/GoogleContainerTools/kaniko/blob/7e3954ac734534ce5ce68ad6300a2d3143d82f40/README.md#L14
Is there any reason for this statement? I am not sure why copying the compiled binaries and the correct folders would not make Kaniko work.
@priyawadhwa I saw you made this addition a while back. Do you have any insights into why this might not work?
I was hoping to use Kaniko in another Docker container without having to extend gcr.io/kaniko-project/executor.