dsalaza4 opened 5 years ago
Oh, also, I forgot to mention that everything seems to work when using docker build.
Build:
$ docker build -t fluidattacks/alpine-kaniko:latest ./
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM gcr.io/kaniko-project/executor:debug as kaniko
debug: Pulling from kaniko-project/executor
6dfefd07c40d: Pull complete
d33249f1f03a: Pull complete
6ef9abcb05b7: Pull complete
281bfbd3d741: Pull complete
ebc3c294331c: Pull complete
97b3d499de22: Pull complete
2b01356084ff: Pull complete
Digest: sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc
Status: Downloaded newer image for gcr.io/kaniko-project/executor:debug
---> 60ef6732686c
Step 2/6 : FROM alpine:latest
latest: Pulling from library/alpine
050382585609: Pull complete
Digest: sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998
Status: Downloaded newer image for alpine:latest
---> b7b28af77ffe
Step 3/6 : ENV DOCKER_CONFIG='/kaniko/.docker'
---> Running in e3a10cfc2165
Removing intermediate container e3a10cfc2165
---> 0dd6eb2fd280
Step 4/6 : ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'
---> Running in c9ba82a7fcbb
Removing intermediate container c9ba82a7fcbb
---> 34e938e9b479
Step 5/6 : RUN apk update && apk upgrade && apk add --no-cache bash git
---> Running in b67d524beefb
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-2-gbc3922e64b [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-1-gb7bbae6e40 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10327 distinct packages available
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
Removing intermediate container b67d524beefb
---> 7c47c2ce1b24
Step 6/6 : COPY --from=kaniko /kaniko /kaniko
---> ef3990fc68e8
Successfully built ef3990fc68e8
Successfully tagged fluidattacks/alpine-kaniko:latest
Push:
$ docker push fluidattacks/alpine-kaniko:latest
The push refers to repository [docker.io/fluidattacks/alpine-kaniko]
df9c5a60ac77: Pushed
e43a8db04466: Pushed
1bfeebd65323: Layer already exists
latest: digest: sha256:be4ac8a388b571288a2d20fd4ec7f79c8292677514ecc297b25c19d95857b3aa size: 952
Pull:
$ docker pull fluidattacks/alpine-kaniko:latest
latest: Pulling from fluidattacks/alpine-kaniko
050382585609: Pull complete
ae7a169c0dac: Pull complete
5a3363d5820b: Pull complete
Digest: sha256:be4ac8a388b571288a2d20fd4ec7f79c8292677514ecc297b25c19d95857b3aa
Status: Downloaded newer image for fluidattacks/alpine-kaniko:latest
docker.io/fluidattacks/alpine-kaniko:latest
@dsalaza4 I've been facing the same issue; here's how I worked around it.
For my use case I'm doing roughly the same thing, but with AWS ECR rather than GCR.
I noticed that this usually happens with the config.json file: kaniko somehow ends up saving it under a long, recursive path (which I think is probably a bug, but I'm not sure). To avoid that, I copy config.json and the other relevant files from the base /kaniko image individually, instead of copying the whole /kaniko folder (which I believe is what causes this issue), so those files are not affected when the executor builds and makes changes to the filesystem.
In my case (with ECR) I'm doing this with my custom config.json. For you, I guess something like this would work as well (you just need to use Google API credentials for your use case):
# This image builds alpine with bash, git and kaniko executor
FROM gcr.io/kaniko-project/executor:debug as kaniko
# Do this if you have a custom config.json to auth with docker registry
COPY config.json /kaniko/.docker/
FROM alpine:latest
ENV DOCKER_CONFIG /kaniko/.docker/
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
# Copying complete /kaniko folder generates long filenames which
# fails in pulling the docker image after it's built. So [WARNING] don't do this.
# COPY --from=kaniko /kaniko /kaniko
# Copy relevant files from /kaniko separately instead
COPY --from=kaniko /kaniko/executor /kaniko/
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/
COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/
ENV PATH=/kaniko:$PATH
Build and run this image to check if this works:
>> docker build -t test .
>> docker run -it -v /path/to/workspace:/workspace --entrypoint="" test /bin/sh
/workspace # executor --dockerfile=./Dockerfile --context=dir://`pwd` --force --no-push
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:a54d167d7c4b7ce0c7a622f17dcf473c652b29341b321ca507425c8fa3525842: no such file or directory
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0002] Downloading base image alpine:latest
INFO[0004] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0004] Downloading base image alpine:latest
INFO[0005] Built cross stage deps: map[0:[/kaniko/executor /kaniko/docker-credential-ecr-login /kaniko/.docker/config.json]]
INFO[0005] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0006] Error while retrieving image from cache: getting file info: stat /cache/sha256:a54d167d7c4b7ce0c7a622f17dcf473c652b29341b321ca507425c8fa3525842: no such file or directory
INFO[0006] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0014] Taking snapshot of full filesystem...
INFO[0015] Using files from context: [/workspace/config.json]
INFO[0015] COPY config.json /kaniko/.docker/
INFO[0015] Taking snapshot of files...
INFO[0015] Saving file /kaniko/executor for later use.
INFO[0015] Saving file /kaniko/docker-credential-ecr-login for later use.
INFO[0015] Saving file /kaniko/.docker/config.json for later use.
INFO[0015] Deleting filesystem...
INFO[0016] Downloading base image alpine:latest
INFO[0017] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0017] Downloading base image alpine:latest
INFO[0018] Unpacking rootfs as cmd RUN apk update && apk upgrade && apk add --no-cache bash git requires it.
INFO[0018] Taking snapshot of full filesystem...
INFO[0019] ENV DOCKER_CONFIG /kaniko/.docker/
INFO[0019] RUN apk update && apk upgrade && apk add --no-cache bash git
INFO[0019] cmd: /bin/sh
INFO[0019] args: [-c apk update && apk upgrade && apk add --no-cache bash git]
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-96-g031621e2cf [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-99-gcf78a82040 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10337 distinct packages available
(1/2) Upgrading musl (1.1.22-r2 -> 1.1.22-r3)
(2/2) Upgrading musl-utils (1.1.22-r2 -> 1.1.22-r3)
Executing busybox-1.30.1-r2.trigger
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
INFO[0023] Taking snapshot of full filesystem...
INFO[0025] COPY --from=kaniko /kaniko/executor /kaniko/
INFO[0025] Taking snapshot of files...
INFO[0026] COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/
INFO[0026] Taking snapshot of files...
INFO[0026] COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/
INFO[0026] Taking snapshot of files...
INFO[0026] ENV PATH=/kaniko:$PATH
INFO[0026] WORKDIR /workspace
INFO[0026] cmd: workdir
INFO[0026] Changed working directory to /workspace
INFO[0026] Skipping push to container registry due to --no-push flag
/workspace # find / -name config.json
/workspace/config.json
/kaniko/.docker/config.json
/kaniko/0/kaniko/.docker/config.json
However, if you copy the entire /kaniko folder during the build, the find command reveals results like the following:
/workspace # find / -name config.json
/workspace/config.json
/kaniko/.docker/config.json
/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
...
...
...
You can read up further on what else you need to copy with respect to GCR and the Google APIs.
Hopefully it works for you as well, since it works for me.
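For a GCR setup, a rough sketch of the same idea might look like the one below. Treat it as an untested adaptation: it assumes the debug image ships a docker-credential-gcr helper at /kaniko/docker-credential-gcr and that your Google credentials end up at the path GOOGLE_APPLICATION_CREDENTIALS points to, so adjust both to whatever your image and pipeline actually use.
# Hypothetical GCR variant of the workaround above (untested sketch)
FROM gcr.io/kaniko-project/executor:debug as kaniko
FROM alpine:latest
ENV DOCKER_CONFIG /kaniko/.docker/
# Path to the service-account key; example value only
ENV GOOGLE_APPLICATION_CREDENTIALS /kaniko/.docker/config.json
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
# Copy only the specific files, never the whole /kaniko folder
COPY --from=kaniko /kaniko/executor /kaniko/
# Assumed location of the GCR credential helper in the debug image
COPY --from=kaniko /kaniko/docker-credential-gcr /kaniko/
ENV PATH=/kaniko:$PATH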
Wow! Looks like this actually fixed the problem. Thank you very much!
I'm reopening this issue as I just found out that although copying the executor and other needed files fixes the initial problem, it seems like we're still having some serious issues when it comes to what we're actually copying.
Let me explain in more detail.
When executing:
COPY --from=kaniko /kaniko/executor /kaniko/
COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/
one might expect that such files would come from the specified stage:
FROM gcr.io/kaniko-project/executor:debug as kaniko
That is not the case with kaniko. If the container you're using to build the image is also gcr.io/kaniko-project/executor:debug, the files get copied from the container that is building the image instead of from the stage declared in the Dockerfile. I know it's a little confusing, so I'll show the particular example that made me realize this.
Right after I built the container with kaniko, I went in to check the files within the /kaniko folder. There, I found out that my config.json file looked like this:
{"auths":{"index.docker.io": {"username":"MY_USER", "password":"MY_PASS"}}}
where both MY_USER and MY_PASS were visible. This happened because, when building the image, I had logged in first so I could push it to the registry:
echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json
/kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time
I tried many ways to reference the files from the stage instead of from the building container, with no success.
I also tried building the image with docker, and voila, it worked as expected. My config.json looked like this (the default kaniko config.json):
{
"auths": {},
"credHelpers": {
"asia.gcr.io": "gcr",
"eu.gcr.io": "gcr",
"gcr.io": "gcr",
"staging-k8s.gcr.io": "gcr",
"us.gcr.io": "gcr"
}
}
What a bummer :disappointed:
@dsalaza4 Could you share your Dockerfile with me? If possible, strip it down to a minimal configuration so it's simple for me to replicate and test on my end. I could have done it myself, but I wasn't sure how you were running the following echo and redirect ('>') command:
echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json
I think I know what the issue is, but I need to test it first on my end. I'll get back to you when you share your minimal Dockerfile configuration with me.
Thanks!
Also, could you try the following: instead of writing your credentials to /kaniko/.docker/config.json, write them to a file in a whitelisted directory such as /home/jenkins/agent, like so:
# Write credentials to file in whitelisted directory
>> mkdir -p /home/jenkins/agent/.docker
>> echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > /home/jenkins/agent/.docker/config.json
# Call kaniko executor to build and push docker image
>> /kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time
For this to work, set the DOCKER_CONFIG environment variable in your Dockerfile to point to the whitelisted directory as well. Something like this:
ENV DOCKER_CONFIG /home/jenkins/agent/.docker/
This tells the kaniko executor to read the Docker credentials from the $DOCKER_CONFIG environment variable in order to push the image.
From what I understand, since /home/jenkins/agent/ is already a whitelisted directory in kaniko, it won't be deleted when kaniko deletes the filesystem, and the file also won't be saved into the final image.
Give this a try and let me know if it works. I tried it on my end and it seemed to work.
Hi, the Dockerfile is the same one I provided in the original post. There, I provide three things:
Did you try updating build-public.sh to write the Docker credentials to /home/jenkins/agent/.docker/ (or some other kaniko-whitelisted directory) and updating the DOCKER_CONFIG environment variable accordingly? Did it work for you?
Ok, so here's what I did:
1. GitLab CI job (did not change):
alpine-kaniko:
stage: kaniko-setup
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
retry: 1
script:
- sh ./ci-scripts/build-public.sh alpine-kaniko test
2. build-public.sh:
export DOCKER_CONFIG='/home/jenkins/agent/.docker'
mkdir -p "$DOCKER_CONFIG"
echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > "$DOCKER_CONFIG/config.json"
/kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time
3. Dockerfile (I simplified it for debugging purposes; it isn't even multi-stage anymore):
FROM alpine:latest
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
ENV PATH=/kaniko:$PATH
When running the job, I get this error:
error pushing image: failed to push to destination index.docker.io/fluidattacks/alpine-kaniko: UNAUTHORIZED: authentication required; [map[Type:repository Class: Name:fluidattacks/alpine-kaniko Action:pull] map[Type:repository Class: Name:fluidattacks/alpine-kaniko Action:push]]
It looks like kaniko cleans up the config.json file before pushing to the repository. I'm not sure the folder is actually being whitelisted.
Here's an interesting thing I just discovered and don't know how to explain.
I found out that the .docker folder doesn't really have anything valuable inside at build time.
What I really want to take from kaniko into the alpine container is the executor binary.
I also want to have a .docker folder, even if it's empty, so that when I run echo $BLAH > /kaniko/.docker/config.json in build-public.sh, the command won't fail because the folder doesn't exist.
Which led me to this Dockerfile:
FROM gcr.io/kaniko-project/executor:latest as kaniko
FROM alpine:latest
ENV DOCKER_CONFIG='/kaniko/.docker'
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
COPY --from=kaniko /kaniko/executor /kaniko/
RUN mkdir -p /kaniko/.docker
ENV PATH=/kaniko:$PATH
Well, it turns out that now kaniko completely removes the folder. Inside the built container:
/ # ls
bin dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
The /kaniko folder just disappeared.
Okay, let's step back a little and recap the second problem we were trying to solve: your credentials were being saved and were visible in the resulting Docker image. Prior to that, with the solution I proposed earlier, it worked and you were able to build a Docker image.
From what I understand, although you don't need the Docker credentials at build time, you still need them at runtime, i.e. in the live kaniko executor container started from the resulting image, when you run the executor to build and push images. The good thing is that you don't need to save the credentials into the image; you can inject them into a running executor container with the echo command.
The problem with this approach is that the config file holding the Docker credentials got wiped out when the executor deleted the filesystem while building the image. I said that kaniko has a default whitelist that prevents such directories from being deleted, but I was wrong: by default it doesn't whitelist the /home/jenkins/agent directory. You can, however, instruct kaniko to whitelist a directory by declaring it with a VOLUME directive in the Dockerfile (let's go with /build now instead of /home/jenkins/agent for simplicity):
VOLUME /build
So, can you try the following Dockerfile:
FROM gcr.io/kaniko-project/executor:v0.10.0 as kaniko
FROM alpine:latest
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
# Should be put after above installations
COPY --from=kaniko /kaniko/executor /kaniko/
ENV PATH=/kaniko:$PATH
# Path to docker credentials
ENV DOCKER_CONFIG /build/.docker
# Do this to whitelist directory, store your docker credentials here
VOLUME /build
# Optional
WORKDIR /build
Then you can echo your credentials into the /build/.docker/config.json file.
The way I'm doing my builds is the following:
>> docker build -f Dockerfile.test -t my-registry/my-repo:pr179 .
>> docker push my-registry/my-repo:pr179
Now I use my-registry/my-repo:pr179 inside my CI pipeline to build images with kaniko going forward. It can also be used to rebuild the same image recursively with kaniko. So think of this image as the seed image, built by Docker instead of kaniko; from here on, all images are built inside this container with the kaniko executor, as in the following:
>> mkdir -p /build/.docker
>> echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
{\"username\":\"${DOCKER_HUB_USER}\",\
\"password\":\"${DOCKER_HUB_PASS}\"}}}" > "/build/.docker/config.json"
>> /kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time
The reason it worked for me before is that I've been using the jenkins/jnlp-slave:alpine base image instead of the plain alpine image, and that image already has a VOLUME directive set to /home/jenkins/agent, which makes kaniko whitelist the directory automatically. So in this case the /build directory should now be whitelisted, and your credentials shouldn't be wiped out when building and pushing the image.
I've tried it on my end; hopefully it works for you too.
I tried the VOLUME directive, and also the --single-snapshot flag. None of the attempts worked. The issue seems related to the fact that kaniko protects the /kaniko folder in the first stage but deletes it in the second. I decided to build these images with Docker, as kaniko doesn't seem capable of doing it :disappointed:
Did you put the VOLUME directive on the /kaniko folder? I believe you shouldn't do that. Follow the Dockerfile I presented: put the VOLUME directive on the /build directory, create that directory in your script, store your Docker credentials there, and then try again. Let me know how it works with that setup.
As a note: using kaniko in a non-official image, or copying the binaries into a non-official image, is not supported. There may be a way to make it work, but YMMV.
@dsalaza4 Sorry for the late response.
/kaniko is a special directory and the tool uses it as a build workspace. The workaround mentioned above, where you only add the specific files you need, is the right thing to do.
@tejal29 @cvgw Any progress on this?
As a note: using kaniko in a non-official image, or copying the binaries into a non-official image, is not supported. There may be a way to make it work, but YMMV.
The problem is that the official kaniko images are FROM scratch, so if you want to use e.g. Git to extract useful build/tagging info from revision control, you're SOL. kaniko:debug just adds BusyBox, and at least in the case of Git there are no official static binaries made available that could easily be added to the kaniko image.
TBH it would be much better if there were e.g. a kaniko:alpine image available.
The problem is that the official kaniko images are FROM scratch, so if you want to use e.g. Git to extract useful build/tagging info from revision control, you're SOL. kaniko:debug just adds BusyBox, and at least in the case of Git there are no official static binaries made available that could easily be added to the kaniko image.
@stephen-dexda is it possible for your build process to compute the digest using git tags before spinning up a kaniko pod? https://github.com/GoogleContainerTools/skaffold does it. Would it be possible to use skaffold to spin up a kaniko pod?
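As a rough sketch of that approach (registry, repository, and tag scheme are purely illustrative), the tag can be computed from Git metadata in the CI shell before the kaniko pod is created, so the kaniko image itself never needs git:
# Compute an image reference from Git outside the kaniko pod
GIT_TAG="$(git describe --tags --always)"    # e.g. v1.2.3-4-gabcdef0
GIT_SHA="$(git rev-parse --short HEAD)"
IMAGE="my-registry/my-repo:${GIT_TAG}-${GIT_SHA}"
# ...then pass "$IMAGE" to the kaniko pod, e.g. as its --destination argument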
I've also run into this issue.
In my case, I'm attempting to work around a bug where I need to run git lfs pull prior to using kaniko to build the image.
Since the kaniko image does not have git lfs installed, I have to fall back on pulling the materials out of the image and stuffing them into one that has git lfs support.
It would be nice if there were a supported method for running kaniko in more flexible environments.
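For what it's worth, a sketch of that fallback using the per-file copy trick from earlier in the thread (git-lfs taken from the Alpine repos; untested here):
# Sketch: unofficial image with the kaniko executor plus git-lfs
FROM gcr.io/kaniko-project/executor:debug as kaniko
FROM alpine:latest
# git-lfs is available as an Alpine package
RUN apk add --no-cache git git-lfs
# Copy just the executor, not the whole /kaniko folder
COPY --from=kaniko /kaniko/executor /kaniko/executor
ENV PATH=/kaniko:$PATH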
Based on my own testing, this issue appears to be specific to v1.7.0 and possibly older versions. I would suggest pinning to a specific version rather than using the rolling debug tag, i.e. v1.8.1-debug.
Reproduce the error:
docker run --rm \
-v $(pwd):/workspace \
gcr.io/kaniko-project/executor:v1.7.0-debug \
--force \
--dockerfile=/workspace/Dockerfile \
--context=dir:///workspace \
--no-push
With Dockerfile:
FROM gcr.io/kaniko-project/executor:v1.7.0-debug as kaniko
FROM alpine
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env
Gives an error like:
INFO[0014] COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env
error building image: error building stage: failed to execute command: resolving src: failed to get fileinfo for /kaniko/0/kaniko/docker-credential-acr-env: lstat /kaniko/0/kaniko/docker-credential-acr-env: no such file or directory
If you change the Docker tag to v1.8.1-debug, the build passes.
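For reference, here is the same reproduction with the tag pinned; the comment above doesn't spell out whether the runner invocation or the Dockerfile stage needs the bump, so this pins both:
docker run --rm \
-v $(pwd):/workspace \
gcr.io/kaniko-project/executor:v1.8.1-debug \
--force \
--dockerfile=/workspace/Dockerfile \
--context=dir:///workspace \
--no-push
With this Dockerfile:
FROM gcr.io/kaniko-project/executor:v1.8.1-debug as kaniko
FROM alpine
COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env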
Actual behavior
I get a "file name too long" error from lstat when pulling my Docker image built with kaniko. The error is:
My logs after building the container and pushing it are:
Expected behavior
Image should pull correctly.
To Reproduce
Steps to reproduce the behavior:
Additional Information
FROM gcr.io/kaniko-project/executor:debug as kaniko
FROM alpine:latest
ENV DOCKER_CONFIG='/kaniko/.docker'
ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'
RUN apk update && \
apk upgrade && \
apk add --no-cache \
bash \
git
COPY --from=kaniko /kaniko /kaniko
alpine-kaniko:
stage: setup
image:
name: gcr.io/kaniko-project/executor:debug
entrypoint: [""]
retry: 1
script:
echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\ {\"username\":\"${DOCKER_HUB_USER}\",\ \"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json
/kaniko/executor \
--cleanup \
--context "dockerfiles/public/$1/" \
--dockerfile "dockerfiles/public/$1/Dockerfile" \
--destination "fluidattacks/$1:$2" \
--snapshotMode time