GoogleContainerTools / kaniko

Build Container Images In Kubernetes
Apache License 2.0

Layer fails to extract when creating multistage image #716

Open dsalaza4 opened 5 years ago

dsalaza4 commented 5 years ago

Actual behavior: I get a "file name too long" error from lstat when pulling a Docker image built with kaniko. The error is:

$ docker pull fluidattacks/alpine-kaniko:latest

latest: Pulling from fluidattacks/alpine-kaniko
050382585609: Already exists 
97b8426a6f54: Pull complete 
2d05ad4487d2: Extracting [==================================================>]  211.2kB/211.2kB
failed to register layer: lstat /var/lib/docker/overlay2/6c957bc6aa940ccff95bea6b9bbaa45d6561e1179a085bfa1d344c1e826e127a/diff/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/[... hundreds of repeated /kaniko/0 segments omitted ...]/kaniko/.config/gcloud/docker_credential_gcr_config.json: file name too long
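A minimal sketch (illustrative only, not kaniko itself) of why the pull fails: a path built from the repeated /kaniko/0 segments above quickly exceeds Linux's PATH_MAX of 4096 bytes, which is what makes lstat return "file name too long" (ENAMETOOLONG).

```shell
# Illustrative only: simulate the repeated /kaniko/0 segments from the
# error message and show the resulting path exceeds PATH_MAX (4096 bytes).
path=""
i=0
while [ "$i" -lt 500 ]; do
  path="$path/kaniko/0"
  i=$((i + 1))
done
# Each "/kaniko/0" segment is 9 bytes, so 500 segments give a 4500-byte path.
printf 'path length: %s\n' "${#path}"
```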

My logs after building the container and pushing it are:

Running with gitlab-runner 12.1.0-rc1 (6da35412)
  on docker-auto-scale 72989761
Using Docker executor with image gcr.io/kaniko-project/executor:debug ...
Pulling docker image gcr.io/kaniko-project/executor:debug ...
Using docker image sha256:60ef6732686c9655a6c28a3d2d805f4f0642d5e403c7c24ffc79e0c8d00bd0a0 for gcr.io/kaniko-project/executor:debug ...
Running on runner-72989761-project-10466586-concurrent-0 via runner-72989761-srm-1563296684-cd599c9c...
Fetching changes...
Initialized empty Git repository in /builds/fluidattacks/default/.git/
Created fresh repository.
From https://gitlab.com/fluidattacks/default
 * [new branch]      dsalazaratfluid -> origin/dsalazaratfluid
 * [new branch]      master          -> origin/master
Checking out 0b9a7e74 as dsalazaratfluid...

Skipping Git submodules setup
$ sh ./ci-scripts/build-public.sh alpine-kaniko latest
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug 
INFO[0000] Resolved base name alpine:latest to alpine:latest 
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug 
INFO[0000] Resolved base name alpine:latest to alpine:latest 
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug 
2019/07/16 17:06:06 No matching credentials were found, falling back on anonymous
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc: no such file or directory 
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug 
2019/07/16 17:06:06 No matching credentials were found, falling back on anonymous
INFO[0000] Downloading base image alpine:latest         
INFO[0000] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory 
INFO[0000] Downloading base image alpine:latest         
INFO[0001] Built cross stage deps: map[0:[/kaniko]]     
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug 
2019/07/16 17:06:07 No matching credentials were found, falling back on anonymous
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc: no such file or directory 
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug 
2019/07/16 17:06:07 No matching credentials were found, falling back on anonymous
INFO[0001] Only file modification time will be considered when snapshotting 
INFO[0002] Taking snapshot of full filesystem...        
INFO[0002] Saving file /kaniko for later use.           
INFO[0005] Deleting filesystem...                       
INFO[0005] Downloading base image alpine:latest         
INFO[0005] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory 
INFO[0005] Downloading base image alpine:latest         
INFO[0005] Only file modification time will be considered when snapshotting 
INFO[0005] Unpacking rootfs as cmd RUN apk update &&   apk upgrade &&   apk add --no-cache     bash     git requires it. 
INFO[0006] Taking snapshot of full filesystem...        
INFO[0006] ENV DOCKER_CONFIG='/kaniko/.docker'          
INFO[0006] ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json' 
INFO[0006] RUN apk update &&   apk upgrade &&   apk add --no-cache     bash     git 
INFO[0006] cmd: /bin/sh                                 
INFO[0006] args: [-c apk update &&   apk upgrade &&   apk add --no-cache     bash     git] 
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-2-gbc3922e64b [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-1-gb7bbae6e40 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10327 distinct packages available
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
INFO[0007] Taking snapshot of full filesystem...        
INFO[0009] COPY --from=kaniko /kaniko /kaniko           
INFO[0012] Taking snapshot of files...                  
INFO[0027] Deleting filesystem...                       
2019/07/16 17:06:33 existing blob: sha256:0503825856099e6adb39c8297af09547f69684b7016b7f3680ed801aa310baaa
2019/07/16 17:06:34 pushed blob: sha256:2d05ad4487d2149a48e960513b97f95007da6709b7ffbf1dbbe9f7ac64b840fc
2019/07/16 17:06:34 pushed blob: sha256:a2f5d816c3ee3bc28b743061365a37ebe07ae65a61bd7f38de402e721d6d0881
2019/07/16 17:06:35 pushed blob: sha256:97b8426a6f549d1f8d6aecd69aa80b89736e3f739a3d1a0b8e12857a373bf68f
2019/07/16 17:06:36 index.docker.io/fluidattacks/alpine-kaniko:latest: digest: sha256:39f2d5410b9b1c2d6e9e8acbdce66064226731fe45af7a2cf37aa35db56717cc size: 756
Job succeeded

Expected behavior: The image should pull correctly.

To Reproduce: Steps to reproduce the behavior:

  1. Build the Dockerfile with kaniko
  2. Push the image to a registry
  3. Try to pull it

Additional Information

- Dockerfile

FROM gcr.io/kaniko-project/executor:debug as kaniko

FROM alpine:latest

ENV DOCKER_CONFIG='/kaniko/.docker'
ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'

RUN apk update && \
  apk upgrade && \
  apk add --no-cache \
    bash \
    git

COPY --from=kaniko /kaniko /kaniko

- GitLab CI script

alpine-kaniko:
  stage: setup
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  retry: 1
  script:

/kaniko/executor \
  --cleanup \
  --context "dockerfiles/public/$1/" \
  --dockerfile "dockerfiles/public/$1/Dockerfile" \
  --destination "fluidattacks/$1:$2" \
  --snapshotMode time

dsalaza4 commented 5 years ago

Oh, also, I forgot to mention that everything seems to work when using docker build.

Build:

$ docker build -t fluidattacks/alpine-kaniko:latest ./

Sending build context to Docker daemon  2.048kB
Step 1/6 : FROM gcr.io/kaniko-project/executor:debug as kaniko
debug: Pulling from kaniko-project/executor
6dfefd07c40d: Pull complete 
d33249f1f03a: Pull complete 
6ef9abcb05b7: Pull complete 
281bfbd3d741: Pull complete 
ebc3c294331c: Pull complete 
97b3d499de22: Pull complete 
2b01356084ff: Pull complete 
Digest: sha256:7587952834538c83a73b881def2f1bbb8ad73d545699105a96a2a5e370fa56bc
Status: Downloaded newer image for gcr.io/kaniko-project/executor:debug
 ---> 60ef6732686c
Step 2/6 : FROM alpine:latest
latest: Pulling from library/alpine
050382585609: Pull complete 
Digest: sha256:6a92cd1fcdc8d8cdec60f33dda4db2cb1fcdcacf3410a8e05b3741f44a9b5998
Status: Downloaded newer image for alpine:latest
 ---> b7b28af77ffe
Step 3/6 : ENV DOCKER_CONFIG='/kaniko/.docker'
 ---> Running in e3a10cfc2165
Removing intermediate container e3a10cfc2165
 ---> 0dd6eb2fd280
Step 4/6 : ENV GOOGLE_APPLICATION_CREDENTIALS='/kaniko/.docker/config.json'
 ---> Running in c9ba82a7fcbb
Removing intermediate container c9ba82a7fcbb
 ---> 34e938e9b479
Step 5/6 : RUN apk update &&   apk upgrade &&   apk add --no-cache     bash     git
 ---> Running in b67d524beefb
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-2-gbc3922e64b [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-1-gb7bbae6e40 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10327 distinct packages available
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
Removing intermediate container b67d524beefb
 ---> 7c47c2ce1b24
Step 6/6 : COPY --from=kaniko /kaniko /kaniko
 ---> ef3990fc68e8
Successfully built ef3990fc68e8
Successfully tagged fluidattacks/alpine-kaniko:latest

Push:

$ docker push fluidattacks/alpine-kaniko:latest

The push refers to repository [docker.io/fluidattacks/alpine-kaniko]
df9c5a60ac77: Pushed 
e43a8db04466: Pushed 
1bfeebd65323: Layer already exists 
latest: digest: sha256:be4ac8a388b571288a2d20fd4ec7f79c8292677514ecc297b25c19d95857b3aa size: 952

Pull:

$ docker pull fluidattacks/alpine-kaniko:latest

latest: Pulling from fluidattacks/alpine-kaniko
050382585609: Pull complete 
ae7a169c0dac: Pull complete 
5a3363d5820b: Pull complete 
Digest: sha256:be4ac8a388b571288a2d20fd4ec7f79c8292677514ecc297b25c19d95857b3aa
Status: Downloaded newer image for fluidattacks/alpine-kaniko:latest
docker.io/fluidattacks/alpine-kaniko:latest
dsouzajude commented 5 years ago

@dsalaza4 I've been facing the same issue, but here's how I worked around it.

For my use case I'm doing much the same thing, but with AWS ECR rather than GCR.

I noticed that this usually happens with the config.json file, which kaniko somehow saves under a long recursive filename (probably a bug, though I'm not sure). To avoid it, I copy config.json and the other relevant files individually from the base /kaniko image instead of copying the whole /kaniko folder (which I think causes this issue), so those files are not affected when the executor builds and modifies the filesystem.

In my case (with ECR), I'm doing this with my custom config.json. For you, I guess something like this would work as well (you just need to use Google API credentials for your use case):

# This image builds alpine with bash, git and kaniko executor

FROM gcr.io/kaniko-project/executor:debug as kaniko

# Do this if you have a custom config.json to auth with docker registry
COPY config.json /kaniko/.docker/

FROM alpine:latest

ENV DOCKER_CONFIG /kaniko/.docker/

RUN apk update && \
  apk upgrade && \
  apk add --no-cache \
    bash \
    git

# Copying the complete /kaniko folder generates long filenames, which
# makes pulling the docker image fail after it's built. So [WARNING] don't do this.
# COPY --from=kaniko /kaniko /kaniko

# Copy relevant files from /kaniko separately instead
COPY --from=kaniko /kaniko/executor /kaniko/
COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/
COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/
ENV PATH=/kaniko:$PATH

Build and run this image to check if this works:

>> docker build -t test .
>> docker run -it -v /path/to/workspace:/workspace --entrypoint="" test /bin/sh 
/workspace # executor --dockerfile=./Dockerfile --context=dir://`pwd` --force --no-push

INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Resolved base name gcr.io/kaniko-project/executor:debug to gcr.io/kaniko-project/executor:debug
INFO[0000] Resolved base name alpine:latest to alpine:latest
INFO[0000] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0001] Error while retrieving image from cache: getting file info: stat /cache/sha256:a54d167d7c4b7ce0c7a622f17dcf473c652b29341b321ca507425c8fa3525842: no such file or directory
INFO[0001] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0002] Downloading base image alpine:latest
INFO[0004] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0004] Downloading base image alpine:latest
INFO[0005] Built cross stage deps: map[0:[/kaniko/executor /kaniko/docker-credential-ecr-login /kaniko/.docker/config.json]]
INFO[0005] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0006] Error while retrieving image from cache: getting file info: stat /cache/sha256:a54d167d7c4b7ce0c7a622f17dcf473c652b29341b321ca507425c8fa3525842: no such file or directory
INFO[0006] Downloading base image gcr.io/kaniko-project/executor:debug
INFO[0014] Taking snapshot of full filesystem...
INFO[0015] Using files from context: [/workspace/config.json]
INFO[0015] COPY config.json /kaniko/.docker/
INFO[0015] Taking snapshot of files...
INFO[0015] Saving file /kaniko/executor for later use.
INFO[0015] Saving file /kaniko/docker-credential-ecr-login for later use.
INFO[0015] Saving file /kaniko/.docker/config.json for later use.
INFO[0015] Deleting filesystem...
INFO[0016] Downloading base image alpine:latest
INFO[0017] Error while retrieving image from cache: getting file info: stat /cache/sha256:57334c50959f26ce1ee025d08f136c2292c128f84e7b229d1b0da5dac89e9866: no such file or directory
INFO[0017] Downloading base image alpine:latest
INFO[0018] Unpacking rootfs as cmd RUN apk update &&   apk upgrade &&   apk add --no-cache     bash     git requires it.
INFO[0018] Taking snapshot of full filesystem...
INFO[0019] ENV DOCKER_CONFIG /kaniko/.docker/
INFO[0019] RUN apk update &&   apk upgrade &&   apk add --no-cache     bash     git
INFO[0019] cmd: /bin/sh
INFO[0019] args: [-c apk update &&   apk upgrade &&   apk add --no-cache     bash     git]
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
v3.10.1-96-g031621e2cf [http://dl-cdn.alpinelinux.org/alpine/v3.10/main]
v3.10.1-99-gcf78a82040 [http://dl-cdn.alpinelinux.org/alpine/v3.10/community]
OK: 10337 distinct packages available
(1/2) Upgrading musl (1.1.22-r2 -> 1.1.22-r3)
(2/2) Upgrading musl-utils (1.1.22-r2 -> 1.1.22-r3)
Executing busybox-1.30.1-r2.trigger
OK: 6 MiB in 14 packages
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/11) Installing ncurses-terminfo-base (6.1_p20190518-r0)
(2/11) Installing ncurses-terminfo (6.1_p20190518-r0)
(3/11) Installing ncurses-libs (6.1_p20190518-r0)
(4/11) Installing readline (8.0.0-r0)
(5/11) Installing bash (5.0.0-r0)
Executing bash-5.0.0-r0.post-install
(6/11) Installing ca-certificates (20190108-r0)
(7/11) Installing nghttp2-libs (1.38.0-r0)
(8/11) Installing libcurl (7.65.1-r0)
(9/11) Installing expat (2.2.7-r0)
(10/11) Installing pcre2 (10.33-r0)
(11/11) Installing git (2.22.0-r0)
Executing busybox-1.30.1-r2.trigger
Executing ca-certificates-20190108-r0.trigger
OK: 30 MiB in 25 packages
INFO[0023] Taking snapshot of full filesystem...
INFO[0025] COPY --from=kaniko /kaniko/executor /kaniko/
INFO[0025] Taking snapshot of files...
INFO[0026] COPY --from=kaniko /kaniko/docker-credential-ecr-login /kaniko/
INFO[0026] Taking snapshot of files...
INFO[0026] COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/
INFO[0026] Taking snapshot of files...
INFO[0026] ENV PATH=/kaniko:$PATH
INFO[0026] WORKDIR /workspace
INFO[0026] cmd: workdir
INFO[0026] Changed working directory to /workspace
INFO[0026] Skipping push to container registry due to --no-push flag

/workspace # find / -name config.json

/workspace/config.json
/kaniko/.docker/config.json
/kaniko/0/kaniko/.docker/config.json

However, if you copy the entire /kaniko folder during the build, the find command reveals the following:

/workspace # find / -name config.json

/workspace/config.json
/kaniko/.docker/config.json
/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/0/kaniko/.docker/config.json
...
...
...

You can read more to see what else you need to copy with respect to GCR and the Google APIs.

Hopefully it works for you as well since it works for me.

dsalaza4 commented 5 years ago

Wow! Looks like this actually fixed the problem. Thank you very much!

dsalaza4 commented 5 years ago

I'm reopening this issue: I just found out that although copying the executor and the other needed files fixes the initial problem, we still have some serious issues with what is actually being copied.

Let me explain in more detail.

When executing:

COPY --from=kaniko /kaniko/executor /kaniko/
COPY --from=kaniko /kaniko/.docker/config.json /kaniko/.docker/

one might expect that such files would come from the specified stage:

FROM gcr.io/kaniko-project/executor:debug as kaniko

That is not the case with kaniko. If the container you're using to build the image is also gcr.io/kaniko-project/executor:debug, the files get copied from the container that is building the image instead of from the stage declared in the Dockerfile. I know it's a little confusing, so I'll show the particular example that made me realize this.

Right after I built the container with kaniko, I went in to check the files within the /kaniko folder. There, I found out that my config.json file looked like this:

{"auths":{"index.docker.io":  {"username":"MY_USER",  "password":"MY_PASS"}}}

where both MY_USER and MY_PASS were visible. This was because, when building the image, I logged in first so I could push it to the registry:

echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
  {\"username\":\"${DOCKER_HUB_USER}\",\
  \"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json

/kaniko/executor \
  --cleanup \
  --context "dockerfiles/public/$1/" \
  --dockerfile "dockerfiles/public/$1/Dockerfile" \
  --destination "fluidattacks/$1:$2" \
  --snapshotMode time

I tried many ways to reference the files from the stage instead of the files from the building container, with no success.

I also tried building the image with docker, and voilà, it worked as expected. My config.json looked like this (the default kaniko config.json):

{
        "auths": {},
        "credHelpers": {
                "asia.gcr.io": "gcr",
                "eu.gcr.io": "gcr",
                "gcr.io": "gcr",
                "staging-k8s.gcr.io": "gcr",
                "us.gcr.io": "gcr"
        }
}

What a bummer :disappointed:

dsouzajude commented 5 years ago

@dsalaza4 Could you share your Dockerfile with me? If possible, strip it down to a minimal configuration so it's simple for me to replicate and test on my end. I would have tried it myself, but I wasn't sure how you were running the following echo-and-redirect ('>') command:

echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
  {\"username\":\"${DOCKER_HUB_USER}\",\
  \"password\":\"${DOCKER_HUB_PASS}\"}}}" > /kaniko/.docker/config.json

I think I know what the issue is, but I need to test it first on my end. I'll get back to you when you share your minimal Dockerfile configuration.

Thanks!

dsouzajude commented 5 years ago

Also, could you try the following?

Instead of writing your credentials to /kaniko/.docker/config.json, write them to a file in the whitelisted directory /home/jenkins/agent, like so:

# Write credentials to file in whitelisted directory
>> mkdir -p /home/jenkins/agent/.docker
>> echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
  {\"username\":\"${DOCKER_HUB_USER}\",\
  \"password\":\"${DOCKER_HUB_PASS}\"}}}" > /home/jenkins/agent/.docker/config.json

# Call kaniko executor to build and push docker image
>> /kaniko/executor \
  --cleanup \
  --context "dockerfiles/public/$1/" \
  --dockerfile "dockerfiles/public/$1/Dockerfile" \
  --destination "fluidattacks/$1:$2" \
  --snapshotMode time

For this to work, in your Dockerfile, set the DOCKER_CONFIG environment variable to point to the whitelisted directory as well. Something like this:

ENV DOCKER_CONFIG /home/jenkins/agent/.docker/

This tells the kaniko executor to read the docker credentials from the path in the DOCKER_CONFIG environment variable when pushing the image.

From what I understand, since /home/jenkins/agent/ is already a whitelisted directory in kaniko, kaniko won't delete it when deleting the filesystem, and it also won't save the file into the final image.

Give this a try and let me know if it works. I tried it on my end and it seemed to work.
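The mechanism described above can be sketched outside kaniko. The /tmp/agent path and the credential values here are illustrative placeholders, not kaniko defaults; whether kaniko actually preserves the directory depends on its whitelist.

```shell
# Sketch: point DOCKER_CONFIG at a directory outside /kaniko and write the
# registry credentials there, so the executor's filesystem cleanup does not
# touch the /kaniko tree that gets snapshotted into the image.
# /tmp/agent and EXAMPLE_USER/EXAMPLE_PASS are illustrative placeholders.
export DOCKER_CONFIG=/tmp/agent/.docker
mkdir -p "$DOCKER_CONFIG"
printf '{"auths":{"%s":{"username":"%s","password":"%s"}}}\n' \
  "index.docker.io" "EXAMPLE_USER" "EXAMPLE_PASS" > "$DOCKER_CONFIG/config.json"
cat "$DOCKER_CONFIG/config.json"
```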

dsalaza4 commented 5 years ago

Hi, the Dockerfile is the same one I provided in the original post. There, I provide three things:

  1. The GitLab CI script I use (it runs a bash script called build-public.sh from a kaniko:debug container).
  2. build-public.sh, which provides Docker Hub credentials to kaniko by writing them to the config.json file and then builds the container image with kaniko. This is the one that seems confusing; it's just a script called from (1).
  3. The Dockerfile used for building the image.

dsouzajude commented 5 years ago

Did you try updating build-public.sh to write the docker credentials to /home/jenkins/agent/.docker/ (or some other kaniko-whitelisted directory) and updating the DOCKER_CONFIG environment variable accordingly? Did it work for you?

dsalaza4 commented 5 years ago

Ok, so here's what I did:

  1. GitLab CI job (did not change):

    alpine-kaniko:
      stage: kaniko-setup
      image:
        name: gcr.io/kaniko-project/executor:debug
        entrypoint: [""]
      retry: 1
      script:
        - sh ./ci-scripts/build-public.sh alpine-kaniko test

  2. build-public.sh:

    export DOCKER_CONFIG='/home/jenkins/agent/.docker'
    mkdir -p "$DOCKER_CONFIG"
    echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
      {\"username\":\"${DOCKER_HUB_USER}\",\
      \"password\":\"${DOCKER_HUB_PASS}\"}}}" > "$DOCKER_CONFIG/config.json"

    /kaniko/executor \
      --cleanup \
      --context "dockerfiles/public/$1/" \
      --dockerfile "dockerfiles/public/$1/Dockerfile" \
      --destination "fluidattacks/$1:$2" \
      --snapshotMode time


  3. Dockerfile (I simplified it for debugging purposes; it isn't even multi-stage anymore):

    FROM alpine:latest

    RUN apk update && \
      apk upgrade && \
      apk add --no-cache \
        bash \
        git

    ENV PATH=/kaniko:$PATH


When running the job, I get this error:

error pushing image: failed to push to destination index.docker.io/fluidattacks/alpine-kaniko: UNAUTHORIZED: authentication required; [map[Type:repository Class: Name:fluidattacks/alpine-kaniko Action:pull] map[Type:repository Class: Name:fluidattacks/alpine-kaniko Action:push]]



It looks like kaniko cleans the config.json file before pushing to the repo. I'm not sure the folder is actually being whitelisted.

dsalaza4 commented 5 years ago

Here's an interesting thing I just discovered and don't know how to explain.

I found out that the .docker folder doesn't really have anything valuable inside at build time. What I really want to take from kaniko into the alpine container is the executor binary. I also want a .docker folder, even an empty one, so that when build-public.sh runs echo $BLAH > /kaniko/.docker/config.json, the command won't fail because the folder doesn't exist.

Which led me to this Dockerfile:

FROM gcr.io/kaniko-project/executor:latest as kaniko

FROM alpine:latest

ENV DOCKER_CONFIG='/kaniko/.docker'

RUN apk update && \
  apk upgrade && \
  apk add --no-cache \
    bash \
    git

COPY --from=kaniko /kaniko/executor /kaniko/
RUN mkdir -p /kaniko/.docker

ENV PATH=/kaniko:$PATH

Well, it turns out that kaniko now completely removes the folder. Inside the built container:

/ # ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var

The kaniko folder just disappeared.

dsouzajude commented 5 years ago

Okay, let's step back a little and recall what we were trying to solve in the second place, i.e. your credentials being saved and visible in the resulting docker image. Before that, with the solution I proposed, it worked and you were able to build a docker image.

From what I understand, although you don't require the docker credentials at build time, you still need them at runtime, i.e. in the live kaniko executor container started from the resulting image, when you run the executor to build and push images. The good thing is you don't need to bake the credentials into the image; you can inject them into a running executor container with the echo command.

The problem with this approach is that the config file holding the docker credentials got wiped out when the executor deleted the filesystem while building the image. I said kaniko had a default whitelist that prevents such directories from being deleted; however, I was wrong. By default it does not whitelist the /home/jenkins/agent directory, but you can instruct kaniko to whitelist a directory by adding a VOLUME directive to the Dockerfile (let's go with /build now instead of /home/jenkins/agent for simplicity):

VOLUME /build

So, can you try the following Dockerfile:

FROM gcr.io/kaniko-project/executor:v0.10.0 as kaniko

FROM alpine:latest

RUN apk update && \
  apk upgrade && \
  apk add --no-cache \
    bash \
    git

# Should be put after the installations above
COPY --from=kaniko /kaniko/executor /kaniko/
ENV PATH=/kaniko:$PATH

# Path to docker credentials
ENV DOCKER_CONFIG /build/.docker

# Do this to whitelist directory, store your docker credentials here
VOLUME /build

# Optional
WORKDIR /build

Then you can echo your credentials into the /build/.docker/config.json file.

The way I'm doing it in my builds is the following:

>> docker build -f Dockerfile.test -t my-registry/my-repo:pr179 .

>> docker push my-registry/my-repo:pr179

Now I use my-registry/my-repo:pr179 inside my CI pipeline to build images with kaniko going forward. It can also be used to build the same image recursively with kaniko. So think of this image as a seed image built by Docker; from here on, all images are built inside this container with the kaniko executor, as in the following:

>> mkdir -p /build/.docker

>> echo "{\"auths\":{\"${DOCKER_HUB_URL}\":\
  {\"username\":\"${DOCKER_HUB_USER}\",\
  \"password\":\"${DOCKER_HUB_PASS}\"}}}" > "/build/.docker/config.json"

>> /kaniko/executor \
  --cleanup \
  --context "dockerfiles/public/$1/" \
  --dockerfile "dockerfiles/public/$1/Dockerfile" \
  --destination "fluidattacks/$1:$2" \
  --snapshotMode time
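As a variant of the echo above, the same config.json can be written with a heredoc using the standard base64-encoded `auth` field that Docker's config format accepts. This is only a sketch: in the image from the Dockerfile, DOCKER_CONFIG points at the whitelisted /build/.docker, and here the script falls back to a temp directory so it can be tried anywhere; the DOCKER_HUB_* variables are assumed to come from your CI secrets.

```shell
#!/bin/sh
# Sketch of the credential step above. In the image built from the
# Dockerfile, DOCKER_CONFIG is /build/.docker (the whitelisted VOLUME);
# outside it we fall back to a temp directory. DOCKER_HUB_URL,
# DOCKER_HUB_USER and DOCKER_HUB_PASS are assumed CI secrets.
CONFIG_DIR="${DOCKER_CONFIG:-${TMPDIR:-/tmp}/.docker}"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/config.json" <<EOF
{
  "auths": {
    "${DOCKER_HUB_URL}": {
      "auth": "$(printf '%s:%s' "$DOCKER_HUB_USER" "$DOCKER_HUB_PASS" | base64)"
    }
  }
}
EOF
echo "wrote $CONFIG_DIR/config.json"
```

Either form works; the `auth` field is simply `base64(username:password)`, which some registries and tools handle more uniformly than separate `username`/`password` keys.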

The reason it worked for me before is that I've been using the jenkins/jnlp-slave:alpine base image instead of plain alpine, and that image already has a VOLUME directive set to /home/jenkins/agent, which made kaniko automatically whitelist the directory. With the setup above, the /build directory should now be whitelisted, and your credentials shouldn't be wiped out when building and pushing the image.

I've tried it on my end, hopefully it works for you too.

dsalaza4 commented 5 years ago

I tried with the VOLUME directive, and also with the --single-snapshot flag. Neither attempt worked. The issue seems related to the fact that kaniko protects the /kaniko folder in the first stage but deletes it in the second. I decided to build these images with Docker instead, as kaniko doesn't seem capable of it :disappointed:

dsouzajude commented 5 years ago

Did you put the VOLUME directive on the /kaniko folder? I believe you shouldn't do that. Follow the Dockerfile I presented: put the VOLUME directive on the /build directory, create it in your shell, store your Docker credentials there, and then try again. Let me know how it works with that setup.

cvgw commented 4 years ago

As a note: using kaniko in a non-official image, or copying the binaries into a non-official image, is not supported. There may be a way to make it work, but YMMV.

tejal29 commented 4 years ago

@dsalaza4 Sorry for the late response. /kaniko is a special directory that the tool uses as a build workspace. The workaround mentioned above, where you copy only the specific files you need, is the right thing to do.

samheutmaker commented 4 years ago

@tejal29 @cvgw Any progress on this?

stephen-dexda commented 4 years ago

As a note: using kaniko in a non-official image, or copying the binaries into a non-official image, is not supported. There may be a way to make it work, but YMMV.

The problem is that the official kaniko images are built FROM scratch, so if you want to use e.g. Git to extract useful build/tagging info from revision control, you're SOL. kaniko:debug just adds BusyBox, and at least in the case of Git there are no official static binaries available that could easily be added to the kaniko image.

TBH it would be much better if there were e.g. a kaniko:alpine image available.

tejal29 commented 4 years ago

The problem is that the official kaniko images are built FROM scratch, so if you want to use e.g. Git to extract useful build/tagging info from revision control, you're SOL. kaniko:debug just adds BusyBox, and at least in the case of Git there are no official static binaries available that could easily be added to the kaniko image.

@stephen-dexda is it possible for your build process to compute the digest using git tags before spinning up a kaniko pod? https://github.com/GoogleContainerTools/skaffold does this. Would it be possible to use skaffold to spin up a kaniko pod?
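Concretely, the tag can be computed on the CI host (where git is available) before the kaniko pod is created, so the kaniko image itself never needs git. A minimal sketch, assuming it runs inside the CI checkout; the registry/repo name is a placeholder, and it falls back to "dev" outside a git repository:

```shell
#!/bin/sh
# Sketch: derive the image tag from revision control on the CI side,
# before the kaniko pod starts. "my-registry/my-repo" is a placeholder.
# Falls back to "dev" when git is unavailable or this isn't a checkout.
TAG="$(git describe --tags --always 2>/dev/null || echo dev)"
[ -n "$TAG" ] || TAG=dev
echo "would run: /kaniko/executor --destination my-registry/my-repo:${TAG}"
```

The computed tag would then be passed to the kaniko pod spec (e.g. as an argument or environment variable), which is essentially what skaffold automates.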

trevor-vaughan commented 2 years ago

I've also run into this issue.

In my case, I'm attempting to work around a bug where I need to run git lfs pull prior to using kaniko to build the image.

Since the kaniko image does not have git-lfs installed, I have to fall back on pulling the materials out of the image and stuffing them into one that has git-lfs support.

It would be nice if there were a supported method for running kaniko in more flexible environments.

robbydyer commented 2 years ago

Based on my own testing, this issue appears to be specific to v1.7.0 and possibly older versions. I would suggest pinning to a specific version rather than using the mutable debug tag, e.g. v1.8.1-debug.

Reproduce the error:

docker run --rm \
  -v $(pwd):/workspace \
  gcr.io/kaniko-project/executor:v1.7.0-debug \
  --force \
  --dockerfile=/workspace/Dockerfile \
  --context=dir:///workspace \
  --no-push

With Dockerfile:

FROM gcr.io/kaniko-project/executor:v1.7.0-debug as kaniko

FROM alpine

COPY --from=kaniko /kaniko/executor /kaniko/executor
COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env

Gives an error like:

INFO[0014] COPY --from=kaniko /kaniko/docker-credential-acr-env /kaniko/docker-credential-acr-env
error building image: error building stage: failed to execute command: resolving src: failed to get fileinfo for /kaniko/0/kaniko/docker-credential-acr-env: lstat /kaniko/0/kaniko/docker-credential-acr-env: no such file or directory

If you change the docker tag to v1.8.1-debug, the build passes.