Closed: devent closed this issue 2 years ago
I now have the Dockerfile doing just an apt-get install,
and the binaries are still missing.
FROM debian:bullseye-20220328
LABEL maintainer "Erwin Müller <erwin@muellerpublic.de>"
RUN set -x; \
apt-get update && \
apt-get install -y curl openssh-client gnupg gpg-agent git make && \
rm -rf /var/lib/apt/lists/*
Log output: https://jenkins.anrisoftware.com/job/robobeerun-jenkins-helm-docker/job/Feature_4572/19/console
Here is the image: https://harbor.anrisoftware.com/harbor/projects/2/repositories/jenkins-helm/artifacts/sha256:d00901453c28964a8d0388008ee206237c905914c6b84c1e7795987893a70d5f
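To confirm which binaries actually made it into a pushed image, something like the following hypothetical helper can be used (not part of the original report; the image tag in the usage comment is a placeholder, and RUNNER is a parameter only so the loop logic can be exercised without a Docker daemon):

```shell
# Check whether each named binary exists inside a built image.
# RUNNER defaults to "docker run --rm"; override it for dry-run testing.
check_bins() {
  img=$1; shift
  for bin in "$@"; do
    ${RUNNER:-docker run --rm} "$img" sh -c "command -v $bin >/dev/null" \
      || echo "MISSING: $bin"
  done
}

# Usage (placeholder image tag):
# check_bins myregistry/jenkins-helm:test curl ssh git gpg make
```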
Confirming a related bug: in several cases only the binaries, or the whole data directory, are missing from the built image in Kubernetes with the containerd runtime.
Versions:
KUBERNETES-VERSION OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
v1.23.4 CentOS Stream 8 4.18.0-365.el8.x86_64 containerd://1.6.1
Manifest to reproduce the whole data directory going missing in Kubernetes:
apiVersion: v1
kind: ConfigMap
metadata:
name: test-files
data:
Dockerfile: |
FROM node:16.14.0-alpine
RUN apk add --no-cache python3
WORKDIR /home/node/app
COPY . ./
RUN ls -laht /home/node/app
env: |
test=test
---
apiVersion: batch/v1
kind: Job
metadata:
name: front-test-job
spec:
template:
metadata:
name: front-dev-init
spec:
restartPolicy: Never
containers:
- name: buildimage
image: gcr.io/kaniko-project/executor:latest
args:
- "--dockerfile=/Dockerfile"
- "--context=dir:///home/node/app"
- "--destination=myregistry:443/testrepo:test"
resources: {}
workingDir: /home/node/app
volumeMounts:
- name: code-volume
mountPath: /home/node
- name: test-files
mountPath: /Dockerfile
subPath: Dockerfile
initContainers:
- name: clone-repo
command:
- sh
- -c
- git clone https://github.com/Azure-Samples/html-docs-hello-world.git /home/node/app && cp /env /home/node/app/.env
workingDir: /home/node/app
image: alpine/git
imagePullPolicy: Always
resources: {}
volumeMounts:
- mountPath: /home/node
name: code-volume
- name: test-files
mountPath: /env
subPath: env
volumes:
- name: code-volume
emptyDir: {}
- name: test-files
configMap:
name: test-files
Check whether the data is missing:
docker run myregistry:443/testrepo:test ls /home/node/app
BUT if I remove RUN apk add --no-cache python3
from the Dockerfile and try to reproduce again, the files will be there:
docker run myregistry:443/testrepo:test ls /home/node/app
LICENSE
README.md
css
fonts
img
index.html
js
But if I build something, for example with yarn, some files can be missing too, though not the whole directory.
Pod log when files are missing:
INFO[0000] Retrieving image manifest node:16.14.0-alpine
INFO[0000] Retrieving image node:16.14.0-alpine from registry index.docker.io
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest node:16.14.0-alpine
INFO[0001] Returning cached image manifest
INFO[0001] Executing 0 build triggers
INFO[0001] Unpacking rootfs as cmd RUN apk add --no-cache python3 requires it.
INFO[0004] RUN apk add --no-cache python3
INFO[0004] Taking snapshot of full filesystem...
INFO[0005] cmd: /bin/sh
INFO[0005] args: [-c apk add --no-cache python3]
INFO[0005] Running: [/bin/sh -c apk add --no-cache python3]
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/11) Installing libbz2 (1.0.8-r1)
(2/11) Installing expat (2.4.7-r0)
(3/11) Installing libffi (3.4.2-r1)
(4/11) Installing gdbm (1.22-r0)
(5/11) Installing xz-libs (5.2.5-r1)
(6/11) Installing mpdecimal (2.5.1-r1)
(7/11) Installing ncurses-terminfo-base (6.3_p20211120-r0)
(8/11) Installing ncurses-libs (6.3_p20211120-r0)
(9/11) Installing readline (8.1.1-r0)
(10/11) Installing sqlite-libs (3.36.0-r0)
(11/11) Installing python3 (3.9.7-r4)
Executing busybox-1.34.1-r4.trigger
OK: 56 MiB in 27 packages
INFO[0006] Taking snapshot of full filesystem...
INFO[0008] WORKDIR /home/node/app
INFO[0008] cmd: workdir
INFO[0008] Changed working directory to /home/node/app
INFO[0008] No files changed in this command, skipping snapshotting.
INFO[0008] COPY . ./
INFO[0008] Taking snapshot of files...
INFO[0008] RUN ls -laht /home/node/app
INFO[0008] cmd: /bin/sh
INFO[0008] args: [-c ls -laht /home/node/app]
INFO[0008] Running: [/bin/sh -c ls -laht /home/node/app]
total 52K
drwxr-xr-x 7 root root 4.0K Apr 25 09:33 .
-rw-r--r-- 1 root root 10 Apr 25 09:33 .env
drwxr-xr-x 8 root root 4.0K Apr 25 09:33 .git
-rw-r--r-- 1 root root 4.1K Apr 25 09:33 .gitignore
-rw-r--r-- 1 root root 1.2K Apr 25 09:33 LICENSE
-rw-r--r-- 1 root root 608 Apr 25 09:33 README.md
drwxr-xr-x 2 root root 4.0K Apr 25 09:33 css
drwxr-xr-x 2 root root 4.0K Apr 25 09:33 fonts
drwxr-xr-x 2 root root 4.0K Apr 25 09:33 img
-rw-r--r-- 1 root root 1.7K Apr 25 09:33 index.html
drwxr-xr-x 2 root root 4.0K Apr 25 09:33 js
drwxrwxrwx 3 root root 4.0K Apr 25 09:33 ..
INFO[0008] Taking snapshot of full filesystem...
INFO[0008] Pushing image to myregistry/testrepo:test
INFO[0009] Pushed myregistry/testrepo@sha256:add254c2334297fbf12e26d9d5b7c6547010fc9a4a044f284a7e2d9bab56fb72
This reproduces only in Kubernetes; with Docker it's okay. Script example running the same Dockerfile with Docker:
docker run -d -p 5001:5000 --restart=always --name registry registry:2
mkdir ~/testkaniko/
echo test > ~/testkaniko/test
cat > ~/testkaniko/Dockerfile << 'EOL'
FROM node:16.14.0-alpine
RUN apk add --no-cache python3
WORKDIR /home/node/app
COPY . ./
RUN ls -laht /home/node/app
EOL
docker run --network="host" -v ~/testkaniko:/test gcr.io/kaniko-project/executor:latest --insecure --skip-tls-verify --dockerfile=/test/Dockerfile --context=dir:///test --destination=127.0.0.1:5001/testrepo:test
docker run 127.0.0.1:5001/testrepo:test ls -laht /home/node/app
Docker kaniko log:
INFO[0000] Retrieving image manifest node:16.14.0-alpine
INFO[0000] Retrieving image node:16.14.0-alpine from registry index.docker.io
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest node:16.14.0-alpine
INFO[0001] Returning cached image manifest
INFO[0001] Executing 0 build triggers
INFO[0001] Unpacking rootfs as cmd RUN apk add --no-cache python3 requires it.
INFO[0005] RUN apk add --no-cache python3
INFO[0005] Taking snapshot of full filesystem...
INFO[0006] cmd: /bin/sh
INFO[0006] args: [-c apk add --no-cache python3]
INFO[0006] Running: [/bin/sh -c apk add --no-cache python3]
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.15/community/x86_64/APKINDEX.tar.gz
(1/11) Installing libbz2 (1.0.8-r1)
(2/11) Installing expat (2.4.7-r0)
(3/11) Installing libffi (3.4.2-r1)
(4/11) Installing gdbm (1.22-r0)
(5/11) Installing xz-libs (5.2.5-r1)
(6/11) Installing mpdecimal (2.5.1-r1)
(7/11) Installing ncurses-terminfo-base (6.3_p20211120-r0)
(8/11) Installing ncurses-libs (6.3_p20211120-r0)
(9/11) Installing readline (8.1.1-r0)
(10/11) Installing sqlite-libs (3.36.0-r0)
(11/11) Installing python3 (3.9.7-r4)
Executing busybox-1.34.1-r4.trigger
OK: 56 MiB in 27 packages
INFO[0008] Taking snapshot of full filesystem...
INFO[0010] WORKDIR /home/node/app
INFO[0010] cmd: workdir
INFO[0010] Changed working directory to /home/node/app
INFO[0010] Creating directory /home/node/app
INFO[0010] Taking snapshot of files...
INFO[0010] COPY . ./
INFO[0010] Taking snapshot of files...
INFO[0010] RUN ls -laht /home/node/app
INFO[0010] cmd: /bin/sh
INFO[0010] args: [-c ls -laht /home/node/app]
INFO[0010] Running: [/bin/sh -c ls -laht /home/node/app]
total 16K
drwxr-xr-x 2 root root 4.0K Apr 25 09:51 .
drwxr-sr-x 3 node node 4.0K Apr 25 09:51 ..
-rw-r--r-- 1 root root 117 Apr 25 09:51 Dockerfile
-rw-r--r-- 1 root root 5 Apr 25 09:51 test
INFO[0010] Taking snapshot of full filesystem...
INFO[0010] No files were changed, appending empty layer to config. No layer added to image.
INFO[0010] Pushing image to 127.0.0.1:5001/testrepo:test
INFO[0014] Pushed 127.0.0.1:5001/testrepo@sha256:ca91b9fcfdf84dd59d2b989c1b14c303e134b7270405f371ed228c1e3bd5a977
docker run 127.0.0.1:5001/testrepo:test ls -laht /home/node/app
total 16K
drwxr-xr-x 1 root root 4.0K Apr 25 09:51 .
drwxr-sr-x 1 node node 4.0K Apr 25 09:51 ..
-rw-r--r-- 1 root root 117 Apr 25 09:51 Dockerfile
-rw-r--r-- 1 root root 5 Apr 25 09:51 test
Files ok.
I succeeded in avoiding this bug by changing the kaniko executor's context directory so that it differs from the WORKDIR in the Dockerfile.
Related to this issue: https://github.com/GoogleContainerTools/kaniko/issues/2021
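For context on why a WORKDIR/context overlap could lose files: kaniko's snapshotting can rely on file timestamps (e.g. with --snapshot-mode=time). The following is a minimal shell sketch of the general pitfall of mtime-based change detection, as an illustration only, not kaniko's actual code:

```shell
# A file written with a preserved (old) mtime looks "unchanged" to naive
# mtime-based change detection, so a snapshotter using timestamps misses it.
demo=$(mktemp -d)
cd "$demo"
touch -d '2020-01-01' old.txt   # pre-existing file with an old mtime
touch marker                    # pretend a snapshot was taken now
sleep 1
touch new.txt                   # genuinely new file, fresh mtime
cp -p old.txt copied.txt        # also new, but its mtime is preserved as 2020
# Detect files newer than the snapshot: copied.txt is silently missed.
find . -type f -newer marker
```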
Working manifest:
apiVersion: v1
kind: ConfigMap
metadata:
name: test-files
data:
Dockerfile: |
FROM node:16.14.0-alpine
RUN apk add --no-cache python3
WORKDIR /home/node/app
COPY . ./
RUN ls -laht /home/node/app
env: |
test=test
---
apiVersion: batch/v1
kind: Job
metadata:
name: front-test-job
spec:
template:
metadata:
name: front-dev-init
spec:
restartPolicy: Never
containers:
- name: buildimage
image: gcr.io/kaniko-project/executor:latest
args:
- "--dockerfile=/Dockerfile"
- "--context=dir:///code"
- "--destination=myregistry:443/testrepo:test"
resources: {}
workingDir: /
volumeMounts:
- name: code-volume
mountPath: /code
- name: test-files
mountPath: /Dockerfile
subPath: Dockerfile
initContainers:
- name: clone-repo
command:
- sh
- -c
- git clone https://github.com/Azure-Samples/html-docs-hello-world.git /code && /bin/cp /env /code/.env
workingDir: /code
image: alpine/git
imagePullPolicy: Always
resources: {}
volumeMounts:
- name: code-volume
mountPath: /code
- name: test-files
mountPath: /env
subPath: env
volumes:
- name: code-volume
emptyDir: {}
- name: test-files
configMap:
name: test-files
Same issue here, also with a Node.js Alpine-based image. But unfortunately, changing the WORKDIR does not fix the issue.
Is there any plan to fix this issue? It is quite annoying and unpredictable.
EDIT: I finally switched from an Alpine-based to a Debian-based Node.js image and it worked, but it's more of a workaround than a solution...
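For reference, that workaround is just a base-image swap in the FROM line. A sketch, written heredoc-style like the repro script above (the specific Debian-based tag node:16.14.0-bullseye-slim and the apt-get invocation are my assumptions, not taken from the comment):

```shell
# Debian-based variant of the repro Dockerfile: apk is replaced by apt-get.
cat > /tmp/Dockerfile.debian << 'EOL'
FROM node:16.14.0-bullseye-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /home/node/app
COPY . ./
EOL
cat /tmp/Dockerfile.debian
```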
@achendev: Can you test with the latest image on main?
Use the following instead of ...:1.8.1:
...:1c0e5a0aca7f40f7a747cc89a365d0300015b06d
I think this is fixed.
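Assuming the elided image name is the gcr.io/kaniko-project/executor image used in the manifests above, testing that commit build amounts to swapping the tag in the Job manifest. A sketch against a placeholder file:

```shell
# Placeholder manifest line, for illustration only:
printf 'image: gcr.io/kaniko-project/executor:latest\n' > /tmp/kaniko-job.yaml

# Pin the executor to the commit-tagged build instead of a release tag.
NEW=gcr.io/kaniko-project/executor:1c0e5a0aca7f40f7a747cc89a365d0300015b06d
sed -i "s|gcr.io/kaniko-project/executor:.*|$NEW|" /tmp/kaniko-job.yaml
cat /tmp/kaniko-job.yaml
```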
We had the same issue too, with missing binaries in /usr/bin
when using a Node Alpine image; the issue was not present in a node-lts image.
We have tested with the image mentioned above (...:1c0e5a0aca7f40f7a747cc89a365d0300015b06d)
and can confirm it's fixed for us.
Are you able to give an indication of how far away we are from getting it into a published kaniko release?
Actual behavior
Binaries are missing from /usr/bin.
Expected behavior
ssh-agent, git, gpg, etc. are all present in /usr/bin.
To Reproduce
Additional Information
--cache flag
Here is the log output that clearly shows the binaries installed in the last layer:
But if I try to run the image, the binaries are no longer there: