GoogleContainerTools / kaniko

Build Container Images In Kubernetes
Apache License 2.0

COPY failed with "lstat /workspace/xxx: no such file or directory" #1723

Open aaronkan007 opened 3 years ago

aaronkan007 commented 3 years ago

Actual behavior

I'm using kaniko to test COPY in a Dockerfile. Both the executor:latest and executor:debug images fail with the same error, so I'm confused about how COPY in a Dockerfile should be used with kaniko.

Expected behavior

COPY in a Dockerfile should work with kaniko the same way it does with docker build.

To Reproduce

Steps to reproduce the behavior:

  1. My Dockerfile:
    FROM alpine:latest as builder
    RUN mkdir /data/
    WORKDIR /data/
    COPY . .
  2. Here is the command: /kaniko/executor --dockerfile=/workspace/Dockerfile --context=/workspace --destination=xxx --skip-tls-verify
  3. Here is my output:
    INFO[0000] Resolved base name alpine:latest to builder
    INFO[0000] Retrieving image manifest alpine:latest
    INFO[0000] Retrieving image alpine:latest from registry index.docker.io
    INFO[0016] Built cross stage deps: map[]
    INFO[0016] Retrieving image manifest alpine:latest
    INFO[0016] Returning cached image manifest
    INFO[0016] Executing 0 build triggers
    INFO[0016] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
    INFO[0029] RUN mkdir /data/
    INFO[0029] Taking snapshot of full filesystem...
    INFO[0029] cmd: /bin/sh
    INFO[0029] args: [-c mkdir /data/]
    INFO[0029] Running: [/bin/sh -c mkdir /data/]
    INFO[0029] Taking snapshot of full filesystem...
    INFO[0030] WORKDIR /data/
    INFO[0030] cmd: workdir
    INFO[0030] Changed working directory to /data
    INFO[0030] No files changed in this command, skipping snapshotting.
    error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory
  4. I tried another Dockerfile that uses '/workspace' as the base directory, and it worked:
    FROM alpine:latest
    RUN echo 'test for copy'> /workspace/test.log
    RUN mkdir -p /workspace/opt/
    COPY ./test.log /workspace/opt/
  5. The output using '/workspace':
    INFO[0000] Retrieving image manifest alpine:latest
    INFO[0000] Retrieving image alpine:latest from registry index.docker.io
    INFO[0003] Built cross stage deps: map[]
    INFO[0003] Retrieving image manifest alpine:latest
    INFO[0003] Returning cached image manifest
    INFO[0003] Executing 0 build triggers
    INFO[0003] Unpacking rootfs as cmd RUN echo 'test for copy'> /workspace/test.log requires it.
    INFO[0012] RUN echo 'test for copy'> /workspace/test.log
    INFO[0012] Taking snapshot of full filesystem...
    INFO[0012] cmd: /bin/sh
    INFO[0012] args: [-c echo 'test for copy'> /workspace/test.log]
    INFO[0012] Running: [/bin/sh -c echo 'test for copy'> /workspace/test.log]
    INFO[0012] Taking snapshot of full filesystem...
    INFO[0012] No files were changed, appending empty layer to config. No layer added to image.
    INFO[0012] RUN mkdir -p /workspace/opt/
    INFO[0012] cmd: /bin/sh
    INFO[0012] args: [-c mkdir -p /workspace/opt/]
    INFO[0012] Running: [/bin/sh -c mkdir -p /workspace/opt/]
    INFO[0012] Taking snapshot of full filesystem...
    INFO[0012] No files were changed, appending empty layer to config. No layer added to image.
    INFO[0012] COPY ./test.log /workspace/opt/
    INFO[0012] Taking snapshot of files...
    INFO[0012] Pushing image to xxx
    INFO[0023] Pushed image to 1 destinations
  6. It seems that COPY always resolves from the /workspace directory. I know that COPY should not use absolute paths in docker build, but what is the correct path to use other than '/workspace' in kaniko?

Additional Information

kvaps commented 3 years ago

You should not use COPY ./test.log /workspace/opt/. Use COPY ./test.log /opt/ instead.

Also, when you use WORKDIR /data/, it will look for /data/test.log, not /test.log, and obviously will not find it.
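Putting both points together: the COPY source is resolved against the build context (the directory passed via --context), and the destination is an ordinary path inside the image, so /workspace should never appear in the Dockerfile. A minimal corrected sketch, with illustrative paths:

```dockerfile
FROM alpine:latest
# ./test.log is looked up in the build context, not in the image filesystem
COPY ./test.log /opt/
# Alternatively, with WORKDIR set, the destination may be relative:
WORKDIR /opt/
COPY ./test.log .
```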

aaronkan007 commented 3 years ago

You should not use COPY ./test.log /workspace/opt/. Use COPY ./test.log /opt/ instead.

Also, when you use WORKDIR /data/, it will look for /data/test.log, not /test.log, and obviously will not find it.

Thanks for your reply, @kvaps. My first Dockerfile may have been misleading: the error is actually caused by the 'WORKDIR' command rather than 'COPY'. Here is a new Dockerfile that contains only 'WORKDIR':

FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/

And the build with the kaniko executor debug tag shows:

INFO[0000] Resolved base name alpine:latest to builder
INFO[0000] Retrieving image manifest alpine:latest
INFO[0000] Retrieving image alpine:latest from registry index.docker.io
INFO[0007] Built cross stage deps: map[]
INFO[0007] Retrieving image manifest alpine:latest
INFO[0007] Returning cached image manifest
INFO[0007] Executing 0 build triggers
INFO[0007] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0017] RUN mkdir /data/
INFO[0017] Taking snapshot of full filesystem...
INFO[0017] cmd: /bin/sh
INFO[0017] args: [-c mkdir /data/]
INFO[0017] Running: [/bin/sh -c mkdir /data/]
INFO[0017] Taking snapshot of full filesystem...
INFO[0017] WORKDIR /data/
INFO[0017] cmd: workdir
INFO[0017] Changed working directory to /data
INFO[0017] No files changed in this command, skipping snapshotting.
error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory

Besides the 'WORKDIR' command, my previous question was about how to use 'COPY' in kaniko. I can mount the files into '/workspace' when creating the kaniko executor container as a workaround for now, but I am not quite sure about the official usage of 'WORKDIR' and 'COPY' in kaniko. Any advice is helpful. Thanks in advance.

kvaps commented 3 years ago

Just use it the same way as in Docker. /workspace is a service directory for holding the context while building; you should not reference it anywhere in your Dockerfile.

I can mount the files into '/workspace' when creating the kaniko executor container as a workaround for now

Try the kubectl-build plugin; it sends your context to the kaniko container via stdin without any mounts πŸ™‚

aaronkan007 commented 3 years ago

Just use it the same way as in Docker. /workspace is a service directory for holding the context while building; you should not reference it anywhere in your Dockerfile.

I can mount the files into '/workspace' when creating the kaniko executor container as a workaround for now

Try the kubectl-build plugin; it sends your context to the kaniko container via stdin without any mounts πŸ™‚

Thanks, kvaps. I made a new comparison between 'WORKDIR' and 'COPY'; here is the result:

  1. dockerfile 1
    FROM alpine:latest as builder
    RUN mkdir /data/
    WORKDIR /data/
  2. the build result with dockerfile 1: success
    INFO[0004] Resolved base name alpine:latest to builder
    INFO[0004] Retrieving image manifest alpine:latest
    INFO[0004] Retrieving image alpine:latest from registry index.docker.io
    INFO[0010] Built cross stage deps: map[]
    INFO[0010] Retrieving image manifest alpine:latest
    INFO[0010] Returning cached image manifest
    INFO[0010] Executing 0 build triggers
    INFO[0010] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
    INFO[0019] RUN mkdir /data/
    INFO[0019] Taking snapshot of full filesystem...
    INFO[0019] cmd: /bin/sh
    INFO[0019] args: [-c mkdir /data/]
    INFO[0019] Running: [/bin/sh -c mkdir /data/]
    INFO[0019] Taking snapshot of full filesystem...
    INFO[0019] WORKDIR /data/
    INFO[0019] cmd: workdir
    INFO[0019] Changed working directory to /data
    INFO[0019] No files changed in this command, skipping snapshotting.
    INFO[0019] Pushing image to xxx
    INFO[0026] Pushed image to 1 destinations
  3. dockerfile 2: identical to dockerfile 1, except for the added 'COPY . .'
    FROM alpine:latest as builder
    RUN mkdir /data/
    WORKDIR /data/
    COPY . .
  4. the build result with dockerfile 2: failed
    INFO[0000] GET KEYCHAIN
    INFO[0003] Resolved base name alpine:latest to builder
    INFO[0003] Retrieving image manifest alpine:latest
    INFO[0003] Retrieving image alpine:latest from registry index.docker.io
    INFO[0003] GET KEYCHAIN
    INFO[0008] Built cross stage deps: map[]
    INFO[0008] Retrieving image manifest alpine:latest
    INFO[0008] Returning cached image manifest
    INFO[0008] Executing 0 build triggers
    INFO[0008] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
    INFO[0016] RUN mkdir /data/
    INFO[0016] Taking snapshot of full filesystem...
    INFO[0016] cmd: /bin/sh
    INFO[0016] args: [-c mkdir /data/]
    INFO[0016] Running: [/bin/sh -c mkdir /data/]
    INFO[0016] Taking snapshot of full filesystem...
    INFO[0017] WORKDIR /data/
    INFO[0017] cmd: workdir
    INFO[0017] Changed working directory to /data
    INFO[0017] No files changed in this command, skipping snapshotting.
    error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory

The only difference between dockerfiles 1 and 2 is the 'COPY . .', yet the error mentions a '/workspace/workspace' directory that appears nowhere in the Dockerfile. Any suggestions about the 'COPY . .' command?

kvaps commented 3 years ago

Unfortunately I can't reproduce:

mkdir -p /tmp/3
cat > /tmp/3/Dockerfile <<\EOF
FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/
COPY . .
EOF
docker run --rm -v /tmp/3:/workspace --entrypoint=/kaniko/executor gcr.io/kaniko-project/executor:v1.6.0 --dockerfile=Dockerfile --context=/workspace --no-push

output:

INFO[0000] Resolved base name alpine:latest to builder
INFO[0000] Retrieving image manifest alpine:latest
INFO[0000] Retrieving image alpine:latest from registry index.docker.io
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest alpine:latest
INFO[0001] Returning cached image manifest
INFO[0001] Executing 0 build triggers
INFO[0001] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0001] RUN mkdir /data/
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] cmd: /bin/sh
INFO[0001] args: [-c mkdir /data/]
INFO[0001] Running: [/bin/sh -c mkdir /data/]
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] WORKDIR /data/
INFO[0001] cmd: workdir
INFO[0001] Changed working directory to /data
INFO[0001] No files changed in this command, skipping snapshotting.
INFO[0001] COPY . .
INFO[0001] Taking snapshot of files...
INFO[0001] Skipping push to container registry due to --no-push flag

What am I doing wrong?

aaronkan007 commented 3 years ago

Unfortunately I can't reproduce: […] What am I doing wrong?

Thanks, kvaps, and sorry for my late reply. My previous test was done in a Kubernetes environment, where I followed the guide to create the PV, PVC, and Pod.

I noticed that the difference between the docker command and the Kubernetes YAML might be this line, so I made a new test replacing '--context=dir://workspace' with '--context=/workspace', and it succeeded.

The kaniko docs say that the build context should use the 'dir://' prefix when the source is a local directory (doc). So I'm not sure whether this is a bug in the Kubernetes environment when using 'COPY'.

The following is my failed pod yaml:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-dir
spec:
  containers:
  - name: kaniko-dir
    image: gcr.io/kaniko-project/executor:latest
    imagePullPolicy: IfNotPresent
    args:
    - "--dockerfile=/workspace/Dockerfile"
    - "--context=dir://workspace"
    - "--no-push"
    volumeMounts:
    - name: kanikoconfig
      mountPath: /workspace
  restartPolicy: Never
  volumes:
  - name: kanikoconfig
    persistentVolumeClaim:
      claimName: dockerfile-claim

The PV and PVC YAML are the same as in the example linked above. The build failed with this error:

error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory
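The doubled '/workspace/workspace' is consistent with the path after 'dir://' being treated as relative and joined to the executor's working directory, which is /workspace. A small shell sketch of that hypothesis (an illustration of the suspected path resolution, not kaniko's actual code):

```shell
# Hypothesis: a non-absolute path taken from the dir:// context URI
# is joined to the executor's working directory inside the container.
ctx="workspace"      # the path portion of --context=dir://workspace
cwd="/workspace"     # kaniko's working directory in the container
case "$ctx" in
  /*) resolved="$ctx" ;;          # absolute: used as-is
  *)  resolved="$cwd/$ctx" ;;     # relative: joined to the working directory
esac
echo "$resolved"     # -> /workspace/workspace, matching the lstat error
```

Under this reading, making the path absolute after the prefix (e.g. --context=dir:///workspace) might avoid the doubled lookup, and plain --context=/workspace is confirmed working above.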

mo-dayyeh commented 8 months ago

Facing a similar issue. Error from my Jenkins build job:

Step #0 - "build-and-push-container": error building image: error building stage: failed to optimize instructions: failed to get files used from context: failed to get fileinfo for /kaniko/0/apps/agency-admin-frontend/dist: lstat /kaniko/0/apps/agency-admin-frontend/dist: no such file or directory

I'm hitting this when using kaniko with Cloud Build for a multi-stage build: a Node.js build-stage, then copying the static output from build-stage into an nginx image.

I cannot set the context differently: if I set it to work for the nginx stage, it breaks during the node stage build, and vice versa. Am I doing something wrong here?

FROM node:20.11.1-alpine3.18 AS build-stage
ARG NODE_ENV
ENV NODE_ENV=${NODE_ENV}
ARG TARGET_ENV
ENV TARGET_ENV=${TARGET_ENV}
ARG VITE_BASE_URL
ENV VITE_BASE_URL=${VITE_BASE_URL}
ARG VITE_API_URL
ENV VITE_API_URL=${VITE_API_URL}

WORKDIR /app

COPY ./*.json ./
COPY ./global.d.ts ./global.d.ts

## AUTOGENERATED FREQUENTLY CHANGING DEPENDENCIES START
...
...
COPY ./packages/ui-kit ./packages/ui-kit
COPY ./apps/agency-admin-frontend ./apps/agency-admin-frontend
## AUTOGENERATED FREQUENTLY CHANGING DEPENDENCIES END

RUN npm install \
    && npx nx run-many --target=build --nx-bail \
    && npm cache clean --force \
    && npm run \
    && rm -rf node_modules/.cache

FROM nginx:stable-alpine
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From 'build' stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=build-stage ./apps/agency-admin-frontend/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
##----------
##Cloudbuild
##-----------
steps:
- id: build-and-push-container
  name: "gcr.io/kaniko-project/executor:v1.20.1"
  args: ["--dockerfile=$_INFRASTRUCTURE_ROOT/$_DOCKERFILE_NAME", "--destination=${_IMAGE_NAME}:${_COMMIT_SHA}", "--cache=true", "--compressed-caching=false", "--use-new-run", "--cache-ttl=${_CACHE_TTL_HOURS}h"]
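In Docker, a relative source path in COPY --from is resolved against the root of the named stage's filesystem, not against that stage's WORKDIR, and the kaniko error (lstat /kaniko/0/apps/...) suggests it behaves the same way. Since build-stage sets WORKDIR /app, the build artifacts presumably live under /app, so an absolute source path should work. A sketch of the nginx stage, assuming the build output really lands in /app/apps/agency-admin-frontend/dist:

```dockerfile
FROM nginx:stable-alpine
RUN rm -rf /usr/share/nginx/html/*
# Absolute path rooted at build-stage's filesystem, not at its WORKDIR
COPY --from=build-stage /app/apps/agency-admin-frontend/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
```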

ihelmer07 commented 5 months ago

@mo-dayyeh I'm having a similar issue. Did you find a solution?

marcinx64 commented 3 weeks ago

In the case of Jenkins, you have to set '--context' to the current Jenkins workspace, or copy all files from the Jenkins workspace to '/workspace' (I've only tested the first one), e.g.: sh "/kaniko/executor --context=${workspace} ..."
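Spelled out, a hypothetical Jenkins shell step along those lines (paths and flags illustrative; only the --context choice is the point) would pass the job's workspace directly instead of relying on /workspace:

```shell
# Inside a kaniko container in the Jenkins agent pod:
# use the job's checkout directory as both the Dockerfile location
# and the build context, so nothing needs to be copied to /workspace.
/kaniko/executor \
  --dockerfile="${WORKSPACE}/Dockerfile" \
  --context="${WORKSPACE}" \
  --no-push
```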