aaronkan007 opened this issue 3 years ago
You should not use COPY ./test.log /workspace/opt/. Use COPY ./test.log /opt/ instead.
Also, when you use WORKDIR /data/, it will look for /data/test.log, not /test.log, and obviously will not find it.
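For illustration, a minimal sketch of the corrected instructions, assuming test.log sits at the root of the build context (the paths are the ones from the discussion):

FROM alpine:latest
# the destination is an absolute path inside the image, as suggested above
COPY ./test.log /opt/
# with a WORKDIR set, a relative destination resolves against it,
# so this lands at /data/test.log
WORKDIR /data/
COPY ./test.log .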
Thanks kvaps for your reply. @kvaps My first dockerfile may have led to some misunderstanding, since the error is caused by the 'WORKDIR' command rather than the 'COPY' one. Here is a new dockerfile that only contains the 'WORKDIR':
FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/
And the build with the kaniko executor (debug tag) shows:
INFO[0000] Resolved base name alpine:latest to builder
INFO[0000] Retrieving image manifest alpine:latest
INFO[0000] Retrieving image alpine:latest from registry index.docker.io
INFO[0007] Built cross stage deps: map[]
INFO[0007] Retrieving image manifest alpine:latest
INFO[0007] Returning cached image manifest
INFO[0007] Executing 0 build triggers
INFO[0007] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0017] RUN mkdir /data/
INFO[0017] Taking snapshot of full filesystem...
INFO[0017] cmd: /bin/sh
INFO[0017] args: [-c mkdir /data/]
INFO[0017] Running: [/bin/sh -c mkdir /data/]
INFO[0017] Taking snapshot of full filesystem...
INFO[0017] WORKDIR /data/
INFO[0017] cmd: workdir
INFO[0017] Changed working directory to /data
INFO[0017] No files changed in this command, skipping snapshotting.
error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory
Besides the 'WORKDIR' command, my previous question was about how to use 'COPY' in kaniko. I can mount the files into '/workspace' when creating the kaniko executor container as a workaround for now, but I am not quite sure about the official usage of 'WORKDIR' and 'COPY' in kaniko. Any advice is helpful. Thanks in advance.
Just use it the same way as in Docker. /workspace is a service directory for holding the context while building; you should not mention it anywhere in your Dockerfile.
I can mount the stuff into the '/workspace' to copy when creating the kaniko execute container as a workaround for now
Try the kubectl-build plugin; it will send your context to the kaniko container via stdin without any mounts.
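For illustration only, a hedged sketch of invoking the plugin; it is described as a kaniko wrapper, so the assumption here is that it accepts kaniko's own flags while streaming the local context (the image name is a placeholder; check the plugin's README for exact usage):

kubectl build --context . --destination registry.example.com/myimage:latest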
Thanks kvaps. I made a new comparison between using 'WORKDIR' alone and adding 'COPY'; here is the result.

Dockerfile 1:
FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/
INFO[0004] Resolved base name alpine:latest to builder
INFO[0004] Retrieving image manifest alpine:latest
INFO[0004] Retrieving image alpine:latest from registry index.docker.io
INFO[0010] Built cross stage deps: map[]
INFO[0010] Retrieving image manifest alpine:latest
INFO[0010] Returning cached image manifest
INFO[0010] Executing 0 build triggers
INFO[0010] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0019] RUN mkdir /data/
INFO[0019] Taking snapshot of full filesystem...
INFO[0019] cmd: /bin/sh
INFO[0019] args: [-c mkdir /data/]
INFO[0019] Running: [/bin/sh -c mkdir /data/]
INFO[0019] Taking snapshot of full filesystem...
INFO[0019] WORKDIR /data/
INFO[0019] cmd: workdir
INFO[0019] Changed working directory to /data
INFO[0019] No files changed in this command, skipping snapshotting.
INFO[0019] Pushing image to xxx
INFO[0026] Pushed image to 1 destinations
Dockerfile 2:
FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/
COPY . .
INFO[0000] GET KEYCHAIN
INFO[0003] Resolved base name alpine:latest to builder
INFO[0003] Retrieving image manifest alpine:latest
INFO[0003] Retrieving image alpine:latest from registry index.docker.io
INFO[0003] GET KEYCHAIN
INFO[0008] Built cross stage deps: map[]
INFO[0008] Retrieving image manifest alpine:latest
INFO[0008] Returning cached image manifest
INFO[0008] Executing 0 build triggers
INFO[0008] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0016] RUN mkdir /data/
INFO[0016] Taking snapshot of full filesystem...
INFO[0016] cmd: /bin/sh
INFO[0016] args: [-c mkdir /data/]
INFO[0016] Running: [/bin/sh -c mkdir /data/]
INFO[0016] Taking snapshot of full filesystem...
INFO[0017] WORKDIR /data/
INFO[0017] cmd: workdir
INFO[0017] Changed working directory to /data
INFO[0017] No files changed in this command, skipping snapshotting.
error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory
The only difference between dockerfile 1 and 2 is the 'COPY . .', and the error message mentions the '/workspace/workspace' directory even though the dockerfile does not. Any suggestions about the 'COPY . .' command?
Unfortunately I can't reproduce:
mkdir -p /tmp/3
cat > /tmp/3/Dockerfile <<\EOF
FROM alpine:latest as builder
RUN mkdir /data/
WORKDIR /data/
COPY . .
EOF
docker run --rm -v /tmp/3:/workspace --entrypoint=/kaniko/executor gcr.io/kaniko-project/executor:v1.6.0 --dockerfile=Dockerfile --context=/workspace --no-push
output:
INFO[0000] Resolved base name alpine:latest to builder
INFO[0000] Retrieving image manifest alpine:latest
INFO[0000] Retrieving image alpine:latest from registry index.docker.io
INFO[0001] Built cross stage deps: map[]
INFO[0001] Retrieving image manifest alpine:latest
INFO[0001] Returning cached image manifest
INFO[0001] Executing 0 build triggers
INFO[0001] Unpacking rootfs as cmd RUN mkdir /data/ requires it.
INFO[0001] RUN mkdir /data/
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] cmd: /bin/sh
INFO[0001] args: [-c mkdir /data/]
INFO[0001] Running: [/bin/sh -c mkdir /data/]
INFO[0001] Taking snapshot of full filesystem...
INFO[0001] WORKDIR /data/
INFO[0001] cmd: workdir
INFO[0001] Changed working directory to /data
INFO[0001] No files changed in this command, skipping snapshotting.
INFO[0001] COPY . .
INFO[0001] Taking snapshot of files...
INFO[0001] Skipping push to container registry due to --no-push flag
What am I doing wrong?
Thanks kvaps, sorry for my late reply. My previous test was done in a kubernetes environment; I followed the guide to create the pv, pvc and pod.
I noticed that the difference between the docker command and the kubernetes yaml might be this line, so I made a new test replacing '--context=dir://workspace' with '--context=/workspace', and it succeeded.
The kaniko doc shows that the build context starts with the prefix 'dir://' when the source is a local directory (doc). So I'm not sure whether this is a bug in the kubernetes env when using 'COPY'.
The following is my failed pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-dir
spec:
  containers:
  - name: kaniko-dir
    image: gcr.io/kaniko-project/executor:latest
    imagePullPolicy: IfNotPresent
    args:
    - "--dockerfile=/workspace/Dockerfile"
    - "--context=dir://workspace"
    - "--no-push"
    volumeMounts:
    - name: kanikoconfig
      mountPath: /workspace
  restartPolicy: Never
  volumes:
  - name: kanikoconfig
    persistentVolumeClaim:
      claimName: dockerfile-claim
The pv and pvc yaml are the same as in the example linked above. The build failed with this error log:
error building image: error building stage: failed to get files used from context: failed to get fileinfo for /workspace/workspace: lstat /workspace/workspace: no such file or directory
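As an editorial note: the doubled /workspace/workspace suggests a relative path after dir:// is resolved against kaniko's default /workspace directory. Under that assumption, one likely fix is passing an absolute path with the prefix, e.g.:

    args:
    - "--dockerfile=/workspace/Dockerfile"
    - "--context=dir:///workspace"
    - "--no-push"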
Facing a similar issue. Error from my Jenkins build job:
Step #0 - "build-and-push-container": error building image: error building stage: failed to optimize instructions: failed to get files used from context: failed to get fileinfo for /kaniko/0/apps/agency-admin-frontend/dist: lstat /kaniko/0/apps/agency-admin-frontend/dist: no such file or directory
I'm having the following issue when using kaniko with Cloud Build, doing a multi-stage build: a nodejs build-stage, then copying the output static content from build-stage into an nginx image.
I cannot set the context differently, because if I set it to work for nginx it breaks during the node image build, and vice versa. Am I doing something wrong here?
FROM node:20.11.1-alpine3.18 AS build-stage
ARG NODE_ENV
ENV NODE_ENV=${NODE_ENV}
ARG TARGET_ENV
ENV TARGET_ENV=${TARGET_ENV}
ARG VITE_BASE_URL
ENV VITE_BASE_URL=${VITE_BASE_URL}
ARG VITE_API_URL
ENV VITE_API_URL=${VITE_API_URL}
WORKDIR /app
COPY ./*.json ./
COPY ./global.d.ts ./global.d.ts
## AUTOGENERATED FREQUENTLY CHANGING DEPENDENCIES START
...
...
COPY ./packages/ui-kit ./packages/ui-kit
COPY ./apps/agency-admin-frontend ./apps/agency-admin-frontend
## AUTOGENERATED FREQUENTLY CHANGING DEPENDENCIES END
RUN npm install \
&& npx nx run-many --target=build --nx-bail \
&& npm cache clean --force \
&& npm run \
&& rm -rf node_modules/.cache
FROM nginx:stable-alpine
## Remove default nginx website
RUN rm -rf /usr/share/nginx/html/*
## From 'build' stage copy over the artifacts in dist folder to default nginx public folder
COPY --from=build-stage ./apps/agency-admin-frontend/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
##----------
##Cloudbuild
##-----------
steps:
- id: build-and-push-container
  name: "gcr.io/kaniko-project/executor:v1.20.1"
  args: ["--dockerfile=$_INFRASTRUCTURE_ROOT/$_DOCKERFILE_NAME", "--destination=${_IMAGE_NAME}:${_COMMIT_SHA}", "--cache=true", "--compressed-caching=false", "--use-new-run", "--cache-ttl=${_CACHE_TTL_HOURS}h"]
@mo-dayyeh I'm having a similar issue. Did you find a solution?
In case of Jenkins, you have to set '--context' to the current Jenkins workspace, or copy all files from the Jenkins workspace to '/workspace' (I've only tested the first one).
ex.:
sh "/kaniko/executor --context=${workspace} ..."
Actual behavior
I'm using kaniko to test the COPY used in a dockerfile. Both the executor:latest and executor:debug images face the same error, so I'm confused about how the COPY in a dockerfile should be used in kaniko.
Expected behavior
Guidance on how to use COPY in a dockerfile with kaniko.
Additional Information
Kaniko Image (fully qualified with digest) gcr.io/kaniko-project/executor:debug sha256:7053f62a27a84985c6ac886fcb5f9fa74090edb46536486f69364e3360f7c9ad