GoogleContainerTools / kaniko

Build Container Images In Kubernetes
Apache License 2.0

Image build is unbearably slow #1392

Open YuhuWang2002 opened 4 years ago

YuhuWang2002 commented 4 years ago

Actual behavior: Hi all, I want to use kaniko to build an image in Tekton, but it is far too slow: a build takes over 40 minutes, while building the same image with Docker only takes a few minutes. So I tried running kaniko via Docker to see what is going on. The time seems to be spent in the "unpack rootfs" phase, and it sometimes fails with "stream error: stream ID 17; INTERNAL_ERROR".

Expected behavior: Building an image should be about as fast as with Docker, so that kaniko is actually usable.

To Reproduce: Steps to reproduce the behavior:

  1. This is my test repo: https://github.com/YuhuWang2002/empty-proj.git
  2. Run build.sh to start kaniko

Additional Information

FROM openjdk:8-jre-alpine
ENV APP_FILE demo-v1.jar
COPY --from=builder /app/target/*.jar /app/
WORKDIR /app
RUN touch xxxx
EXPOSE 8080
ENTRYPOINT ["sh", "-c"]
CMD ["exec java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar $APP_FILE"]



 **Triage Notes for the Maintainers**

 | **Description** | **Yes/No** |
 |----------------|---------------|
 | Please check if this is a new feature you are proposing | No |
 | Please check if the build works in docker but not in kaniko | Yes |
 | Please check if this error is seen when you use `--cache` flag | No |
 | Please check if your dockerfile is a multistage dockerfile | Yes |
tejal29 commented 4 years ago

@YuhuWang2002 can you please share more detailed logs? I am assuming the build is failing at the RUN mvn package command? From the Dockerfile it looks like you are running the app in the kaniko pod. Is that correct?

YuhuWang2002 commented 4 years ago

> @YuhuWang2002 can you please share more detailed logs? I am assuming the build is failing at the RUN mvn package command? From the Dockerfile it looks like you are running the app in the kaniko pod. Is that correct?

This Dockerfile is used to build app.jar from source; to reduce the image size, the final stage uses a different base image.

Here are the logs:

[ c3987eca-3002-4dd1-a95e-8b5ade22c065-993e71c8-771a-4bb6-8-2zsh8:step-create-dir-993e71c8-771a-4bb6-85f4-195f87f93760-image-rep]

[ c3987eca-3002-4dd1-a95e-8b5ade22c065-993e71c8-771a-4bb6-8-2zsh8:step-git-source-993e71c8-771a-4bb6-85f4-195f87f93760-git-repo]
{"level":"info","ts":1598004737.1354308,"caller":"git/git.go:136","msg":"Successfully cloned https://YuhuWang2002:9d825036eb25062c211247609f3976a369fcfa0c@github.com/YuhuWang2002/empty-proj.git @ 8ce0dd5a2f8cf14214ba7cdc9c24693c9e2cf63c (grafted, HEAD, origin/master) in path /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0"}
{"level":"info","ts":1598004737.1872802,"caller":"git/git.go:177","msg":"Successfully initialized and updated submodules in path /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0"}

[ c3987eca-3002-4dd1-a95e-8b5ade22c065-993e71c8-771a-4bb6-8-2zsh8:step-executor]
DEBU[0000] Copying file /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0/Dockerfile to /kaniko/Dockerfile 
TRAC[0000] Adding /var/run to initialIgnoreList         
E0821 10:12:23.567948       9 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors
DEBU[0006] Skip resolving path /kaniko/Dockerfile       
DEBU[0006] Skip resolving path /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0 
DEBU[0006] Skip resolving path /cache                   
DEBU[0006] Skip resolving path                          
DEBU[0006] Skip resolving path                          
DEBU[0006] Skip resolving path                          
INFO[0006] Resolved base name maven:3.6.3-jdk-8-openj9 to builder 
DEBU[0006] Built stage name to index map: map[builder:0] 
INFO[0006] Retrieving image manifest maven:3.6.3-jdk-8-openj9 
INFO[0006] Retrieving image maven:3.6.3-jdk-8-openj9    
DEBU[0009] No file found for cache key sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b stat /cache/sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b: no such file or directory 
DEBU[0009] Image maven:3.6.3-jdk-8-openj9 not found in cache 
INFO[0009] Retrieving image manifest maven:3.6.3-jdk-8-openj9 
INFO[0009] Retrieving image maven:3.6.3-jdk-8-openj9    
INFO[0013] Retrieving image manifest openjdk:8-jre-alpine 
INFO[0013] Retrieving image openjdk:8-jre-alpine        
DEBU[0017] No file found for cache key sha256:b2ad93b079b1495488cc01375de799c402d45086015a120c105ea00e1be0fd52 stat /cache/sha256:b2ad93b079b1495488cc01375de799c402d45086015a120c105ea00e1be0fd52: no such file or directory 
DEBU[0017] Image openjdk:8-jre-alpine not found in cache 
INFO[0017] Retrieving image manifest openjdk:8-jre-alpine 
INFO[0017] Retrieving image openjdk:8-jre-alpine        
DEBU[0022] Resolved /app/target/*.jar to /app/target/*.jar 
DEBU[0022] Resolved /app/ to /app/                      
INFO[0022] Built cross stage deps: map[0:[/app/target/*.jar]] 
INFO[0022] Retrieving image manifest maven:3.6.3-jdk-8-openj9 
INFO[0022] Retrieving image maven:3.6.3-jdk-8-openj9    
DEBU[0026] No file found for cache key sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b stat /cache/sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b: no such file or directory 
DEBU[0026] Image maven:3.6.3-jdk-8-openj9 not found in cache 
INFO[0026] Retrieving image manifest maven:3.6.3-jdk-8-openj9 
INFO[0026] Retrieving image maven:3.6.3-jdk-8-openj9    
INFO[0030] Executing 0 build triggers                   
DEBU[0030] Resolved ./settings.xml to settings.xml      
DEBU[0030] Resolved /usr/share/maven/conf/settings.xml to /usr/share/maven/conf/settings.xml 
DEBU[0030] Getting files and contents at root /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0 for /workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0/settings.xml 
DEBU[0030] Using files from context: [/workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0/settings.xml] 
DEBU[0030] optimize: composite key for command COPY ./settings.xml /usr/share/maven/conf/settings.xml {[sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b COPY ./settings.xml /usr/share/maven/conf/settings.xml bea0e8c3c1a42a00b0796715c3a3cea1c757f9a4e021331162e3c5de1a8bb38e]} 
DEBU[0030] optimize: cache key for command COPY ./settings.xml /usr/share/maven/conf/settings.xml 9d65e0d38641f96af8a99ec324d3693dae6d3938ded12857860e2ad69cebce3d 
INFO[0030] Checking for cached layer 10.244.176.33/lstack/javatest/cache:9d65e0d38641f96af8a99ec324d3693dae6d3938ded12857860e2ad69cebce3d... 
INFO[0030] Using caching version of cmd: COPY ./settings.xml /usr/share/maven/conf/settings.xml 
DEBU[0030] Resolved ./demo to demo                      
DEBU[0030] Resolved /app to /app                        
DEBU[0030] Using files from context: [/workspace/993e71c8-771a-4bb6-85f4-195f87f93760-git-repo-0/demo] 
DEBU[0030] optimize: composite key for command COPY ./demo /app {[sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b COPY ./settings.xml /usr/share/maven/conf/settings.xml bea0e8c3c1a42a00b0796715c3a3cea1c757f9a4e021331162e3c5de1a8bb38e COPY ./demo /app 14b9714d0baa2fad8a555e89416f4acc37374e76644fc6bbb2ae2ad6b13c81db]} 
DEBU[0030] optimize: cache key for command COPY ./demo /app 23f1f652f765f32e950c53598a8be1062317c172befd04de97b1eddd8c6a5b33 
INFO[0030] Checking for cached layer 10.244.176.33/lstack/javatest/cache:23f1f652f765f32e950c53598a8be1062317c172befd04de97b1eddd8c6a5b33... 
DEBU[0030] Failed to retrieve layer: GET http://10.244.176.33/v2/lstack/javatest/cache/manifests/23f1f652f765f32e950c53598a8be1062317c172befd04de97b1eddd8c6a5b33: NOT_FOUND: artifact lstack/javatest/cache:23f1f652f765f32e950c53598a8be1062317c172befd04de97b1eddd8c6a5b33 not found 
INFO[0030] No cached layer found for cmd COPY ./demo /app 
DEBU[0030] Key missing was: sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b-COPY ./settings.xml /usr/share/maven/conf/settings.xml-bea0e8c3c1a42a00b0796715c3a3cea1c757f9a4e021331162e3c5de1a8bb38e-COPY ./demo /app-14b9714d0baa2fad8a555e89416f4acc37374e76644fc6bbb2ae2ad6b13c81db 
DEBU[0030] optimize: composite key for command WORKDIR /app {[sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b COPY ./settings.xml /usr/share/maven/conf/settings.xml bea0e8c3c1a42a00b0796715c3a3cea1c757f9a4e021331162e3c5de1a8bb38e COPY ./demo /app 14b9714d0baa2fad8a555e89416f4acc37374e76644fc6bbb2ae2ad6b13c81db WORKDIR /app]} 
DEBU[0030] optimize: cache key for command WORKDIR /app f9524d22163aac4c4c4249a7abc874d2974169a66d3acb7572f083b44c922e35 
DEBU[0030] optimize: composite key for command RUN mvn package {[sha256:e195c80107fa98cd04c676042f88f3f9d0c0b7277aa406c92f65761b16b6782b COPY ./settings.xml /usr/share/maven/conf/settings.xml bea0e8c3c1a42a00b0796715c3a3cea1c757f9a4e021331162e3c5de1a8bb38e COPY ./demo /app 14b9714d0baa2fad8a555e89416f4acc37374e76644fc6bbb2ae2ad6b13c81db WORKDIR /app RUN mvn package]} 
DEBU[0030] optimize: cache key for command RUN mvn package ab20b27b89ee71abef91f5ea28d77eff01a9f899b9067f12528b49f8e6787f49 
INFO[0030] Unpacking rootfs as cmd COPY ./demo /app requires it. 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1819 710 0:308 / / rw,relatime master:202 - overlay overlay rw,lowerdir=/var/lib/docker/overlay2/l/GANXNVNWJWT3BFVIXEXGD4TAXD:/var/lib/docker/overlay2/l/FMV6Z4SMFBS2KMYS3PHBVVNEJ5:/var/lib/docker/overlay2/l/I2XVOBLNPJSN3DZSSFGUMFYYXR:/var/lib/docker/overlay2/l/TLYTZ4Z6XRYABAWASAQOTK4O6S:/var/lib/docker/overlay2/l/EL2VHMFNXHPSEJ57PP3JP6DD3F:/var/lib/docker/overlay2/l/GIO4THPS67TC6PUXXULB44323W:/var/lib/docker/overlay2/l/JD3A3UGGM23WJKXG6OQUYESY22:/var/lib/docker/overlay2/l/TG73NLQAZTW54BHURB2ACMUXSN:/var/lib/docker/overlay2/l/EYIAYTZTL3R75URXGY24A6HOP5:/var/lib/docker/overlay2/l/YK757YSDLXLJE7RUPA5LI2JGRT,upperdir=/var/lib/docker/overlay2/0372557fe9eb4f834e8b77f745ac99782b23cac55bc4014f1ebe5a6d8de166b4/diff,workdir=/var/lib/docker/overlay2/0372557fe9eb4f834e8b77f745ac99782b23cac55bc4014f1ebe5a6d8de166b4/work 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1820 1819 0:309 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw 
TRAC[0030] Appending /proc from line: 1820 1819 0:309 / /proc rw,nosuid,nodev,noexec,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1821 1819 0:310 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /dev from line: 1821 1819 0:310 / /dev rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1822 1821 0:311 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 
TRAC[0030] Appending /dev/pts from line: 1822 1821 0:311 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=666 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1823 1819 0:119 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro 
TRAC[0030] Appending /sys from line: 1823 1819 0:119 / /sys ro,nosuid,nodev,noexec,relatime - sysfs sysfs ro 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1824 1823 0:312 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755 
TRAC[0030] Appending /sys/fs/cgroup from line: 1824 1823 0:312 / /sys/fs/cgroup ro,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1825 1824 0:22 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 
TRAC[0030] Appending /sys/fs/cgroup/systemd from line: 1825 1824 0:22 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/systemd ro,nosuid,nodev,noexec,relatime master:9 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1826 1824 0:24 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,freezer 
TRAC[0030] Appending /sys/fs/cgroup/freezer from line: 1826 1824 0:24 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/freezer ro,nosuid,nodev,noexec,relatime master:10 - cgroup cgroup rw,freezer 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1827 1824 0:25 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,cpuset 
TRAC[0030] Appending /sys/fs/cgroup/cpuset from line: 1827 1824 0:25 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/cpuset ro,nosuid,nodev,noexec,relatime master:11 - cgroup cgroup rw,cpuset 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1828 1824 0:26 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,memory 
TRAC[0030] Appending /sys/fs/cgroup/memory from line: 1828 1824 0:26 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/memory ro,nosuid,nodev,noexec,relatime master:12 - cgroup cgroup rw,memory 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1829 1824 0:27 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,cpuacct,cpu 
TRAC[0030] Appending /sys/fs/cgroup/cpu,cpuacct from line: 1829 1824 0:27 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/cpu,cpuacct ro,nosuid,nodev,noexec,relatime master:13 - cgroup cgroup rw,cpuacct,cpu 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1830 1824 0:28 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,blkio 
TRAC[0030] Appending /sys/fs/cgroup/blkio from line: 1830 1824 0:28 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/blkio ro,nosuid,nodev,noexec,relatime master:14 - cgroup cgroup rw,blkio 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1831 1824 0:29 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,net_prio,net_cls 
TRAC[0030] Appending /sys/fs/cgroup/net_cls,net_prio from line: 1831 1824 0:29 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/net_cls,net_prio ro,nosuid,nodev,noexec,relatime master:15 - cgroup cgroup rw,net_prio,net_cls 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1832 1824 0:30 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,hugetlb 
TRAC[0030] Appending /sys/fs/cgroup/hugetlb from line: 1832 1824 0:30 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/hugetlb ro,nosuid,nodev,noexec,relatime master:16 - cgroup cgroup rw,hugetlb 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1833 1824 0:31 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,perf_event 
TRAC[0030] Appending /sys/fs/cgroup/perf_event from line: 1833 1824 0:31 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/perf_event ro,nosuid,nodev,noexec,relatime master:17 - cgroup cgroup rw,perf_event 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1834 1824 0:32 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,pids 
TRAC[0030] Appending /sys/fs/cgroup/pids from line: 1834 1824 0:32 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/pids ro,nosuid,nodev,noexec,relatime master:18 - cgroup cgroup rw,pids 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1835 1824 0:33 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,devices 
TRAC[0030] Appending /sys/fs/cgroup/devices from line: 1835 1824 0:33 /kubepods/besteffort/pod86dfb729-e682-4e49-8436-a52c972f5756/07026d4cde22b62f6fe1481b6550b00f5ce9f017ad9012a9916c46f817c5e37b /sys/fs/cgroup/devices ro,nosuid,nodev,noexec,relatime master:19 - cgroup cgroup rw,devices 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1836 1821 0:115 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw 
TRAC[0030] Appending /dev/mqueue from line: 1836 1821 0:115 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1837 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-workspace /workspace rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /workspace from line: 1837 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-workspace /workspace rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1838 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-tools /tekton/tools rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /tekton/tools from line: 1838 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-tools /tekton/tools rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1839 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-home /tekton/home rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /tekton/home from line: 1839 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-home /tekton/home rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1840 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-results /tekton/results rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /tekton/results from line: 1840 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~empty-dir/tekton-internal-results /tekton/results rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1841 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/containers/step-executor/4c8e0c65 /tekton/termination rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /tekton/termination from line: 1841 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/containers/step-executor/4c8e0c65 /tekton/termination rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1842 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~configmap/docker-config /kaniko/.docker ro,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /kaniko/.docker from line: 1842 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/volumes/kubernetes.io~configmap/docker-config /kaniko/.docker ro,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1843 1819 253:1 /var/lib/docker/containers/e484cb3f80f9aef9af2cc913a051cc9e0d934d0aaf34e6ea19ec22344ee06a0f/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /etc/resolv.conf from line: 1843 1819 253:1 /var/lib/docker/containers/e484cb3f80f9aef9af2cc913a051cc9e0d934d0aaf34e6ea19ec22344ee06a0f/resolv.conf /etc/resolv.conf rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1844 1819 253:1 /var/lib/docker/containers/e484cb3f80f9aef9af2cc913a051cc9e0d934d0aaf34e6ea19ec22344ee06a0f/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /etc/hostname from line: 1844 1819 253:1 /var/lib/docker/containers/e484cb3f80f9aef9af2cc913a051cc9e0d934d0aaf34e6ea19ec22344ee06a0f/hostname /etc/hostname rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1845 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/etc-hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Appending /etc/hosts from line: 1845 1819 253:1 /var/lib/kubelet/pods/86dfb729-e682-4e49-8436-a52c972f5756/etc-hosts /etc/hosts rw,relatime - ext4 /dev/vda1 rw,data=ordered 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1846 1821 0:114 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k 
TRAC[0030] Appending /dev/shm from line: 1846 1821 0:114 / /dev/shm rw,nosuid,nodev,noexec,relatime - tmpfs shm rw,size=65536k 
TRAC[0030] Read the following line from /proc/self/mountinfo: 1847 1819 0:111 / /var/run/secrets/kubernetes.io/serviceaccount ro,relatime - tmpfs tmpfs rw 
TRAC[0030] Appending /var/run/secrets/kubernetes.io/serviceaccount from line: 1847 1819 0:111 / /var/run/secrets/kubernetes.io/serviceaccount ro,relatime - tmpfs tmpfs rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 711 1820 0:309 /bus /proc/bus ro,relatime - proc proc rw 
TRAC[0030] Appending /proc/bus from line: 711 1820 0:309 /bus /proc/bus ro,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 712 1820 0:309 /fs /proc/fs ro,relatime - proc proc rw 
TRAC[0030] Appending /proc/fs from line: 712 1820 0:309 /fs /proc/fs ro,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 713 1820 0:309 /irq /proc/irq ro,relatime - proc proc rw 
TRAC[0030] Appending /proc/irq from line: 713 1820 0:309 /irq /proc/irq ro,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 714 1820 0:309 /sys /proc/sys ro,relatime - proc proc rw 
TRAC[0030] Appending /proc/sys from line: 714 1820 0:309 /sys /proc/sys ro,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 715 1820 0:309 /sysrq-trigger /proc/sysrq-trigger ro,relatime - proc proc rw 
TRAC[0030] Appending /proc/sysrq-trigger from line: 715 1820 0:309 /sysrq-trigger /proc/sysrq-trigger ro,relatime - proc proc rw 
TRAC[0030] Read the following line from /proc/self/mountinfo: 716 1820 0:313 / /proc/acpi ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Appending /proc/acpi from line: 716 1820 0:313 / /proc/acpi ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Read the following line from /proc/self/mountinfo: 717 1820 0:310 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /proc/kcore from line: 717 1820 0:310 /null /proc/kcore rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 718 1820 0:310 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /proc/keys from line: 718 1820 0:310 /null /proc/keys rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 719 1820 0:310 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /proc/timer_list from line: 719 1820 0:310 /null /proc/timer_list rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 720 1820 0:310 /null /proc/timer_stats rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /proc/timer_stats from line: 720 1820 0:310 /null /proc/timer_stats rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 721 1820 0:310 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Appending /proc/sched_debug from line: 721 1820 0:310 /null /proc/sched_debug rw,nosuid - tmpfs tmpfs rw,size=65536k,mode=755 
TRAC[0030] Read the following line from /proc/self/mountinfo: 722 1820 0:314 / /proc/scsi ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Appending /proc/scsi from line: 722 1820 0:314 / /proc/scsi ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Read the following line from /proc/self/mountinfo: 723 1823 0:315 / /sys/firmware ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Appending /sys/firmware from line: 723 1823 0:315 / /sys/firmware ro,relatime - tmpfs tmpfs ro 
TRAC[0030] Read the following line from /proc/self/mountinfo:  
TRAC[0030] Reached end of file /proc/self/mountinfo     
DEBU[0030] Mounted directories: [{/kaniko false} {/etc/mtab false} {/tmp/apt-key-gpghome true} {/var/run false} {/proc false} {/dev false} {/dev/pts false} {/sys false} {/sys/fs/cgroup false} {/sys/fs/cgroup/systemd false} {/sys/fs/cgroup/freezer false} {/sys/fs/cgroup/cpuset false} {/sys/fs/cgroup/memory false} {/sys/fs/cgroup/cpu,cpuacct false} {/sys/fs/cgroup/blkio false} {/sys/fs/cgroup/net_cls,net_prio false} {/sys/fs/cgroup/hugetlb false} {/sys/fs/cgroup/perf_event false} {/sys/fs/cgroup/pids false} {/sys/fs/cgroup/devices false} {/dev/mqueue false} {/workspace false} {/tekton/tools false} {/tekton/home false} {/tekton/results false} {/tekton/termination false} {/kaniko/.docker false} {/etc/resolv.conf false} {/etc/hostname false} {/etc/hosts false} {/dev/shm false} {/var/run/secrets/kubernetes.io/serviceaccount false} {/proc/bus false} {/proc/fs false} {/proc/irq false} {/proc/sys false} {/proc/sysrq-trigger false} {/proc/acpi false} {/proc/kcore false} {/proc/keys false} {/proc/timer_list false} {/proc/timer_stats false} {/proc/sched_debug false} {/proc/scsi false} {/sys/firmware false}] 
TRAC[0030] Extracting layer 0 of media type application/vnd.docker.image.rootfs.diff.tar.gzip 
TRAC[0031] creating dir /bin                            
TRAC[0031] creating file /bin/bash                      
TRAC[0032] access time for /bin/bash is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/bunzip2                   
TRAC[0032] access time for /bin/bunzip2 is zero, converting to zero for epoch 
TRAC[0032] link from bin/bunzip2 to /bin/bzcat          
TRAC[0032] symlink from bzdiff to /bin/bzcmp            
TRAC[0032] creating file /bin/bzdiff                    
TRAC[0032] access time for /bin/bzdiff is zero, converting to zero for epoch 
TRAC[0032] symlink from bzgrep to /bin/bzegrep          
TRAC[0032] creating file /bin/bzexe                     
TRAC[0032] access time for /bin/bzexe is zero, converting to zero for epoch 
TRAC[0032] symlink from bzgrep to /bin/bzfgrep          
TRAC[0032] creating file /bin/bzgrep                    
TRAC[0032] access time for /bin/bzgrep is zero, converting to zero for epoch 
TRAC[0032] link from bin/bunzip2 to /bin/bzip2          
TRAC[0032] creating file /bin/bzip2recover              
TRAC[0032] access time for /bin/bzip2recover is zero, converting to zero for epoch 
TRAC[0032] symlink from bzmore to /bin/bzless           
TRAC[0032] creating file /bin/bzmore                    
TRAC[0032] access time for /bin/bzmore is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/cat                       
TRAC[0032] access time for /bin/cat is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/chgrp                     
TRAC[0032] access time for /bin/chgrp is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/chmod                     
TRAC[0032] access time for /bin/chmod is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/chown                     
TRAC[0032] access time for /bin/chown is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/cp                        
TRAC[0032] access time for /bin/cp is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/dash                      
TRAC[0032] access time for /bin/dash is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/date                      
TRAC[0032] access time for /bin/date is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/dd                        
TRAC[0032] access time for /bin/dd is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/df                        
TRAC[0032] access time for /bin/df is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/dir                       
TRAC[0032] access time for /bin/dir is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/dmesg                     
TRAC[0032] access time for /bin/dmesg is zero, converting to zero for epoch 
TRAC[0032] symlink from hostname to /bin/dnsdomainname  
TRAC[0032] symlink from hostname to /bin/domainname     
TRAC[0032] creating file /bin/echo                      
TRAC[0032] access time for /bin/echo is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/egrep                     
TRAC[0032] access time for /bin/egrep is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/false                     
TRAC[0032] access time for /bin/false is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/fgrep                     
TRAC[0032] access time for /bin/fgrep is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/findmnt                   
TRAC[0032] access time for /bin/findmnt is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/grep                      
TRAC[0032] access time for /bin/grep is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/gunzip                    
TRAC[0032] access time for /bin/gunzip is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/gzexe                     
TRAC[0032] access time for /bin/gzexe is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/gzip                      
TRAC[0032] access time for /bin/gzip is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/hostname                  
TRAC[0032] access time for /bin/hostname is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/kill                      
TRAC[0032] access time for /bin/kill is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/ln                        
TRAC[0032] access time for /bin/ln is zero, converting to zero for epoch 
TRAC[0032] creating file /bin/login                     
peacememories commented 4 years ago

Just chiming in to say we, too, are having problems with build speed in Kaniko. We're mainly using Kaniko as part of our Gitlab CI pipelines, because we don't want to enable privileged containers for our runners. In this regard, it's great that Kaniko exists at all, because otherwise we would just be out of luck

But speed-wise, the executor build takes forever for any `Using caching version of cmd:` or `Unpacking rootfs` step.

Some information about the machine we're running this on: it's a 16-core VM host with 32 GB of memory allocated to the VM, and ext4 block storage backed by NFS at the hypervisor level. The bottleneck is our uplink, which is about 16 Mb/s symmetric.

I have not checked yet whether the processes that take so long are bottlenecked by uplink speed, though at least the cache retrieval should happen over local ethernet (from our private gitlab registry).

We have turned on registry caching, but have not yet used base image caching since we'd need to set up a warmer for that (it would be nice if kaniko could generate caches on the fly^^'). But I suspect this might not be the main issue, owing to the fact that cache extractions are also very slow.
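
For reference, a minimal sketch of what setting up the warmer might look like (the image name and cache path are placeholders, and the flags are taken from the kaniko README, so please verify them against your version):

#!/bin/sh
# Hypothetical sketch: pre-populate kaniko's base-image cache on a shared volume.
# The warmer downloads the listed base images into --cache-dir, so later executor
# runs with --cache=true and the same --cache-dir can skip pulling them again.
docker run --rm \
    -v /tmp/kaniko-cache:/cache \
    gcr.io/kaniko-project/warmer:latest \
    --cache-dir=/cache \
    --image=<your-base-image>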

I hope this information helps a bit. I really like the concept of Kaniko and would like to be able to confidently tell our team that this is what we'll be using, for the security aspect alone!

If I can provide any more information let me know.

raijinsetsu commented 4 years ago

@peacememories the issue is that Kaniko, because it is not able to access the Docker machinery, cannot use the overlay filesystem driver, which is what the Docker daemon uses when building images. This means any "RUN" step requires iterating over all the files in the filesystem to see what has changed.

If, like me, your RUN commands only create new files and never modify existing ones, you should be able to change the snapshot mode to "time" (i.e. --snapshotMode=time). You could also try the "redo" mode, which is faster than the default "full".
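
Concretely, a sketch of what that looks like on the executor command line (flag spellings as used elsewhere in this thread; the exact spelling differs between releases, so check executor --help on your version, and the destination below is a placeholder):

# Sketch only.
# "time" only compares file mtimes, which is why it is only safe when RUN steps
# create new files and never modify existing ones; "redo" hashes a cheaper set of
# file attributes than the default "full" content walk.
/kaniko/executor \
    --context dir:///workspace \
    --dockerfile /workspace/Dockerfile \
    --destination <registry>/<image>:<tag> \
    --snapshotMode=time        # or --snapshotMode=redo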

Laski commented 3 years ago

Even with the --snapshotMode=redo and --use-new-run flags, my builds take almost 10 minutes for a fully cached (nothing changed) run.

gfvirga commented 3 years ago

We are having the same issue. For us it is happening with Alpine images only.

tejal29 commented 3 years ago

@gfvirga What version of kaniko are you using? @Laski Can you provide your build logs?

Laski commented 3 years ago

Sure! The slowest part is

INFO[0237] Retrieving image manifest nginx:1.16.0-alpine 
INFO[0237] Checking for cached layer us.gcr.io/buendolar/cache/frontend-backoffice:744c2398b36de0ddaf5d82394344186becb4eb7e0ee7dfd0e830663240b93040... 
INFO[0238] Using caching version of cmd: COPY --from=builder /app/build /usr/share/nginx/html 
INFO[0238] Checking for cached layer us.gcr.io/buendolar/cache/frontend-backoffice:45b44ee7dc3e6d92e6fc83daad93f9f9750b4927e133d33c899ef0d1ca4c6418... 
INFO[0238] Using caching version of cmd: COPY backoffice/nginx.conf /etc/nginx/conf.d/default.conf 
INFO[0238] cmd: EXPOSE                                  
INFO[0238] Adding exposed port: 80/tcp                  
INFO[0238] Skipping unpacking as no commands require it. 
INFO[0238] Taking snapshot of full filesystem...        
INFO[0238] Resolving paths                              
INFO[0238] COPY --from=builder /app/build /usr/share/nginx/html 
INFO[0238] Found cached layer, extracting to filesystem 
INFO[0238] COPY backoffice/nginx.conf /etc/nginx/conf.d/default.conf 
INFO[0238] Found cached layer, extracting to filesystem 
INFO[0238] EXPOSE 80                                    
INFO[0238] cmd: EXPOSE                                  
INFO[0238] Adding exposed port: 80/tcp                  
INFO[0238] No files changed in this command, skipping snapshotting. 
INFO[0238] CMD ["nginx", "-g", "daemon off;"]           
INFO[0238] No files changed in this command, skipping snapshotting. 

The "Taking snapshot of full filesystem..." can take between 5 and 15 minutes.

tejal29 commented 3 years ago

@Laski Can you use the --snapshotMode=redo flag?

tejal29 commented 3 years ago

> @Laski Can you use the --snapshotMode=redo flag?

nvm, I see you have used the --use-new-run flag.

tejal29 commented 3 years ago

Sure! The slowest part is

INFO[0237] Retrieving image manifest nginx:1.16.0-alpine 
INFO[0237] Checking for cached layer us.gcr.io/buendolar/cache/frontend-backoffice:744c2398b36de0ddaf5d82394344186becb4eb7e0ee7dfd0e830663240b93040... 
INFO[0238] Using caching version of cmd: COPY --from=builder /app/build /usr/share/nginx/html 
INFO[0238] Checking for cached layer us.gcr.io/buendolar/cache/frontend-backoffice:45b44ee7dc3e6d92e6fc83daad93f9f9750b4927e133d33c899ef0d1ca4c6418... 
INFO[0238] Using caching version of cmd: COPY backoffice/nginx.conf /etc/nginx/conf.d/default.conf 
INFO[0238] cmd: EXPOSE                                  
INFO[0238] Adding exposed port: 80/tcp                  
INFO[0238] Skipping unpacking as no commands require it. 
INFO[0238] Taking snapshot of full filesystem...        
INFO[0238] Resolving paths                              
INFO[0238] COPY --from=builder /app/build /usr/share/nginx/html 
INFO[0238] Found cached layer, extracting to filesystem 
INFO[0238] COPY backoffice/nginx.conf /etc/nginx/conf.d/default.conf 
INFO[0238] Found cached layer, extracting to filesystem 
INFO[0238] EXPOSE 80                                    
INFO[0238] cmd: EXPOSE                                  
INFO[0238] Adding exposed port: 80/tcp                  
INFO[0238] No files changed in this command, skipping snapshotting. 
INFO[0238] CMD ["nginx", "-g", "daemon off;"]           
INFO[0238] No files changed in this command, skipping snapshotting. 

The "Taking snapshot of full filesystem..." can take between 5 and 15 minutes.

From the logs it looks like you are using a kaniko version earlier than v1.2.0. In v1.2.0 we stopped caching COPY commands; folks have reported that, instead of caching the COPY layer and retrieving it, it is faster to just run the COPY locally.
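
As a hedged sketch of what that upgrade looks like in practice (the version tag and paths are illustrative; --cache-copy-layers is the opt-in flag that appears later in this thread, and v1.2.0+ leaves COPY layers uncached by default):

# Sketch: run a v1.2.0+ executor and rely on its default of not caching COPY layers.
# Only pass --cache-copy-layers=true if you explicitly want the old behaviour back.
docker run --rm \
    -v "$(pwd)":/workspace \
    gcr.io/kaniko-project/executor:v1.2.0 \
    --context dir:///workspace \
    --dockerfile /workspace/Dockerfile \
    --destination <registry>/<image>:<tag> \
    --cache=true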

shubham149 commented 3 years ago

I have found that the syscall.Sync() call in https://github.com/GoogleContainerTools/kaniko/blob/master/pkg/snapshot/snapshot.go#L154:L158 is taking most of the time in the filesystem snapshot during a kaniko build. In the logs below, syscall.Sync took around 2 minutes to finish (the full filesystem snapshot also took about 2 minutes in total).

INFO[2021-08-20T13:53:34Z] Taking snapshot of files...
DEBU[2021-08-20T13:53:34Z] Taking snapshot of files [/opt/set_default_variables.sh /opt/start_process.sh]
INFO[2021-08-20T13:53:34Z] RUN wget https://github.com/mikefarah/yq/releases/download/3.3.2/yq_linux_amd64 -O /usr/bin/yq -O /usr/bin/yq
INFO[2021-08-20T13:53:34Z] Taking snapshot of full filesystem...
TRAC[2021-08-20T13:55:41Z] SNAPSHOT_TIMEOUT_DURATION environment not set. Using default snapshot timeout 90m
TRAC[2021-08-20T13:55:41Z] Analyzing path /
TRAC[2021-08-20T13:55:41Z] Analyzing path /kaniko
TRAC[2021-08-20T13:55:41Z] Skipping paths under /kaniko, as it is a ignored directory
TRAC[2021-08-20T13:55:41Z] Analyzing path /tmp
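
A quick way to check whether the sync itself (rather than the file walk) is the bottleneck on a given node is to time a plain sync from inside the build container; this is only a diagnostic sketch, not a fix:

# If this alone takes minutes, snapshotting is dominated by flushing dirty pages
# to the underlying (possibly network-backed) storage, not by kaniko's file scan.
time sync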

OrangeFlag commented 2 years ago

v1.6.0 - same behaviour

ludydoo commented 2 years ago

Same here. I have a pipeline step to create a "cache" image for building multiple node apps from the same monorepo (nrwl/nx). It's a warmer.

"Extracting to fileSystem" takes at least 15 minutes. On docker, it takes a second. This is running on a 8vcore 32GB RAM burstable node on aks. I've tried all my tricks to try getting the build as fast as possible. Just unpacking a "cached" step is longer than creating the cache image itself... I'm not sure how useful the cache is at this point

E1126 19:57:29.672455      18 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
    For verbose messaging see aws.Config.CredentialsChainVerboseErrors
INFO[0002] Resolved base name node:lts-slim to builder  
INFO[0002] Using dockerignore file: /work/source/.dockerignore 
INFO[0002] Retrieving image manifest node:lts-slim      
INFO[0002] Retrieving image node:lts-slim from registry index.docker.io 
INFO[0003] Retrieving image manifest node:lts-slim      
INFO[0003] Returning cached image manifest              
INFO[0003] Retrieving image manifest nginx:alpine       
INFO[0003] Retrieving image nginx:alpine from registry index.docker.io 
INFO[0004] Retrieving image manifest nginx:alpine       
INFO[0004] Returning cached image manifest              
INFO[0005] Built cross stage deps: map[0:[/app/dist/apps/da]] 
INFO[0005] Retrieving image manifest node:lts-slim      
INFO[0005] Returning cached image manifest              
INFO[0005] Retrieving image manifest node:lts-slim      
INFO[0005] Returning cached image manifest              
INFO[0005] Executing 0 build triggers                   
INFO[0005] Checking for cached layer myacr/cache:fa2b3731fbf527659bdcf7ddff328489a8b2f62cc29f08c655b6393543c8da0b... 
INFO[0005] Using caching version of cmd: COPY package.json . 
INFO[0005] Checking for cached layer myacr/cache:db7f8f8e7653b50c23cae432ab4fa4f6dccd743a1c4f82f9332ad67a94c22410... 
INFO[0005] Using caching version of cmd: COPY yarn.lock . 
INFO[0005] Checking for cached layer myacr/cache:d8bbd365c352d1aa3ba92fd1b2d48040768c3176d2048f1668edcf6e0f925fbe... 
INFO[0006] Using caching version of cmd: RUN yarn --frozen-lockfile 
INFO[0007] Checking for cached layer myacr/cache:837ea83ff85ab098196dbcbbdcdf940cab6431410ab98a8bb2e96617c023b0e4... 
INFO[0007] Using caching version of cmd: COPY . .       
INFO[0007] Checking for cached layer myacr/cache:529348db565d4c2bac6f6bc8264665506e58bc1312127c08b1f05630952cda76... 
INFO[0007] Using caching version of cmd: RUN chown -R node:node . 
INFO[0007] cmd: USER                                    
INFO[0007] Checking for cached layer myacr/cache:a9106c354b89962584ab4730446e25c3666ed1e54def60d5aec0ae81235af398... 
INFO[0008] Using caching version of cmd: RUN npx nx run-many --target=build --configuration=production --all --parallel=7 && rm -rf node_modules 
INFO[0015] WORKDIR /app                                 
INFO[0015] cmd: workdir                                 
INFO[0015] Changed working directory to /app            
INFO[0015] Creating directory /app                      
INFO[0015] Taking snapshot of files...                  
INFO[0015] COPY package.json .                          
INFO[0015] Found cached layer, extracting to filesystem 
INFO[0015] COPY yarn.lock .                             
INFO[0015] Found cached layer, extracting to filesystem 
INFO[0015] RUN yarn --frozen-lockfile                   
INFO[0015] Found cached layer, extracting to filesystem   <----- 15 minutes at least

Even running a simple chown takes 10 minutes

INFO[0948] RUN chown -R node:node .                     
INFO[0948] Found cached layer, extracting to filesystem <-- 10 minutes

lololozhkin commented 2 years ago

Same behavior. Extraction to filesystem takes too much time.

The idea of kaniko is great, thank you for this project, waiting for news on this issue!

ohgenlong commented 2 years ago

We are hitting the same issue. This is a very important criterion in our choice of image-build tooling; is there a plan to optimize it? Thank you so much.

iamkhalidbashir commented 2 years ago

This issue is not resolved. Closing it doesn't make sense.

matsko commented 2 years ago

Any movement on this? I just had a build go from 10m to 45m when I split up my Dockerfile steps into various multi-stage parts without introducing any new actual steps.

emirb commented 1 year ago

Any news on this?

matsko commented 1 year ago

On GCP we switched from Kaniko to just doing multiple Docker image builds with the regular gcloud/docker tooling. By splitting the build into one base Dockerfile and three sub-Dockerfiles (which run in parallel), we got our builds down from 45m to an average of 6m.
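
For anyone considering the same workaround, a rough sketch of its shape (the Dockerfile and image names below are placeholders, and this uses plain docker builds rather than kaniko):

#!/bin/sh
set -e
# Build the shared base image first, then the three dependent images in parallel.
docker build -f Dockerfile.base -t registry.example.com/myapp/base:latest .

docker build -f Dockerfile.app1 -t registry.example.com/myapp/app1:latest . &
docker build -f Dockerfile.app2 -t registry.example.com/myapp/app2:latest . &
docker build -f Dockerfile.app3 -t registry.example.com/myapp/app3:latest . &
wait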

GuillaumeCisco commented 10 months ago

I think I have the same question as everyone else. I am using a multi-stage Dockerfile with one stage dedicated to building JavaScript dependencies:

FROM node:18.5.0-alpine as dependencies

WORKDIR /usr/src/app

COPY package.json ./
COPY yarn.lock ./
RUN apk add --update --no-cache python3 make g++
RUN yarn install --frozen-lockfile --production=false

FROM dependencies as build
...

and even when nothing has changed, building with the cache gives:

INFO[0000] Resolved base name node:18.5.0-alpine to dependencies 
INFO[0000] Resolved base name dependencies to build     
INFO[0000] Resolved base name node:18.5.0-alpine to production 
INFO[0000] Using dockerignore file: /workspace/.dockerignore 
INFO[0000] Retrieving image manifest node:18.5.0-alpine 
INFO[0000] Retrieving image node:18.5.0-alpine from registry index.docker.io 
INFO[0001] Retrieving image manifest node:18.5.0-alpine 
INFO[0001] Returning cached image manifest              
INFO[0001] Retrieving image manifest node:18.5.0-alpine 
INFO[0001] Returning cached image manifest              
INFO[0001] Retrieving image manifest node:18.5.0-alpine 
INFO[0001] Returning cached image manifest              
INFO[0001] Built cross stage deps: map[0:[/usr/src/app/package.json /usr/src/app/yarn.lock]] 
INFO[0001] Retrieving image manifest node:18.5.0-alpine 
INFO[0001] Returning cached image manifest              
INFO[0001] Retrieving image manifest node:18.5.0-alpine 
INFO[0001] Returning cached image manifest              
INFO[0001] Executing 0 build triggers                   
INFO[0001] Building stage 'node:18.5.0-alpine' [idx: '0', base-idx: '-1'] 
INFO[0001] Checking for cached layer censured/cache:4f724592ff1e3ec9e81c65ddc6f3b4cf1b9f5867bd692e5198aa7131b3b54173... 
INFO[0002] Using caching version of cmd: COPY package.json ./ 
INFO[0002] Checking for cached layer censured/cache:bd9087dabd5a7f0033eeac25e98337a2e6cc77868721a0d6b3e8ee89c4844d11... 
INFO[0004] Using caching version of cmd: COPY yarn.lock ./ 
INFO[0004] Checking for cached layer censured/cache:5203355a370391dcf369bf45ccb01e623f6d3ef22bf3c39216f57e9f6cfd5ee7... 
INFO[0005] Using caching version of cmd: RUN apk add --update --no-cache python3 make g++ 
INFO[0005] Checking for cached layer censured/cache:6d528bfcd39c50319e6f71cb170d20d04ebd229859f2829469a9c33ba7f81fe4... 
INFO[0006] Using caching version of cmd: RUN yarn install --frozen-lockfile --production=false 
INFO[0012] Initializing snapshotter ...                 
INFO[0012] Taking snapshot of full filesystem...        
INFO[0012] WORKDIR /usr/src/app                         
INFO[0012] Cmd: workdir                                 
INFO[0012] Changed working directory to /usr/src/app    
INFO[0012] Creating directory /usr/src/app with uid -1 and gid -1 
INFO[0012] Taking snapshot of files...                  
INFO[0012] COPY package.json ./                         
INFO[0012] Found cached layer, extracting to filesystem 
INFO[0013] COPY yarn.lock ./                            
INFO[0013] Found cached layer, extracting to filesystem 
INFO[0013] RUN apk add --update --no-cache python3 make g++ 
INFO[0013] Found cached layer, extracting to filesystem 
INFO[0022] RUN yarn install --frozen-lockfile --production=false 
INFO[0022] Found cached layer, extracting to filesystem 
INFO[0200] Storing source image from stage 0 at path /kaniko/stages/0
INFO[0383] Saving file usr/src/app/package.json for later use 
INFO[0383] Saving file usr/src/app/yarn.lock for later use 
INFO[0383] Deleting filesystem...                       
INFO[0390] Base image from previous stage 0 found, using saved tar at path /kaniko/stages/0 
...

As you can see, it takes 390s to build a stage that should not even need to be rebuilt. How can we skip that? I am testing with this basic script for kaniko:

#!/bin/sh
# This command requires kaniko-secret.json, which must contain a Google service account to be able to upload
# images to the gcr image repository
docker run --rm \
    -v "$HOME"/.config/gcloud:/root/.config/gcloud \
    -v "$(pwd)":/workspace \
    -v /tmp:/cache \
    -v "$(pwd)"/kaniko-secret.json:/kaniko/config.json:ro \
    -e GOOGLE_APPLICATION_CREDENTIALS=/kaniko/config.json \
    gcr.io/kaniko-project/executor:debug \
    --dockerfile /workspace/Dockerfile \
    --destination <censured> \
    --context dir:///workspace/ \
    --cache=true --cache-copy-layers --cache-run-layers --cache-dir=/cache \
    --use-new-run --compressed-caching=false --snapshot-mode=redo \
    --ignore-path=/kaniko/stages/*

--ignore-path did not fix the issue as I thought from this issue: https://github.com/GoogleContainerTools/kaniko/issues/2509

Is there any way to skip this first stage entirely when nothing has changed in the package.json or yarn.lock files?
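
One idea (a sketch of a workaround, not something kaniko provides out of the box): publish the dependencies stage as its own image keyed on a hash of package.json and yarn.lock, build it with --target only when that tag is missing, and have the downstream Dockerfile start FROM that prebuilt tag. The registry name below is a placeholder and --target should be verified against your executor version:

#!/bin/sh
# Hypothetical sketch: skip rebuilding the "dependencies" stage when lockfiles are unchanged.
DEPS_TAG="$(cat package.json yarn.lock | sha256sum | cut -c1-16)"

if ! docker manifest inspect "<registry>/deps:${DEPS_TAG}" >/dev/null 2>&1; then
    docker run --rm -v "$(pwd)":/workspace \
        gcr.io/kaniko-project/executor:debug \
        --context dir:///workspace \
        --dockerfile /workspace/Dockerfile \
        --target dependencies \
        --destination "<registry>/deps:${DEPS_TAG}" \
        --cache=true
fi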

rashmigspai commented 5 months ago

@tejal29 Running into the same issue here, but beyond build time we are hit badly by failures with this error: Unpacking rootfs as cmd RUN mkdir -p $RTI && chown -R 1001:0 $RTI && chown -R 1001:1001 <dir> && chmod -Rf 777 <dir> requires it. error building image: error building stage: failed to get filesystem from image: stream error: stream ID 17; NO_ERROR; received from peer

Can you please suggest why we are getting this error? Is it because of a network issue? We tried --use-new-run and --snapshot-mode=redo, but neither of them helped. Any suggestion on how to overcome this, or what we can do, is highly appreciated, as it is completely blocking us with failures.
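
In case it helps while waiting for an answer: the failure happens while pulling and extracting the base image, so a transient registry or network error seems plausible, and newer kaniko releases expose retry flags for this. A heavily hedged sketch follows; --image-fs-extract-retry and --push-retry are assumptions about what your release supports, so confirm them with executor --help first, and the destination is a placeholder:

# Sketch only: retry base-image layer extraction and the final push a few times.
/kaniko/executor \
    --context dir:///workspace \
    --dockerfile /workspace/Dockerfile \
    --destination <registry>/<image>:<tag> \
    --snapshot-mode=redo --use-new-run \
    --image-fs-extract-retry=3 \
    --push-retry=3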