Open · nathan-c opened this issue 5 years ago
I should probably mention that this means we cannot use `--reproducible` for our builds: the containers running the build are being killed by k8s on our dev cluster, and increasing node RAM capacity is not a cost-effective solution.
We are having this issue as well. We need to use reproducible builds so that our daily builds do not get added to our container registry if there is no "real" change to the docker image.
@Phylu Will this also be closed in the 1.7.0 release as of #1722?
I am not sure; I did not try this flag with my changes. If you are lucky, it might be fixed as well.
@zx96 do you plan to create a PR for the linked commit? It would save many people a lot of frustration if our image builds stopped crashing at the end of a build due to being killed.
I certainly can. I was planning on merging it shortly after I made the branch but ran into another bug (that also broke my use case) and got frustrated. I'll rebase my branch tomorrow and see if the other issue is still around - can't recall what issue number it was.
After rebasing my branch onto the latest main, it looks like I'm able to build an image with a 16GB test file in 512MB of memory with `--compressed-caching=false --zero-file-timestamps`! 🥳
With kaniko-project/executor:v1.19.2-debug, building the same image:

- with the `--reproducible` flag, the build took 5m 36s and used 7690 MB
- without the `--reproducible` flag, the build took 1m 38s and used 350 MB

Activating profiling (https://github.com/GoogleContainerTools/kaniko#kaniko-builds---profiling), I see a lot of inflate/deflate traces with the `--reproducible` flag:

kaniko.zip
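Those inflate/deflate traces line up with what `--reproducible` has to do: layers are stored gzip-compressed, so rewriting their contents forces a full decompress/recompress round trip per layer. A minimal standalone sketch of that round trip (my own illustration, not kaniko's actual code; `recompress` is a hypothetical name):

```go
package sketch

import (
	"bytes"
	"compress/gzip"
	"io"
)

// recompress illustrates why the profile fills with inflate/deflate:
// any per-file rewrite of a compressed layer forces a full
// decompress -> rewrite -> recompress round trip.
func recompress(compressedLayer io.Reader) (*bytes.Buffer, error) {
	zr, err := gzip.NewReader(compressedLayer) // shows up as "inflate"
	if err != nil {
		return nil, err
	}
	defer zr.Close()

	var out bytes.Buffer
	zw := gzip.NewWriter(&out) // shows up as "deflate"
	// The tar-header rewrite (zeroing ModTime etc.) would sit between
	// these two streams; even a near no-op rewrite pays both CPU costs.
	if _, err := io.Copy(zw, zr); err != nil {
		return nil, err
	}
	if err := zw.Close(); err != nil {
		return nil, err
	}
	return &out, nil
}
```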
Still causes problems
To make the code causing the problem a bit clearer:
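This is a simplified sketch of the pattern, assuming the timestamp rewrite in go-containerregistry's mutate package works roughly like this (`stripTimestampsEager` is an illustrative name, not the upstream function):

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
	"time"
)

// stripTimestampsEager mirrors the problematic pattern: the whole
// uncompressed layer is rewritten into a bytes.Buffer, so peak memory
// scales with the uncompressed size of the layer.
func stripTimestampsEager(layer io.Reader, t time.Time) (*bytes.Buffer, error) {
	var buf bytes.Buffer // the entire rewritten layer accumulates here
	tr := tar.NewReader(layer)
	tw := tar.NewWriter(&buf)
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		// Zeroing the times is what makes builds reproducible...
		hdr.ModTime = t
		hdr.AccessTime = t
		hdr.ChangeTime = t
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		// ...but every file body is copied into memory along the way.
		if _, err := io.Copy(tw, tr); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return &buf, nil
}

func main() {
	// Build a tiny in-memory layer tarball to exercise the function
	// (error handling elided for brevity).
	var src bytes.Buffer
	tw := tar.NewWriter(&src)
	data := []byte("hello")
	tw.WriteHeader(&tar.Header{Name: "hello.txt", Mode: 0o644, Size: int64(len(data)), ModTime: time.Now()})
	tw.Write(data)
	tw.Close()

	out, err := stripTimestampsEager(&src, time.Unix(0, 0))
	if err != nil {
		panic(err)
	}
	fmt.Printf("rewritten layer buffered in memory: %d bytes\n", out.Len())
}
```

With a 16GB test file in a layer, that buffer alone is 16GB of heap, which matches the OOM kills reported above.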
There are several options:
I'm having some trouble running kaniko in my dev setup, so it is hard to measure the gains from options 2-4, but until metadata is stripped out during the build, I think implementing them could save a lot of memory (at the cost of disk usage) and yield a small performance gain.
I've found a 5th option: submitting a PR to go-containerregistry to make the layerTime function a lazy transformation. Since the problem lies in a dependency, that may be the best way to achieve this result; a sketch of the idea follows.
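To make the lazy idea concrete, here is a hedged sketch (my own illustration, not the eventual PR): run the same tar rewrite through an io.Pipe so the stream is transformed as it is consumed, keeping memory flat regardless of layer size.

```go
package sketch

import (
	"archive/tar"
	"io"
	"time"
)

// stripTimestampsLazy is a hypothetical streaming counterpart to the
// eager version: the rewrite runs in a goroutine and the caller consumes
// the result through a pipe, so only small copy buffers are resident.
func stripTimestampsLazy(layer io.ReadCloser, t time.Time) io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		defer layer.Close()
		tr := tar.NewReader(layer)
		tw := tar.NewWriter(pw)
		for {
			hdr, err := tr.Next()
			if err == io.EOF {
				break
			}
			if err != nil {
				pw.CloseWithError(err)
				return
			}
			hdr.ModTime = t
			hdr.AccessTime = t
			hdr.ChangeTime = t
			if err := tw.WriteHeader(hdr); err != nil {
				pw.CloseWithError(err)
				return
			}
			if _, err := io.Copy(tw, tr); err != nil {
				pw.CloseWithError(err)
				return
			}
		}
		// Flush the tar footer, then close the pipe (EOF on success).
		pw.CloseWithError(tw.Close())
	}()
	return pr
}
```

The catch is that a real v1.Layer implementation also has to answer Digest, DiffID, and Size, which require reading the transformed stream at least once; that can still be done by hashing while streaming rather than buffering the whole layer.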
Actual behavior
When running with `--reproducible=true`, the entire image is loaded into memory.
Expected behavior
The memory profile should remain stable regardless of the size of the image being built.
To Reproduce
Steps to reproduce the behavior:
Additional Information
Image used: gcr.io/kaniko-project/executor@sha256:fe07c91e9342a097a7ee7bf90d0d8b49b285a725315758e92de03e9f5debbb5c