Closed keith-turner closed 2 years ago
Did you mean to target this against the next-release branch? Also, did you look at Docker multi-stage builds? You might only need to change the existing Dockerfile to achieve what you want.
> Did you mean to target this against the next-release branch?
Oh I forgot about that branch, yeah I probably want to target against it.
> Also, did you look at Docker multi-stage builds?
No I didn't, I will take a look at that.
I'm not sure you even need a multi-stage build. Typically, as I've seen anyway, you would do a multi-stage build if you need to install a lot of dependencies or need a different base image to build some software, but then just want to use the build output in your final image.
Docker images are already layered, and the general advice is to minimize the number of commands in the Dockerfile that add layers, since extra layers make the image larger. However, combining everything into a small number of layers means that any change causes those layers to be rebuilt. You can see that with the single RUN command in the Dockerfile: since the hadoop/zookeeper/accumulo download and the accumulo native build are all combined into a single layer, rebuilding accumulo means everything must be repeated.

Simply splitting the RUN command into multiple RUN commands might achieve what you want. The earlier RUN commands should contain the things you don't expect to change often (e.g., downloading and installing hadoop and zookeeper), and the later RUN commands should contain the things you want to iterate on.
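A rough sketch of that split (the base image, versions, URLs, and paths here are illustrative placeholders, not taken from the actual Dockerfile):

```dockerfile
# Sketch only: base image, versions, and paths are illustrative
FROM eclipse-temurin:17-jdk

ARG HADOOP_VERSION=3.3.6
ARG ZOOKEEPER_VERSION=3.9.1

# Early layers: stable dependencies that rarely change, so they stay cached
RUN wget -qO- "https://archive.apache.org/dist/hadoop/common/hadoop-${HADOOP_VERSION}/hadoop-${HADOOP_VERSION}.tar.gz" \
      | tar -xz -C /opt
RUN wget -qO- "https://archive.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz" \
      | tar -xz -C /opt

# Later layers: the part being iterated on; only these rebuild when accumulo changes
COPY accumulo.tar.gz /tmp/
RUN tar -xzf /tmp/accumulo.tar.gz -C /opt && rm /tmp/accumulo.tar.gz
```

With this ordering, rebuilding after a new accumulo tarball reuses the cached hadoop and zookeeper layers and only reruns the last two steps.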
> Simply splitting the RUN command into multiple run commands might achieve what you want.
That might work. I was wondering how Docker decides that it can reuse a cached layer; I looked and found the following.
https://medium.com/swlh/docker-caching-introduction-to-docker-layers-84f20c48060a
https://stackoverflow.com/questions/60578670/why-does-docker-rebuild-all-layers-every-time-i-change-build-args
Based on those, it seems like it will really matter when accumulo is copied in. Going to try the following in the build file.
So hopefully, if only accumulo changes, then only steps 3 and 4 will rerun.
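For intuition, the caching behavior described in those links can be sketched with a simplified model; this is not Docker's actual implementation, and the instruction strings are illustrative. The idea is that a layer is reused when its parent layer, its instruction, and (for COPY/ADD) the checksum of the copied files are all unchanged:

```python
import hashlib

def layer_id(parent_id: str, instruction: str, content_hash: str = "") -> str:
    """Cache key for one Dockerfile instruction (simplified model)."""
    return hashlib.sha256(f"{parent_id}|{instruction}|{content_hash}".encode()).hexdigest()

def build(instructions):
    """Return the chain of layer ids for a list of (instruction, content_hash)."""
    ids, parent = [], "scratch"
    for instr, content in instructions:
        parent = layer_id(parent, instr, content)
        ids.append(parent)
    return ids

# Two builds where only the accumulo tarball (copied late) changes:
before = build([
    ("RUN install hadoop", ""),
    ("RUN install zookeeper", ""),
    ("COPY accumulo.tar.gz /tmp/", "hash-v1"),
    ("RUN install accumulo", ""),
])
after = build([
    ("RUN install hadoop", ""),
    ("RUN install zookeeper", ""),
    ("COPY accumulo.tar.gz /tmp/", "hash-v2"),
    ("RUN install accumulo", ""),
])

# The first two layers keep their ids (cache hit); the COPY layer and
# everything after it get new ids (cache miss), because each later layer's
# id depends on its parent.
assert before[:2] == after[:2]
assert before[2:] != after[2:]
```

This also shows why ordering matters: anything placed before the COPY keeps its cache, and anything after it rebuilds.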
Yes, my apologies; I should have been more specific. The order of the steps in the layers is critical, and not just RUN commands create a layer: COPY and ADD commands do as well, so their location within the file matters. I believe the order of steps you list should work fine.
replaced by #22
This PR is intentionally incomplete, as I am seeking to address a problem I see, but I am not sure this is the best approach.
In the past, when testing compactors and scan servers running in Kubernetes, I would go through the following process.
Step 4 above takes multiple minutes and creates a 2GB image. Because the image is so large, steps 5 and 6 take a while as the image is uploaded to and then downloaded from the repo. This PR works around these problems by doing the following.
With the above changes I can have the following workflow.
Step 5 above takes a few seconds (vs. a few minutes) and produces a new image where the layers on top of accumulo-base are only ~30MB (you can see this with the `docker history` command). The first time steps 6 and 7 run, the large accumulo-base image will have to be uploaded and downloaded. However, on subsequent runs of steps 6 and 7, only ~30MB needs to be uploaded and downloaded, making those steps much faster.
This is a huge improvement for what I am trying to do. I did just enough work to get this functioning. Before updating the README, improving the Dockerfile, and improving the download script, I would like to see if anyone has feedback.