jgwest opened this issue 5 years ago
This bug is only about fixing the current `libertyDockerfile` to match the current template (so it should be a quick fix).
The other two bugs are about the more general problems of cache invalidation without regeneration and/or the slow Java download.
I am looking into this issue, and I've found that updating the cache image Dockerfile to match the template Dockerfile is more complicated than I initially thought, because each time we have to verify and update:
1. The cache image Dockerfile itself
2. The resources that the `COPY` or `ADD` commands in the Dockerfile use (e.g. `COPY /target/liberty/wlp/usr/servers/defaultServer /config/`, `ADD /artifacts/artifacts.tar.gz $HOME/artifacts`) — see the sketch below
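For illustration, a sketch of the duplication involved (the base image here is an assumption, not the actual Codewind cache Dockerfile):

```dockerfile
# Sketch only. The point: these instructions (and the resources they
# reference) must match the template Dockerfile exactly, otherwise the
# generated layers are not reusable as cache.
FROM websphere-liberty:microProfile
COPY /target/liberty/wlp/usr/servers/defaultServer /config/
ADD /artifacts/artifacts.tar.gz $HOME/artifacts
```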
So for the cache image, instead of building it from the hardcoded Dockerfile and resources, we can pull a pre-built image from Docker Hub (under the `eclipse` namespace) to serve as the cache image. Then, when we build projects, we add `--cache-from <cache image>` to the existing `docker build` command so that it uses the cache image to speed up the image build, as sketched below.
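Concretely, something like the following (a sketch; `eclipse/codewind-liberty-cache` and `my-liberty-project` are placeholder names, not confirmed repositories):

```sh
# Pull the pre-built cache image (~15s) instead of building it (~10+ min)...
docker pull eclipse/codewind-liberty-cache:latest

# ...then let the project build reuse its layers as a cache source.
docker build --cache-from eclipse/codewind-liberty-cache:latest \
  -t my-liberty-project .
```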
The advantages of this approach for the cache image are:
1. It only takes ~15 seconds to pull the cache image, instead of 10+ minutes to build it.
2. We no longer need to update the cache image Dockerfile and resources to match the template; the only thing we need to do is push the cache image to Docker Hub (see the sketch below).
3. We can easily apply this approach to other project types (Swift, Lagom, etc.) that take a long time to build, by pushing those project types' cache images to Docker Hub as well.
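For the "push the cache image to Docker Hub" step, a minimal sketch (same placeholder image name, with `libertyDockerfile` as the cache Dockerfile):

```sh
# Hypothetical publish step, run once per template change.
docker build -t eclipse/codewind-liberty-cache:latest -f libertyDockerfile .
docker push eclipse/codewind-liberty-cache:latest
```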
@rajivnathan @elsony I also verified two cases (both reproduced in the sketch below):
1. With `--cache-from <cache image>`, if the cache image is not present, the Docker engine builds the image from scratch without using a cache.
2. With `--cache-from <cache image>`, if the user updates the Dockerfile, the Docker engine still uses the cached layers from before the user's update.
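Both cases can be reproduced manually; a rough sketch, reusing the placeholder names from above:

```sh
# Case 1: remove the cache image; the build falls back to building from scratch.
docker rmi eclipse/codewind-liberty-cache:latest 2>/dev/null || true
docker build --cache-from eclipse/codewind-liberty-cache:latest -t my-liberty-project .

# Case 2: pull the cache image, edit the project Dockerfile, and rebuild;
# layers up to the first changed instruction still come from the cache.
docker pull eclipse/codewind-liberty-cache:latest
docker build --cache-from eclipse/codewind-liberty-cache:latest -t my-liberty-project .
```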
Currently buildah doesn't fully support `--cache-from <cache image>`, but they do have stories open in their repository:
Parent story: https://github.com/containers/buildah/issues/599
Child story: https://github.com/containers/buildah/issues/620
We will keep the current cache structure (building the cache from scratch) until buildah fully supports `--cache-from <cache image>`.
Codewind version: Latest, built from master.
OS: Confirmed on Windows and Linux.
Description:
The MicroProfile cache Dockerfile (https://github.com/eclipse/codewind/blob/master/src/pfe/file-watcher/dockerfiles/liberty/libertyDockerfile, I believe) that is used to build the cache currently differs from the actual MicroProfile template that is generated for the user.
This means that every time you create a new MicroProfile project, or disable/re-enable an existing project, you must wait 10-30 minutes for the container image to (re)build (with the vast majority of the time being the Java download, a separate issue).
From Jingfu:
I also highly recommend creating an automated test case that detects when the template and cache Dockerfiles are out of sync (see the sketch below); it seems we didn't catch this mismatch for a while, and it is bad product behaviour for normal users, with no obvious workaround.
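A simple version of such a test could diff the two files in CI and fail on any divergence; a sketch, where the template Dockerfile path is a guess that would need to be confirmed:

```sh
#!/bin/sh
# CI check: fail the build if the cache Dockerfile drifts from the template.
CACHE_DOCKERFILE=src/pfe/file-watcher/dockerfiles/liberty/libertyDockerfile
TEMPLATE_DOCKERFILE=path/to/microprofile/template/Dockerfile  # placeholder

if ! diff -u "$TEMPLATE_DOCKERFILE" "$CACHE_DOCKERFILE"; then
  echo "ERROR: cache Dockerfile is out of sync with the template Dockerfile" >&2
  exit 1
fi
```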
Steps to reproduce:
1) Stop all Codewind containers.
2) Clone, build, and run Codewind:
3) In the IDE (Eclipse, in my case): create a new project and wait for it to build and start. (The issue should be reproducible here; you can also proceed to the next step to confirm it also affects disable/re-enable.)
4) Disable or delete the project, then enable/recreate the project...