ochorocho opened this issue 5 months ago
Our experience with trying to cache images in GitHub hasn't been very successful, largely because just saving them or extracting them takes more time than downloading them in the first place.
@rfay @ochorocho I was thinking a lot about how we could cache the pipeline stuff. For the moment, this is the biggest drawback of this approach.
I was thinking of a possible solution.
Is DDEV somehow capable of building and pushing a docker image to a registry?
If this were possible, we could somehow check whether the DDEV configuration has changed (maybe a salt.txt file or similar).
In the pipeline we would first start and build a DDEV container, then push the resulting docker image to the registry like this:
docker push $CI_REGISTRY_IMAGE:$DDEV_SALT
If the DDEV configuration did not change, we can just pull the image from the registry and run the ddev tests.
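A minimal sketch of that idea, assuming the salt is simply a hash of the project's `.ddev` directory and `$CI_REGISTRY_IMAGE` is provided by GitLab CI; the build step itself is hypothetical:

```bash
# Derive a cache key ("salt") from the project's DDEV configuration.
DDEV_SALT=$(find .ddev -type f -print0 | sort -z | xargs -0 sha256sum | sha256sum | cut -d' ' -f1)

if docker pull "$CI_REGISTRY_IMAGE:$DDEV_SALT"; then
  echo "DDEV configuration unchanged, reusing prebuilt image"
else
  # Hypothetical build step producing an image from the current DDEV setup
  docker build -t "$CI_REGISTRY_IMAGE:$DDEV_SALT" .
  docker push "$CI_REGISTRY_IMAGE:$DDEV_SALT"
fi
```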
Or are there other ideas for how to avoid the constant download of all Docker layers within the container?
DDEV's docker images are all in the hub.docker.com registry. That's the problem here: we don't have a way to efficiently store the images locally. Downloading them doesn't take long, but unpacking/extracting does. So caching locally (which isn't hard) still takes a long time, because extraction into the fresh local docker instance takes time even if the download is fast.
But isn't DDEV building a new docker image with `docker compose`?
Locally I profit from the fact that the layers are cached. I understand that downloading is as fast as getting them from the cache.
But I thought the resulting image from docker compose, which uses the DDEV docker images as a base, could be pushed to a registry and reused.
Or is there another mechanism that could speed up the process? Instead of running Docker-in-Docker, we build the final DDEV docker image, push it to the GitLab registry, and in the next step run the tests inside this prebuilt DDEV image.
Maybe this is also the wrong approach. Do you have another idea? We are using kaniko to build and push docker images, and I was imagining that this could be a way to save some resources in the pipeline.
DDEV adds a new layer to the image at start, but the key problem with testing is the (download and) extraction of the base image, especially ddev/ddev-webserver. The addition of the extra layers (for username, etc.) takes nearly no time. You can watch it yourself on a `ddev start`; the dots show the build time.
IMO the general problem is figuring out how to have the actual docker server persist state, so that images are ready there when they're needed.
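One rough sketch of what persisting that state could look like: if the CI runner starts its own Docker-in-Docker daemon, pointing its image store at a persistent volume would keep already-extracted layers around between jobs (the volume and container names here are just placeholders):

```bash
# Keep the daemon's /var/lib/docker on a named volume so extracted layers persist.
docker volume create ddev-ci-docker-cache
docker run -d --privileged --name ci-dind \
  -v ddev-ci-docker-cache:/var/lib/docker \
  docker:dind
```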
Ok, I understand. You are absolutely right. Unfortunately I lack the knowledge of how to do that. I'm not even sure it is technically possible.
But I agree that if the base image were already available, the process would be very fast.
If the images are stored in ddev itself and not in the DinD service image, we could certainly extend the image to contain them. Just dunno if these images vary depending on the service version in use.
I'd like to keep this image small if possible.
You point to … for some reason, but that was fixed ages ago, so `ddev debug download-images` does not require a project.
👍 my intention was to use `ddev debug download-images` to download the images and ship this image with the images pre-downloaded.
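Roughly, the warm-up such an image would need is sketched below; the tricky part is that `ddev debug download-images` needs a running Docker daemon, which is awkward during a normal image build, so this is only a sketch of the commands involved, not a working Dockerfile:

```bash
# A Docker daemon must be running so DDEV can pull and extract its base images
# into the image's /var/lib/docker; doing this inside `docker build` usually
# needs extra privileges or BuildKit tricks.
dockerd &
sleep 5
ddev debug download-images   # pulls ddev/ddev-webserver and the other base images
```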
Will be great to see how that comes out!
Great idea! That makes things faster for sure!
Caching the downloaded docker images on a per-project basis would make sense to speed up builds. Currently, all images are downloaded on each and every job run.
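A hedged sketch of such a per-project cache: key a warm-up step on the DDEV version plus the project's `.ddev/config.yaml`, and only run `ddev debug download-images` when that key changes. This only helps if the Docker daemon's image store itself persists between jobs; the cache path and key derivation are assumptions:

```bash
# Re-download the images only when the DDEV version or project config changes.
CACHE_KEY=$( (ddev --version; cat .ddev/config.yaml) | sha256sum | cut -d' ' -f1 )
if [ ! -f "/cache/ddev-images-$CACHE_KEY" ]; then
  ddev debug download-images
  touch "/cache/ddev-images-$CACHE_KEY"
fi
```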