Open MarcelCoding opened 3 years ago
@MarcelCoding
The cache can grow very quickly with large images, since old entries are not deleted.
Yes, you're right: at the moment caches are copied over the existing cache, so it keeps growing. Can you open an issue on the buildkit repo about that please? In the meantime you can do this:
```yaml
[...]
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
[...]
      - name: Build
        uses: docker/build-push-action@v2
        with:
          push: false
          tags: ${{ steps.prepare.outputs.image }}
          platforms: ${{ env.DOCKER_PLATFORMS }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache-new
          context: .
[...]
      - name: Move cache
        run:
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```
cc. @tonistiigi
@crazy-max That means that the `cache-to` option does not export the entire Docker cache, just the cache for the image currently being built?
@MarcelCoding
That means that the `cache-to` option does not export the entire Docker cache, just the cache for the image currently being built?
`cache-to` exports the build cache for the current image being built with BuildKit, yes. If you are implying that the cache can be shared between different images, this is not the case. See https://github.com/docker/build-push-action/issues/153#issuecomment-703182778.
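If several images are built in the same workflow with the local cache type, one way to keep their caches from overwriting each other is a separate cache directory and key per image. A minimal sketch; the app-a name, paths, and keys are illustrative only, not from this thread:

```yaml
      - name: Cache Docker layers (app-a)
        uses: actions/cache@v2
        with:
          # a dedicated directory and key per image so caches don't overwrite each other
          path: /tmp/.buildx-cache-app-a
          key: ${{ runner.os }}-buildx-app-a-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-app-a-

      - name: Build app-a
        uses: docker/build-push-action@v2
        with:
          context: ./app-a
          push: false
          cache-from: type=local,src=/tmp/.buildx-cache-app-a
          cache-to: type=local,dest=/tmp/.buildx-cache-app-a-new
```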
OK, thanks for the help. I will create an issue in the buildkit repo for implementing an option to clean the cache / remove old versions of the cache.
Hi @crazy-max,
Can I use the BuildKit registry cache with a local registry run as a GitHub service container, and then cache the registry volume via the GitHub cache? After each run, I would just prune the cache in the local registry to keep it as small as possible.
However, I'm not sure whether GitHub can create a volume for a service container and whether your GitHub action supports that (I only see the inline caching option).
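For what it's worth, a rough sketch of what a registry-backed cache with a local registry service container could look like; the registry:2 service, port 5000, the network=host driver opt, and the myapp:buildcache reference are all assumptions for illustration, not something confirmed in this thread:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    services:
      # hypothetical local registry run as a service container
      registry:
        image: registry:2
        ports:
          - 5000:5000
    steps:
      - uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          # host networking so the BuildKit container can reach localhost:5000
          driver-opts: network=host
      - name: Build
        uses: docker/build-push-action@v2
        with:
          context: .
          # push/pull the build cache to/from the local registry
          cache-from: type=registry,ref=localhost:5000/myapp:buildcache
          cache-to: type=registry,ref=localhost:5000/myapp:buildcache,mode=max
```

Persisting the registry's storage between runs (e.g. via actions/cache) is exactly the part this sketch does not cover, as the question above notes.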
https://github.com/docker/buildx/pull/535 should fix this and make using github cache a breeze:
```yaml
[...]
      - name: Build
        uses: docker/build-push-action@v2
        with:
          tags: user/app:latest
          cache-from: type=gha
          cache-to: type=gha
```
I used it for caching https://github.com/zero88/gh-registry/blob/main/README.md#usage
@malobre How do I fix this?
```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@master
        with:
          install: true
```
```
moby/buildkit:buildx-stable-1 => buildkitd github.com/moby/buildkit v0.8.3 81c2cbd8a418918d62b71e347a00034189eea455
error: failed to solve: rpc error: code = Unknown desc = unknown cache exporter: "gha"
Error: buildx call failed with: error: failed to solve: rpc error: code = Unknown desc = unknown cache exporter: "gha"
```
The server side is not implemented yet: https://github.com/moby/buildkit/pull/1974.
FWIW, there's a syntax error in the code block below. There is a missing "|" that took me a while to find and correct.
This should be the correct invocation:
```yaml
      - name: Move cache
        run: |
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```
@MarcelCoding
The cache can grow very quickly with large images, since old entries are not deleted.
Yes, you're right: at the moment caches are copied over the existing cache, so it keeps growing. Can you open an issue on the buildkit repo about that please? In the meantime you can do this:
```yaml
[...]
      - name: Move cache
        run:
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
```
cc. @tonistiigi
Given that moby/buildkit#1974 was just merged, what does the timeline look like for using the cache type=gha in build-push-action?
You can test the gha cache exporter using this workflow while waiting for buildx 0.6 and BuildKit 0.9 to be GA. Feel free to give us your feedback, thanks!
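At the time, that test setup boiled down to roughly the following: building buildx from its master branch and pointing the builder at a BuildKit master image. The exact refs below are assumptions based on later comments in this thread:

```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          # build buildx from source until 0.6 is released (ref is an assumption)
          version: https://github.com/docker/buildx.git#master
          # BuildKit image that already ships the gha cache backend
          driver-opts: image=moby/buildkit:master
          buildkitd-flags: --debug

      - name: Build
        uses: docker/build-push-action@v2
        with:
          context: .
          cache-from: type=gha
          cache-to: type=gha
```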
I just tried this out on one of my workflows. See https://github.com/jauderho/dockerfiles/actions/workflows/cloudflared.yml
Looks like the "Setup Buildx" step now takes longer but I'm assuming that's due to the rest not yet being merged in. More importantly, the "Build and push" step looks to be much faster.
Nice job!
@jauderho
Looks like the "Setup Buildx" step now takes longer but I'm assuming that's due to the rest not yet being merged in. More importantly, the "Build and push" step looks to be much faster.
Yes, that's it. buildx is built on the fly at the moment, which is why the setup step takes more time.
Awesome. Looking forward to everything being merged in.
@jauderho
Looks like the "Setup Buildx" step now takes longer
Buildx 0.6.0-rc1 has been released. I've updated the workflow to use it so now it should be faster than building from source.
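With the release candidate available, the setup step no longer needs to build buildx from source; a minimal sketch of pinning it (version value taken from this comment):

```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          # pinned release candidate instead of a source build
          version: v0.6.0-rc1
```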
Hmm, not sure if I am doing something wrong here, but after updating to buildx 0.6.0-rc1 it does not seem to trigger the caching.
Here is my action: https://github.com/jauderho/dockerfiles/blob/main/.github/workflows/cloudflared.yml
With buildx 0.6.0-rc1
Compare this to 2 days ago (which has the expected behavior)
@jauderho
Use `image=moby/buildkit:master` instead of `image=moby/buildkit:v0.9.0-rc1`.
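In context, that value goes into the driver-opts input of the setup step; a minimal sketch:

```yaml
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          version: v0.6.0-rc1
          # run the builder with a BuildKit master image instead of v0.9.0-rc1
          driver-opts: image=moby/buildkit:master
```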
@crazy-max
Per your suggestion, I updated to use `image=moby/buildkit:master`, but it does not appear to make a difference.
@jauderho
As you're using a monorepo that builds multiple Docker images with different contexts, you should use a specific scope for each one of them to avoid cache collisions/invalidation. For example, in your cloudflared workflow:
```yaml
          cache-from: type=gha,scope=cloudflared
          cache-to: type=gha,scope=cloudflared
```
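Spelled out, that means each image's build step in the monorepo gets its own scope; a rough sketch where the second image name and the contexts are purely illustrative:

```yaml
      - name: Build and push cloudflared
        uses: docker/build-push-action@v2
        with:
          context: ./cloudflared
          push: true
          # per-image scope so caches from different images don't collide
          cache-from: type=gha,scope=cloudflared
          cache-to: type=gha,scope=cloudflared

      - name: Build and push other-image
        uses: docker/build-push-action@v2
        with:
          context: ./other-image
          push: true
          cache-from: type=gha,scope=other-image
          cache-to: type=gha,scope=other-image
```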
I decided to use the workflow name for the scope and it appears to work nicely.
```yaml
          cache-from: type=gha, scope=${{ github.workflow }}
          cache-to: type=gha, scope=${{ github.workflow }}
```
It's blazing fast now @ 44s!
https://github.com/jauderho/dockerfiles/actions/runs/1035582157
@crazy-max
Now that the new BuildKit and buildx are released, can I revert to just:
```yaml
uses: docker/setup-buildx-action@v1
with:
  version: v0.6.0-rc1
  driver-opts: image=moby/buildkit:master
  buildkitd-flags: --debug
```
Or do I switch to:
```yaml
uses: docker/setup-buildx-action@v1
with:
  version: v0.6.0
  driver-opts: image=moby/buildkit:v0.9.0
```
@jauderho GitHub virtual environments don't have buildx 0.6.0 yet (0.5.1 at the moment), so you have to explicitly specify the buildx version in your workflow. On the other hand, the default BuildKit image is up to date (0.9.0), so the following step should be enough:
```yaml
uses: docker/setup-buildx-action@v1
with:
  version: v0.6.0
```
Does this mean I can remove my step for the local cache setup and just use:
```yaml
      - name: Checkout repo
        uses: actions/checkout@v2

      - name: Prepare docker image name
        id: image_names
        run: |
          IMAGES="${GITHUB_REPOSITORY/docker-/},ghcr.io/${GITHUB_REPOSITORY/docker-/}"
          echo ::set-output name=images::${IMAGES}

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: ${{ steps.image_names.outputs.images }}
          tags: |
            type=ref,event=tag

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          platforms: all

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      #- name: Set up build cache
      #  uses: actions/cache@v2
      #  with:
      #    path: /tmp/.buildx-cache
      #    key: ${{ runner.os }}-buildx-${{ github.sha }}
      #    restore-keys: |
      #      ${{ runner.os }}-buildx-

      - name: Login to GitHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GHCR_TOKEN }}

      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_TOKEN }}

      - name: Build image
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          builder: ${{ steps.buildx.outputs.name }}
          push: true
          platforms: linux/amd64,linux/arm/v7,linux/arm64
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          #cache-from: type=local,src=/tmp/.buildx-cache
          cache-from: type=gha, scope=${{ github.workflow }}
          cache-to: type=gha, scope=${{ github.workflow }}
          #cache-to: type=local,dest=/tmp/.buildx-cache
```
@ksurl
That's correct.
works great. this can be closed probably
works great. this can be closed probably
This issue is about the local cache, so we keep it open until moby/buildkit#1896 is fixed. Thanks.
https://github.com/docker/build-push-action/issues/756 mentions a comment where the "old" way is used. I switched to cache-from/cache-to with type=gha, BUT if the whole job fails (or the build is not entirely successful), the cache does not seem to be used.
In that regard, it still seems suboptimal, or am I potentially doing something wrong?
Description
The cache can grow very quickly with large images, since old entries are not deleted.
Configuration
Logs
logs_37.zip
My solution
Add a `clean-cache` configuration option that runs the following command before exporting the layers:
```
docker system prune -f --filter "until=5h"
```
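Until such an option exists, a workaround is an extra workflow step that cleans the cache before it is exported/saved. A sketch only; the buildx prune alternative and the size threshold are assumptions, not taken from this issue:

```yaml
      - name: Clean old build cache
        run: |
          # cleanup proposed in this issue: drop entries older than 5 hours
          docker system prune -f --filter "until=5h"
          # possible alternative for the docker-container driver (threshold is an arbitrary example)
          docker buildx prune --force --keep-storage 2gb
```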