Open · jamie-wearsafe opened this issue 3 years ago
As in, at this part of the code, change it to this:
import * as exec from 'actions-exec-listener'
import * as core from '@actions/core'

export class ImageDetector {
  async getExistingImages(): Promise<string[]> {
    // Optional `filter` input, passed straight through to `docker image ls --filter=...`
    // (e.g. reference=my_custom_images_prefix*). Empty means "no filter".
    const _filter = core.getInput(`filter`)
    const filter = _filter ? `--filter=${_filter}` : ''

    const existingSet = new Set<string>([])
    const ids = (await exec.exec(`docker image ls -q ${filter}`, [], { silent: true, listeners: { stderr: console.warn }})).stdoutStr.split(`\n`).filter(id => id !== ``)
    // The extra .filter(arg => arg !== '') drops the empty arg that .split(' ') produces when no filter is set.
    const repotags = (await exec.exec(`docker`, `image ls --format {{.Repository}}:{{.Tag}} ${filter} --filter=dangling=false`.split(' ').filter(arg => arg !== ''), { silent: true, listeners: { stderr: console.warn }})).stdoutStr.split(`\n`).filter(id => id !== ``);
    core.debug(JSON.stringify({ log: "getExistingImages", ids, repotags }));

    ([...ids, ...repotags]).forEach(image => existingSet.add(image))
    core.debug(JSON.stringify({ existingSet }))
    return Array.from(existingSet)
  }

  // Everything that exists now but was not registered before the build is new and should be saved.
  async getImagesShouldSave(alreadRegisteredImages: string[]): Promise<string[]> {
    const resultSet = new Set(await this.getExistingImages())
    alreadRegisteredImages.forEach(image => resultSet.delete(image))
    return Array.from(resultSet)
  }
}
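For context, here is a minimal sketch of how the two methods above could be wired together by the surrounding action. This is an assumption about the call sites, not code from the fork: saveTar is a hypothetical helper, and in the action itself the save happens in the separate post step (the "Post job cleanup" phase shown later in this thread) rather than in a single function like this.

// Minimal usage sketch. Assumptions: ImageDetector is the class defined above,
// and saveTar is a hypothetical helper standing in for the real save logic.
declare function saveTar(images: string[]): Promise<void>

async function run(): Promise<void> {
  const detector = new ImageDetector()

  // Main step: record what already exists before the cache is restored, so
  // pre-pulled images (e.g. service containers) are never saved again.
  const alreadyRegistered = await detector.getExistingImages()

  // ... build steps run here, producing new images that match the filter ...

  // Post step: save only the images that appeared after the main step.
  const imagesToSave = await detector.getImagesShouldSave(alreadyRegistered)
  if (imagesToSave.length > 0) {
    await saveTar(imagesToSave) // e.g. docker save <ids> -o images.tar
  }
}

With a filter set, only matching images are considered at all, and anything that already existed before the build is additionally subtracted out by getImagesShouldSave, so pre-pulled service images like mysql never end up in the tarball.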
This would be cool. At the moment I'm caching many images that really don't need to be cached (like mysql etc.), but OTOH I really want to cache the images that are built locally (I'm running this via docker-compose).
Imagine this in Windows containers, it would be amazing! Have you tested it, @jamie-wearsafe? Would be nice to have a pull request if you've done it 🙏
I am using a fork where it's working in production.
@jamie-wearsafe Thanks! Using your fork for now, hopefully the filter input can be merged into this project.
For anyone curious about how to use this, here is an example:
- name: Cache docker images
  uses: Broadshield/action-docker-layer-caching@main
  continue-on-error: true
  with:
    filter: reference=my_custom_images_prefix*
Ahh sorry I didn't provide an example!
@jamie-wearsafe Great work, however I'm seeing this:
Post job cleanup.
/usr/bin/docker image ls --format={{.ID}} {{.Repository}}:{{.Tag}} --filter=dangling=false --filter=reference='openmined/*'
There is no image to save.
However, when using tunshell.com to debug the CI process, I can run that exact command and it outputs several hashes...
https://github.com/OpenMined/PySyft/runs/3511338358?check_suite_focus=true
@jamie-wearsafe your fork worked great for me. I needed to cache a single image to be restored by a job later in my workflow. Without the filter, the cache would load all images from previous runs (because they are uniquely tagged).
- name: Set Image Tag
  run: echo "GITHUB_SHA_SHORT=$(git rev-parse --short HEAD)" >> $GITHUB_ENV

- name: Cache Container Image
  uses: Broadshield/action-docker-layer-caching@main
  with:
    filter: reference=account/imagename:${{ env.GITHUB_SHA_SHORT }}
@jsirianni Nice to see :)
@madhavajay I know it's been a long time, but do you need that issue addressed?
Hi @jamie-wearsafe, yeah that would be great. I am currently just caching everything, and my guess is the cache is getting regularly evicted:
- name: docker cache
  uses: actions/cache@v3
  if: steps.changes.outputs.stack == 'true'
  continue-on-error: true
  with:
    path: .docker-cache
    key: ${{ runner.os }}-docker
    restore-keys: |
      ${{ runner.os }}-docker
@madhavajay so, can you try adjusting your filter for me?
- name: Cache OpenMined Container Image
  uses: Broadshield/action-docker-layer-caching@main
  with:
    filter: reference="*/openmined*:*"
    key: docker-layer-caching-${{ github.workflow }}-${{ github.head_ref || github.ref }}-${{ github.event_name }}-{hash}
    restore-keys: |
      docker-layer-caching-${{ github.workflow }}-${{ github.head_ref || github.ref }}-${{ github.event_name }}-{hash}
      docker-layer-caching-${{ github.workflow }}-${{ github.head_ref || github.ref }}-${{ github.event_name }}
      docker-layer-caching-${{ github.workflow }}-${{ github.head_ref || github.ref }}
      docker-layer-caching-${{ github.workflow }}
      docker-layer-caching-
@jamie-wearsafe I'm still seeing no cache. Could this be because of dockerx or something?
https://github.com/OpenMined/PySyft/runs/7245735124?check_suite_focus=true
@madhavajay there will be no cache until the run completes successfully the first time. Is this a consecutive run?
@jamie-wearsafe shouldn't that happen in the final post step:
Post job cleanup.
/usr/bin/docker image ls --format={{.ID}} {{.Repository}}:{{.Tag}} --filter=dangling=false --filter=reference="*/openmined*:*"
There is no image to save.
Is your feature request related to a problem? Please describe.
Not all images need caching, and they can take up valuable room. Some images are loaded before the cache is loaded, such as the images in services, so excluding those images would help reduce the size and speed up the save/load process.

Describe the solution you'd like
When doing

    docker save $(docker images -q)

you can provide a list of the image IDs to save. If we could provide a filter at that point, we would only be saving the images that we need: the filter would be applied at the docker images -q step, so instead of the full list of images on the runner we would get only the filtered list.
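As a rough illustration of the idea, the sketch below narrows the ID list with a --filter before it ever reaches docker save. This is an assumption-laden example, not the action's actual save code: it uses @actions/exec and @actions/core directly, and the filter input name and tar path are illustrative.

// Sketch: list only the images matching the filter, then save just those.
import * as exec from '@actions/exec'
import * as core from '@actions/core'

async function saveFilteredImages(tarPath: string): Promise<void> {
  const filter = core.getInput('filter') // e.g. reference=my_custom_images_prefix*
  const args = ['image', 'ls', '-q']
  if (filter) args.push(`--filter=${filter}`)

  // Collect the IDs of matching images.
  let stdout = ''
  await exec.exec('docker', args, {
    silent: true,
    listeners: { stdout: (data: Buffer) => { stdout += data.toString() } },
  })
  const ids = stdout.split('\n').filter(id => id !== '')

  if (ids.length === 0) {
    core.info('There is no image to save.')
    return
  }

  // Only the filtered images are written to the tarball, instead of every
  // image present on the runner.
  await exec.exec('docker', ['save', '-o', tarPath, ...ids])
}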