Closed davidwallacejackson closed 3 years ago
Hey @davidwallacejackson, I believe this would be a good feature and is doable. We should do this for both the tarball and S3 backends. I'll start working on it ASAP.
Personal note: I think the same. Since my packages are a bit large, the newest builds rebuild the packages on every change, which takes around 5 minutes. With this feature, incremental builds would take only a few seconds.
Brilliant, thank you so much!
I've also added new cache key templates:

- `{{ git.branch }}`: prints the current build's git branch
- `{{ git.commit }}`: prints the current build's git commit SHA

See the updated README/CHANGELOG on the `restore-keys` branch for all changes coming in v2.4.0.
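For illustration, a pipeline step combining these templates with restore keys might look roughly like this (the plugin reference, version, and exact field names here are assumptions based on this thread, not verified syntax):

```yaml
steps:
  - command: yarn install && yarn build
    plugins:
      - cache#v2.4.0:
          backend: s3
          # Exact key per branch+commit; restore keys fall back by prefix.
          key: "node-cache-{{ git.branch }}-{{ git.commit }}"
          restore-keys:
            - "node-cache-{{ git.branch }}-"
            - "node-cache-master-"
          paths:
            - node_modules
```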
Please note that if there is more than one tarball on disk or in S3 matching a restore key, only the most recent one (by LastModified) will be used.
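That "newest wins" rule could be sketched like this (a standalone illustration with made-up keys, not the plugin's actual code; the object shape mirrors what an S3 list-objects call returns):

```python
from datetime import datetime, timezone

def newest_matching(objects, prefix):
    """Return the key of the most recently modified object whose
    key starts with `prefix`, or None if nothing matches."""
    matches = [o for o in objects if o["Key"].startswith(prefix)]
    if not matches:
        return None
    return max(matches, key=lambda o: o["LastModified"])["Key"]

# Each entry carries Key and LastModified, like an S3 listing.
objects = [
    {"Key": "node-cache-master-111.tar",
     "LastModified": datetime(2021, 1, 1, tzinfo=timezone.utc)},
    {"Key": "node-cache-master-222.tar",
     "LastModified": datetime(2021, 3, 1, tzinfo=timezone.utc)},
    {"Key": "node-cache-feature-333.tar",
     "LastModified": datetime(2021, 2, 1, tzinfo=timezone.utc)},
]
print(newest_matching(objects, "node-cache-master-"))  # → node-cache-master-222.tar
```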
Would it be possible to add support for multiple restore keys with priority order? I'm thinking about something along the lines of `restore-keys` in GitHub's cache Action: https://github.com/actions/cache/blob/main/examples.md#macos-and-ubuntu

So in the linked example, in the event of a cache miss on `${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}`, the runner will instead restore the newest cache with the prefix `${{ runner.os }}-node-`. That helps when you're iterating on new dependencies, but the really killer feature it enables is incremental build caching: by backing up, say, `node_modules/.cache` with a key of `node-cache-${GIT_BRANCH}-${COMMIT_SHA}`, and then using `node-cache-${GIT_BRANCH}-` and then `node-cache-master-` as restore keys, you can make sure that every build starts from the most relevant cache available.

For the filesystem-backed cache backends, I'd think this could be accomplished pretty easily by `ls`ing on a glob. For S3 and the future Google Cloud backend, I'm pretty sure those bucket stores both have "list by prefix" APIs. If I were handier with Bash I'd just make a PR for it, but I figured I'd at least ask. Thanks for making this plugin available for all of us!
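A rough sketch of that lookup for a filesystem backend (hypothetical paths and key names; the real plugin would use its own storage layout): try each restore key in priority order, and within a key pick the newest match by modification time.

```python
import glob
import os
import tempfile

def restore(cache_dir, restore_keys):
    """Try each restore-key prefix in priority order; within a prefix,
    return the most recently modified matching tarball (or None)."""
    for key in restore_keys:
        matches = glob.glob(os.path.join(cache_dir, key + "*"))
        if matches:
            return max(matches, key=os.path.getmtime)
    return None

# Demo with a throwaway directory and fake tarballs.
cache_dir = tempfile.mkdtemp()
for i, name in enumerate(["node-cache-master-aaa.tar",
                          "node-cache-master-bbb.tar"]):
    path = os.path.join(cache_dir, name)
    open(path, "w").close()
    os.utime(path, (1_000_000 + i, 1_000_000 + i))  # deterministic mtimes

# First prefix has no matches, so we fall back to the master prefix
# and take its newest tarball.
hit = restore(cache_dir, ["node-cache-feature-", "node-cache-master-"])
print(os.path.basename(hit))  # → node-cache-master-bbb.tar
```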