Open FrancescElies opened 1 month ago
Interesting idea. Would the cache be cleaned up on some sort of LRU basis?
That would be my first naive choice.
Interesting, I've been using `UV_CACHE_DIR` instead, pointing it at a path relative to the GitHub workspace, and cleaning the workspace at the end of a run with an `ACTIONS_RUNNER_HOOK_JOB_COMPLETED` hook (for self-hosted runners).
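As a rough illustration of that setup, a cleanup hook might look like the sketch below. The hook path and the `.uv-cache` directory name are hypothetical; only `UV_CACHE_DIR`, `GITHUB_WORKSPACE`, and `ACTIONS_RUNNER_HOOK_JOB_COMPLETED` come from the discussion above.

```shell
#!/usr/bin/env bash
# Sketch of the cleanup hook described above (paths are hypothetical).
#
# In the workflow, point uv's cache into the workspace:
#   env:
#     UV_CACHE_DIR: ${{ github.workspace }}/.uv-cache
#
# On the self-hosted runner, register this script in the runner's .env file:
#   ACTIONS_RUNNER_HOOK_JOB_COMPLETED=/opt/runner-hooks/cleanup.sh
set -euo pipefail

# The runner exports GITHUB_WORKSPACE for hook scripts; fall back to the
# current directory so the sketch also runs outside a runner.
workspace="${GITHUB_WORKSPACE:-$PWD}"

# Delete the per-run uv cache so it never accumulates across jobs.
rm -rf "${workspace:?}/.uv-cache"
```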
I think it makes sense if we want to include details on caching for self-hosted GitHub runners. I can take an initial stab at it.
> Interesting, I've been using `UV_CACHE_DIR` instead, pointing it at a path relative to the GitHub workspace, and cleaning the workspace at the end of a run with an `ACTIONS_RUNNER_HOOK_JOB_COMPLETED` hook (for self-hosted runners).
Doesn't cleaning the cache after each job defeat the purpose of using the cache? Wouldn't that be the same as running `uv pip install` with the `--no-cache` flag?
Currently we took the approach of randomly calling `uv cache clean` 10% of the time: 10% of the jobs are slower, but the cache won't grow wild.
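The "clean 10% of the time" approach might be sketched like this; the 1-in-10 threshold mirrors the figure above, everything else is an assumption about how such a CI step could be wired up.

```shell
#!/usr/bin/env bash
# Sketch (assumption, not the poster's exact script): clean the uv cache
# on roughly one CI run in ten, so the occasional job pays the cost of a
# cold cache but the cache never grows without bound.

# RANDOM is bash's builtin pseudo-random integer; RANDOM % 10 == 0 holds
# about 10% of the time.
if (( RANDOM % 10 == 0 )); then
  echo "unlucky run: wiping the uv cache"
  uv cache clean
fi
```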
What's non-obvious in my statement is that you could arguably persist the cache between runs using actions/cache, instead of relying on the runner to hold it for you and slowly fill up the disk. To clarify, I don't do that right now (saving it in actions/cache); it's just an idea.
Unless I missed it, there is currently no way to limit the cache size.
I think this would be valuable on self-hosted CI runners, where the cache can grow pretty big due to the need to support many Python versions and package downloads; we have seen it grow to ~40 GB.
E.g. sccache has a `SCCACHE_CACHE_SIZE` setting that caps the maximum size of its cache. At the moment, if the cache grows too big you run `uv cache clean` and start over, but with a cache size limit one wouldn't have to worry. What do you think?
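Until a built-in limit exists, one workaround might be a scheduled job that measures the cache and wipes it past a threshold. This is a sketch under assumptions: the 10 GB cap is an arbitrary example, and only `uv cache dir` and `uv cache clean` are real uv commands.

```shell
#!/usr/bin/env bash
# Sketch of a size-cap workaround (the 10 GB threshold is arbitrary).
set -euo pipefail

# Skip quietly on machines without uv, so the sketch is safe to run anywhere.
command -v uv >/dev/null 2>&1 || exit 0

max_kb=$((10 * 1024 * 1024))                # 10 GB cap, in kilobytes
cache_dir="$(uv cache dir)"                 # uv prints its cache location
used_kb="$(du -sk "$cache_dir" | cut -f1)"  # measure it with du

# Wipe the cache only once it crosses the threshold, instead of letting
# it creep up to tens of gigabytes.
if [ "$used_kb" -gt "$max_kb" ]; then
  uv cache clean
fi
```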
PS: thanks for making pip installs great again