pytorch / torchchat

Run PyTorch LLMs locally on servers, desktop and mobile

cache executorch builds on runners #617

Open mikekgfb opened 4 months ago

mikekgfb commented 4 months ago

As suggested by @malfet

mikekgfb commented 4 months ago

This is the GitHub explanation for caches: https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#cache-hits-and-misses

but we don't have an example of how the cache is populated.
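For reference, a minimal sketch of making the population step explicit with the split restore/save cache actions. The paths, step ids, key ingredients, and build script name below are assumptions for illustration, not torchchat's actual workflow:

```yaml
# Hypothetical sketch: split restore/save so the population step is visible.
- name: Restore ExecuTorch build cache
  id: et-cache
  uses: actions/cache/restore@v4
  with:
    path: et-build/                      # assumed location of ExecuTorch build artifacts
    key: executorch-${{ runner.os }}-${{ hashFiles('**/et-pin.txt') }}  # pin file name assumed

- name: Build ExecuTorch (cache miss only)
  if: steps.et-cache.outputs.cache-hit != 'true'
  run: ./scripts/build_executorch.sh     # hypothetical build script

- name: Save ExecuTorch build cache
  if: steps.et-cache.outputs.cache-hit != 'true'
  uses: actions/cache/save@v4
  with:
    path: et-build/
    key: ${{ steps.et-cache.outputs.cache-primary-key }}
```

With the split actions, the save is an ordinary step, so it is easy to see exactly when, and with which key, the cache gets written.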

metascroy commented 4 months ago

Summarizing offline convo:

cc @dbort @larryliu0820

larryliu0820 commented 4 months ago

Short term:

Long term:

mikekgfb commented 4 months ago

It seems straightforward: indicate what the cache should cache and how to derive a cache key (I guess so we can detect changes; the git revision and the runner's hardware/OS, probably). The cache will then just make the files appear if it has them, and you can check whether the cache restore was successful, or else build from scratch.

No explicit save is required; the workflow captures all the files under the cached paths and pushes them into the cache when the run finishes.
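Concretely, with the combined actions/cache step the save happens in the action's post-job hook, so the workflow only declares the paths and the key and checks cache-hit. The path, key ingredients, and build script below are assumptions for illustration, not torchchat's actual setup:

```yaml
# Hypothetical sketch: single cache step; the save runs automatically post-job.
- name: Cache ExecuTorch build
  id: et-cache
  uses: actions/cache@v4
  with:
    path: et-build/                       # assumed build output directory
    # Key from the runner's OS/arch plus the pinned ExecuTorch revision (pin file name assumed)
    key: executorch-${{ runner.os }}-${{ runner.arch }}-${{ hashFiles('**/et-pin.txt') }}

- name: Build ExecuTorch on cache miss
  if: steps.et-cache.outputs.cache-hit != 'true'
  run: ./scripts/build_executorch.sh      # hypothetical build script
```

If the key misses, the build step runs and the post-job save pushes et-build/ into the cache for the next run; if it hits, the build step is skipped.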