pytorch / torchchat

Run PyTorch LLMs locally on servers, desktop and mobile
BSD 3-Clause "New" or "Revised" License

Leverage the HF cache for models #992

Open byjlw opened 1 month ago

byjlw commented 1 month ago

🚀 The feature, motivation and pitch

torchchat currently uses the HF Hub, which has its own model cache, but torchchat copies models into its own model directory, so you end up with two copies of the same model.

We should leverage the HF Hub cache, but not force users to use that location if they're using their own models.
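
A minimal sketch of what this could look like, assuming downloads keep going through huggingface_hub; the `resolve_model_path` helper and the user-supplied `model_directory` override are hypothetical names for illustration, not existing torchchat APIs:

```python
from pathlib import Path
from typing import Optional

from huggingface_hub import snapshot_download


def resolve_model_path(repo_id: str, model_directory: Optional[str] = None) -> Path:
    """Return a local directory containing the model weights.

    If the user supplied their own model directory, use it as-is.
    Otherwise let huggingface_hub resolve the files through its own
    cache (~/.cache/huggingface/hub by default, or $HF_HOME), so no
    second copy is made in a torchchat-specific folder.
    """
    if model_directory is not None:
        return Path(model_directory)

    # snapshot_download returns the path to the cached snapshot directory
    # and skips the network entirely if the files are already cached.
    return Path(snapshot_download(repo_id=repo_id))


# Example (repo id shown only for illustration):
# model_dir = resolve_model_path("meta-llama/Meta-Llama-3-8B-Instruct")
```

Callers would then read weights directly from the returned cache path instead of copying them into a separate torchchat model directory.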

Alternatives

No response

Additional context

From r/localllama: "One annoying thing is that it uses huggingface_hub for downloading but doesn't use the HF cache - it uses its own .torchtune folder to store models, so you just end up having double of full models (grr). Just use the default HF cache location."

RFC (Optional)

No response

orionr commented 1 month ago

Great job bringing these back as issues! Is this also a problem with torchtune, given that we're using .torchtune for this? cc @kartikayk