Open ForestsKing opened 6 months ago
@ForestsKing typically, loading a model by its HF name should not add significant overhead. However, if you're facing issues with your connection, you might try downloading the model first and loading it from a local path. Here's how to do it:
You can download the model weights using `git lfs` as described here. Alternatively, when you first load a model from the Hub, its weights are cached under `~/.cache/huggingface/hub/models--<model-name>/snapshots/<commit-hash>/`. Here's an example path from my machine: `~/.cache/huggingface/hub/models--amazon--chronos-t5-small/snapshots/6cb0a414b8bc7ed3cfdcb7edac48a9778dd175f8/`. You can copy this directory to a more accessible location. Once the model is in a local directory (e.g., `./checkpoints/chronos-t5-small/`), you can load it as follows:
```python
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "./checkpoints/chronos-t5-small",
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)
```
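As an aside (not part of the original answer): if you are unsure which snapshot directory to copy, a small sketch like the following can locate it, assuming the default hub cache layout described above. The function name `find_snapshot` is just illustrative.

```python
from pathlib import Path


def find_snapshot(model_name: str, cache_dir: str = "~/.cache/huggingface/hub"):
    """Return the path of a cached snapshot directory for a Hub model, or None.

    The hub cache stores e.g. "amazon/chronos-t5-small" under
    models--amazon--chronos-t5-small/snapshots/<commit-hash>/.
    """
    repo_dir = Path(cache_dir).expanduser() / ("models--" + model_name.replace("/", "--"))
    snapshots_dir = repo_dir / "snapshots"
    if not snapshots_dir.is_dir():
        return None
    snapshots = sorted(snapshots_dir.iterdir())
    # If several commits are cached, this simply returns the last one alphabetically.
    return snapshots[-1] if snapshots else None
```

The returned path can then be copied somewhere convenient (e.g. with `shutil.copytree`) and passed to `from_pretrained` as in the snippet above.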
Thanks!
Leaving open as FAQ
The connection between my server and Hugging Face is not very reliable. I have already downloaded the model weights. I would like to know whether it is possible to avoid connecting to Hugging Face when calling Chronos, since the connection often takes a lot of time and may fail. Thanks!
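A general note that may help here (this is a Hugging Face feature, not something specific to Chronos, so treat the details as assumptions to verify): besides passing a local path as in the answer above, you can tell the HF libraries never to touch the network by setting the `HF_HUB_OFFLINE` environment variable before importing them:

```python
import os

# Assumption: setting HF_HUB_OFFLINE=1 before the Hugging Face libraries
# are imported makes huggingface_hub skip all network calls and fail fast
# if a file is not already in the local cache.
os.environ["HF_HUB_OFFLINE"] = "1"

# With the flag set, loading proceeds from disk only, e.g.:
# from chronos import ChronosPipeline
# pipeline = ChronosPipeline.from_pretrained("./checkpoints/chronos-t5-small")
```

Many `from_pretrained` implementations also accept `local_files_only=True` for the same effect on a per-call basis; whether `ChronosPipeline` forwards that keyword argument is something to check against the chronos source.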