Closed: hengyjj closed this issue 4 days ago
I think there should be a way to do that. Usually, models are downloaded into your Hugging Face cache and then loaded from there. When working with the transformers library directly, you can pass a local model path to `from_pretrained`. I'm not sure how it works with the LangChain `HuggingFaceInstructEmbeddings` class I used for this repository, though. Check this out: https://huggingface.co/docs/huggingface_hub/guides/manage-cache
@hengyjj Here are the models you can download and add to the models folder: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF
Here are the models you can download and add to the `llava` folder inside the `models` folder:
I also recommend watching @Leon-Sander's videos.
Hi. Is it possible to download the embeddings model locally? And how do I tweak the code to load the embedding model from a local path? I'm currently on a work laptop with network restrictions, so I have to download the embeddings model first and then use it. Thank you.
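For the network restriction specifically, one common approach (assuming the stack uses `huggingface_hub`/`transformers` under the hood, as this repo appears to) is to download the model once on an unrestricted machine, copy it into the local cache or a models folder, and then force offline mode so no network calls are attempted. A minimal sketch:

```python
import os

# Set these BEFORE importing transformers / huggingface_hub:
# both libraries will then resolve models from the local cache only
# instead of trying to reach the hub.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"
```

These environment variables can also be set in the shell or in a `.env` file rather than in code.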