Leon-Sander / Local-Multimodal-AI-Chat

GNU General Public License v3.0

Embedding Model #14

Closed hengyjj closed 4 days ago

hengyjj commented 6 months ago

Hi. Is it possible to download the embedding model locally? And how do I tweak the code to use the embedding model from a local path? I am currently on a work laptop with network restrictions, so I have to download the embedding model first and use it offline. Thank you.

Leon-Sander commented 6 months ago

I think there should be a way to do that. Usually all models are downloaded into your cache and then loaded from there. When working with the transformers library directly, you could use from_pretrained to load from a local model path, but I'm not sure how it works with the langchain HuggingFaceInstructEmbeddings class I used for this repository. Check this out: https://huggingface.co/docs/huggingface_hub/guides/manage-cache
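Something along these lines might work, though I haven't tested it. A minimal sketch, assuming the model files (for example the class's default hkunlp/instructor-large) were already downloaded and copied into a local folder; the import path differs between langchain versions (langchain.embeddings in older releases, langchain_community.embeddings in newer ones), and the folder name here is just an example:

```python
# Sketch: point the embedding class at a local directory instead of a hub id.
# Assumes ./models/instructor-large already contains the downloaded model files.
from langchain_community.embeddings import HuggingFaceInstructEmbeddings

embeddings = HuggingFaceInstructEmbeddings(
    model_name="./models/instructor-large",  # local path instead of "hkunlp/instructor-large"
    model_kwargs={"device": "cpu"},
)

vector = embeddings.embed_query("test sentence")
print(len(vector))  # should print the embedding dimension if loading worked
```

If I remember correctly the class also exposes a cache_folder argument, and setting the environment variable HF_HUB_OFFLINE=1 tells huggingface_hub to stop trying to reach the network once the cache is populated.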

B7-9414 commented 4 months ago

@hengyjj Here are the models you can download and add to the models folder: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.1-GGUF

  1. mistral-7b-instruct-v0.1.Q3_K_M.gguf
  2. mistral-7b-instruct-v0.1.Q5_K_M.gguf

Here are the models you can download and add to the llava folder inside the 'models' folder (a download sketch follows this list):

  1. https://huggingface.co/mys/ggml_llava-v1.5-13b/blob/main/ggml-model-q5_k.gguf
  2. https://huggingface.co/mys/ggml_llava-v1.5-7b/blob/main/mmproj-model-f16.gguf
  3. https://huggingface.co/NousResearch/Nous-Capybara-7B-V1-GGUF/blob/e6263e5fabbdcd2d682364c66ecf54b65f25aa39/ggml-model-Q4_K.gguf
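If the network restrictions also block the download pages, here is a rough sketch for fetching the files on an unrestricted machine and copying them over, using huggingface_hub (the repo_id/filename pairs come from the links above; the target folders just mirror the layout described in this comment):

```python
# Sketch: pre-download GGUF files into the expected models/ layout.
# Run on a machine with network access, then copy the folders to the work laptop.
from huggingface_hub import hf_hub_download

# Chat model -> models/
hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF",
    filename="mistral-7b-instruct-v0.1.Q5_K_M.gguf",
    local_dir="models",
)

# Llava projector -> models/llava/
hf_hub_download(
    repo_id="mys/ggml_llava-v1.5-7b",
    filename="mmproj-model-f16.gguf",
    local_dir="models/llava",
)
```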

I also recommend watching @Leon-Sander's videos.