aurelio-labs / semantic-router

Superfast AI decision making and intelligent processing of multi-modal data.
https://www.aurelio.ai/semantic-router
MIT License

Adding an option to specify a local embedding model #312

Closed: martinlyubenov closed this issue 3 months ago

martinlyubenov commented 3 months ago

Hello everyone,

I am working on a project where I am running the semantic router on embedded hardware. All of my models, both LLMs and encoding models, reside locally, and I am already using the Hugging Face sentence-transformers encoder with a Chroma vector database. However, the semantic-router library does not seem to allow specifying a local encoding model when creating the encoder. For example, the HuggingFaceEncoder defaults to the name "sentence-transformers/all-MiniLM-L6-v2" and automatically tries to download that model into the cache as soon as the object is created. I would like to be able to specify the path to a local copy of the all-MiniLM-L6-v2 model instead, so that nothing is downloaded. I installed the library with "semantic-router[local]".
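
To illustrate the current behavior (a minimal sketch; the encoder name shown is the library's documented default):

```python
from semantic_router.encoders import HuggingFaceEncoder

# With the default name, creating the encoder downloads
# "sentence-transformers/all-MiniLM-L6-v2" from the Hugging Face Hub
# into the local cache the first time the object is constructed.
encoder = HuggingFaceEncoder(name="sentence-transformers/all-MiniLM-L6-v2")
```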

Thanks!

martinlyubenov commented 3 months ago

Never mind. I figured out that I can just pass the local path as the name, i.e. HuggingFaceEncoder(name=local_path). Sorry for the inconvenience.
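
In full, the workaround looks like this (a minimal sketch; the directory path is a placeholder for wherever the model files are saved locally):

```python
from semantic_router.encoders import HuggingFaceEncoder

# Point the encoder at a local copy of the model instead of a Hub name.
# This works because the underlying from_pretrained call in transformers
# accepts a local directory path as well as a Hub model ID.
local_path = "/models/all-MiniLM-L6-v2"  # placeholder: your local model dir
encoder = HuggingFaceEncoder(name=local_path)

# Encoders in semantic-router are callable on a list of documents.
embeddings = encoder(["hello world"])  # no download attempted
```

The local directory needs to contain the files that sentence-transformers or transformers would normally save (config, tokenizer, and weights), e.g. as produced by a prior save_pretrained call.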