OmarShehata / semantic-embedding-template


Use with LM Studio #2

Closed by Trippnology 2 weeks ago

Trippnology commented 3 weeks ago

Thanks for this awesome repo that clearly demonstrates how to work with embeddings!

This pull request adds an extra demo showing how to do the same thing via LM Studio, using the same nomic-ai/nomic-embed-text-v1.5-GGUF model.

  1. Load the model in LM Studio.
  2. Start the local server.
  3. Profit!

[screenshot: LM Studio local server]

OmarShehata commented 3 weeks ago

this looks so cool @Trippnology !! I think maybe we should add a little README snippet to explain this example? Either in the root README.md or under example-lmstudio-embedding/README.md.

Basically, it'll all work as long as you're running LM Studio, which hosts its server at http://localhost:1234/v1, and it sends stuff there instead of OpenAI, right? It looks like this would allow people to use Qwen as well which I've been trying to figure out how to use!

Trippnology commented 3 weeks ago

No problem, I'll add a short note to the main README.

Yes, that's correct. LM Studio provides an OpenAI-compatible API, so the only difference in the embedding script was changing the baseURL to point to the local server instead of OpenAI.
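For reference, the change described above might look something like this. This is a minimal sketch using Node's built-in fetch (Node 18+) rather than the repo's actual script; the port is LM Studio's default, and with the official `openai` npm package the equivalent change is passing `baseURL: "http://localhost:1234/v1"` to the client constructor:

```javascript
// Request embeddings from LM Studio's OpenAI-compatible local server.
// LM Studio hosts this at http://localhost:1234/v1 by default.
const LM_STUDIO_URL = "http://localhost:1234/v1/embeddings";

async function getEmbedding(text) {
  const res = await fetch(LM_STUDIO_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // LM Studio ignores the API key; any placeholder value works.
      "Authorization": "Bearer lm-studio",
    },
    body: JSON.stringify({
      model: "nomic-ai/nomic-embed-text-v1.5-GGUF",
      input: text,
    }),
  });
  const json = await res.json();
  return json.data[0].embedding; // array of floats
}

// Cosine similarity for comparing two embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}
```

Because the request body follows the same schema as OpenAI's embeddings endpoint, swapping back to OpenAI (or to another compatible server) is just a URL change.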

OmarShehata commented 2 weeks ago

Awesome, thank you!!