Closed Trippnology closed 2 weeks ago
this looks so cool @Trippnology !! I think maybe we should add a little README snippet to explain this example? Either in the root README.md or under example-lmstudio-embedding/README.md.
Basically, it'll all work as long as you're running LM Studio, which hosts its server at http://localhost:1234/v1, and the script sends requests there instead of to OpenAI, right? It looks like this would also let people use Qwen, which I've been trying to figure out how to use!
No problem, I'll add a short note to the main README.
Yes, that's correct. LM Studio provides an OpenAI-compatible API, so the only change in the embedding script was pointing the baseURL at the local server instead of OpenAI.
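To make that concrete, here's a minimal sketch of the idea using plain `fetch` against LM Studio's OpenAI-compatible `/v1/embeddings` endpoint. The helper name and defaults below are illustrative, not the exact code from this PR; the baseURL and model name come from the discussion above.

```javascript
// Build a request for LM Studio's OpenAI-compatible embeddings endpoint.
// buildEmbeddingRequest is a hypothetical helper for illustration.
function buildEmbeddingRequest(input, {
  baseURL = "http://localhost:1234/v1",              // LM Studio's local server
  model = "nomic-ai/nomic-embed-text-v1.5-GGUF",     // same model as this demo
} = {}) {
  return {
    url: `${baseURL}/embeddings`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // LM Studio doesn't check the API key, but OpenAI-style
        // clients expect one to be present.
        "Authorization": "Bearer lm-studio",
      },
      body: JSON.stringify({ model, input }),
    },
  };
}
```

With LM Studio running, you'd send it like this:

```javascript
const { url, options } = buildEmbeddingRequest("some text to embed");
const res = await fetch(url, options);
const { data } = await res.json();
console.log(data[0].embedding.length); // dimensionality of the embedding
```

The point is that nothing else changes: the request and response shapes match OpenAI's embeddings API, so swapping the base URL is the whole migration.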
Awesome, thank you!!
Thanks for this awesome repo that clearly demonstrates how to work with embeddings!
This pull request adds an extra demo that shows how to do this via LM Studio, using the same nomic-ai/nomic-embed-text-v1.5-GGUF model.