tbergeron opened 1 month ago
LM Studio added better embedding support recently. I'm going to test it out and perhaps post a video about it.
Need to add "LM Studio" as an embedding provider and support /v1/embeddings.
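For context, LM Studio's /v1/embeddings endpoint follows the OpenAI embeddings format, so an "LM Studio" provider would need to send and parse roughly these shapes. A minimal TypeScript sketch based on the OpenAI spec, not on Copilot's actual source:

```ts
// Request/response shapes for an OpenAI-style /v1/embeddings call
// (field names follow the OpenAI embeddings spec, which LM Studio mirrors).

interface EmbeddingsRequest {
  model: string;   // e.g. a full LM Studio model path or a plain model name
  input: string[]; // one or more note chunks to embed
}

interface EmbeddingsResponse {
  object: "list";
  data: {
    object: "embedding";
    index: number;
    embedding: number[]; // embedding vector for input[index]
  }[];
  model: string;
  usage?: { prompt_tokens: number; total_tokens: number };
}
```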
Hi, I have the same problem. I've added an embeddings model in LM Studio (tried both nomic-ai and bge-large), then added a custom model in the Obsidian Copilot settings, in the QA section (I specified the embeddings model as the name and the usual localhost:11434/v1 as the base URL). Requests are reaching the LM Studio server; I see the following in the logs:
[2024-09-10 18:25:43.342] [INFO] Received POST request to /v1/embeddings with body: {
"model": "CompendiumLabs/bge-large-en-v1.5-gguf/bge-large-en-v1.5-q4_k_m.gguf",
"input": [
"......my note here....."
]
However, nothing happens after that. Obsidian Copilot keeps showing the following notification:
Copilot is indexing your vault... 0/11 files processed.
And when I ask something in Vault QA mode, Copilot waits indefinitely...
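In case it helps narrow this down, here is a minimal standalone check of the same endpoint outside Obsidian. A sketch assuming Node 18+ for global fetch; the model path and placeholder API key are taken from the log and settings above. If this also hangs or errors, the problem is on the LM Studio side rather than in Copilot:

```ts
// Probe LM Studio's /v1/embeddings endpoint directly, outside Obsidian.
async function probeEmbeddings(): Promise<void> {
  const res = await fetch("http://localhost:11434/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer lm-studio", // placeholder key for the local server
    },
    body: JSON.stringify({
      model: "CompendiumLabs/bge-large-en-v1.5-gguf/bge-large-en-v1.5-q4_k_m.gguf",
      input: ["hello world"],
    }),
  });
  console.log(res.status, res.statusText);
  const json = await res.json();
  console.log("vector length:", json.data?.[0]?.embedding?.length);
}

probeEmbeddings().catch(console.error);
```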
I was able to solve the issue by upgrading LM Studio to version 0.3.2
I am also using the latest version. What parameters did you specify in the connection?
In LM Studio I have port 11434 and CORS enabled.
In Obsidian Copilot you need to specify the full model path in Model Name, e.g. lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf
Provider: lm-studio
Base URL: http://localhost:11434/v1
API key: lm-studio
That's in General Settings.
As for QA Settings, the settings are the same except for the model and the provider:
Model: nomic-ai/nomic-embed-text-v1.5-GGUF/nomic-embed-text-v1.5.Q8_0.gguf (that's my embeddings model)
Provider: 3rd party (openai format)
But it seems the QA settings don't matter; it works anyway...
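For anyone who wants to verify these General Settings outside Obsidian first, here is a rough sketch of an equivalent request against LM Studio's OpenAI-compatible chat endpoint (assumes Node 18+ for fetch; the model path and placeholder key are the ones listed above, not anything from Copilot itself):

```ts
// Sanity-check the General Settings: same base URL, same placeholder key,
// and the full model path used as the model name.
const BASE_URL = "http://localhost:11434/v1";

async function testChat(): Promise<void> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer lm-studio",
    },
    body: JSON.stringify({
      model:
        "lmstudio-community/Meta-Llama-3.1-8B-Instruct-GGUF/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
      messages: [{ role: "user", content: "Say hi in five words." }],
    }),
  });
  const json = await res.json();
  console.log(json.choices?.[0]?.message?.content);
}

testChat().catch(console.error);
```

If this prints a reply, the same Model Name, Base URL, and API key values should work in the plugin's General Settings.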
Thank you. With these settings, it worked fine.
I'm having trouble with QA using LM Studio too. The tips Morig kindly wrote for us don't seem to work for the QA. I've been going back and forth for hours trying to figure out what works and what doesn't, and there isn't much help anywhere about how to do this. Maybe LM Studio or Copilot or both were updated and broke this. I can't figure it out.
Any suggestions?
Thanks.
I would like to add that the API token should be made optional.
@Armandeus66 It's difficult to say what exactly is wrong with your setup, but try the following steps:
- Check that the Base URL includes the /v1 path.
- The model name in the Obsidian Copilot settings is the full path of the model in LM Studio (navigate to the "My Models" page, find your main model and your embeddings model, right click, then "Copy Model Path"). You can also experiment and try just the model name without the full path; the sketch below lists the exact ids the server reports.
- While experimenting, enable Verbose Logging in the LM Studio server. It will log all requests and errors, which can help you understand what's wrong.
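As a complement to "Copy Model Path", you can also ask the server which identifiers it accepts via the OpenAI-compatible /v1/models endpoint. A sketch assuming Node 18+ for fetch; the key is a placeholder, since the local server does not appear to require a real one:

```ts
// Print the model ids the LM Studio server reports, so the Model Name
// field in Copilot can be copied verbatim from the server's own list.
async function listModels(): Promise<void> {
  const res = await fetch("http://localhost:11434/v1/models", {
    headers: { Authorization: "Bearer lm-studio" }, // placeholder key
  });
  const json = await res.json();
  for (const m of json.data ?? []) {
    console.log(m.id); // paste one of these ids into the Model Name field
  }
}

listModels().catch(console.error);
```

If an id printed here differs from the "Copy Model Path" string, try the printed id in the Model Name field.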
Btw, I agree that Obsidian Copilot should have an LM Studio option in the menu for the QA LLM provider.
Thank you very much for that information! I couldn't find this clearly stated anywhere. I can now see where I was making mistakes. I appreciate the help!
EDIT: I tried what you said, and though I got the chat LLM running, QA still gives me trouble.
I have the latest version of LM Studio.
I am running the default text-embedding-nomic-embed-text-v1.5 that came with LM Studio for the QA.
Everything is running with CORS on.
My QA model is: text-embedding-nomic-embed-text-v1.5 (there is nothing for this model under the My Models tab)
Provider: 3rd Party
Base URL: http://localhost:11434/v1
API key: lm-studio (is this necessary?)
The QA throws an error and I can't get it to run.
I see lm-studio in general settings but not in QA settings.
So far it works great for chat with lm-studio but I can't get it to work with QA mode.
Being able to use a local LLM to index my entire vault sounds like the best deal ever.
Any suggestions?
Thanks
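One way to tell whether the QA failure is in Copilot or in LM Studio is to call the embeddings endpoint directly with the same model id described above. A sketch under the same assumptions as the earlier snippets (Node 18+ fetch, port 11434, placeholder key):

```ts
// Reproduce the QA embeddings call outside Obsidian with the same model id.
async function testQaEmbedding(): Promise<void> {
  const res = await fetch("http://localhost:11434/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer lm-studio",
    },
    body: JSON.stringify({
      model: "text-embedding-nomic-embed-text-v1.5",
      input: ["test sentence from my vault"],
    }),
  });
  console.log("status:", res.status);
  // Print the start of the body; on failure this usually contains the error message.
  console.log(JSON.stringify(await res.json()).slice(0, 200));
}

testQaEmbedding().catch(console.error);
```

If this returns a vector but Copilot's QA indexing still stalls, the remaining problem is likely in the plugin's QA settings rather than in LM Studio.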