continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Adding Local LLM, LM Studio not working. Doesn't show up in list. #1111

Open · NiceShyGuy opened 5 months ago

NiceShyGuy commented 5 months ago


Relevant environment info

- OS: Windows
- Continue: v0.9.110 (pre-release)
- IDE: VS Code 1.89.0-insider

Description

Adding a local LLM via LM Studio is not working; it doesn't show up in the model list. (screenshot attached)

To reproduce

  1. Initially added the model manually using the LM Studio reference from the docs. Tried to add it with the GUI, but the screen only showed for a split second and then closed, not long enough for me to see what it was.
  2. Open settings.json and save a new model entry: { "title": "LM Studio", "provider": "lmstudio", "model": "llama2-7b" } (see the config sketch after this list).
  3. It worked initially when I saved and still had the settings file open.
  4. Closed the settings file; everything was still working.
  5. Reopened the settings file using the settings icon in the Continue chat dialog to look at the tabAutocompleteModel setting.
  6. Noticed that all trial models and the local LM Studio model I had just added were missing from the settings file. They were replaced by an Ollama entry I didn't add.
  7. At this point I was still able to use the local model, until I restarted VS Code.
  8. Reopened settings with the same issue; now I can't run the local LLM.
  9. Tried adding LM Studio through the GUI. This time I could see the screen and tried to add LM Studio with autodetect.
  10. This adds LM Studio below the Ollama entry I didn't add. LM Studio does not show in the model list, and neither does the Ollama entry.
  11. Deleted settings.json and reinstalled the extension; the trial settings returned. Added LM Studio from the GUI again. It still does not show in the model list, and I can't select the local LLM even with the entry present in the file.
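
For reference, a minimal sketch of what the manual entry from step 2 looks like in context, assuming the top-level "models" array layout from the Continue config docs; "llama2-7b" is simply whatever model name is loaded in LM Studio:

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "llama2-7b"
    }
  ]
}
```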

Log output

No response

sestinj commented 5 months ago

@NiceShyGuy It sounds like there are a few separate problems. I hope you don't mind that I have a few extra questions about the first two.

Tried to add it with the GUI, but the screen only showed for a split second and then closed, not long enough for me to see what it was.

Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?

Noticed that all trial models and the local LM Studio model I had just added were missing from the settings file. They were replaced by an Ollama entry I didn't add.

(screenshot: "Keep existing config" vs. "Use optimized models" onboarding screen, 2024-04-11)

Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?

This adds LM Studio below the Ollama entry I didn't add. LM Studio does not show in the model list.

When you use the AUTODETECT option, it will call the /v1/models endpoint of the LM Studio server and fill the dropdown with all of the models you are currently running. If you don't currently have any models running, then the dropdown will be empty. If you want to use AUTODETECT, make sure to set up the local inference server and load a model.
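
To see exactly what AUTODETECT would find, you can query that endpoint yourself. Here is a minimal sketch in TypeScript (Node 18+ for the built-in fetch); the port 1234 is an assumption based on LM Studio's default, so adjust the URL if your Local Inference Server uses a different one:

```typescript
// Sketch: list what LM Studio's /v1/models endpoint reports, i.e. the
// models AUTODETECT would use to fill the dropdown. Port 1234 is an
// assumption (LM Studio's default); match it to your server settings.
async function listLoadedModels(): Promise<void> {
  const res = await fetch("http://localhost:1234/v1/models");
  if (!res.ok) throw new Error(`LM Studio server responded ${res.status}`);
  const body = (await res.json()) as { data: { id: string }[] };
  // An empty "data" array means no model is loaded, so the dropdown would be empty too.
  console.log(body.data.map((m) => m.id));
}

listLoadedModels().catch(console.error);
```

If this prints an empty array even though the LM Studio UI shows a model loaded, the server and the extension are probably not talking to the same address or port.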

NiceShyGuy commented 5 months ago

Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?

On my very first startup in the main sidebar view, my first action was to click the + button to add a model. I didn't know what the + button did at the time because the new view would never show, so I resorted to editing settings.json manually during the same session as the view glitch.

Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?

I don't remember when this screen appeared, but I would have selected the "Keep existing config" option, as I was going for a fully local setup. If this is the first view shown in the sidebar, I likely selected it first and then tried to use the + button to add a model, running into the view glitch. It's possible that I did this the other way around, though.

When you use the AUTODETECT option, it will call the /v1/models endpoint of the LM Studio server and fill the dropdown with all of the models you are currently running. If you don't currently have any models running, then the dropdown will be empty. If you want to use AUTODETECT, make sure to set up the local inference server and load a model.

LM Studio had a model loaded and was running the Local Inference Server for the entire session. It worked with the manual config at first, until the settings issue occurred. It never autodetected my running server, even after multiple restarts of VS Code Insiders, and it still does not autodetect it today. The manual config is working.

(second screenshot attached)
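
Since the manual config works while autodetect does not, one workaround sketch is to pin the server address explicitly in the manual entry via Continue's apiBase option. This assumes LM Studio's default port 1234 and the usual /v1 path; check your Local Inference Server settings and the Continue docs if either differs:

```json
{
  "title": "LM Studio (manual)",
  "provider": "lmstudio",
  "model": "llama2-7b",
  "apiBase": "http://localhost:1234/v1"
}
```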