Open NiceShyGuy opened 5 months ago
@NiceShyGuy It sounds like there are a few separate problems. I hope you don't mind that I have a few extra questions about the first two.
> Tried to add with the GUI but the screen only showed for a split second then closed, not even long enough for me to see what it was.
Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?
> Noticed that all trial models and the local LM Studio model I just added were missing from the settings file. They were replaced by an Ollama setting which I didn't initiate.
Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?
> Adds LM Studio below an Ollama setting I didn't add. LM Studio will not show in the list.
When you use the AUTODETECT option, it will call the /v1/models endpoint of the LM Studio server and fill the dropdown with all of the models you are currently running. If you don't currently have any models running, then the dropdown will be empty. If you want to use AUTODETECT, make sure to set up the local inference server and load a model first.
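For reference, the AUTODETECT flow boils down to an OpenAI-style model listing. A minimal sketch (Python; the default LM Studio port 1234 is an assumption, adjust if you changed it) of querying that endpoint and extracting the model ids:

```python
import json
import urllib.request


def model_ids(payload):
    """Extract model names from an OpenAI-style /v1/models response."""
    # Expected response shape: {"object": "list", "data": [{"id": "<model>", ...}, ...]}
    return [m["id"] for m in payload.get("data", [])]


def list_loaded_models(base_url="http://localhost:1234"):
    """Query the LM Studio local server; assumes the default port 1234."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return model_ids(json.load(resp))
```

If `list_loaded_models()` returns an empty list, the AUTODETECT dropdown will be empty too: the server is reachable but has no model loaded.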
> Does this mean that the sidebar changed back to the main view immediately after selecting a model? Would you be able to share a screenshot or video of what you mean?
On my very first startup, in the main sidebar view, my first action was to click the + button to add a model. I didn't know what the + button did at the time because the new view would never show. So I resorted to editing the setting.json manually during the same session as the view glitch.
> Did this happen after seeing a "Keep existing config" vs. "Use optimized models" screen, like in the screenshot here?
I don't remember when this screen appeared, but I would have selected the keep-existing-setup option, as I was going for a fully local setup. If this is the first view shown in the sidebar, I likely selected it first and then tried to use the + button to add a model and ran into the view glitch. It's possible that I did this the other way around, though.
> When you use the AUTODETECT option, it will call the /v1/models endpoint of the LM Studio server and fill the dropdown with all of the models you are currently running. If you don't currently have any models running, then the dropdown will be empty. If you want to use AUTODETECT, make sure to set up the local inference server and load a model first.
LM Studio had a model loaded and was running the Local Inference Server the entire session. It worked with manual config at first, until the settings issue occurred. It never auto-detected my running server after multiple restarts of VS Code Insiders, and it still does not auto-detect it today. Manual config is working.
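For anyone else landing here, a manually added LM Studio entry in the extension's config file looks roughly like the following. This is a sketch: the field names and the `"AUTODETECT"` value are assumptions based on the extension's documented config schema, so verify them against your installed version.

```json
{
  "models": [
    {
      "title": "LM Studio",
      "provider": "lmstudio",
      "model": "AUTODETECT"
    }
  ]
}
```

With a concrete model name in `"model"` instead of `"AUTODETECT"`, the entry does not depend on the /v1/models call succeeding, which matches the reporter's observation that manual config keeps working.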
### Before submitting your bug report
### Relevant environment info
### Description
Adding a local LLM (LM Studio) is not working. It doesn't show up in the list.
### To reproduce
### Log output
No response