-
- [X] I have searched the existing issues
### Is your feature request related to a problem? Please describe it
I was wondering if jan supports the Windows on ARM platform, specifically on Snapdra…
-
The Local Inference Server works well in the example:
```shell
curl http://localhost:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "lmstudio-community/Meta-Llama-…
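The same chat-completions request can also be issued from Python. A minimal sketch using only the standard library, assuming the server address from the example above; the model identifier here is a hypothetical placeholder, not the (truncated) one from the curl command:

```python
import json
import urllib.request

def build_chat_request(model, messages, base_url="http://localhost:1234"):
    """Build an OpenAI-compatible chat-completions request for the local server."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical model identifier; substitute one that is actually loaded.
req = build_chat_request(
    "lmstudio-community/some-model",
    [{"role": "user", "content": "Hello"}],
)
# To send it (requires the local server to be running):
# response = urllib.request.urlopen(req)
```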
-
### Feature Request
A feature to copy or clone a saved chat with an AI model, so we have two identical chats. This could be used to experiment with different questions from the exact same starting…
-
With a restricted Windows user that can't write to C:\, the local server tries to write logs to C:\tmp\lmstudio-server-log.txt, where access is denied, so the local server cannot be started.
Fix: On…
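One possible shape of such a fix, sketched as a hypothetical helper (not LM Studio's actual code): keep the current path when its directory is writable, otherwise fall back to the per-user temp directory, which a restricted account can always write to.

```python
import os
import tempfile

def pick_log_path(preferred=r"C:\tmp\lmstudio-server-log.txt"):
    """Return the preferred log path if its directory is writable,
    otherwise fall back to a file in the per-user temp directory."""
    directory = os.path.dirname(preferred)
    if os.path.isdir(directory) and os.access(directory, os.W_OK):
        return preferred
    # e.g. %TEMP%\lmstudio-server-log.txt for a restricted Windows user.
    return os.path.join(tempfile.gettempdir(), os.path.basename(preferred))
```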
-
![image](https://github.com/lmstudio-ai/model-catalog/assets/3511344/265acf75-fc79-48b4-89c7-344f88938332)
When using the app, it worked at first, but then this bug started appearing. I tried clearing the cache, rem…
-
### Describe the issue
Whatever I type, it just terminates the session.
### Steps to reproduce
1. Changed to a different model, Mistral Instruct v0.1-> Mistral Instruct v0.2.
2. Changed to a diff…
-
**Describe the bug**
`lms` hangs after displaying `Verification succeeded. The server is running on port 11435.` and doesn't load a model, though LM Studio itself opens successfully.
**To Reproduc…
-
I was trying to use an LM Studio hosted local server, but apparently put in the wrong endpoint. Every endpoint I attempted to enter as the server showed up with an error. I haven't connected an age…
-
### Version
Command-line (Python) version
### Operating System
Windows 10
### What happened?
Edited config.json to use OpenRouter:
```jsonc
// Configuration for the LLM providers that can be used.…
-
I'm noticing that with v0.3.2 my CPU is getting slaughtered. The UI revamp is worse than the previous iteration, with GPU offload now hidden on the "My Models" page, but even with all the layers assigned to the GPU …