-
**Description**
Letta seems to be creating requests for LM Studio with `context_overflow_policy` set to `0`:
```
"lmstudio": {
  "context_overflow_policy": 0
},
```
Expected values seem to …
-
I downloaded the gguf file manually; how can I add it to LM Studio? I added it to the folder, but LM Studio shows "You have 1 uncategorized model files".
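As a hedged sketch of one common cause: LM Studio generally wants manually added GGUFs nested in a publisher/model subfolder under its models directory, otherwise they show up as uncategorized. The models path and folder names below are assumptions, not taken from the report:
```bash
# Sketch: nest the downloaded GGUF one publisher folder and one model folder
# deep inside LM Studio's models directory (the path below is a guess; check
# the models folder configured in LM Studio's settings).
MODELS_DIR="$HOME/.cache/lm-studio/models"
mkdir -p "$MODELS_DIR/TheBloke/Mistral-7B-Instruct-v0.2-GGUF"
mv ~/Downloads/mistral-7b-instruct-v0.2.Q4_K_M.gguf \
   "$MODELS_DIR/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/"
```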
-
![image](https://github.com/lmstudio-ai/model-catalog/assets/3511344/265acf75-fc79-48b4-89c7-344f88938332)
When using the app, it worked at first but then this bug started appearing. Tried clearing the cache, rem…
-
### 🚀 The feature, motivation and pitch
The state-of-the-art language model Gemma-2-9b has proven to be a powerful SLM for various natural language processing tasks. It is the best model of its size.…
-
I was trying to use an LM Studio hosted local server, but apparently put in the wrong endpoint. Every endpoint I attempted to enter as the server showed up with an error. I haven't connected an age…
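For reference, a quick sketch for checking that the local server is actually reachable; the default port 1234 and the `/v1/models` route follow LM Studio's OpenAI-compatible server defaults, so treat them as assumptions if the server was configured differently:
```bash
# Sketch: confirm the LM Studio local server is listening (port 1234 is the
# default but may have been changed) and list the loaded models.
curl http://localhost:1234/v1/models
# OpenAI-compatible clients usually expect the base URL to include /v1,
# e.g. http://localhost:1234/v1 rather than just http://localhost:1234.
```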
-
I just downloaded the ministral 8b from mlx-community but am unable to load it; it gives this error. The same is true for SuperNova Medius as well.
-
### Describe the bug
When the max_tokens parameter is None, the agent sends a frame to /v1/chat/completions with max_tokens: null.
In this case the LLM doesn't understand and stops after the second tok…
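A minimal sketch of the likely workaround, assuming the fix is to omit the field entirely when it is None rather than sending `null`; the model name and prompt below are placeholders, not taken from the report:
```bash
# Sketch: send the request without max_tokens instead of "max_tokens": null
# (model name and prompt are placeholder values).
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "local-model",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```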
-
I'm using the default LM Studio settings (in Griptape/ComfyUI and on LM Studio), but I'm getting the error below.
The text output from Griptape Tool Task is: 'NoneType' object is not iterable
2…
-
Hi, I can't connect to https://huggingface.co, but I can still download models from a mirror site like https://hf-mirror.com/ by configuring the env var `HF_ENDPOINT=https://hf-mirror.com` when using [`hugg…
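A short sketch of that setup, assuming the truncated link refers to the Hugging Face download tooling; the repository and file names below are placeholders:
```bash
# Sketch: point Hugging Face downloads at a mirror before fetching a model
# (repository and file names are placeholders).
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF \
  mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir ./models
```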
-
The Local Inference Server works well in the example:
```curl
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "lmstudio-community/Meta-Llama-…