-
### Describe the issue
If I create a model in Autogen Studio that points to the LM Studio endpoint, then add the model to an agent, then a workflow, etc., when I run the workflow it terminates after…
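For context on what the Autogen-to-LM-Studio connection looks like on the wire: LM Studio's local server speaks the OpenAI-compatible chat-completions protocol, so a client only needs a base URL and a model identifier. A minimal sketch of building such a request payload, assuming the default endpoint `http://localhost:1234/v1` (the base URL and the `local-model` id are placeholders, not values from the issue):

```python
# Assumed default: LM Studio's local server listens on port 1234
# and exposes an OpenAI-compatible API under /v1.
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, user_message: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload.

    A client like Autogen Studio would POST this (as JSON) to
    BASE_URL + "/chat/completions".
    """
    return {
        "model": model,  # placeholder model id shown by LM Studio
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "stream": False,
    }


payload = build_chat_request("local-model", "Hello!")
```

This only sketches the payload shape; the actual HTTP call (and whatever causes the early termination above) happens inside the client.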
-
Hello, when will gemma2 models be supported?
![obraz](https://github.com/lmstudio-ai/lmstudio-bug-tracker/assets/22175646/4159a7bd-163d-4be3-82a2-1553bbecaa5b)
-
Hi, I'm having this issue when connecting to external LLMs.
Environment (server for the remote LLM):
- AMD 7950X3D
- 64 GB RAM
- 2x 7900 XTX
- Using LM Studio for hosting the LLM server
Environment (cli…
-
When I click on Start Server, nothing happens. The server will not start and the logs are blank.
-
Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9, VMM: yes
CUDA_VISIBLE_DEVICES=0 ./llama-server --host 0.0.0.0 --port 8008 -m /home/kemove/model/gemma-2-27b-it-Q5_K_S.gguf -ngl 99 -t 4 -n…
-
I downloaded the GGUF file manually; how can I add it to LM Studio? I added it to the models folder, but now I get "You have 1 uncategorized model files".
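The "uncategorized" warning usually means the file is sitting directly in the models directory rather than in the nested layout LM Studio indexes. A small sketch of the expected path, assuming a `<models-dir>/<publisher>/<model>/<file>.gguf` layout (the directory and names below are illustrative placeholders, not taken from the issue):

```python
from pathlib import Path


def categorized_path(models_dir: str, publisher: str, model: str, filename: str) -> Path:
    """Return the nested path where LM Studio expects a manually
    downloaded GGUF, under the assumed publisher/model layout."""
    return Path(models_dir) / publisher / model / filename


# Example: moving a hand-downloaded file into the assumed layout.
target = categorized_path(
    "lm-studio/models",          # placeholder models directory
    "example-publisher",         # placeholder publisher folder
    "example-model",             # placeholder model folder
    "example-model-Q5_K_S.gguf",
)
```

Placing the file at a path like `target` (creating the two intermediate folders by hand) is what typically makes the model show up as categorized.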
-
lm-studio is an open-source tool for running LLMs locally. After having done this by hand (as the plugin does), and after trying other tools for it, it's clear (to me) that lm-studio is going to be…
-
### How are you running AnythingLLM?
Docker (local)
### What happened?
Going into workspace settings, I can choose every LLM provider except LM Studio. I have added 2 models to the LM Studio playgroun…
-
Related to #34. This would be the holy grail for this maintainer, but implied in this is that we would need a _trained_ local model, since local models are terrible with the tasks we need unless fine-t…
-
It would be great if we could interact with other services that are not running locally, using LM Studio only as a GUI by default (of course, we would lose the capability to load and run locally in a …