-
I store some of my models on the main drive and some on an external drive; to keep them all in one place, I use symlinks. When the drive is disconnected, an infinite loop of alert windows…
-
Hi,
I have a finetuned model in GGUF format. Can someone guide me on how to add this model and access it using the LM Studio Windows version and lmstudio.js?
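Not an official recipe, just a sketch: LM Studio discovers local models by scanning its models directory, which is laid out as `<publisher>/<model>/<file>.gguf`. The directory path and the `my-user/my-finetune` names below are assumptions for illustration only; the app's "My Models" page shows the actual models location on your machine.

```shell
# Sketch, assuming LM Studio's default models directory; adjust the path
# to whatever the app's "My Models" page reports on your system.
MODELS_DIR="${HOME}/.cache/lm-studio/models"

# LM Studio expects GGUF files nested two levels deep: <publisher>/<model>/
mkdir -p "${MODELS_DIR}/my-user/my-finetune"

# Copy your finetuned GGUF into that folder (filename is illustrative):
cp ./my-finetune-q4_k_m.gguf "${MODELS_DIR}/my-user/my-finetune/" 2>/dev/null || true

# The model should now appear in the app's model list after a rescan.
ls "${MODELS_DIR}/my-user/my-finetune"
```

Once the model shows up in the app, lmstudio.js should be able to load it by a path-style identifier such as `my-user/my-finetune`; check the SDK documentation for the exact call on your version, as the API has changed between releases.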
-
![image](https://github.com/lmstudio-ai/model-catalog/assets/3511344/265acf75-fc79-48b4-89c7-344f88938332)
When using the app, it worked at first, but then this bug started. Tried clearing the cache, rem…
-
### What happened?
Hey everyone,
I am currently trying to set up the llama.cpp server with a LLaVA vision model.
When using the llama-llava-cli, everything works just fine:
```
./llama-llava-cli -m .…
```
-
I am running LM Studio on my laptop with an AMD Ryzen 3 5000-series CPU and AMD Radeon graphics. Whenever I launch LM Studio and try selecting GPU offload, the option is grayed out, and I get the erro…
-
The new version, app-0.2.31, doesn't seem to offload to the NVIDIA RTX 3060 anymore.
The old one (app-0.2.29) works flawlessly.
My specs: i7-10700
RTX 3060 Phoenix 12 GB
The GPU Offload checkbox …
-
There should be an option to upload the following to an "AI" to write the report:
- Format requested for the PCR report.
- Protocols to follow.
- All information entered on the site.
- [ ] Create…
-
Could we have support for [llama.cpp](https://github.com/ggerganov/llama.cpp)?
That would make the model more accessible to many popular tools like Ollama, LM Studio, KoboldCpp, text-generation-webui,…
-
Is it possible to run LM Studio in headless mode when running the API? The use case is running it on a Linux instance without a UI.
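One possible direction, assuming your LM Studio build ships the `lms` command-line tool (an assumption; older builds may not include it): the local API server can be started from a terminal, which gets close to headless operation even though the app itself is not a pure daemon.

```shell
# Assumption: the `lms` CLI is installed and on PATH (it is bundled with
# recent LM Studio builds). These commands start and check the local
# OpenAI-compatible API server without interacting with the GUI:
lms server start    # start the local API server (default port 1234)
lms server status   # verify it is listening
```

On a display-less Linux instance this may still require a virtual display for the underlying app, so treat it as a workaround rather than true headless support.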
-
### What would your feature do?
LM Studio support for translation with various LLMs, e.g. Llama 3, etc.