-
Pixtral works great in mlx-vlm (https://github.com/Blaizzy/mlx-vlm/pull/67). It would be great to see support land in LM Studio.
-
**Problem**
Integrating Ollama with Jan through the single OpenAI-compatible endpoint feels challenging. It's also a hassle to 'download' a model that Ollama has already pulled (a minimal sketch of the endpoint wiring follows the success criteria).
**Success Criteria**
- Make it easier to add Ollama endpoin…
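For reference, the integration today boils down to pointing an OpenAI-compatible client at Ollama's built-in endpoint. A minimal sketch, assuming Ollama is running on its default port 11434 and `llama3` (a placeholder name) has already been pulled:

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API under /v1 on port 11434 by default.
# The api_key is required by the SDK but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="llama3",  # placeholder: any model previously fetched with `ollama pull`
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```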
-
## Description
Add reverse functionality: take models already downloaded in LM Studio and make them available in Ollama. Very similar to `L`, except it works in the opposite direction.
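There is no built-in command for this today; below is a rough sketch of what such a bridge could do, assuming LM Studio keeps its GGUF files under `~/.cache/lm-studio/models` (the path varies by version; newer builds use `~/.lmstudio/models`) and that the `ollama` CLI is on the PATH. It relies on an Ollama Modelfile accepting a local GGUF path in its `FROM` line:

```python
import subprocess
from pathlib import Path

# Assumption: LM Studio's model cache location; adjust for your install.
LMSTUDIO_MODELS = Path.home() / ".cache" / "lm-studio" / "models"

def register_with_ollama(gguf: Path) -> None:
    """Create an Ollama model that wraps an LM Studio GGUF file."""
    name = gguf.stem.lower().replace(" ", "-")
    modelfile = gguf.with_suffix(".Modelfile")
    modelfile.write_text(f"FROM {gguf}\n")  # Modelfiles accept a local GGUF path
    subprocess.run(["ollama", "create", name, "-f", str(modelfile)], check=True)

for gguf in LMSTUDIO_MODELS.rglob("*.gguf"):
    register_with_ollama(gguf)
```

Note that `ollama create` imports the weights into Ollama's own store rather than referencing the file in place, so expect duplicated disk usage.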
-
In the GUI of the desktop app, the model does not see the system prompt at all.
Tested with many models.
The attached screenshot shows the system prompt used and the model's answer.
While when …
-
Hi, I'm having this issue when connecting to external LLMs.
Environment of the server hosting the remote LLM:
- AMD 7950X3D
- 64 GB RAM
- 2x 7900 XTX
- Using LM Studio for hosting the LLM server
Environment Cli…
-
Just to confirm that it works. But we really need to improve the prompt, because it usually autocompletes the whole code, including lines that were already written before.
-
Can we add a way to use a local API as the LLM?
The Python code should be:
```python
from openai import OpenAI

client = OpenAI(
    api_key="",
    # Change the API base URL to the local inference API
    base_url="http://localhost:1337/v1",
)
```
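Requests would then go through the standard client surface; for example, assuming the local server has a model loaded under the (hypothetical) name `mistral-7b`:

```python
completion = client.chat.completions.create(
    model="mistral-7b",  # placeholder: whatever model the local server has loaded
    messages=[{"role": "user", "content": "Say hello."}],
)
print(completion.choices[0].message.content)
```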
-
**What's the problem?**
Local large language models are used through LocalAI, LM Studio, and so on; all of these provide an OpenAI-compatible API, but the application needs to expose a setting to c…
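A minimal sketch of such a setting, assuming the official `openai` Python SDK (v1.x also honors the `OPENAI_BASE_URL` environment variable) and falling back to LM Studio's default local server port 1234:

```python
import os
from openai import OpenAI

# Read the endpoint from a user-facing setting or environment variable;
# fall back to LM Studio's default local server address.
base_url = os.environ.get("OPENAI_BASE_URL", "http://localhost:1234/v1")

client = OpenAI(
    base_url=base_url,
    api_key=os.environ.get("OPENAI_API_KEY", "not-needed"),  # local servers ignore it
)
```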
-
![image](https://github.com/lmstudio-ai/lmstudio.js/assets/4240638/0f29894b-3b7b-4369-962c-1e50130cfc25)
Hi there,
Above is the config info and errors.
Every time I start the local HTTP serve…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…