-
### Describe the bug
I'm attempting to use some of my local LLMs on Ollama in this fork and everything works great. There aren't any issues with the execution itself. However, I'm running into an iss…
-
Hi Kaicheng,
Thank you, this is super helpful! Considering data privacy issues and the emergence of really powerful small models, I think a tutorial for using local LLMs would be very useful. I've e…
-
Do you guys have any notes on the settings for local LLM configs?
Do we leave the API key blank?
And do we use the local Ollama URL in both configs?
This is running on an M3 Mac.
Here's wh…
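For reference, a minimal sketch (not an official config) of pointing an OpenAI-compatible client at a local Ollama instance; the base URL, the model name `llama3.1`, and the placeholder key are assumptions for illustration:
```python
# Minimal sketch: an OpenAI-compatible client talking to a local Ollama server.
# Assumptions: Ollama is listening on its default port 11434 and the model
# "llama3.1" has already been pulled (`ollama pull llama3.1`).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # Ollama ignores the key, but the client needs a non-empty string
)

response = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(response.choices[0].message.content)
```
If a framework asks for both an LLM endpoint and an embedding endpoint, the same local base URL is typically used for both, with only the model name changing.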
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### Model Input Dumps
model="Qwen/Qwen2.5-72B-Instruct"
guid…
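The dump above is cut off, so purely as an illustration: a minimal sketch of a guided-decoding request against a locally served vLLM OpenAI-compatible endpoint. The port, the prompt, and the choice of `guided_choice` are assumptions; the server would be started with something like `vllm serve Qwen/Qwen2.5-72B-Instruct --port 8000`.
```python
# Minimal sketch, assuming a local vLLM OpenAI-compatible server, e.g. started with:
#   vllm serve Qwen/Qwen2.5-72B-Instruct --port 8000
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# vLLM accepts guided-decoding options through extra_body; guided_choice is shown
# here only as an assumed example, since the original report is truncated at "guid…".
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct",
    messages=[{"role": "user", "content": "Is this review positive or negative? 'Great phone.'"}],
    extra_body={"guided_choice": ["positive", "negative"]},
)
print(response.choices[0].message.content)
```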
-
Hello,
I'm trying to use llm.nvim with my TGI deployment via the following `vim.lua`:
```lua
local vim = vim
local Plug = vim.fn['plug#']
vim.call('plug#begin')
-- Shorthand notation fo…
-
Please add the ability to use a locally hosted LLM, for example via LM Studio.
Getting away from closed-source LLMs is best for everyone.
-
Are we allowed to use a local LLM (on-prem)?
-
**Describe the bug**
I'm trying to run Letta in Docker, connected to an Ollama service running on the same host. I'm using a `.env` file with the following vars:
LETTA_LLM_ENDPOINT=http://192.168.xx.x…
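A common pitfall in this setup is that `localhost` inside the container refers to the container itself, not to the host running Ollama. Below is a minimal reachability sketch to run from inside the container; the `host.docker.internal` alias and the LAN IP shown are assumptions (the IP in the `.env` above is redacted):
```python
# Minimal reachability check, run from inside the Letta container.
# Assumption: Ollama listens on the host at its default port 11434, reached via
# host.docker.internal or the host's LAN IP rather than localhost.
import requests

for host in ("host.docker.internal", "192.168.1.10"):  # hypothetical host addresses
    url = f"http://{host}:11434/api/tags"  # Ollama's model-listing endpoint
    try:
        r = requests.get(url, timeout=5)
        print(host, "->", r.status_code, [m["name"] for m in r.json().get("models", [])])
    except requests.RequestException as exc:
        print(host, "-> unreachable:", exc)
```
If neither address responds, Ollama may be bound only to 127.0.0.1 on the host; starting it with `OLLAMA_HOST=0.0.0.0` is a common fix so containers can reach it.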
-
### System Info
A100
### Who can help?
@Tracin
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` …
-
I'm not sure if I'm placing the URL for the local LLM API correctly, but I have something like this and I can't get the bot to work. Can you give me a solution so that it works with Oobabooga? Thanks.…
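For comparison, a minimal sketch of what a working local endpoint usually looks like when text-generation-webui (Oobabooga) is launched with its `--api` flag; the port 5000 and the model name are assumptions and may differ in your setup:
```python
# Minimal sketch, assuming text-generation-webui was started with --api,
# which exposes an OpenAI-compatible server on port 5000 by default.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:5000/v1",  # note the trailing /v1 in the base URL
    api_key="none",  # no key is required locally, but the field cannot be empty
)

response = client.chat.completions.create(
    model="local-model",  # hypothetical name; the webui serves whichever model is loaded
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```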