-
**Describe the bug**
I'm trying to run Letta in Docker, connected to an Ollama service running on the same host. I'm using a .env file with the following vars:
LETTA_LLM_ENDPOINT=http://192.168.xx.x…
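For reference, a minimal sketch of what such a .env could look like, assuming Ollama listens on its default port 11434; only LETTA_LLM_ENDPOINT appears in the report above (its value is truncated), so the host alias and the endpoint-type variable below are assumptions, not values from the original issue:
```
# Hypothetical .env sketch -- only LETTA_LLM_ENDPOINT comes from the report above.
# The host can be the machine's LAN IP (as in the truncated value) or
# host.docker.internal, which on Linux needs --add-host=host.docker.internal:host-gateway.
LETTA_LLM_ENDPOINT=http://host.docker.internal:11434
LETTA_LLM_ENDPOINT_TYPE=ollama   # assumed variable name, not confirmed by the report
```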
-
Minor Issue:
No matter what the user chooses in the model-selection dropdown, the first message always uses the default model, even when the user specifically chose a non-default one. Afte…
-
### Your current environment
vLLM image: v0.5.4
Hardware: RTX 4090
GPU driver: 550.78
Model: qwen1.5-14b-chat-awq
Launch cmd: enable-prefix-caching
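For context, a sketch of what such a launch could look like with the OpenAI-compatible server image; only `--enable-prefix-caching` and the AWQ model come from the report, while the exact model path and port are assumptions:
```
# Hypothetical launch sketch -- flags are standard vLLM server options,
# but the model path and port are assumed, not taken from the report.
docker run --gpus all -p 8000:8000 vllm/vllm-openai:v0.5.4 \
    --model Qwen/Qwen1.5-14B-Chat-AWQ \
    --quantization awq \
    --enable-prefix-caching
```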
### 🐛 Describe the bug
```
2024-08-30T15:30…
```
-
hey! Stoked for the OpenAI addition, but would also love to see local LLMs supported as well, through LM Studio and Ollama.
-
### System Info
CPU: x86_64
GPU: NVIDIA A100
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially suppo…
-
### Do you need to file an issue?
- [x] I have searched the existing issues and this bug is not already filed.
- [ ] My model is hosted on OpenAI or Azure. If not, please look at the "model providers…
-
metagpt "Write a cli snake game"
2024-11-15 15:03:50.927 | INFO | metagpt.const:get_metagpt_package_root:21 - Package root set to /app/metagpt
2024-11-15 15:03:55.243 | INFO | metagpt.team:i…
-
### Describe the issue
A connection error occurred while using a local LLM (Mistral served via Ollama).
### Steps to reproduce
Tried entering a different address from CMD.
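For context, a minimal sketch of how the endpoint address would typically be wired up, assuming a recent pyautogen and Ollama's OpenAI-compatible API on its default port; the model name, URL, and key are placeholders, not values from this report:
```
# Hypothetical sketch: point AutoGen at Ollama's OpenAI-compatible endpoint.
# The URL, model name, and API key are placeholders/assumptions.
import autogen

config_list = [
    {
        "model": "mistral",                       # model as pulled in Ollama (assumed)
        "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API, default port
        "api_key": "ollama",                      # any non-empty string; Ollama ignores it
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
```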
### Screenshots and logs
![AutoGen-01](htt…
-
I am interested in using UpTrain with a locally hosted open-source LLM as the evaluator LLM. I'm currently hosting an LLM service using vLLM, not Ollama. Is there any way to use this local LLM with U…
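For context, vLLM can expose an OpenAI-compatible API, so the question comes down to whether UpTrain's evaluator settings can be pointed at a custom base URL. A minimal sketch of the endpoint side, with the port, model path, and API key as placeholder assumptions (this does not claim UpTrain supports it, only shows the endpoint being asked about):
```
# Hypothetical sketch of the vLLM-hosted endpoint referred to in the question.
# Server side (shell), assuming port 8000:
#   python -m vllm.entrypoints.openai.api_server --model <your-model> --port 8000
# Client side: any OpenAI-style client can then talk to it.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed host/port of the vLLM server
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="<your-model>",  # placeholder: the model name served by vLLM
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```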
-
### Describe the bug
I'm attempting to use some of my local LLMs on Ollama in this fork and everything works great. There aren't any issues with the execution itself. However, I'm running into an iss…