-
### Your current environment
```
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu …
-
**Is your feature request related to a problem? Please describe.**
No. I just want to ask whether LocalAI can support the Qwen model. https://github.com/QwenLM/Qwen
**Describe the solution you'd…
-
I feel this is a major bug, as anyone who has used Ollama for an extended time with several models will hit the same issue.
I'm using https://github.com/iplayfast/OllamaPlayground/tree/main/createnotes#…
-
# Bug Report
## Description
**Bug Summary:**
I configured my LocalAI instance as the OpenAI API endpoint; when I use curl to verify, I see the models just fine:
```
curl.exe http://192.168.28…
-
- [ ] Create philosophical shorts for why LLM may actually "understand"
- [ ] Create a weekly target
- [ ] Reflect on how I would trickle from year to daily vision
- [ ] Create gigs on fastwork
- [ ] …
-
**LocalAI version:**
2.8.2
**Environment, CPU architecture, OS, and Version:**
```
» uname -a
Darwin Dougs-MacBook-Air.local 23.3.0 Darwin Kernel Version 23.3.0: Wed Dec 20 21:33:31 PST 202…
-
Hi, when I was doing the stage-1 training, I ran into some problems. It seems the problem is caused by CUDA_DEVICES, but I can't find the device configuration in train.py. Can you help me out?
Here …
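One thing worth checking while waiting for an answer: if train.py never configures devices explicitly, device selection is usually controlled from outside the script via the `CUDA_VISIBLE_DEVICES` environment variable. Below is a minimal sketch of that approach; the GPU indices `"0,1"` are an example, not values taken from the original training setup, and the variable must be set before any CUDA context is created (i.e. before `import torch` runs in train.py).

```python
import os

# Restrict which GPUs the process can see. This must happen before the
# first `import torch` (or any other CUDA initialization); afterwards
# the setting has no effect on the already-created context.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # example indices, adjust to your machine

# Inside the process, the two selected GPUs now appear as cuda:0 and cuda:1.
print(os.environ["CUDA_VISIBLE_DEVICES"])  # prints: 0,1
```

Equivalently, the variable can be set on the command line when launching training, e.g. `CUDA_VISIBLE_DEVICES=0,1 python train.py`, which avoids touching the script at all.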
-
I've noticed that after running a few models, sometimes the models don't behave normally. This is a session where that was occurring. I had first tried with bakllava but it wasn't being helpful eithe…
-
Responses to questions are fairly accurate on my Ubuntu 20.04 computer, but the responses are presented at a rate of about 2 or 3 seconds per word. My laptop is a Lenovo IdeaPad 3 with a quad-core In…
-
### What do you need?
This is an awesome project, but it needs Ollama support. The OpenAI API is the easy way out; please add support for local LLMs too. Thank you.