-
**LocalAI version:**
v2.13.0-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
kubernetes helm release: https://github.com/lenaxia/home-ops-prod/blob/bdb6695ba22777c8f4233c…
-
**LocalAGI version:**
August 25th, 2023
**Environment, CPU architecture, OS, and version:**
MacBook Pro 2018
Intel x86 Core i7, 2.6 GHz
16 GB 2400 MHz DDR4
macOS Ventura 13.5.1
Docker Des…
-
**LocalAI version:**
v2.20.1
**Environment, CPU architecture, OS, and Version:**
Ubuntu Server, Raspberry Pi OS
**Describe the bug**
When trying to initiate inference through P2P, the workers…
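For reference, a minimal sketch of the two-node P2P setup this kind of report involves, based on my reading of the LocalAI distributed-inference docs; the exact subcommand name and the `TOKEN` variable are assumptions worth double-checking against the version in use:

```sh
# On the main node: start LocalAI with P2P mode enabled. In my
# understanding of the docs, a swarm token is printed on startup.
local-ai run --p2p

# On each worker node (e.g. the Raspberry Pi): join the swarm as a
# llama.cpp RPC worker, passing the token printed by the main node.
# The subcommand name is taken from the docs and may differ by version.
TOKEN="<token printed by the main node>" local-ai worker p2p-llama-cpp-rpc
```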
-
**Why**
The subsystem already supports Ollama and LocalAI, letting users manage their installs really easily; why not add Big AGI to the manager so they can try out what they like?
**Description**
…
-
When using llama.cpp via LocalAI, "failed to find free space in the KV cache" is displayed after using it for a while; the responses it can produce gradually become shorter, and eventuall…
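For context, the llama.cpp KV cache is sized from the configured context window, so one thing worth ruling out is a too-small `context_size` in the model definition. A minimal sketch, assuming a standard LocalAI model YAML in the models directory (the file name and model values here are placeholders):

```sh
# Hypothetical model definition: context_size is the LocalAI option
# that sizes the llama.cpp context (and thus the KV cache). Too small
# a value can exhaust the cache once conversations grow past it.
cat > models/my-model.yaml <<'EOF'
name: my-model
parameters:
  model: my-model.gguf
context_size: 4096
EOF
```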
-
### Self Checks
- [X] This is only for bug reports; if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I hav…
-
How can we make use of promptfoo with local LLM models on my machine?
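One way to do this, sketched under the assumption that the local model is served through an OpenAI-compatible endpoint (as LocalAI and Ollama both provide); the provider id and config keys are from my reading of the promptfoo docs and worth verifying:

```sh
# Minimal promptfoo config pointing at a local OpenAI-compatible server.
# Assumption: the openai provider accepts an apiBaseUrl override, letting
# it talk to localhost instead of api.openai.com; the model name and port
# are placeholders for whatever the local server exposes.
cat > promptfooconfig.yaml <<'EOF'
prompts:
  - "Summarize in one sentence: {{input}}"
providers:
  - id: openai:chat:my-local-model
    config:
      apiBaseUrl: http://localhost:8080/v1
      apiKey: not-needed
tests:
  - vars:
      input: "promptfoo can evaluate local models too."
EOF

# Run the evaluation against the local endpoint.
npx promptfoo@latest eval
```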
-
**LocalAI version:**
localai:v2.10.0
**Environment, CPU architecture, OS, and Version:**
Windows 11, 11th Gen Intel(R) Core(TM) i9-11900K, NVIDIA RTX 3090
**Describe the bug**
I just run…
-
Subscribe to this issue and stay notified about new [daily trending repos in C++](https://github.com/trending/c++?since=daily)!
-
**LocalAI version:**
(base) block@192 LocalAI % ./local-ai --version
LocalAI version LocalAI v1.20.1-1-g3829aba-dirty (3829aba869f8925dde7a1c9f280a4718dda3a18c)
**Environment, CPU architect…