-
I've purchased a lifetime subscription to msty with the hope of simplifying LLM management, particularly for local models and basic client functionality. However, I'm encountering some challenges due …
-
**LocalAI version:**
localai/localai:v2.20.1-cublas-cuda12-ffmpeg-core
**Environment, CPU architecture, OS, and Version:**
1. Linux server: x86, GPU 2060 (12 GB), Docker on Ubuntu 22.04
2. Windows 11 …
-
```
dima@dima-Dell-G15-Special-Edition-5521:~/IdeaProjects/LocalAI$ make BUILD_TYPE=cublas BUILD_GRPC_FOR_BACKEND_LLAMA=ON CUDA_DOCKER_ARCH=all GRPC_BACKENDS=backend-assets/grpc/llama-cpp build
go …
```
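If the build succeeds, the requested backend and the resulting binary can be checked directly; a minimal sketch, with paths inferred from the make variables above rather than verified against this exact version:
```
# backend path taken from the GRPC_BACKENDS value above (assumption)
ls backend-assets/grpc/llama-cpp
# `make build` normally produces the local-ai binary in the repo root
./local-ai --help
```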
-
**LocalAI version:**
localai/localai:latest-gpu-nvidia-cuda-12
LocalAI version: v2.22.1 (015835dba2854572d50e167b7cade05af41ed214)
**Environment, CPU architecture, OS, and Version:**
Lin…
-
### ⚠️ This issue respects the following points: ⚠️
- [x] This is a **bug**, not a question or a configuration/webserver/proxy issue.
- [x] This issue is **not** already reported on [Github](https://…
-
**LocalAI version:**
localai/localai:latest-aio-cpu
**Environment, CPU architecture, OS, and Version:**
Docker Desktop, Ryzen 7 7800X3D, Windows 11 Pro
**Describe the bug**
Container fail…
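The failure detail is cut off above, but a minimal way to reproduce with the same image and capture its startup logs looks like this (flags follow the standard quickstart, not this report):
```
docker run -d -p 8080:8080 --name local-ai localai/localai:latest-aio-cpu
docker logs -f local-ai
```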
-
I tried the Docker install command shown on the home page:
```
docker run -p 8080:8080 --gpus all --name local-ai -ti localai/localai:latest-aio-gpu-nvidia-cuda-12
```
It fails because there is n…
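Since the failure message is truncated here, a common first check is whether the NVIDIA container runtime works at all, independent of LocalAI; a sketch (the CUDA image tag is an assumption, any recent tag should do):
```
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```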
-
**LocalAI version:**
Using Docker image:
`localai/localai:latest-aio-gpu-hipblas`
**Environment, CPU architecture, OS, and Version:**
- Ubuntu 22.04
- Xeon X5570 [Specs](https://ark.intel.c…
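One detail worth flagging in this report: the Xeon X5570 (Nehalem) predates AVX, and the stock LocalAI images are generally built assuming AVX support, so checking the CPU flags is a reasonable first step; a sketch:
```
# Prints any AVX-family flags; empty output means the CPU has no AVX
grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u
```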
-
## Description
The AI Plugin currently does not work with the LocalAI backend. It somehow cannot read the response from the LocalAI API correctly.
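A quick way to isolate whether the problem is in the plugin or in the API is to hit LocalAI's OpenAI-compatible chat endpoint directly; a sketch, assuming the default port 8080 and using a placeholder model name:
```
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "messages": [{"role": "user", "content": "Hello"}]}'
```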
**LocalAI version:**
2.9.0 (latest)
**AI Plugin version:**
0…
-
**LocalAI version:**
localai/localai:master-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
- K3S
- RTX 3090
- 2x Xeon 2680 V4
**Describe the bug**
Error `r…
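The error text is truncated above; in a K3S deployment the usual first diagnostic is the pod logs, e.g. (namespace and label are placeholders for whatever the manifests actually use):
```
kubectl logs -n local-ai -l app=local-ai --tail=200
```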