-
**LocalAI version:**
v1.40.0
**Environment, CPU architecture, OS, and Version:**
Compiled for CUDA on Linux thommopc 6.6.2-arch1-1 #1 SMP PREEMPT_DYNAMIC Mon, 20 Nov 2023 23:18:21 +0000 x…
-
**LocalAI version:**
v2.0.0-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
Linux LocalAi-GPT 5.15.133.1-microsoft-standard-WSL2 #1 SMP Thu Oct 5 21:02:42 UTC 2023 x8…
-
We hit our stretch goal for persona!
This issue will document the progress:
- [Framework](#framework)
- [Session](#session)
- [Pipeline](#pipeline)
- [Solvers](#solvers)
- [Server](#server)
…
-
It would be great if a container solution were offered, so this could easily be used in conjunction with the LocalAI container offering. This would simplify the required configuration even further than…
-
Currently the format is:
```js
const script = [
{
type: 'narrate',
content:
"First I'm creating the canvas with a size of 640 by 360 pixels.",
},
{
…
-
**LocalAI version:**
commit 618fd1d41730ab03f7ac40e2457ea29709756b1f
**Environment, CPU architecture, OS, and Version:**
Macbook Pro M1 Pro 16GB, macOS 12.6
**Describe the bug**
Failure o…
-
**Is your feature request related to a problem? Please describe.**
OpenBuddy has upgraded the chat template, and some of their models are trained to follow it strictly. Please consider update t…
-
### Your current environment
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubu…
-
I would like to be able to connect from FlowiseAI to this locally running AI, getumbrel/llama-gpt (started via Docker and running at http://ip:port).
I'd rather not use the LocalAI solution, unless th…
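Since llama-gpt exposes an OpenAI-compatible API, any OpenAI client (including FlowiseAI's OpenAI-style connectors) should be able to target it by pointing the base URL at the container. A minimal sketch of what such a request would look like — the host, port, and model name below are placeholders, not values from this issue:

```python
import json
import urllib.request

# Hypothetical address of the llama-gpt container; replace with the
# actual ip:port it is listening on.
BASE_URL = "http://192.168.1.10:3001/v1"

# Standard OpenAI-style chat-completions payload; the model name here
# is an assumption and depends on which model llama-gpt was started with.
payload = {
    "model": "llama-2-7b-chat",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Build (but do not send) the request, just to show where it would go.
request = urllib.request.Request(
    BASE_URL + "/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(request.full_url)
```

In FlowiseAI itself, this typically means setting the OpenAI base URL field of a chat-model node to the container's `/v1` endpoint instead of api.openai.com.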
-
Is there any way to run Falcon 40B model with LocalAI?
I'm trying these models:
- https://huggingface.co/TheBloke/falcon-40b-instruct-GGML/resolve/main/falcon-40b-instruct.ggccv1.q4_0.bin
- https://…
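For GGML-era Falcon builds like the one linked above, LocalAI used a dedicated `falcon` backend (ggllm.cpp-based) selected via a model definition YAML in the models directory. A sketch of what that file could look like — the file name, backend choice, and context size here are assumptions, not confirmed for this exact build:

```yaml
# models/falcon-40b-instruct.yaml (hypothetical file name)
name: falcon-40b-instruct
backend: falcon   # assumption: the ggllm.cpp-based Falcon backend
parameters:
  # The GGML file downloaded into the models directory
  model: falcon-40b-instruct.ggccv1.q4_0.bin
context_size: 2048
```

The model would then be addressed by its `name` field in API requests (e.g. `"model": "falcon-40b-instruct"`).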