-
**LocalAI version:**
2.11.0
**Environment, CPU architecture, OS, and Version:**
Windows 11 latest, Xeon(R) w5-3435X, 256GB, 2x 20GB RTX 4000 NVIDIA-SMI 550.65 Driver Version: 551.86 CUDA Vers…
-
**LocalAI version:**
2.12.4 in Docker AMD64 emulation
**Environment, CPU architecture, OS, and Version:**
Mac M3 36GB & Docker Desktop (latest)
**Describe the bug**
`curl http://localhost:…
-
Ollama currently doesn't support [Open AI Compatible Function Calling](https://github.com/ollama/ollama/issues/2790), but there are models such as [Hermes 2 Pro](https://huggingface.co/NousResearch/Herme…
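For context, an OpenAI-compatible function-calling request would look roughly like the sketch below (the model name, function name, and schema are placeholders, not taken from the issue):

```json
{
  "model": "hermes-2-pro",
  "messages": [
    { "role": "user", "content": "What is the weather in Tokyo?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": { "type": "string" }
          },
          "required": ["city"]
        }
      }
    }
  ]
}
```

A server claiming OpenAI compatibility would accept this `tools` array on `/v1/chat/completions` and return a `tool_calls` entry in the assistant message when the model decides to call the function.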
-
dbt
![image](https://user-images.githubusercontent.com/69875491/223252532-b0d79705-c6ec-4557-83da-baa4cd9726d3.png)
![image](https://github.com/devsentient/cdn/assets/69875491/ed65189d-38b4-4a7b-811…
-
Use some Local AI to generate some cool speech phrases for the house.
https://www.youtube.com/watch?v=sfcM-bfFyP4&t=8s
https://docs.litellm.ai/docs/providers/ollama
-
llama-cpp is pegging the CPU at 1500% and inference is very slow.
My server: CentOS, 20 cores, 32GB memory.
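A common cause of this symptom is thread oversubscription; LocalAI's model YAML exposes a `threads` setting that can be pinned to the physical core count. A minimal sketch (the model name and file are placeholders, not from this report):

```yaml
# Hypothetical LocalAI model config — pin llama-cpp to the
# physical core count instead of letting it oversubscribe.
name: my-model
parameters:
  model: ggml-model-q4_0.bin
threads: 20
```

If CPU usage stays saturated after capping threads, the next things to check are whether a GPU backend was expected but not loaded, and whether several requests are running concurrently.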
-
Some backends (stablediffusion, tts) require LocalAI to be compiled with GO_TAGS.
Sad, sad panda faces ensue. Any chance for a fix?
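For anyone hitting this, the build-time workaround is to compile with the relevant tags enabled, roughly like this (exact tag names may vary by LocalAI version, so check the build docs for your release):

```
# build LocalAI with the optional backends compiled in
make GO_TAGS="stablediffusion tts" build
```

The underlying ask in this issue is presumably that the prebuilt binaries/images ship with these backends enabled so recompiling isn't necessary.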
-
**LocalAI version:**
commit d5d82ba344738fc44c75b174ffba47421cf635e8 (HEAD -> master, tag: v2.6.1, origin/master, origin/HEAD)
**Environment, CPU architecture, OS, and Version:**
Mac Studio, …
-
**LocalAI version:** 2.15.0
Docker: image: quay.io/go-skynet/local-ai:master-cublas-cuda12-ffmpeg
**Environment, CPU architecture, OS, and Version:**
Ubuntu 22.04 GPU
**Describe the bug**…
-
I'm Japanese and can't speak English, so I'm using Google Translate to get my idea across.
**Success Criteria**
This is very personal, but I think it would be great if this excellent AI service "Jan" could be used for imag…