:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
Hey there! I am trying to use this on a Docker "server" within my local network, and on that specific machine I hit the following error:
I don't quite know what the issue is, and I'm unfamiliar with Go programming, so I don't quite know where to start. I can run it locally on my 13900K and it works without issue. The server I am trying to run it on is a Proxmox host with an Intel 9900K as the processor. I don't quite know what else I'd need to provide to reproduce this, but I'm hoping the logs will help.
On another side note, I tried running this on my k3s cluster, which runs on some old Xeons, and I couldn't create the container at all. After googling, I gather llama.cpp needs AVX2, which my Xeons don't have. Is this accurate for LocalAI as well? (I would assume so, but confirmation never hurts.)
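For anyone else trying to confirm whether their CPU supports AVX2 before pulling the image, a quick check on Linux is to look for the flag in `/proc/cpuinfo` (this is a general diagnostic sketch, not something LocalAI itself requires you to run):

```shell
# Print whether the CPU advertises the AVX2 instruction set (Linux only).
# -q suppresses output, -m1 stops at the first match.
if grep -q -m1 avx2 /proc/cpuinfo; then
  echo "avx2: yes"
else
  echo "avx2: no"
fi
```

If this prints `avx2: no` on the Xeon nodes, that would be consistent with llama.cpp-based builds failing there.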
Please let me know if I can provide anything else!