-
### What happened?
Model loading fails (for example, when loading Llama-3.2-1B-Instruct-Q4_K_M.gguf from https://huggingface.co/unsloth/Llama-3.2-1B-Instruct-GGUF/tree/main) because of One Definition…
-
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
The installation runs on a Proxmox homelab, in a Debian 12 LXC container with GPU passthrough for OpenVINO:
…
-
Llama.cpp allows the use of control vectors (https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0), but I am unable to figure out how to get it working in Oobabooga.
There doe…
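For reference, upstream llama.cpp exposes control vectors through CLI flags on its example binaries; a minimal sketch, assuming a recent llama.cpp build (the control-vector file name below is a placeholder, not a file from the linked repository):

```shell
# Apply a control vector when launching llama.cpp's CLI.
# "creative-writing.gguf" is a hypothetical placeholder file name.
./llama-cli -m Llama-3.2-1B-Instruct-Q4_K_M.gguf \
    --control-vector creative-writing.gguf \
    --control-vector-layer-range 10 20 \
    -p "Write an opening line for a mystery novel."

# Or apply the vector with an explicit strength multiplier:
./llama-cli -m Llama-3.2-1B-Instruct-Q4_K_M.gguf \
    --control-vector-scaled creative-writing.gguf 0.8 \
    -p "Write an opening line for a mystery novel."
```

Whether Oobabooga forwards these flags to its llama.cpp backend is exactly the open question in this issue.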
-
### Describe the bug
After downloading a model, I try to load it but get this message in the console:
Exception: Cannot import 'llama-cpp-cuda' because 'llama-cpp' is already imported. Switching to…
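A minimal sketch of the kind of guard behind that message: once one llama.cpp binding is in `sys.modules`, loading a second variant of the same native extension would clash, so the loader falls back to whichever one is already imported. The helper name `pick_backend` and the generic module arguments are hypothetical, not text-generation-webui's actual code:

```python
import importlib
import sys

def pick_backend(preferred: str, fallback: str):
    """Return the preferred module, unless the fallback is already
    loaded -- two builds of the same native extension cannot coexist
    in one process, so reuse whatever is in sys.modules."""
    if fallback in sys.modules:
        # Mirrors the "already imported, switching to ..." behavior.
        return sys.modules[fallback]
    try:
        return importlib.import_module(preferred)
    except ImportError:
        return importlib.import_module(fallback)
```

In practice, restarting the process (or the web UI) clears `sys.modules`, which lets the CUDA build be imported first.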
-
### Issue description
NoBinaryFoundError on Windows when upgrading from 3.0.0-beta44 -> 3.2.0
### Expected Behavior
I'd expect the Windows x64 prebuilt binary to be available for use so users do n…
-
My ComfyUI is not the portable version. I installed Searge LLM with the ComfyUI Manager, then installed it manually; in both cases I got the traceback error below. I tried to install the following comm…
-
Right now we call llama.cpp directly; long-term we should settle on either llama.cpp directly or llama-cpp-python, because maintaining two different llama.cpp backends isn't ideal, as they will never be in…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [ ] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Building wheels for collected packages: llama-cpp-python
Building wheel for llama-cpp-python (pyproject.toml) ... error: subprocess-exited-with-error
× Building wheel for llama-cpp-python (p…
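When the llama-cpp-python wheel build fails like this, a common workaround is to force a clean source build and pass CMake options explicitly; a sketch, assuming a working CUDA toolkit, CMake, and a C++ compiler are on PATH (the `GGML_CUDA` flag is the current name upstream; older llama-cpp-python releases used `LLAMA_CUBLAS`):

```shell
# Rebuild llama-cpp-python from source with CUDA enabled,
# skipping any cached (possibly broken) wheel.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python \
    --upgrade --force-reinstall --no-cache-dir
```

On Windows PowerShell, set the variable first with `$env:CMAKE_ARGS="-DGGML_CUDA=on"` and then run the `pip install` line.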
-
### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
Windows OS:
all requirements that CUDA has
gcc++ 14
Running PrivateGPT, but only with CPU, not…