-
### What happened?
I can't use Docker + SYCL when running with `-ngl > 0`.
With `-ngl 0` it works fine.
Error message:
No kernel named _ZTSZZL17rms_norm_f32_syclPKfPfiifPN4sycl3_V15queueEiENKUlRNS3_7handlerEE0_c…
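For reference, a minimal reproduction sketch along these lines (the image name `llama-cpp-sycl` and the model path are hypothetical; the `--device /dev/dri` flag is how Intel GPUs are typically exposed to a Linux container):

```shell
# Sketch: run a SYCL-enabled llama.cpp container with GPU offload.
# -ngl 0 runs CPU-only and works; any -ngl > 0 triggers the
# "No kernel named _ZTSZZL17rms_norm_f32_sycl..." error described above.
docker run -it --device /dev/dri \
  -v /path/to/models:/models \
  llama-cpp-sycl \
  -m /models/model.gguf -p "Hello" -ngl 33
```

This kind of "no kernel named" failure usually indicates the SYCL kernels in the image were not compiled for the device visible inside the container.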
-
(base) C:\Users\m>pip install llama-cpp-python
Collecting llama-cpp-python
Using cached llama_cpp_python-0.2.85.tar.gz (49.3 MB)
Installing build dependencies ... done
Getting requirements t…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (llama-cpp-python)
I have updated the gcc version, but the error persists.
Working on Ubuntu version 24.04 LT…
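A useful next diagnostic step is rerunning the install with verbose output, since the short "Failed to build installable wheels" message hides the underlying CMake/compiler error. A sketch (the `CMAKE_ARGS` value is just an example flag, not a known fix):

```shell
# Re-run the build with full logs so the real compiler error is visible.
# --no-cache-dir forces a fresh source build; CMAKE_ARGS passes options
# through to llama.cpp's CMake configure step.
CMAKE_ARGS="-DGGML_NATIVE=off" pip install llama-cpp-python --no-cache-dir --verbose
```

The actual failing compiler invocation will appear near the end of the verbose log and is usually what maintainers need to see.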
-
Hi, thanks again for this project.
I finally got to test with the llama.cpp backend.
I used the built-in model downloader feature to download the Mistral 4.0 GGUF model.
I pip installed gallam…
-
### 📜 Description
When self-hosting locally, the console logs report that llama-cpp-python is not installed.
When I ask a question in CHAT, the following message appears in the console logs:
backe…
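To confirm whether the server's Python environment actually has the package, a quick check like the following can help (a minimal sketch; note the import name is `llama_cpp`, not `llama-cpp-python`):

```python
import importlib.util

def llama_cpp_available() -> bool:
    """Return True if llama-cpp-python is importable in this environment."""
    # The PyPI package is "llama-cpp-python" but the module is "llama_cpp".
    return importlib.util.find_spec("llama_cpp") is not None

if __name__ == "__main__":
    print("llama-cpp-python installed:", llama_cpp_available())
```

Running this with the same interpreter the server uses distinguishes "not installed" from "installed into a different virtualenv".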
-
### Summary
Enable CANN support for WASI-NN ggml plugin.
### Details
Adding CANN support to the WASI-NN ggml plugin is relatively straightforward. The main changes involve adding the following code…
-
### System Info
llama_stack 0.0.52
llama_stack_client 0.0.49
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the b…
-
### Describe the bug
With a fresh install of 1.15, Exllamav2_HF loads a model just fine. However, when I do a local install of exllamav2, both it and the Exllamav2_HF loader break (errors b…
-
@asirgogogo I tried `convert_hf_to_gguf.py` but get the error "ERROR:hf-to-gguf:Model IndexForCausalLM is not supported".
The old `examples/convert_legacy_llama.py` can convert to GGUF, but this gguf ou…
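For context, the invocation that produces the "not supported" error looks roughly like this (the model directory path is hypothetical; `--outfile` and `--outtype` are standard options of the script):

```shell
# Sketch: convert a Hugging Face checkout to GGUF from a llama.cpp checkout.
# Fails with "ERROR:hf-to-gguf:Model IndexForCausalLM is not supported"
# because that architecture has no converter mapping yet.
python convert_hf_to_gguf.py /path/to/Index-model-dir \
  --outfile index.gguf --outtype f16
```

Architectures raise this error until a matching `@Model.register` entry is added to the converter, which is typically the actual fix rather than falling back to the legacy script.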