# Prerequisites
I am running the latest code. Development is very rapid so there are no tagged versions as of now.
I carefully followed the [README.md](https://github.com/abetlen/llama-cpp-python/b…
-
**Is your feature request related to a problem? Please describe.**
I am using llama-cpp-python in some projects and the difference in build time between using and not using llama-cpp-python is 15-20 …
-
Thanks for any response. 😊
-
**Describe the bug**
Docker Compose, `CUDA error 711 at /root/workspace/crates/llama-cpp-bindings/llama.cpp/ggml-cuda.cu:6826: peer mapping resources exhausted`
![bug](https://github.com/TabbyML…
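A minimal workaround sketch, under the assumption that the error comes from peer-to-peer (P2P) mappings being enabled across too many GPU pairs in the multi-GPU container: restrict the devices visible to the process before any CUDA context is created. The device IDs below are examples, not values from this report.

```python
import os

# CUDA error 711 ("peer mapping resources exhausted") is commonly triggered
# when P2P access is enabled between too many GPU pairs. Limiting the set of
# visible devices reduces the number of peer mappings that get created.
# This must run before the CUDA runtime is initialized (i.e. before the
# model/backend is loaded). The device IDs "0,1" are just an example.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
```

In Docker Compose the same effect can be had by setting `CUDA_VISIBLE_DEVICES` in the service's `environment:` section instead of in code.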
-
Replicate results from: https://github.com/socialfoundations/surveying-language-models
-
The llama.dll that can be downloaded from the llama.cpp repo is mostly suitable for programming languages that can work with concepts that are rather difficult for a novice coder, such as pointers, structure…
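The point can be illustrated with a small sketch: calling into a C library via an FFI layer like Python's `ctypes` requires spelling out pointer-level argument and return types by hand. To keep the sketch self-contained and runnable, libc's `strlen` stands in for a llama.dll export; no actual llama.dll symbols are assumed here.

```python
import ctypes

# Stand-in for loading llama.dll: on Windows this would be
# ctypes.CDLL("llama.dll"). Passing None loads symbols already linked
# into the running process (which includes libc), keeping the example
# runnable without the DLL.
libc = ctypes.CDLL(None)

# The caller must declare the C-level signature manually:
# a char* argument (raw byte buffer) and a size_t return value.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"llama")  # pass a pointer to bytes, get back a C size_t
```

This bookkeeping (buffer ownership, pointer types, struct layouts) is exactly the kind of detail a higher-level binding hides from the user.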
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
I am using the following .env.local with llama-2-7b.Q4_K_S.gguf and llama prompt template
```
MODELS=`[
{
"name": "llama-2-7b.Q4_K_S.gguf",
"chatPromptTemplate": "[INST] \n{{preprom…
```
-
# Expected Behavior
I want llama-cpp-python to be able to load GGUF models with the GPU inside Docker. It works properly when installing llama-cpp-python in interactive mode but not inside the dockerf…
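A hedged sketch of a Dockerfile stage that builds llama-cpp-python with CUDA enabled at image build time. The base image tag and package list are assumptions; the key part is passing the CUDA flag through `CMAKE_ARGS` so the wheel is compiled with GPU support rather than the default CPU-only build, which is a common reason GPU loading works interactively but not from a Dockerfile.

```dockerfile
# Assumption: a CUDA *devel* base image so nvcc is available at build time.
FROM nvidia/cuda:12.1.1-devel-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip build-essential cmake git \
    && rm -rf /var/lib/apt/lists/*

# Build llama-cpp-python from source with CUDA support.
# (Older releases used -DLLAMA_CUBLAS=on instead of -DGGML_CUDA=on.)
ENV CMAKE_ARGS="-DGGML_CUDA=on"
RUN pip3 install --no-cache-dir llama-cpp-python
```

Running the container still needs the GPU exposed, e.g. `docker run --gpus all …` or the equivalent `deploy.resources` settings in Compose.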
-
I recently tried the Bodhi CLI to download Llama 3.1 using this script
```
bodhi create llama3_1:instruct_q4 \
--repo bullerwins/Meta-Llama-3.1-8B-Instruct-GGUF \
--filename Meta-Llama-3.1…
```