-
I originally had this as a discussion, but given how uv works it seems like a valid issue.
I need llama-cpp-python with CUDA; according to the [installation docs](https://github.com/abetlen/llama-cpp-python?t…
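For reference, this is roughly the kind of command involved for a CUDA build; the flag name and the uv invocation below are my assumptions, not something confirmed for every environment:

```bash
# Build llama-cpp-python against the CUDA backend of the vendored llama.cpp.
# CMAKE_ARGS is read by the scikit-build-core build backend during the source build.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --no-cache-dir --force-reinstall

# Assumed uv equivalent: uv's pip interface also builds from source,
# so the same environment variable should be picked up.
CMAKE_ARGS="-DGGML_CUDA=on" uv pip install llama-cpp-python
```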
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [X] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
Hello, I am a complete newbie when it comes to the subject of LLMs.
I installed a GGML model into the oobabooga webui and tried to use it. It works fine, but only from RAM; it only uses 0.5 GB of VRAM, and I d…
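For anyone hitting the same thing through llama-cpp-python directly, a minimal sketch of offloading layers to VRAM, assuming a GPU-enabled build and a hypothetical local model path:

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers are offloaded to VRAM;
# -1 offloads all of them (only effective with a CUDA/Metal/etc. build).
llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,
    n_ctx=2048,
)

out = llm("Q: Name the planets in the solar system. A:", max_tokens=64)
print(out["choices"][0]["text"])
```

In oobabooga's UI the equivalent knob should be the n-gpu-layers setting on the llama.cpp model loader.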
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of…
-
I would appreciate it if anyone could help with the following problem when using the converted GGUF for inference.
I found that inferencing with llama-cpp generates a different result from inferencing …
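As a first step, a minimal sketch of how sampling noise can be ruled out on the llama-cpp side, with a fixed seed and greedy decoding (the model path and prompt below are placeholders, not the original setup):

```python
from llama_cpp import Llama

# Fixed seed plus temperature=0.0 (greedy decoding) makes this side reproducible,
# so any remaining mismatch points at the conversion/quantization rather than sampling.
llm = Llama(model_path="./models/converted-model.gguf", n_ctx=2048, seed=42)

out = llm("The capital of France is", max_tokens=16, temperature=0.0)
print(out["choices"][0]["text"])
```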
-
I am not able to install llama-cpp-python using the instructions at
https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation-configuration
`set CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=Open…
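For comparison, a minimal sketch of the OpenBLAS build in a cmd.exe shell, assuming OpenBLAS is discoverable by CMake (PowerShell would use $env:CMAKE_ARGS instead):

```bat
:: Enable the OpenBLAS backend for the vendored llama.cpp build (cmd.exe syntax;
:: note that set VAR="value" would store the quotes as part of the value).
set CMAKE_ARGS=-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS

:: Force a source rebuild so the new CMake flags actually take effect.
pip install llama-cpp-python --no-cache-dir --force-reinstall --upgrade
```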
-
I used the latest module, and while embedding the GGUF model into Chroma a critical error occurred:
`llamaem= LlamaCppEmbeddings(model_path="D:\models\llama-2-7b-chat.Q4_K_M.gguf")
vectorstore = Chro…
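For context, a minimal sketch of the LlamaCppEmbeddings + Chroma path, assuming the langchain-community package and using a raw string for the Windows path so the backslashes cannot be misread as escape sequences:

```python
from langchain_community.embeddings import LlamaCppEmbeddings
from langchain_community.vectorstores import Chroma

# Raw string keeps the Windows backslashes literal.
llamaem = LlamaCppEmbeddings(model_path=r"D:\models\llama-2-7b-chat.Q4_K_M.gguf")

texts = [
    "llama.cpp runs GGUF models locally.",
    "Chroma stores embedding vectors.",
]

# Embed the texts with the GGUF model and build an in-memory Chroma collection.
vectorstore = Chroma.from_texts(texts=texts, embedding=llamaem)

docs = vectorstore.similarity_search("What stores vectors?", k=1)
print(docs[0].page_content)
```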
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as …