-
I have updated to version 20240924.
When I run the qwen2.5:3b model, I get the error below:
time=2024-09-25T08:35:39.339+08:00 level=INFO source=server.go:395 msg="starting llama server" cmd="D:\\python\…
-
Installation went fine but I get the following error when trying to invoke the assistant:
`Sorry, there was a problem talking to the backend: RuntimeError('llama_decode returned 1')`
![image](ht…
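One common cause of `llama_decode returned 1` is a batch that does not fit in the context window / KV cache — an assumption here, since the screenshot may show a different root cause. A minimal sketch of a guard that trims a tokenized prompt before decoding; `fit_prompt` is a hypothetical helper, not part of llama-cpp-python:

```python
def fit_prompt(tokens, n_ctx, max_new_tokens):
    """Trim a tokenized prompt so prompt + generation budget fits in n_ctx.

    Hypothetical helper: llama_decode can fail when the batch does not
    fit in the KV cache, so we keep only the most recent tokens.
    """
    budget = n_ctx - max_new_tokens
    if budget <= 0:
        raise ValueError("max_new_tokens must be smaller than n_ctx")
    # Keep the tail of the prompt, which is usually the most relevant part.
    return tokens if len(tokens) <= budget else tokens[-budget:]

# Example: a 10-token prompt trimmed to fit an 8-token context
# while reserving 2 tokens for generation.
print(fit_prompt(list(range(10)), n_ctx=8, max_new_tokens=2))
```

If trimming makes the error go away, raising `n_ctx` when constructing the model is the cleaner fix.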
-
**Describe the bug**
Running the ilab generate command results in an empty dataset for the knowledge I'm trying to add:
```
ilab generate --model-family merlinite --sdg-scale-fa…
-
# Prerequisites
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the [README.md](https://github.com/abetlen/llama…
-
Hello, I am having problems with the newest versions of llama-cpp-python.
Generation appears to complete fine, but with the same GGUF file I now receive completely nonsensical responses or weird artifacts.
For exampl…
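To narrow down a regression like this, one approach is to run the same prompt through both library versions with greedy decoding (temperature 0) and compare the generated token IDs position by position. `first_divergence` below is a hypothetical comparison helper, not part of llama-cpp-python:

```python
def first_divergence(a, b):
    """Return the index of the first differing token between two
    generations, or None if they are identical.

    Useful when comparing greedy-decoded outputs across two
    llama-cpp-python versions loaded from the same GGUF file.
    """
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    # One sequence is a prefix of the other: they diverge at the
    # shorter length; equal lengths mean no divergence at all.
    if len(a) != len(b):
        return min(len(a), len(b))
    return None

print(first_divergence([15, 292, 3290], [15, 292, 1017]))
```

If the outputs diverge from token 0, the tokenizer or chat template handling may have changed between versions rather than the sampling code.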
-
User Anton on Discord reported:
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 …
-
Hello, I am trying to configure lsp-ai to get Copilot-like completion in Helix. I intend to use only models running locally.
Ideally, I would like to have them served following an OpenAI compatible AP…
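For reference, llama-cpp-python's built-in server (`python -m llama_cpp.server`) exposes an OpenAI-compatible `/v1/chat/completions` endpoint on localhost. A sketch of the request body such an endpoint expects; the model name and host below are illustrative assumptions:

```python
import json

def chat_request(model, prompt, max_tokens=64, temperature=0.0):
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions
    endpoint, e.g. llama-cpp-python's server on http://localhost:8000."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# Example body for a local code-completion request.
body = chat_request("local-model", "def fib(n):")
print(json.dumps(body, indent=2))
```

Any client (including an editor plugin) that can POST this JSON with a `Content-Type: application/json` header can talk to such a server, so no cloud credentials are required.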
-
### System Info
GPU 2x A30, TRT-LLM branch main, commit id: 66ef1df492f7bc9c8eeb01d7e14db01838e3f0bd
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] …
-
Hello @abetlen
Firstly, I'd like to extend my appreciation for your hard work and dedication in developing and maintaining the llama-cpp-python package. It has been an invaluable tool for our proj…
-
Hey guys, I'm trying to install PrivateGPT on WSL but I'm getting these errors. Any ideas?
Command used: `CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-c…