-
User Anton on Discord reported:
Collecting diskcache>=5.6.1 (from llama-cpp-python)
Using cached diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: jinja2>=2.11.3 …
-
**Describe the bug**
**To Reproduce**
Steps to reproduce the behavior:
Execute this command:
CMAKE_ARGS="-DLLAMA_CUDA=on -DLLAMA_NATIVE=off" pip install 'instructlab[cuda]'
and compile err…
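One way to make sure the flags actually reach CMake is to set them explicitly in the environment of the pip process. A minimal sketch (hypothetical helper, stdlib only; `FORCE_CMAKE=1` forces a source build in llama-cpp-python's build setup):

```python
import os

def cuda_install_invocation(package: str, cmake_args: str):
    """Compose the pip command and environment for a CUDA-enabled
    source build (hypothetical helper; names are illustrative)."""
    env = dict(os.environ)
    env["CMAKE_ARGS"] = cmake_args   # read by the build backend during compilation
    env["FORCE_CMAKE"] = "1"         # force a source build even if a wheel exists
    cmd = ["pip", "install", "--no-cache-dir", package]
    return cmd, env

cmd, env = cuda_install_invocation(
    "instructlab[cuda]", "-DLLAMA_CUDA=on -DLLAMA_NATIVE=off"
)
# subprocess.run(cmd, env=env, check=True) would run the actual build
```

Passing the environment this way avoids shell quoting surprises when the `CMAKE_ARGS="…"` prefix is copied between shells.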
-
OS: Ubuntu 22.04.1
Python: Python 3.12.2
Build fails for llama-cpp-python
```
$ pip install -r requirements.txt
...
Building wheels for collected packages: llama-cpp-python
Building wheel…
```
-
I was told to move this story from https://github.com/comfyanonymous/ComfyUI/issues/5510 to the ComfyUI-N-Nodes repo.
### Expected Behavior
I'm not sure if Ollama models are required in any way, but I do see…
-
### Describe the issue as clearly as possible:
Consider the following code:
```
from outlines import models
# No error
models.llamacpp(
repo_id="M4-ai/TinyMistral-248M-v2-Instruct-GGUF"…
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as…
-
There are multiple issues with the CUDA wheels:
1. The cu125 repository returns 404:
```bash
$ curl -I https://abetlen.github.io/llama-cpp-python/whl/cu125/
HTTP/2 404
```
2…
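The wheel index URLs appear to follow a fixed pattern; judging from the cu125 URL above, the tag is `cu` plus the CUDA version digits. A small helper (assumed pattern, not an official API) for composing the extra-index URL:

```python
def cuda_wheel_index(cuda_version: str) -> str:
    # Assumed pattern, inferred from the cu125 URL above:
    # "cu" + major/minor digits, dot removed.
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://abetlen.github.io/llama-cpp-python/whl/{tag}/"

url = cuda_wheel_index("12.4")
# then: pip install llama-cpp-python --extra-index-url <url>
```

Probing the composed URL with `curl -I`, as above, shows quickly whether a repository for that CUDA version exists at all.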
-
**Chapter 6**
I am running the below in Colab connected to T4.
%%capture
!pip install "langchain>=0.1.17" "openai>=1.13.3" "langchain_openai>=0.1.6" "transformers>=4.40.1" "datasets>=2.18.0" accelerate>=0.…

(Note that the version specifiers need quoting: unquoted, the shell treats `>` as output redirection.)
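Because `%%capture` swallows pip's output, a failed install or resolve can go unnoticed. A quick stdlib check (hypothetical helper) that the minimum pins actually took effect:

```python
from importlib.metadata import PackageNotFoundError, version

def meets_minimum(package: str, minimum: str) -> bool:
    # Compare dotted versions numerically: a plain string comparison
    # would wrongly rank "0.1.9" above "0.1.17".
    def key(v: str):
        return tuple(int(p) for p in v.split(".") if p.isdigit())
    try:
        return key(version(package)) >= key(minimum)
    except PackageNotFoundError:
        return False

# e.g. meets_minimum("langchain", "0.1.17")
```

Running this in the next cell surfaces a silently missing or outdated package before the chapter's code fails with an import error.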
-
# Prerequisites
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
- [x] I carefully followed the [README.md](https://github.com/abetlen/lla…
-
**Is your feature request related to a problem? Please describe.**
Currently I am using Qwen2-VL, which is the best VLM for my project, and I hope llama-cpp-python can support this model. I tried to …