-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid, so there are no tagged versions as of…
-
Hi,
I am trying to run it on my Mac; however, it says it cannot find the shared library "llama".
Here is the error message.
`
(llama) chen@chenshuodeMBP-9 llama-cpp-python % python3 -m llama_cp…
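One remedy that often helps with a missing shared "llama" library (a sketch only, assuming the bundled library simply needs to be rebuilt for this machine; the actual error above is truncated, so this is not a confirmed fix) is forcing a clean source build on reinstall:

```bash
# Force llama-cpp-python to rebuild its bundled llama shared library from source
# instead of reusing a cached wheel.
FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --no-cache-dir
```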
-
With 32 GB of GPU memory and 64 GB of RAM on an Intel i7 13th-gen processor, it takes 2-4 minutes to respond and does not use the GPU, with llama-cpp-python==0.1.83 installed via --no-cache-dir.
![image](https://github.com/PromtEngine…
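For reference, llama-cpp-python only uses the GPU when it was built with a GPU backend and layers are explicitly offloaded at load time; by default everything runs on the CPU. A minimal sketch under those assumptions (an NVIDIA card, the cuBLAS flag used by 0.1.x-era releases, placeholder model path and layer count):

```bash
# Rebuild with the cuBLAS backend enabled (the GPU flag used by 0.1.x releases).
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install llama-cpp-python==0.1.83 --force-reinstall --no-cache-dir

# Offload layers at load time; n_gpu_layers defaults to 0, i.e. CPU-only inference.
python3 -c "
from llama_cpp import Llama
llm = Llama(model_path='/path/to/model.gguf', n_gpu_layers=35)  # placeholder path and layer count
print(llm('Q: What is 2+2? A:', max_tokens=8)['choices'][0]['text'])
"
```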
-
**Is your feature request related to a problem? Please describe.**
When generating a chat completion, the prompt is hard-coded to a non-standard template that looks something like:
```
### …
-
**Describe the bug**
When I run `ilab data generate --gpus=1 --pipeline full` I'm receiving the following error message:
```
instructlab.sdg.pipeline.EmptyDatasetError: Pipeline stopped: Empty da…
-
It looks like PyPI only has the source distribution for each release: https://pypi.org/project/llama-cpp-python/0.2.6/#files
But the GitHub release at https://github.com/abetlen/llama-cpp-pytho…
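For context, installing from PyPI in this situation means pip builds the package locally from the sdist; pip can also install a wheel directly from a URL if one is attached to the GitHub release. A rough sketch (the release asset name below is a hypothetical placeholder, since the actual link above is truncated):

```bash
# Only the sdist is on PyPI, so this triggers a local CMake build.
pip install llama-cpp-python==0.2.6

# pip can install a prebuilt wheel directly from a release URL
# (hypothetical asset name; use the real filename from the GitHub release page).
pip install "https://github.com/abetlen/llama-cpp-python/releases/download/v0.2.6/<wheel-filename>.whl"
```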
-
This epic is the main tracker for all the backend additions that should be part of LocalAI v2 and for ongoing efforts.
The objective is to release a v2 that deprecates old models which are now superse…
-
I'm facing an issue where the wheel for llama-cpp-python fails to build while installing requirements. Has anyone encountered this issue before and managed to resolve it? Any advice or pointers would …
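As a first debugging step, it usually helps to confirm that CMake and a C/C++ toolchain are installed, then rerun the install verbosely so the real compiler error is visible. A sketch assuming a Debian/Ubuntu system (adjust the package manager for other platforms):

```bash
# Make sure the native build prerequisites exist (Debian/Ubuntu example).
sudo apt-get update && sudo apt-get install -y build-essential cmake

# Retry with verbose output so the underlying CMake/compiler error is shown.
pip install llama-cpp-python --no-cache-dir --verbose
```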
-
# Expected Behavior
I have a machine with an AMD GPU (Radeon RX 7900 XT). I tried to install this library as written in the README by running
`CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip …
-
### System Info
CPU: x86_64
GPU: L40S
TensorRT branch: main
commit id: b57221b764bc579cbb2490154916a871f620e2c4
CUDA:
| NVIDIA-SMI 535.154.05 Driver Version: 535.154.05 CUDA V…