abetlen / llama-cpp-python

Python bindings for llama.cpp
https://llama-cpp-python.readthedocs.io
MIT License

Publish all wheels to PyPI #741

Open · simonw opened this issue 1 year ago

simonw commented 1 year ago

It looks like PyPI only has the source distribution for each release: https://pypi.org/project/llama-cpp-python/0.2.6/#files

[screenshot: PyPI file listing for 0.2.6 showing only the source distribution]

But the GitHub release at https://github.com/abetlen/llama-cpp-python/releases/tag/v0.2.6 lists many more files than that:

[screenshot: GitHub release assets for v0.2.6 listing many wheel files]

Would it be possible to push those wheels to PyPI as well?

I'd love to be able to pip install llama-cpp-python and get a compiled wheel for my platform.

abetlen commented 1 year ago

Hey @simonw! Big fan of your datasette project.

I hear you and I would like to make the setup process a little easier and less error-prone.

Currently llama.cpp supports a number of optional accelerations, including several BLAS libraries, multiple CUDA versions, OpenCL, and Metal. In theory I could publish a pre-built wheel that just includes a version of llama.cpp with no accelerations enabled, but I feel like that runs counter to the goal of providing users with the fastest local inference for their hardware.

I'm open to suggestions though, and I'll try to think of some possible solutions.

simonw commented 12 months ago

Two approaches I can think of trying that might work are:

1. Splitting into a main package plus separate per-backend packages that people install for their hardware.
2. Bundling several pre-built binaries in a single wheel and picking the right one at runtime.

For that first option, one way that could work is to have a llama-cpp-python package which everyone installs but which doesn't actually work until you install one of the "backend" packages: llama-cpp-python-cuda-12 or llama-cpp-python-metal or similar.
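Purely as a sketch of what that could look like (all of the backend module names here are made up, nothing like this exists today), the frontend package's __init__.py might just try each backend in turn and fail with a helpful message if none is installed:

    # Hypothetical __init__.py for a "frontend" llama-cpp-python package.
    # These backend module names are invented; they only illustrate the idea.
    _BACKEND_MODULES = ("llama_cpp_cuda12", "llama_cpp_metal", "llama_cpp_cpu")

    _backend = None
    for _name in _BACKEND_MODULES:
        try:
            _backend = __import__(_name)
            break
        except ImportError:
            continue

    if _backend is None:
        raise ImportError(
            "No llama-cpp-python backend installed; "
            "pip install one of: " + ", ".join(_BACKEND_MODULES)
        )

    # Re-export the selected backend's public API from the frontend package.
    globals().update(
        {name: obj for name, obj in vars(_backend).items() if not name.startswith("_")}
    )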

How large are the different binaries? If all of them could be bundled in a single wheel that stayed under 50MB, that could be a neat solution, provided you can write code that detects which one to use.

You could even distribute that as llama-cpp-python-bundle and tell people to install that one if they aren't sure which version would work best for them.
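And roughly how the "detect which one to use" part might look, just as an illustration (the selection heuristics here are my guesses, not anything llama.cpp actually ships):

    import ctypes.util
    import platform

    def pick_llama_variant() -> str:
        """Guess which bundled llama.cpp binary to load (illustrative only)."""
        # Apple Silicon Macs would get the Metal build.
        if platform.system() == "Darwin" and platform.machine() == "arm64":
            return "metal"
        # A visible CUDA driver library suggests the CUDA build will work.
        if ctypes.util.find_library("cuda") is not None:
            return "cuda"
        # Otherwise fall back to the plain CPU build.
        return "cpu"

    print(pick_llama_variant())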

It's a tricky problem though! I bet there are good options I've missed here.

abetlen commented 6 months ago

Hey @simonw, it took a while, but this is finally possible through a self-hosted PEP 503 repository on GitHub Pages (see https://github.com/abetlen/llama-cpp-python/pull/1247).

You should now be able to specify

pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu

on the CLI or

 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
llama-cpp-python

in a requirements.txt to install a pre-built binary version of llama-cpp-python.

The PR also includes initial support for Metal and CUDA wheels, though I had to limit the number of supported Python and CUDA versions to avoid a combinatorial explosion in the number of builds.
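If you want to double-check which build you ended up with, something like this should work (assuming a llama-cpp-python version recent enough to expose llama_supports_gpu_offload in its low-level bindings):

    import llama_cpp

    print("llama-cpp-python version:", llama_cpp.__version__)
    # True if this wheel was built with GPU offload (e.g. CUDA or Metal) enabled.
    print("GPU offload supported:", llama_cpp.llama_supports_gpu_offload())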