teddykoker / torchsort

Fast, differentiable sorting and ranking in PyTorch
https://pypi.org/project/torchsort/
Apache License 2.0
765 stars · 33 forks

Binary wheels #70

Closed siddharthab closed 1 year ago

siddharthab commented 1 year ago

Is distributing pre-compiled binary wheels a possibility for this project? Perhaps variants with and without CUDA? At least for Linux?

teddykoker commented 1 year ago

Hi @siddharthab, I currently have no plans to support this. Since the only dependency for building is a C++ toolchain (and a CUDA toolchain for CUDA support), I would not think that the compilation during installation is much of an inconvenience.

That being said, if enough people show interest (or someone submits a PR implementing a build action) I would be happy to provide these. I believe something similar to this GitHub Action could be used to build the extension for all of the possible torch/CUDA/CPU combinations, but I would likely need to spin up a server to store the wheels, which could take some time and incur some cost.
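For illustration, a build matrix covering those combinations might be sketched roughly like the following GitHub Actions fragment. This is an assumption on my part, not the workflow the project actually adopted; the job names, action versions, and version lists are all placeholders:

```
# Hypothetical workflow sketch; versions and names are illustrative only.
name: build-wheels
on: [workflow_dispatch]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python: ["3.10", "3.11"]
        torch: ["1.13.0", "2.0.0"]
        cuda: ["cpu", "cu118"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python }}
      - run: pip install torch==${{ matrix.torch }}
      - run: pip wheel . --no-deps -w dist/
      - uses: actions/upload-artifact@v3
        with:
          name: wheels-${{ matrix.torch }}-${{ matrix.cuda }}-${{ matrix.python }}
          path: dist/*.whl
```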

siddharthab commented 1 year ago

Thank you for the quick reply. I suppose I am coming from the perspective of build tooling where packages listed in install-time requirements are not typically installed in the build environment.

For example, if we have a requirements.in file (for pip-tools) with both torch and torchsort listed, resolution will fail because torch won't already be installed in the environment when torchsort's build runs.
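As a minimal illustration of that failure mode (the pinned versions here are only examples):

```
# requirements.in (for pip-tools)
# Compiling this with pip-compile can fail because torchsort's setup.py
# imports torch at build time, and the isolated build environment used
# during resolution does not have torch installed yet.
torch==2.0.0
torchsort==0.1.9
```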

Another example is if we want to build a Docker image with the package installed in it, but the machine where we are building the image does not have the CUDA toolkit. We still want the CUDA-capable package because the image is going to be deployed on a remote GPU machine.
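A pre-built wheel would make that Docker case straightforward, since no compiler or CUDA toolkit is needed at image-build time. A rough sketch, assuming a wheel URL of the form later used in this thread (the base image, Python tag, and exact wheel name here are guesses, not verified values):

```
# Hypothetical Dockerfile sketch: install a CUDA-capable torchsort wheel
# on a build machine that has no CUDA toolkit installed.
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04
RUN apt-get update && apt-get install -y python3-pip
RUN pip install torch==2.0.0
# Pre-built wheel; no local compilation, so no CUDA toolkit required here.
RUN pip install https://github.com/teddykoker/torchsort/releases/download/v0.1.9/torchsort-0.1.9+pt20cu118-cp310-cp310-linux_x86_64.whl
```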

Thanks for the pointers, I will try to get something working. Or at the very least, we can have instructions for people in these situations.

teddykoker commented 1 year ago

Those use cases make sense to me. If the GitHub build action works, one option would be to simply upload the .whl files to the release page. This would allow the pre-built binaries to be installed with something like:

pip install https://github.com/teddykoker/torchsort/releases/download/v0.1.9/torch-2.0.0+cpu.whl

Alternatively I could set up an S3 bucket, along with a PyPI-style index server. This would allow something like:

pip install torchsort==0.1.9 --index-url https://<some url>/whl/torch-1.13.0+cpu/

This would have some monetary cost, but hopefully quite small.

Lastly, there may be a way to use GitHub Pages along with the action to host the .whl files:

pip install torchsort==0.1.9 --index-url https://teddykoker.github.io/torchsort/whl/torch-1.13.0+cpu/

But this might involve some work writing a custom GitHub Action to copy the files over and generate the proper index.html files.
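For context, a "simple repository" index of the kind pip expects from --index-url is just a set of HTML pages listing links to the wheels (PEP 503). A rough sketch of generating one, where the directory layout and wheel filename are illustrative placeholders rather than this project's actual layout:

```shell
# Hypothetical sketch: build minimal PEP 503-style index.html files for
# each wheel directory, suitable for static hosting (e.g. GitHub Pages).
mkdir -p whl/torch-1.13.0+cpu
touch whl/torch-1.13.0+cpu/torchsort-0.1.9-cp310-cp310-linux_x86_64.whl

for dir in whl/*/; do
  {
    echo "<!DOCTYPE html><html><body>"
    for f in "$dir"*.whl; do
      name=$(basename "$f")
      # Each wheel becomes one anchor whose href is the wheel filename.
      echo "<a href=\"$name\">$name</a><br/>"
    done
    echo "</body></html>"
  } > "$dir"index.html
done
```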

I'm thinking the first approach would make the most sense for now, but I'd be interested to know if you had any other thoughts.

siddharthab commented 1 year ago

Already on it - https://github.com/siddharthab/torchsort/commit/6d6894dba653998b6275c82b51d3f44b87d7c8ab

It's mostly working, I just have to figure out the right CUDA settings, and then the wheels will show up in the Releases page. I will be sending a PR soon, and you can work out the version matrix you want to support and the release policy.

teddykoker commented 1 year ago

Awesome! Will review once the PR is in.

teddykoker commented 1 year ago

@siddharthab Thanks again for all the help implementing this! I have successfully added the .whl files to the release page, which can be installed directly with pip (also added to readme):

# torchsort version, supports >= 0.1.9
export TORCHSORT=0.1.9
# PyTorch version, supports pt20 and pt113 for versions 2.0 and 1.13 respectively
export TORCH=pt20
# CUDA version, supports cpu, cu113, cu117, cu118 for CPU-only, CUDA 11.3, CUDA 11.7 and CUDA 11.8 respectively
export CUDA=cu118
# Python version, supports cp310 and cp311 for versions 3.10 and 3.11 respectively
export PYTHON=cp310

pip install https://github.com/teddykoker/torchsort/releases/download/v${TORCHSORT}/torchsort-${TORCHSORT}+${TORCH}${CUDA}-${PYTHON}-${PYTHON}-linux_x86_64.whl

Do let me know if you run into any issues with installing them, and don't hesitate to reach out if you need me to expand the matrix at all.

siddharthab commented 1 year ago

Thank you @teddykoker.

cc @gevangel