zanieb opened this issue 1 month ago
I think the most common difficulty would be specifying the index URL, but I'm glad uv makes it as easy as possible.
```shell
uv add torch --index-url https://download.pytorch.org/whl/cpu          # within a project
uv pip install torch --index-url https://download.pytorch.org/whl/cpu  # otherwise
```
However, I've always used `uv pip install` for both PyTorch and TensorFlow, because on my Windows system PyTorch uses CUDA 12.1 and TensorFlow includes tensorflow-intel, both of which are platform-dependent.
Happy to help :) I am on ARM macOS and `uv add torch` has always been nice to me recently 😅 The regular version (w/o CUDA dependencies) gets installed.
Would also like to point out there's been this Python package (recently recommended by @muellerzr, a HuggingFace Accelerate maintainer) that installs torch with automatic backend detection. Might be a good candidate for a first uv plugin, I guess?
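For context, the backend-detection idea boils down to mapping the detected platform and CUDA version onto the right wheel index. A minimal, hypothetical sketch (this is illustrative, not the actual package's API):

```python
import platform
from typing import Optional

def pytorch_index_url(cuda_version: Optional[str], system: Optional[str] = None) -> str:
    """Pick the PyTorch wheel index for a platform/backend combination."""
    system = system or platform.system()
    base = "https://download.pytorch.org/whl"
    if system == "Darwin" or cuda_version is None:
        # macOS wheels ship without CUDA; elsewhere, no CUDA means the CPU index.
        return f"{base}/cpu"
    # CUDA indexes are named like "cu121" for CUDA 12.1.
    return f"{base}/cu{cuda_version.replace('.', '')}"

print(pytorch_index_url("12.1", system="Linux"))  # https://download.pytorch.org/whl/cu121
print(pytorch_index_url(None, system="Windows"))  # https://download.pytorch.org/whl/cpu
```

A real detector would also probe the installed driver (e.g. via `nvidia-smi`) rather than take the CUDA version as an argument.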
Sharing my thoughts to help with the discussion.
I feel that the biggest challenge here is that the source is platform-dependent. On macOS, PyPI is sufficient. But on Linux and Windows, anyone who wants a CUDA-enabled build needs to specify a URL.
I would personally already be helped if I could specify platform-dependent sources for packages.
Now the CUDA/not-CUDA question is difficult. I think that, theoretically, optional dependencies could be a way to deal with this: if you want the CUDA version (on any platform), install `my_package[cuda]`; if you only need the CPU version (e.g. on GitHub Actions), install `my_package[cpu]`.
But it's more of a flag than an optional dependency: it's either/or, so I'm not sure how to deal with that. Maybe there could be "modes" for dependencies, with a default; if the "cuda" mode is specified, dependencies are resolved based on that.
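For what it's worth, recent uv versions can express this either/or pattern with per-extra sources, explicit indexes, and conflicting extras in `pyproject.toml`. A sketch, assuming current uv syntax (the index names here are arbitrary):

```toml
[project.optional-dependencies]
cpu = ["torch"]
cu121 = ["torch"]

[tool.uv]
# Declare the extras mutually exclusive, like the "modes" described above.
conflicts = [[{ extra = "cpu" }, { extra = "cu121" }]]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu121", extra = "cu121" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
explicit = true
```

With this, `uv sync --extra cpu` and `uv sync --extra cu121` pick the matching index, and the resolver refuses to enable both extras at once.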
> I would personally already be helped if I could specify platform-dependent sources for packages.
Regarding this: https://github.com/astral-sh/uv/issues/3397
😠 but it must be done!