matthewfeickert opened 5 months ago
Did this work in previous versions?
I'm not sure about past uv releases. I'm encountering this for the first time while trying to migrate the CI to use uv in https://github.com/CoffeaTeam/coffea.
Ah ok, no prob. Mostly was wondering if it was “obviously a regression” from today’s release.
Not that I know of, but I can replicate with an older uv release later tonight to check.
Can you try instead using uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision==0.18.0+cpu torchaudio==2.3.0+cpu? I can't reproduce this on ARM, but I think it differs on ARM vs. x86.
It's explained here: https://github.com/astral-sh/uv/issues/1497#issuecomment-2098896853
Yeah, that works on x86 Linux:
$ docker run --rm -ti -v /tmp:/tmp python:3.12 /bin/bash
root@9b29419d1e98:/# curl -LsSf https://astral.sh/uv/install.sh | sh
downloading uv 0.1.41 x86_64-unknown-linux-gnu
installing to /root/.cargo/bin
uv
everything's installed!
To add $HOME/.cargo/bin to your PATH, either restart your shell or run:
source $HOME/.cargo/env (sh, bash, zsh)
source $HOME/.cargo/env.fish (fish)
root@9b29419d1e98:/# . ~/.cargo/env
root@9b29419d1e98:/# uv venv
Using Python 3.12.3 interpreter at: /usr/local/bin/python3
Creating virtualenv at: .venv
root@9b29419d1e98:/# . .venv/bin/activate
(.venv) root@9b29419d1e98:/# uv --version
uv 0.1.41
(.venv) root@9b29419d1e98:/# uv pip install --verbose --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision==0.18.0+cpu torchaudio==2.3.0+cpu &> /tmp/uv_install_cpu_moniker.txt
(.venv) root@9b29419d1e98:/# uv pip list
Package Version
----------------- ----------
filelock 3.13.1
fsspec 2024.2.0
jinja2 3.1.3
markupsafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
numpy 1.26.3
pillow 10.2.0
sympy 1.12
torch 2.3.0+cpu
torchaudio 2.3.0+cpu
torchvision 0.18.0+cpu
typing-extensions 4.9.0
(.venv) root@9b29419d1e98:/#
Huh. That is interesting. I take it that this isn't fully expected, even though there are known differences with regards to local version identifiers?
I haven't really dug into it. My guess is it relates to some unclear decisions around how PyTorch chooses to publish their wheels (e.g., some variants include +cpu while others do not).
Marking as compatibility. It's not a bug in uv per se (given our documented limitations), but I wish that it worked.
You can avoid the extra specificity on those depending on torch, but due to +cpu you can't use a semver range (like >=2.0.0) with torch:
uv pip install --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision torchaudio
NOTE: ARM64 needs to omit the +cpu or equivalent due to upstream inconsistency with packaging. That may be resolved in the future, as PyTorch maintainers are open to contributions to drop the +cpu local identifier.
TL;DR: Either:
- Use --index-url with the PyTorch index and add the local identifier (+cpu, +cu121, etc.) suffix to each package (which mandates an explicit version?), and they must be installed all together to resolve correctly, it seems. (UPDATE: Only the top-level dependency that the others would depend upon needs the suffix.)
- Use --extra-index-url with the PyTorch index (which uv will prioritize packages from); then, to ensure torch without a local identifier is resolvable from the PyPI index, you'll need --index-strategy unsafe-first-match, and it'll circle back to the PyTorch variant being successfully resolved.
# Must provide an explicit torch version that the other two depend on to resolve:
$ uv pip install --index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision torchaudio
Resolved 13 packages in 3.87s
Installed 13 packages in 234ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ numpy==1.26.3
+ pillow==10.2.0
+ sympy==1.12
+ torch==2.3.0+cpu
+ torchaudio==2.3.0+cpu
+ torchvision==0.18.0+cpu
+ typing-extensions==4.9.0
NOTE: If you attempt to use >= for resolution, you must quote-wrap it to avoid shell redirection (>), which creates a file (e.g. =0.0.0+cpu); uv will not be aware of this to raise an error (like it would when you use quote wrapping):
$ uv pip install --extra-index-url https://download.pytorch.org/whl/cpu torch==2.3.0+cpu torchvision>=0.0.0+cpu 'torchaudio>=2.0.0+cpu'
error: Failed to parse `torchaudio>=2.0.0+cpu`
Caused by: Operator >= is incompatible with versions containing non-empty local segments (`+cpu`)
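The redirection gotcha can be reproduced outside uv entirely; a small sketch that feeds the unquoted specifier to a POSIX shell, with echo standing in for the uv invocation (an illustration, not part of the thread's original commands):

```python
import pathlib
import subprocess
import tempfile

# Run the unquoted specifier through /bin/sh, as a terminal would:
tmp = tempfile.mkdtemp()
subprocess.run("echo install torchvision>=0.0.0+cpu",
               shell=True, cwd=tmp, check=True)

# The shell consumed `>` as a redirection, so the specifier never reached
# the command; instead a stray file named `=0.0.0+cpu` was created,
# holding the command's stdout:
stray = pathlib.Path(tmp) / "=0.0.0+cpu"
print(stray.read_text())  # -> install torchvision
```

Quoting the full specifier ('torchaudio>=2.0.0+cpu') is what lets it reach uv intact, at which point uv can reject the >= operator with the error shown above.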
UPDATE: I am mistaken about the --index-strategy approach to resolve torch. While uv would happily resolve with this approach, the actual torch package selected seems to be chosen based on local cache as well:
# Failed to resolve (_related to prior discussions above with the `+cpu` target_)
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cpu torch torchvision torchaudio
# Installed torch (PyPi) while torchvision + torchaudio were `+cu121` (PyTorch)...
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
# ...
+ torch==2.3.0
+ torchaudio==2.3.0+cu121
+ torchvision==0.18.0+cu121
# Install the cuda 12.1 version from PyTorch adding it to cache:
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch==2.3.0+cu121 torchvision torchaudio
- torch==2.3.0
+ torch==2.3.0+cu121
# Install again, but in a new venv (this time installing without the `+cu121` suffix again):
$ uv pip install --index-strategy unsafe-first-match --extra-index-url https://download.pytorch.org/whl/cu121 torch torchvision torchaudio
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ torchvision==0.18.0+cu121
As can be seen above, resolution differs due to previous actions: now the CUDA variant from PyTorch was installed instead of the PyPI torch package.
$ uv pip install torch
Resolved 21 packages in 3.35s
Downloaded 21 packages in 1m 03s
Installed 21 packages in 432ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ typing-extensions==4.9.0
$ uv pip list
Package Version
------------------------ -----------
filelock 3.13.1
fsspec 2024.2.0
jinja2 3.1.3
markupsafe 2.1.5
mpmath 1.3.0
networkx 3.2.1
nvidia-cublas-cu12 12.1.3.1
nvidia-cuda-cupti-cu12 12.1.105
nvidia-cuda-nvrtc-cu12 12.1.105
nvidia-cuda-runtime-cu12 12.1.105
nvidia-cudnn-cu12 8.9.2.26
nvidia-cufft-cu12 11.0.2.54
nvidia-curand-cu12 10.3.2.106
nvidia-cusolver-cu12 11.4.5.107
nvidia-cusparse-cu12 12.1.0.106
nvidia-nccl-cu12 2.20.5
nvidia-nvjitlink-cu12 12.1.105
nvidia-nvtx-cu12 12.1.105
sympy 1.12
torch 2.3.0+cu121
typing-extensions 4.9.0
So that installed with torch resolved to torch 2.3.0+cu121, yet when trying to add torchaudio, or the more specific torchaudio==2.3.0+cu121, it fails:
$ uv pip install torchaudio==2.3.0+cu121
× No solution found when resolving dependencies:
╰─▶ Because there is no version of torch==2.3.0 and torchaudio==2.3.0+cu121 depends on torch==2.3.0, we can conclude that torchaudio==2.3.0+cu121 cannot be used.
And because you require torchaudio==2.3.0+cu121, we can conclude that the requirements are unsatisfiable.
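What makes this error surprising is that, per PEP 440, a bare `==2.3.0` specifier is supposed to ignore a candidate's local segment, so the installed torch 2.3.0+cu121 should satisfy torchaudio's torch==2.3.0 dependency. A minimal hand-rolled sketch of that matching rule (an illustration of PEP 440's stated behavior, not uv's actual code; real implementations also normalize versions):

```python
def split_local(version: str) -> tuple[str, str]:
    """Split '2.3.0+cu121' into its public part and local segment."""
    public, _, local = version.partition("+")
    return public, local

def satisfies_exact(candidate: str, pin: str) -> bool:
    """PEP 440 `==` matching: a pin without a local segment ignores the
    candidate's local segment; a pin with one requires an exact match."""
    cand_public, cand_local = split_local(candidate)
    pin_public, pin_local = split_local(pin)
    if cand_public != pin_public:
        return False
    return pin_local == "" or cand_local == pin_local

print(satisfies_exact("2.3.0+cu121", "2.3.0"))        # True: local segment ignored
print(satisfies_exact("2.3.0", "2.3.0+cu121"))        # False: pin demands +cu121
print(satisfies_exact("2.3.0+cu121", "2.3.0+cu121"))  # True: exact match
```

uv's documented limitation is that it doesn't fully apply the first rule when local variants are involved, which is presumably why pinning both packages explicitly in a single invocation resolves while the second install does not.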
Meanwhile, like with the suggested +cpu fix before my comment, the equivalent does resolve correctly:
$ uv pip install --index-url https://download.pytorch.org/whl/cu121 torch==2.3.0+cu121 torchaudio==2.3.0+cu121
Resolved 23 packages in 3.98s
Downloaded 4 packages in 29.27s
Installed 23 packages in 339ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ triton==2.3.0
+ typing-extensions==4.9.0
So there is some issue there with uv resolving torch?
- With torch==2.3.0+cu121 installed, it can only resolve with the explicit torchaudio==2.3.0+cu121 at the same time, not as a 2nd install.
- torch torchaudio without the +cu121 suffix fails to resolve.
Definitely seems like some inconsistency with uv?
EDIT: Oh I see, the linked issue references this gotcha (local identifiers support) with uv, and specifically cites PyTorch as an example.
So by setting it as an extra index URL instead, the PyTorch index will be preferred by uv, but you need the unsafe-first-match strategy so that it can find/resolve the torch package available at PyPI (since PyTorch doesn't provide it for an index focused on only that "local identifier" variant); then uv will resolve it successfully and still prefer the PyTorch package anyway 🤷♂️
$ uv pip install \
--index-strategy unsafe-first-match \
--extra-index-url https://download.pytorch.org/whl/cu121 \
torch torchaudio
Resolved 23 packages in 3.37s
Installed 23 packages in 264ms
+ filelock==3.13.1
+ fsspec==2024.2.0
+ jinja2==3.1.3
+ markupsafe==2.1.5
+ mpmath==1.3.0
+ networkx==3.2.1
+ nvidia-cublas-cu12==12.1.3.1
+ nvidia-cuda-cupti-cu12==12.1.105
+ nvidia-cuda-nvrtc-cu12==12.1.105
+ nvidia-cuda-runtime-cu12==12.1.105
+ nvidia-cudnn-cu12==8.9.2.26
+ nvidia-cufft-cu12==11.0.2.54
+ nvidia-curand-cu12==10.3.2.106
+ nvidia-cusolver-cu12==11.4.5.107
+ nvidia-cusparse-cu12==12.1.0.106
+ nvidia-nccl-cu12==2.20.5
+ nvidia-nvjitlink-cu12==12.1.105
+ nvidia-nvtx-cu12==12.1.105
+ sympy==1.12
+ torch==2.3.0+cu121
+ torchaudio==2.3.0+cu121
+ triton==2.3.0
+ typing-extensions==4.9.0
If you of course remove the extra index URL for PyTorch, then it'll resolve the standard torch==2.3.0 + torchaudio==2.3.0 packages at PyPI and install those like you'd expect.
As long as the package is known to exist at PyTorch, it should always be preferred this way, even if there were a malicious version on PyPI, from what I understand? Once uv supports the feature to lock the index to PyTorch for these specific packages, that may help, but I assume that wouldn't help drop the index strategy (it may even not be able to resolve the PyPI torch package just so it can circle back to PyTorch?).
Probably better to be explicit about the local identifier though; I am new to Python and was referencing someone else's pip install where the local identifier was implicit from the --extra-index-url (a variable during builds to support the PyTorch variants).
I am having almost the same problem, but the issue is I am using a requirements.txt that includes libraries that depend on torch="2.*" (e.g., transformers).
Even if I install torch CPU using uv pip install torch==2.1.2+cpu, then try to install the requirements.txt with the PyPI index, uv resolves the dependencies of torch on PyPI, which are the nvidia-cuda deps on Linux x86. To note, it doesn't resolve torch itself again, so I end up with torch+cpu but with the torch CUDA deps installed, which massively bloats the image size.
Unfortunately that's not enough information for me to fully understand the issue, but you could consider using a constraints file in your second install, with torch==2.1.2+cpu? That would ensure that we respect the already-installed version during resolution.
Sadly, specifying +cpu in a constraint doesn't currently work in uv. Here's an example:
requirements.txt
easyocr==1.7.1
torch==2.1.*
constraint.txt
torch==2.1.2+cpu
torchvision==0.16.2+cpu
When we compile the requirements to check what uv will resolve by default, without constraints:
other packages
....
torch==2.1.2
# via
# easyocr
# torchvision
torchvision==0.16.2
# via easyocr
...
Running the command to install with the torch CPU index:
uv pip install -r requirements.txt -c constraint.txt --extra-index-url "https://pypi.org/simple https://download.pytorch.org/whl/cpu"
we get
× No solution found when resolving dependencies:
╰─▶ Because there is no version of torch==2.1.2+cpu and you require torch==2.1.2+cpu, we can conclude that the requirements are
unsatisfiable.
uv doesn't qualify 2.1.2+cpu as matching 2.1.*; is that because it is not semver-compliant?
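For what it's worth, Python packaging uses PEP 440 rather than semver, and PEP 440's prefix matching (`==2.1.*`) ignores a candidate's local segment, so 2.1.2+cpu should qualify; the failure above looks like uv's documented local-identifier limitation rather than the specifier grammar. A hand-rolled sketch of the prefix rule (illustrative only, no version normalization, not uv's code):

```python
def satisfies_prefix(candidate: str, spec: str) -> bool:
    """PEP 440 prefix matching, e.g. spec '2.1.*': compare release components
    up to the prefix length, ignoring any '+local' segment on the candidate."""
    assert spec.endswith(".*")
    want = spec[:-2].split(".")           # '2.1.*' -> ['2', '1']
    public = candidate.partition("+")[0]  # drop '+cpu'
    return public.split(".")[:len(want)] == want

print(satisfies_prefix("2.1.2+cpu", "2.1.*"))  # True: +cpu is ignored
print(satisfies_prefix("2.10.0", "2.1.*"))     # False: 2.10 is not 2.1
```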
Thanks, I’ll take a look when I can. The PyTorch stuff is always tricky.
Yeah, PyTorch does things their own way and is not compliant with any standard :// They are big enough to get away with it. I would be glad to contribute if you can point me to the relevant parts where uv resolves the dependency tree for requirements.
Streamlit uses uv to install dependencies from a requirements.txt file which caused our app to fail. I managed to work around it by pinning the version number as suggested here.
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.3.0+cpu
torchvision
torchaudio
Attached logs: uv_install.txt, pip_install.txt, uv_pypi_install.txt
Platform: Linux, though applies across platforms.