lucaspar / poetry-torch

Installing hardware-accelerated PyTorch with Poetry on different hardware using the same `pyproject.toml`

[Mac] Unable to find installation candidates for torch #1

Open ALonelySheep opened 3 months ago

ALonelySheep commented 3 months ago

Hi Lucas, thanks for this tutorial! But I encountered this error when trying to replicate your installation process:

Package operations: 1 install, 1 update, 0 removals

  - Updating torch (2.2.2 -> 2.2.2+cpu): Failed

  RuntimeError

  Unable to find installation candidates for torch (2.2.2+cpu)

I had installed torch by adding torch = "^2.2.2" to the TOML file before trying your method. Do you have any idea what is happening here?

Thanks

ALonelySheep commented 3 months ago

The only significant difference between our TOML files is that I'm using python = 3.8 instead of 3.10.

lucaspar commented 3 months ago

Hey, so the repo's TOML already has an entry for torch; you shouldn't need to add it. Just change the versions in the original TOML to match v2.2 instead (it's currently on 2.3).
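For concreteness, assuming the repo's torch entry has the usual per-platform shape, the change amounts to adjusting the version pins, roughly:

```toml
# hedged sketch: pin the 2.2 series instead of 2.3 (adjust to the repo's actual entries)
torch = [
    { version = "~2.2", source = "pypi", platform = "darwin" },
    { version = "~2.2", source = "pytorch_cpu", platform = "linux" },
]
```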

lucaspar commented 3 months ago

I created a new branch that works on Python 3.8.

See the TOML file.

ALonelySheep commented 3 months ago

Hey Lucas, thank you for taking the time to address this. The issue seems to be system-specific. I'm using macOS, and this structure seems to work:

torch = [
    { version = "~2.2.2", source = "pypi", platform = "darwin", markers = "extra=='cpu' and extra!='gpu'" },
    { version = "~2.2.2+cpu", source = "pytorch_cpu", platform = "linux", markers = "extra=='cpu' and extra!='gpu'" },
    { version = "~2.2.2+cpu", source = "pytorch_cpu", platform = "win32", markers = "extra=='cpu' and extra!='gpu'" },
]
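For reference, the `pytorch_cpu` source named above is an explicit Poetry source pointing at PyTorch's CPU wheel index, and the `cpu`/`gpu` markers assume matching extras are defined; roughly like this (the exact names come from the repo's TOML, so treat them as assumptions):

```toml
[[tool.poetry.source]]
name = "pytorch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"  # only used by dependencies that request this source by name

[tool.poetry.extras]
cpu = ["torch"]
gpu = ["torch"]
```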

ALonelySheep commented 3 months ago

Also, as a caveat: if we add another package that depends on torch, it's possible to break the hack mentioned in this repo. I think at its core this is because Poetry does not support conflicting dependencies. More discussion can be found here: https://github.com/python-poetry/poetry/issues/6419
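For example, a package that itself depends on torch (torchvision here is just my illustration, not something from the repo) would need the same per-source split, and even then its own torch requirement can clash with the markers above:

```toml
torchvision = [
    { version = "~0.17", source = "pypi", platform = "darwin", markers = "extra=='cpu' and extra!='gpu'" },
    { version = "~0.17", source = "pytorch_cpu", platform = "linux", markers = "extra=='cpu' and extra!='gpu'" },
]
```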

lucaspar commented 3 months ago

Right, I thought the platform fields were enough for Macs; are you also installing it in an environment with NVIDIA cards, which is why you need the GPU markers?

As for dependencies that have torch itself as a requirement, that's something I haven't explored yet; thanks for bringing it up.

ALonelySheep commented 3 months ago

Yes, I'm trying to create a consistent TOML file that works on both dev and production machines.

ALonelySheep commented 3 months ago

Sadly, I feel like I've wasted a lot of time on Poetry, and the solution I ended up with is still very unreliable. In the end, I opted to use a bash script to rename the correct file into place. I hope Poetry gains better support for ML environments in the future.
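Roughly, the script does something like this (the file names are hypothetical and depend on how the TOML files are split):

```bash
#!/usr/bin/env bash
# Hypothetical sketch: keep one pyproject per target and copy the right one into place.
set -euo pipefail

if command -v nvidia-smi >/dev/null 2>&1; then
    cp pyproject.cuda.toml pyproject.toml   # machine has an NVIDIA GPU
else
    cp pyproject.cpu.toml pyproject.toml    # CPU-only (e.g. macOS)
fi

poetry lock
poetry install
```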

lucaspar commented 3 months ago

True, it seems all we have now are workarounds.

This is not even a "Poetry" issue: AFAIK no Python package manager today reliably replicates an environment across different platforms with conditional hardware acceleration. Conda's environment.yaml files perhaps come the closest, but without a proper lock file I've had them fail miserably when it comes to reproducibility.
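For comparison, the conda route I have in mind is just a plain environment.yml along these lines, which works until you actually need a lock file:

```yaml
name: torch-env
channels:
  - pytorch
  - nvidia
  - conda-forge
dependencies:
  - python=3.10
  - pytorch=2.2
  - pytorch-cuda=12.1   # only on machines with NVIDIA GPUs; drop for CPU-only installs
```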

pschoen-itsc commented 3 days ago

Solution that works for me:

torch = [
    { version = "==2.4.0", source = "pypi", markers = "sys_platform=='darwin'" },
    { version = "==2.4.0", source = "pytorch-cpu", markers = "sys_platform!='darwin' and extra=='cpu' and extra!='cuda'" }
]

[tool.poetry.group.cuda]
optional = true

[tool.poetry.group.cuda.dependencies]
torch = { version = "==2.4.0", markers = "sys_platform!='darwin' and extra=='cuda' and extra!='cpu'" }

The pypi source is my own proxy of the public PyPI, but that should not make a difference. It is also my primary source, so the CUDA torch should be pulled from there as well. I don't know for sure whether everything here is needed, but this setup does work.
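Usage would then be along these lines (assuming `cpu` and `cuda` are also declared under `[tool.poetry.extras]`):

```bash
# CPU-only machines (and macOS):
poetry install --extras cpu

# CUDA machines: enable the optional group plus the matching extra
poetry install --with cuda --extras cuda
```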