python-poetry / poetry

Python packaging and dependency management made easy
https://python-poetry.org
MIT License

Instructions for installing PyTorch #6409

Open davidgilbertson opened 2 years ago

davidgilbertson commented 2 years ago

Issue

As mentioned in issue https://github.com/python-poetry/poetry/issues/4231, there is some confusion around installing PyTorch with CUDA, but it is now somewhat resolved. It still requires a few steps, and all options have pretty serious flaws. Below are two options that 'worked' for me, on Poetry version 1.2.0.

Option 1 - wheel URLs for a specific platform

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchaudio = { url = "https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchvision = { url = "https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl"}

Note that each subsequent poetry update will do another huge download and you'll see this message:

  • Updating torch (1.12.1+cu116 -> 1.12.1+cu116 https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
  • Updating torchaudio (0.12.1+cu116 -> 0.12.1+cu116 https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
  • Updating torchvision (0.13.1+cu116 -> 0.13.1+cu116 https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl)
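The wheel URLs above follow a predictable naming pattern, so they can be assembled rather than copied by hand. A small sketch (the helper name is made up; the pattern is inferred from the URLs in this thread). Note the '+' in the local version is percent-encoded as %2B in the filename:

```python
def torch_wheel_url(package, version, cuda, py_tag, platform_tag):
    """Assemble a download.pytorch.org wheel URL from its parts.

    The '+' between the public version and the local label
    (e.g. 1.12.1+cu116) is percent-encoded as %2B in the filename.
    """
    return (
        f"https://download.pytorch.org/whl/{cuda}/"
        f"{package}-{version}%2B{cuda}-cp{py_tag}-cp{py_tag}-{platform_tag}.whl"
    )

# Reproduces the torch URL used in the snippet above:
print(torch_wheel_url("torch", "1.12.1", "cu116", "310", "win_amd64"))
```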

Option 2 - alternate source

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { version = "1.12.1", source="torch"}
torchaudio = { version = "0.12.1", source="torch"}
torchvision = { version = "0.13.1", source="torch"}

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu116"
secondary = true

This seems to have worked (although I already had the packages installed), but it reports errors like Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/. I think the packages get installed anyway (maybe a better message would be "Can't access pillow at 'https://download.pytorch.org/whl/cu116', falling back to PyPI").

Also, if you later go on to do, say poetry add pandas (a completely unrelated library) you'll get a wall of messages like:

Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/numpy/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/requests/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/typing-extensions/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/certifi/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/urllib3/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/idna/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/charset-normalizer/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/

This happens with or without secondary = true in the source config.
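A rough model of why these messages appear: with a secondary (or, in later Poetry versions, supplemental) source, every dependency is looked up on every configured index, and the PyTorch index reports an authorization error for packages it does not host. The sketch below is illustrative only, not Poetry's actual code; later comments in this thread note that the "explicit" priority avoids these extra lookups:

```python
def indexes_queried(package, explicit_source, sources):
    """Model which configured indexes get consulted for a package.

    Sources with "explicit" priority are consulted only when the
    dependency names them via source = "..."; all other priorities are
    always consulted, producing one warning per package for each index
    that does not host it.
    """
    queried = []
    for name, priority in sources:
        if priority == "explicit" and name != explicit_source:
            continue
        queried.append(name)
    return queried

sources = [("PyPI", "primary"), ("torch", "supplemental")]
# pandas has no explicit source, so both indexes get consulted:
print(indexes_queried("pandas", None, sources))
```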

Maintainers: please feel free to edit the text of this if I've got something wrong.

Alain1405 commented 1 year ago

with the latest version of poetry this seems to work for me:

[tool.poetry.dependencies]
torch = [
     {version = "^2.0.1", platform = "darwin"},
     {version = "^2.0.1", platform = "linux", source = "torch"},
     {version = "^2.0.1", platform = "win32", source = "torch"},
 ]
 sympy = [
     {version = "^1.12", platform = "linux", extras = ["mpmath"]},
     {version = "^1.12", platform = "win32", extras = ["mpmath"]},
 ]

[[tool.poetry.source]]
 name = "torch"
 url = "https://download.pytorch.org/whl/cpu"
 priority = "explicit"

Note that for darwin I used PyPI instead, since the PyTorch source did not have macOS CPU-specific wheels.

This does not work as expected. For each torch version/platform combination, it downloads the binaries for every Python version that matches the Python constraint in the TOML file.

For my particular setup (py3.10, torch 1.13.1 cu117) with the config above (Linux and Windows only), it downloads:

  • torch 1.13.1 cu117 py310 linux
  • torch 1.13.1 cu117 py311 linux
  • torch 1.13.1 cu117 py310 win32
  • torch 1.13.1 cu117 py311 win32

It's absurd that it has to download more than one torch binary for my local/Docker setup, regardless of what I run on. Particularly absurd is the fact that it downloads files for multiple Python versions. I shouldn't have to wait a non-trivial amount of time and use a non-trivial amount of disk space to run this out of the box.

I can confirm this. I have:

torch = [
     {version = "^2.0.1", platform = "darwin"},
     {version = "^2.0.1", platform = "linux", source = "torch-cpu"},
     {version = "^2.0.1", platform = "win32", source = "torch-cpu"},
 ]
...

[[tool.poetry.source]]
 name = "torch-cpu"
 url = "https://download.pytorch.org/whl/cpu"
 priority = "explicit"

and after poetry lock; poetry install I get, on macOS 13.3.1:

  Unable to find installation candidates for torch (2.0.1+cpu)
dimbleby commented 1 year ago

This thread is 50 comments long and growing, please resist the temptation to comment only to confirm what is already known - a thumbs-up on the relevant comment will do.

Specifically the error about "Unable to find installation candidate for torch (X.Y.Z+cpu)" on MacOS is #6150 (and #7597, and #7933). Those issues contain more discussion, but really the proper solution is to ask torch to publish wheels with a consistent version scheme. Go over there and contribute a fix!

chunleng commented 1 year ago

Got this to work:

[tool.poetry.dependencies]
torch = [
    {url="https://download.pytorch.org/whl/cpu/torch-1.13.1-cp38-none-macosx_11_0_arm64.whl", markers="platform_system == \"Darwin\" and platform_machine == \"arm64\""}, # resolve for apple silicon on machine
    {url="https://download.pytorch.org/whl/torch-1.13.1-cp38-cp38-manylinux2014_aarch64.whl", markers="platform_system == \"Linux\" and platform_machine == \"aarch64\""}, # resolve for apple silicon on docker
    {version="^1.13.1", source="pytorch-cpu", markers="(platform_system != \"Darwin\" or platform_machine != \"arm64\") and (platform_system != \"Linux\" or platform_machine != \"aarch64\")"}
]

[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

It's a temporary (even hackish) solution: Apple Silicon runs on the non-CPU download and everything else uses the CPU download.

Also, there's a catch to doing this: you probably need to run poetry lock --no-update a few times to get the lock file updated correctly.

You should see 3 packages with name="torch" created.
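The three markers above partition machines by OS and architecture. The selection logic can be checked with a small sketch that mirrors the markers by hand (illustrative only; the return values are just labels taken from the wheel filenames above):

```python
import platform

def torch_entry(system, machine):
    """Mirror the three markers above: Apple Silicon gets the macOS arm64
    wheel, Linux aarch64 (e.g. Docker on Apple Silicon) gets the
    manylinux aarch64 wheel, and everything else the pytorch-cpu source.
    """
    if system == "Darwin" and machine == "arm64":
        return "macosx_11_0_arm64 wheel"
    if system == "Linux" and machine == "aarch64":
        return "manylinux2014_aarch64 wheel"
    return "pytorch-cpu source"

# See which entry applies on the current machine:
print(torch_entry(platform.system(), platform.machine()))
```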

matanby commented 1 year ago

My use case: install CUDA version of PyTorch on Linux, and a CPU version on MacOS, while supporting multiple different Python versions.

The following solution works for me:

[tool.poetry.dependencies]
torch = [
    {url = "https://download.pytorch.org/whl/cu118/torch-2.0.0%2Bcu118-cp38-cp38-linux_x86_64.whl", platform = "linux", python = ">=3.8 <3.9"},
    {url = "https://download.pytorch.org/whl/cu118/torch-2.0.0%2Bcu118-cp39-cp39-linux_x86_64.whl", platform = "linux", python = ">=3.9 <3.10"},
    {url = "https://download.pytorch.org/whl/cu118/torch-2.0.0%2Bcu118-cp310-cp310-linux_x86_64.whl", platform = "linux", python = ">=3.10 <3.11"},
    {url = "https://download.pytorch.org/whl/cpu/torch-2.0.0-cp38-none-macosx_11_0_arm64.whl", platform = "darwin", python = ">=3.8 <3.9"},
    {url = "https://download.pytorch.org/whl/cpu/torch-2.0.0-cp39-none-macosx_11_0_arm64.whl", platform = "darwin", python = ">=3.9 <3.10"},
    {url = "https://download.pytorch.org/whl/cpu/torch-2.0.0-cp310-none-macosx_11_0_arm64.whl", platform = "darwin", python = ">=3.10 <3.11"},
]

Note that in this example the URLs point to a specific PyTorch version built for specific CUDA and Python versions. Make sure to replace those with whatever versions you actually want to use.
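Since the Linux entries above differ only in the CPython tag, that half of the list can be generated rather than hand-edited. A hypothetical helper, for illustration (the macOS entries follow the same pattern with cpu and none-macosx_11_0_arm64):

```python
def linux_cuda_entries(version, cuda, minors):
    """Generate the Linux wheel-URL entries above for CPython 3.<minor>."""
    entries = []
    for minor in minors:
        tag = f"cp3{minor}"
        entries.append({
            "url": (
                f"https://download.pytorch.org/whl/{cuda}/"
                f"torch-{version}%2B{cuda}-{tag}-{tag}-linux_x86_64.whl"
            ),
            "platform": "linux",
            "python": f">=3.{minor} <3.{minor + 1}",
        })
    return entries

for entry in linux_cuda_entries("2.0.0", "cu118", [8, 9, 10]):
    print(entry["url"], entry["python"])
```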

norayr-im commented 1 year ago

This issue is partially solved with the new version of Poetry. You can set the source's priority = "supplemental" (or "explicit") and avoid the lookup problem in the second option from the comment above.

AKuederle commented 1 year ago

Just to provide an updated copy-and-paste solution based on the previous comments:

Configure your additional sources as follows (note that below I configured the CPU version of torch):

[[tool.poetry.source]]
name = "torch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Now you can use the explicit source parameter to select the torch repo for installation:

torch = { version = ">=1.6.0", source="torch_cpu" }

Using multiple optional groups, you can (almost seamlessly) switch between cpu and cuda versions:

[tool.poetry.group.torch_cpu]
optional = true

[tool.poetry.group.torch_cpu.dependencies]
torch = { version = ">=1.6.0", source="torch_cpu"}

[tool.poetry.group.torch_cuda]
optional = true

[tool.poetry.group.torch_cuda.dependencies]
torch = { version = ">=1.6.0"}

With this setup you can do:

poetry install --sync --with torch_cpu

or

poetry install --sync --with torch_cuda

Just note that when you want to switch between them, you have to remove one of the groups first, before installing the other. So, to switch from cuda to cpu in your local install you need to do:

poetry install --sync --without torch_cuda
poetry install --sync --with torch_cpu

Otherwise, poetry will not change the version.

doctorpangloss commented 1 year ago

Configure your additional sources as follows (note that below I configured the CPU version of torch):

Have you tested this...

  • On a Windows machine?
  • With an NVIDIA video card?
  • With a real application?

Based on your snippets, since you don't specify the index that contains CUDA, this can't possibly download it correctly. It sounds like you are aware of that. Are you saying you can only have one source at a time? Or both?

If you tested your approach, for real, on the real machine, with the real GPU, you are very close to the best solution. It sounds like poetry will not be able to detect the "right" accelerator. But otherwise, having at least the command line path is pretty good.

AKuederle commented 1 year ago

For recent versions of torch the cuda version is the PyPI version on Mac and Linux. So no need for an extra repo there.

But I just checked for windows it is the other way around. So probably a different setup required for windows. I don't have access to a windows machine to test though.


juliusfrost commented 1 year ago

For CUDA 11.8 and PyTorch 2.0.1, the specification is consistent. I got the following working on Windows and Linux (WSL). I have intentionally not provided a CPU source.

[tool.poetry.dependencies]
python = "~3.11"
torch = {version = "2.0.1", source = "torch_cuda118"}

[[tool.poetry.source]]
name = "torch_cuda118"
url = "https://download.pytorch.org/whl/cu118"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Poetry will still download all Python wheels (3.8-3.11) for PyTorch even though I specified 3.11 here.

doctorpangloss commented 1 year ago

@juliusfrost

[tool.poetry.dependencies]
python = "~3.11"
torch = {version = "2.0.1", source = "torch_cuda118"}

[[tool.poetry.source]]
name = "torch_cuda118"
url = "https://download.pytorch.org/whl/cu118"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Okay, well this snippet is missing a CPU repo reference. Ostensibly both the "CPU" (i.e. ordinary) and CUDA pytorch indices must appear for this to be a working snippet.

Also the above cpu source doesn't work on Windows:

The index you used is wrong.

@ralbertazzi @dimbleby you should probably lock this thread

and author an example snippet of configuration to support

xxx

because that's probably as good as it's going to get. My understanding is the latest changes make the following true:

  [ ] a snippet to allow the command poetry install to automatically select the best accelerator (CUDA, ROCm or ordinary "CPU" wheels) of pytorch
  [x] a snippet that allows the user to select the accelerated wheel with a command line argument

Reading this now, there is no approach that makes sense. I think this ticket should be closed as "wontfix". pytorch should have a single wheel that merges all of the content it needs, and the application should choose the accelerator.

schniewmatz commented 1 year ago

If I follow your instructions with the following pyproject.toml:

[tool.poetry.dependencies]
python = "^3.10"

[tool.poetry.group.torch_cpu]
optional = true

[tool.poetry.group.torch_cpu.dependencies]
torch = { version = "2.0.0", source = "torch_cpu"}
torchvision = { version = "0.15.0", source = "torch_cpu"}
torchaudio = { version = "2.0.0", source = "torch_cpu"}

[tool.poetry.group.torch_cu117]
optional = true

[tool.poetry.group.torch_cu117.dependencies]
torch = { version = "2.0.0", source = "torch_cu117"}
torchvision = { version = "0.15.0", source = "torch_cu117"}
torchaudio = { version = "2.0.0", source = "torch_cu117"}

[[tool.poetry.source]]
name = "torch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch_cu117"
url = "https://download.pytorch.org/whl/cu117"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

And then run poetry install --sync --with torch_cpu --without torch_cu117. Poetry goes ahead and only downloads the cu117 wheels and installs the cu117 versions. What am I doing wrong?

doctorpangloss commented 1 year ago

And then run poetry install --sync --with torch_cpu --without torch_cu117 poetry goes ahead and only downloads the cu117 wheels and installs the cu117 versions. What am I doing wrong?

This will be the last comment of mine on this thread, but you are saying it downloaded cu117 wheels while your command says --without torch_cu117. I know you probably miswrote that comment. This thread has become a magnet for people miswriting stuff. It's hard to speculate about what you actually did.

bpleshakov commented 1 year ago

I can report that the method proposed by @AKuederle doesn't work for Poetry 1.5.0. Even if I set up two supplemental repositories and corresponding groups, one for cuda and one for cpu, the output of poetry lock will always lock the PyTorch version for the last group in pyproject.toml.

If anyone wants to disprove this, please provide a pyproject.toml with the corresponding poetry.lock.

david-waterworth commented 1 year ago

The examples above didn't work for me. Besides, you shouldn't have to specify https://download.pytorch.org/whl/cu117, as it's the default (pip install torch installs torch with the additional NVIDIA packages required for CUDA 11.7). You should only need to specify a source if you're using ROCm, CPU-only, or CUDA 11.8.

I found https://github.com/pytorch/pytorch/issues/100974 which I think explains some of the confusion.

For me, adding torch_cu117 as a source resulted in torch being installed without any CUDA dependencies, so it failed to load libcudnn.so.8 (plus it took almost 10 minutes due to downloading every wheel, something that will get worse as py312, py313, etc. get released). On a machine with CUDA already installed it may appear to work, though.

Instead I followed the suggestion of @sammlapp and skipped torch 2.0.1, i.e.

torch = {version = ">=2.0.0, !=2.0.1"}

This worked perfectly: it installed as quickly as pip install torch and correctly installed the required dependencies:

$ poetry install
Installing dependencies from lock file

Package operations: 14 installs, 2 updates, 0 removals

  • Installing cmake (3.27.1)
  • Installing lit (16.0.6)
  • Updating wheel (0.40.0 -> 0.41.1)
  • Installing nvidia-cublas-cu11 (11.10.3.66)
  • Installing nvidia-cuda-cupti-cu11 (11.7.101)
  • Installing nvidia-cuda-nvrtc-cu11 (11.7.99)
  • Installing nvidia-cuda-runtime-cu11 (11.7.99)
  • Installing nvidia-cudnn-cu11 (8.5.0.96)
  • Installing nvidia-cufft-cu11 (10.9.0.58)
  • Installing nvidia-curand-cu11 (10.2.10.91)
  • Installing nvidia-cusolver-cu11 (11.4.0.1)
  • Installing nvidia-cusparse-cu11 (11.7.4.91)
  • Installing nvidia-nccl-cu11 (2.14.3)
  • Installing nvidia-nvtx-cu11 (11.7.91)
  • Installing triton (2.0.0)
  • Updating torch (2.0.1 -> 2.0.0)

jroeger23 commented 1 year ago

Please take note of the update below.

Hey, I just configured a partially working (see the update below) setup for PyTorch. This only works since Poetry 1.6.0. In this example I use PyTorch 1.13.0 with the cu116 or cpu backend. This should work for more recent versions as well.

First specify the supplemental/primary sources as follows:

[[tool.poetry.source]]
name = "torch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch_cu116"
url = "https://download.pytorch.org/whl/cu116"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Then specify an empty extra group, I called it "cuda":

[tool.poetry.extras]
cuda = []

This step is necessary since it allows us to control the PEP 508 extras via Poetry. We can then call poetry install --sync --extras cuda or poetry install --sync and distinguish the two cases via dependency markers.

Lastly add the cuda/cpu relevant PyTorch dependencies, which are distinguished by the markers:

[tool.poetry.dependencies]
python = "3.10.*"
torch = [
  { version = "1.13.0+cu116", source = "torch_cu116", markers = "extra=='cuda'" },
  { version = "1.13.0+cpu", source = "torch_cpu", markers = "extra!='cuda'" },
]
torchaudio = [
  { version = "0.13.0+cu116", source = "torch_cu116", markers = "extra=='cuda'" },
  { version = "0.13.0+cpu", source = "torch_cpu", markers = "extra!='cuda'" },
]
torchvision = [
  { version = "0.14.0+cu116", source = "torch_cu116", markers = "extra=='cuda'" },
  { version = "0.14.0+cpu", source = "torch_cpu", markers = "extra!='cuda'" },
]

You need to specify torchaudio and torchvision because, if they are installed via other dependencies, the cpu/cuda version is not chosen accordingly. It seems that since Poetry 1.6.0 the markers="extra=='cuda'" and markers="extra!='cuda'" are mutually exclusive, which makes this approach work.

So if you include PyTorch like this, you can seamlessly switch between cpu and cuda with poetry install --sync --extras cuda and poetry install --sync.
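The selection the markers above are meant to express can be sketched as plain code (illustrative intent only; the pick_torch helper is made up, and the update below explains that Poetry does not actually honor this reliably):

```python
def pick_torch(requested_extras):
    """Sketch of the intent of markers = "extra=='cuda'" vs
    "extra!='cuda'": the 'cuda' extra selects the cu116 build,
    otherwise the cpu build is selected. Not Poetry's resolver.
    """
    if "cuda" in requested_extras:
        return "1.13.0+cu116"
    return "1.13.0+cpu"

print(pick_torch({"cuda"}))  # poetry install --sync --extras cuda
print(pick_torch(set()))     # poetry install --sync
```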

UPDATE: I just noticed that this approach does not actually work as intended. Once the dependencies change (e.g. adding a new one), Poetry alternates between the two versions, in this case torch+cu116 and torch+cpu. Poetry seems to ignore the extra arguments. (Tested with Poetry 1.6.1.)

Note: This should work with more than two options as well, but I have not tested it yet.

codingbutstillalive commented 1 year ago

How about "cpu-only"?

slashtechno commented 1 year ago

Regarding the solution proposed by @jroeger23, is it possible to use ^ to allow newer versions of torch, torchaudio, and torchvision?

Hemanthkumar2112 commented 1 year ago
poetry source add pytorch https://download.pytorch.org/whl/cpu
poetry update
poetry add --source pytorch torch

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cpu"
default = false
secondary = false
david-waterworth commented 1 year ago

Note https://github.com/pytorch/pytorch/issues/100974 impacts torch==2.1.0 as well

So for torch "latest" with poetry you now need

torch = {version = ">=2.0.0, !=2.0.1, !=2.1.0"}

(2.1.1 apparently fixes this)

jroeger23 commented 1 year ago

Hey, I believe I found the root cause of the current issues with local torch versions. The main reason seems to be a deviation from PEP 440.

Let me summarize how I came to this conclusion:

1 Scenario

The torch versions that are to be distinguished differ only in their Local Version Identifier. In itself this is not an issue, since:

Local version identifiers SHOULD NOT be used when publishing upstream projects to a public index server, but MAY be used to identify private builds created directly from the project source. Local version identifiers SHOULD be used by downstream projects when releasing a version that is API compatible with the version of the upstream project identified by the public version identifier, but contains additional changes (such as bug fixes).

All those torch versions available for cu118, cpu, ... are downstream projects and are properly distributed, for example at https://download.pytorch.org/whl/cu118.

2 Problem

Now, if we want to distinguish them in Poetry, we can (AFAIK) only do something like:

[tool.poetry.dependencies]
torch = [
  { version = "2.0.1+cu118", source = "torch_cu118", markers = "extra=='cuda'" },
  { version = "2.0.1+cpu", source = "torch_cpu", markers = "extra!='cuda'" },
]

[tool.poetry.extras]
cuda = []

[[tool.poetry.source]]
name = "torch_cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch_cu118"
url = "https://download.pytorch.org/whl/cu118"
priority = "supplemental"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Barring (one?) exception, this is not possible with version specifiers, since:

Except where specifically noted below, local version identifiers MUST NOT be permitted in version specifiers, and local version labels MUST be ignored entirely when checking if candidate versions match a given version specifier.

3 The Exception?

Below, under Version Matching, it is stated that the local version identifier must be used when explicitly specified:

If the specified version identifier is a public version identifier (no local version label), then the local version label of any candidate versions MUST be ignored when matching versions.

If the specified version identifier is a local version identifier, then the local version labels of candidate versions MUST be considered when matching versions, with the public version identifier being matched as described above, and the local version label being checked for equivalence using a strict string equality comparison.
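The two quoted rules can be expressed directly in code. A simplified sketch of PEP 440 version matching, reduced to exact matches (no wildcards, epochs, or zero-padding handling), which shows why "2.0.1" matches both builds while "2.0.1+cu118" pins exactly one:

```python
def version_matches(specified, candidate):
    """PEP 440 version matching, reduced to the local-label rules above.

    A public specifier ignores candidates' local labels entirely; a
    specifier carrying a local label additionally requires strict string
    equality on that label.
    """
    spec_public, _, spec_local = specified.partition("+")
    cand_public, _, cand_local = candidate.partition("+")
    if spec_public != cand_public:
        return False
    if not spec_local:
        return True  # public identifier: candidate's local label ignored
    return spec_local == cand_local

assert version_matches("2.0.1", "2.0.1+cu118")       # public matches any label
assert version_matches("2.0.1", "2.0.1+cpu")
assert version_matches("2.0.1+cu118", "2.0.1+cu118")
assert not version_matches("2.0.1+cu118", "2.0.1+cpu")
```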

This was already discussed in issue https://github.com/python-poetry/poetry/issues/6570.

4 Poetry Issue?

If the torch dependency is specified as in (2), Poetry seems to ignore the local version specifier, which should be non-compliant with PEP 440.

The behavior in poetry==1.7.0 with the setup in (2) is that the initial install with poetry install --sync --extras cuda installs both torch==2.0.1+cu118 and torch==2.0.1+cpu. Each consecutive trigger of poetry install, whether via poetry add or directly, causes Poetry to alternate between the two local versions.

Once this is resolved torch versioning via environment markers should be possible.


I could only test this with torch so far, since I do not know of any other packages that are distributed like this. It would be interesting to see whether this behavior is the same for other packages, to rule out torch-specific issues.

neersighted commented 1 year ago

Thanks for the concise summary! It is pretty well known that local versions make Torch and other ML wheels difficult to use with Poetry (to be "compatible" as people expect, we have to sacrifice correct version-number comparison), but I don't think anyone has summarized this as well for non-maintainers/packaging experts as you have.

I (and @pradyunsg) have been meaning to try to pin down some folks at NVIDIA to help us design additional wheel/environment markers, just as you have observed. No progress to report yet, but it's definitely not something we're ignorant of. As you have realized, the issue is structural, and the split-repository model only works for e.g. pip, not for tools with a different repository model.

dimbleby commented 1 year ago

is the example of https://github.com/python-poetry/poetry/issues/6409#issuecomment-1792957550 really anything to do with local version identifiers?

I think it is more about the attempt to use the project's own extras as markers, you can see similar without local versions or extra sources by doing something like:

torch = [
  { version = "=2.0.0", markers = "extra == \"foo\"" },
  { version = "=2.0.1", markers = "extra != \"foo\"" },
]

see also https://github.com/python-poetry/poetry-core/pull/613

jroeger23 commented 1 year ago

Thanks for giving me some insight, @neersighted! It's good to hear that this is already a known problem. Would you say that it is in principle possible to explicitly specify local versions with Poetry while staying PEP-compliant? To my understanding, version matching would suffice for this. If not, some other matching specification like arbitrary equality could in theory be used (when explicitly requested by the user).

Another thing is that the setup in https://github.com/python-poetry/poetry/issues/6409#issuecomment-1792957550 (2) seems to invoke some sort of undefined behavior, because Poetry tries to install the two torch versions at the same time, which results in warnings (see below). I'm not sure whether this is already a known bug. It seems to relate to https://github.com/python-poetry/poetry-core/pull/613.

The output of poetry install --sync --extras cuda in a fresh environment created by poetry==1.7.0 contains a bunch of warnings like:

Installing {poetry-env}/lib/python3.11/site-packages/torch/utils/data/_utils/collate.py over existing file

If this is relevant: the generated poetry.lock does not contain any references to the cuda extra marker, except for its declaration.

Hi @dimbleby, I haven't quite said what my actual intentions are with these snippets. I want to set up Poetry to work with different versions of torch simultaneously, meaning I can run poetry install --sync --extras cuda or poetry install --sync on different machines to select the correct package. The only CLI-controllable switch I know of is the extra markers. To my understanding the problem is that there is no way for Poetry to distinguish the requested packages torch+cu118 and torch+cpu, and hence it does not (re-)install the correct requested package.

dimbleby commented 1 year ago

I understand what you are trying to do: my point is that I don't think the failure you are seeing has much to do with whether Poetry can or cannot respect local versions. So far as I know it does, but your experiment does not find out either way, because it is first blocked by the fact that using a project's own extras as markers doesn't do what you want it to do.

ie your "Another thing" is in fact primary here.

jroeger23 commented 1 year ago

I see, this makes sense. I assumed that extra markers could be used as I did, since https://github.com/python-poetry/poetry-core/pull/636 was included in Poetry 1.7.0. But I might just have misused the extra markers here.

QuentinSoubeyranAqemia commented 1 year ago

Thanks @jroeger23 for the explanation, this has helped me greatly in understanding the situation!

I am confused by the last message from @dimbleby , in particular:

using a project's own extras as markers doesn't do what you want it to do

and wonder what actually happens, to have a better understanding of poetry and dependencies specs.

I understood the Poetry 1.7 release's improved handling of the extra marker to mean that Poetry should now ignore the items in the torch dependency list above for unspecified extras. @dimbleby says this is not what it does, and I'm unclear on both why it's not doing that and what this environment marker is actually doing (if anything).

I found that PEP 508 and the Poetry docs on environment markers do not clarify the questions above (or I misunderstand them) and would appreciate any help :)

laclouis5 commented 1 year ago

I got it working on Linux + CUDA at least (not yet tested on Windows and macOS). Note that NumPy isn't installed automatically; one has to explicitly add the dependency to the TOML file (as stated in the PyTorch documentation).

[tool.poetry.dependencies]
python = "^3.10,<3.12"
torch = { version = "^2.1.0+cu118", source = "pytorch" }
torchvision = { version = "^0.16.0+cu118", source = "pytorch" }
numpy = "^1.26.1"

[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu118"
priority = "explicit"

The Python version should be constrained to below 3.12 to satisfy NumPy's requirements.

Note the explicit priority for the pytorch source, which makes the source apply only to packages that mention it explicitly. This avoids some of the expensive dependency-resolution work.

I emptied all the Poetry caches, removed the lockfile and poetry lock took less than a minute to resolve dependencies on a fast internet connection.

I tested that Pytorch was installed correctly and that CUDA was supported with the following command:

poetry run python -c "import torch; torch.zeros(1, device='cuda')"

According to the poetry.lock file, the installation should work on Linux x86_64 and Windows amd64 platforms. I don't think this would be compatible out of the box with macOS, however, since there are no CUDA wheels for macOS. I'll try to specify a constraint for macOS in the TOML and test this setting on multiple platforms.

EDIT

Indeed, this works for Linux and Windows but not macOS. The following TOML configuration works for the three platforms (Linux+CUDA, Windows+CUDA, macOS+MPS):

[tool.poetry.dependencies]
python = "^3.9,<3.12"
torch = [
    { version = "^2.0.0", source = "pytorch", platform = "!=darwin"},
    { version = "^2.0.0", source = "pypi", platform = "darwin"},
]
torchvision = [
    { version = "^0.15.0", source = "pytorch", platform = "!=darwin"},
    { version = "^0.15.0", source = "pypi", platform = "darwin"},
]
numpy = "^1.26.1"

[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu118"
priority = "explicit"

I was able to test this successfully on the three platforms using the same lockfile.

QuentinSoubeyranAqemia commented 1 year ago

I don't think the above config allows switching between CPU and CUDA versions of pytorch, as jroeger23 is trying to do.

laclouis5 commented 1 year ago

My post is an answer to the original question about installing PyTorch + CUDA with Poetry, not to other issues mentioned later in this thread.

radoering commented 1 year ago

I understand poetry 1.7 release improved handling of extra marker to mean that poetry should now ignore the items in the dependency list for torch in the above example for un-specified extras.

Improvements in Poetry 1.7 were primarily for dependencies with extras, in other words if your project needs an extra (or multiple extras) of a dependency. Extras of the project itself are another topic.
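To illustrate the distinction (the package and version below are only examples): the 1.7 improvements cover requesting an extra *of a dependency*, which looks like this:

```toml
# A dependency's own extra: this is the case the poetry 1.7 improvements address.
[tool.poetry.dependencies]
transformers = { version = "^4.35", extras = ["torch"] }
```

Filtering the project's own dependency list on the project's own extras is the separate, unsupported case.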

Something like

[tool.poetry.dependencies]
torch = [
  { version = "2.0.1+cu118", source = "torch_cu118", markers = "extra=='cuda'" },
  { version = "2.0.1+cpu", source = "torch_cpu", markers = "extra!='cuda'" },
]

[tool.poetry.extras]
cuda = []

was never expected to work, because normally an empty list means that this extra does not contain any dependencies, i.e. it's useless. Surprisingly, it works halfway (locking seems to be quite ok, but installing can't cope with it). To be clear, if someone finds a good solution to make this work, a PR is welcome. A starting point could be https://github.com/python-poetry/poetry-core/pull/613#issuecomment-1694697769

roansong commented 12 months ago

I've been happily using the info from this issue to get a working setup for a linux environment with CUDA + a local Mac environment with the sys_platform marker like so:

[tool.poetry.dependencies]
python = "~3.11"
torch = [
  {version = "^2.1.0+cu118", source = "pytorch", markers = "sys_platform == 'linux'"},
  {version = "^2.1.0", source = "PyPI", markers = "sys_platform == 'darwin'"}
]

[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu118"
priority = "explicit"

[[tool.poetry.source]]
name = "PyPI"
priority = "primary"

Life has also improved now that poetry lock doesn't require downloading multiple GB of torch code every time 😅.

What I am struggling with at the moment is trying to install the CPU-only version of torch on my linux GitHub Actions runners to improve download/caching speed. I think I can get creative with the platform_release marker (I can't remember the exact marker I used, actually), but that is likely to go out of date any time a) my cluster admins update the linux environment or b) GitHub changes anything in the runner environment.
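The brittleness can be seen with a tiny stdlib sketch (the `-azure` kernel suffix is an assumption about what GitHub-hosted runners currently report, not something guaranteed anywhere):

```python
import re

def looks_like_github_runner(release: str) -> bool:
    """Mimics a platform_release-based marker: GitHub-hosted runners
    currently report an Azure kernel such as '6.2.0-1018-azure', but any
    image or kernel update changes this string and breaks the pin."""
    return bool(re.search(r"-azure$", release))

print(looks_like_github_runner("6.2.0-1018-azure"))   # a runner kernel
print(looks_like_github_runner("5.15.0-89-generic"))  # a typical workstation
```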

Does anyone have any tips or advice on how to accomplish this? (I did see this comment about NVIDIA-specific environment markers that will hopefully solve this in the future)

I have tried specifying different optional dependency groups so I could do --with cuda or similar, but dependency resolution needs to happen across all dependency groups, so it ended up using the cuda versions anyway

edwintorok commented 12 months ago

FWIW this is the workaround I used on Fedora 39 which has Python3.12 as default. You can install Python3.11 alongside and tell poetry to use it via poetry env use /usr/bin/python3.11 and then it works (I only tested the CPU version): https://github.com/pytorch/pytorch/issues/110436#issuecomment-1806787334

edwintorok commented 12 months ago

Would be useful if Poetry (and Pip) told you why a package is not available. Took me a while to figure out that it is because of my python version, and pytorch is not yet available on Python 3.12. But I only found that out by searching pytorch's issue tracker.

thekaranacharya commented 12 months ago

Hello @roansong, thank you for your comment! I tried installing the CPU-only version and it seems to be working.

Just change the url in your source from url = "https://download.pytorch.org/whl/cu118" to url = "https://download.pytorch.org/whl/cpu".
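For reference, the resulting source block would look like this (note that wheels on the CPU index carry a `+cpu` local version tag, so a version constraint pinned to `+cu118` would need adjusting as well):

```toml
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"
```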

ahoho commented 11 months ago

Hello @roansong, thank you for your comment! I tried installing the CPU-only version and it seems to be working.

Just change the url in your source from url = "https://download.pytorch.org/whl/cu118" to url = "https://download.pytorch.org/whl/cpu".

Thanks, this makes sense, but what if we want to still have the option to install the cuda version if cuda is available? (I'm facing the same problem)

MaKaNu commented 10 months ago

Has somebody already tried ROCm?

aa956 commented 9 months ago

Do I understand correctly there is no easy way to use poetry for projects using pytorch and expecting to get cross-platform GPU acceleration?

Started a new project, thought that something better than pip would be nice to use for once.

Ran through the poetry introduction and basic usage docs, installed poetry, created a new project, added click.

Very nice so far.

Then tried to add pytorch. Looked at the documentation here https://python-poetry.org/docs/repositories/

Got the following in pyproject.toml:

[tool.poetry.dependencies]
python = "^3.11"
click = ">=8.1.7"

[[tool.poetry.source]]
name = "torch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-cu118"
url = "https://download.pytorch.org/whl/cu118"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-rocm56"
url = "https://download.pytorch.org/whl/rocm5.6"
priority = "supplemental"

[[tool.poetry.source]]
name = "torch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "supplemental"

And got the following after poetry add --source torch-cu121 torch torchvision on windows:

> poetry add --source torch-cu121 torch torchvision
Using version ^2.1.2+cu121 for torch
Using version ^0.16.2+cu121 for torchvision

Updating dependencies
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-linux_x86_64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp310-cp310-win_amd64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp311-cp311-linux_x86_64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp311-cp311-win_amd64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp38-cp38-linux_x86_64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torchvision-0.16.2%2Bcu121-cp39-cp39-linux_x86_64.whl
Resolving dependencies... Downloading https://download.pytorch.org/whl/cu121/torch-2.1.2%2Bcu121-cp310-cp310-linux_x86_64.whl  29% (65.3s)
[progress output trimmed: one download per Python version and platform]
^C^C^C

Killed poetry as soon as I saw what it was trying to download, so I'm not sure whether it would have tried to install all of the downloads too.

Expected something at least not much worse than plain pip: 40-50 seconds to install, with nothing downloaded, not even the 2.5 GB of wheels that were installed, as everything is cached from previous installs of various projects that use torch for this OS/user.

mathewcohle commented 9 months ago

Using following workaround around the setup described in https://github.com/python-poetry/poetry/issues/6409#issuecomment-1792957550:

[tool.poetry.dependencies]
torch = {version = "^2.1.2", source = "pytorch-cpu", markers = "extra!='cuda'" }

[tool.poetry.group.remote]
optional = true

[tool.poetry.group.remote.dependencies]
torch = {version = "^2.1.2", source = "pytorch-cu121", markers = "extra=='cuda'"}

[tool.poetry.extras]
cuda = []

[[tool.poetry.source]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

[[tool.poetry.source]]
name = "pytorch-cu121"
url = "https://download.pytorch.org/whl/cu121"
priority = "explicit"

and then installing the dependencies:

poetry install # to get CPU version
poetry install -E cuda --with remote # to get GPU version

Seems like poetry.lock is properly generated and the setup is working.
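One way to verify which variant a given install produced is the PEP 440 local version tag on torch.__version__ (`+cpu` vs `+cu121`). A small stdlib sketch of that check (the function name is made up here):

```python
def torch_variant(version: str) -> str:
    """Extract the build variant from a PEP 440 local version tag,
    e.g. '2.1.2+cu121' -> 'cu121', '2.1.2+cpu' -> 'cpu'."""
    _, sep, local = version.partition("+")
    return local if sep else "default"  # default wheel has no local tag

print(torch_variant("2.1.2+cu121"))  # cu121
print(torch_variant("2.1.2+cpu"))    # cpu
```

For instance, poetry run python -c "import torch; print(torch.__version__)" should end in +cu121 after the second install command.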

Hope there will be better solution in the near future (:

dimbleby commented 9 months ago

a couple of relevant fixes recently

the upshot is that the sequence given in https://github.com/python-poetry/poetry/issues/6409#issuecomment-1891118188 when run with the latest master poetry results in locking succeeding in roughly 5 seconds (with downloads of roughly no gigabytes).

of course there still are a lot of wheels to download during installation, but there really is nothing much to be done about that.

aa956 commented 9 months ago

a couple of relevant fixes recently

Thank you, install worked as expected after poetry update to master branch!

slashtechno commented 9 months ago

I tried @mathewcohle's approach, but I get the following with poetry install on Windows with Python 3.11.5, and I think it might be related.

  RuntimeError

  Unable to find installation candidates for nvidia-cudnn-cu11 (8.7.0.84)

  at ~\AppData\Roaming\pypoetry\venv\Lib\site-packages\poetry\installation\chooser.py:76 in choose_for
       72│
       73│             links.append(link)
       74│
       75│         if not links:
    →  76│             raise RuntimeError(f"Unable to find installation candidates for {package}")
       77│
       78│         # Get the best link
       79│         chosen = max(links, key=lambda link: self._sort_key(package, link))
       80│

What Python version are you using, and on what platform?

EDIT: Upon further testing, the error continues even when no operations are being taken with torch. It seems it was caused by TensorFlow, which was also a dependency. In addition to installing Torch with GPU capability, is it possible to install TensorFlow with CUDA (tensorflow = {version = "^2.14.0", extras = ["and-cuda"]}) when using -E cuda but otherwise, install it normally (tensorflow = {version = "^2.14.0"})?

david-waterworth commented 8 months ago

For me, @mathewcohle's approach results in neither version being installed when I run poetry install -E cuda --with group; despite nothing being installed, it tells me No dependencies to install or update

The only difference I see is that I have two groups, one which uses torch cpu and the other torch gpu. I also have a private pypi repo (AWS CodeArtifact) which contains torch, so I wonder if the lock file is being incorrectly generated, because many packages I use have torch as a dependency. I try to explicitly specify torch as a dependency first (before, say, transformers), but perhaps this is failing.

I'm trying to create different environments during my build process. I have a dockerfile which I construct using one group with torch cpu, a local experiments group where I want torch with cuda on my workstation, and a build group which doesn't require either version (it just registers pipeline steps in SageMaker using the built docker image(s)).

MaKaNu commented 8 months ago

I tried the @mathewcohle's approach but I get the following with poetry install: This is on Windows with Python 3.11.5

...

What Python version are you using, and on what platform?

EDIT: Upon further testing, the error continues even when no operations are being taken with torch. It seems it was caused by TensorFlow, which was also a dependency. In addition to installing Torch with GPU capability, is it possible to install TensorFlow with CUDA (tensorflow = {version = "^2.14.0", extras = ["and-cuda"]}) when using -E cuda but otherwise, install it normally (tensorflow = {version = "^2.14.0"})?

If you're not aware, I want to mention that TensorFlow no longer supports CUDA on native Windows.

david-waterworth commented 8 months ago

A follow up on my comment, the reason my attempt failed was I wanted to add torch to optional groups, and I wanted to be able to specify a different architecture per group (i.e. I have a notebooks group for local experiments, a docker group for building AWS pipeline steps and a build group which runs my build scripts, I don't want to install any version of torch on the build server, the docker container must use the specific version of cuda shipped in the AWS container, and for local experiments I want to select whatever is appropriate for my workstation.)

So far I've not been able to do this, the closest I've got is to install torch+cpu by default, or torch+cuXXX as an extra.

[[tool.poetry.source]]
name = "torch+cpu"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

[[tool.poetry.source]]
name = "torch+cu117"
url = "https://download.pytorch.org/whl/cu117"
priority = "explicit"

[tool.poetry.extras]
cuda = ["torch"]

[tool.poetry.dependencies]
python = "^3.10"
torch = [
    {version = "2.0.0", markers = "extra != 'cuda'", source = "torch+cpu"},
    {version = "2.0.0", markers = "extra == 'cuda'", source = "torch+cu117", optional = true}
    ]

Is there any way of marking both torch versions optional, and then using extra's / markers to install either one or the other or neither? The issue I see here is there are multiple other groups, some of which contain packages that rely on torch. So I can see where the complexity is. In my case my groups are mutually exclusive but there's no way of expressing that contraint in poetry.

david-waterworth commented 8 months ago

So the issue I have is each time I run the install command it swaps between cuda and cpu, i.e.

>poetry install -E cuda

Installing dependencies from lock file

Package operations: 0 installs, 1 update, 0 removals

  • Updating torch (2.0.0+cpu -> 2.0.0+cu117)
>poetry install -E cuda
Installing dependencies from lock file

Package operations: 0 installs, 1 update, 0 removals

  • Downgrading torch (2.0.0+cu117 -> 2.0.0+cpu)
>poetry install -E cuda

Installing dependencies from lock file

Package operations: 0 installs, 1 update, 0 removals

  • Updating torch (2.0.0+cpu -> 2.0.0+cu117)

etc

QuentinSoubeyranAqemia commented 8 months ago

@david-waterworth This has already been observed above in this comment. To be fair, I'm not quite clear on why; I suspect extra != doesn't work as we'd like.

One thing I have yet to try is to have all torch versions be marked as optional and filtered on a specific extra (cpu, cuXX, etc...). Any dependency that then depends on torch would also need to be optional. If necessary, they can also be filtered on the extra to sync cpu/gpu/etc... capabilities. The cpu, cuXX etc... extras must not be empty, otherwise poetry seems to ignore them. In this setup they contain torch and all additional packages that depend on torch, so that shouldn't be a problem.

If this doesn't work, then I don't understand what the markers = "extra == 'cuda'" does, and this is not documented in either the PEP nor poetry's own documentation to the best of my knowledge.
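For what it's worth, the bare PEP 508 semantics of extra can be checked with the packaging library. This only shows how an installer evaluates the marker for a given requested extra, not how poetry propagates it through locking (which is where this thread's problems live):

```python
from packaging.markers import Marker

eq = Marker("extra == 'cuda'")
ne = Marker("extra != 'cuda'")

# An installer evaluates markers per requested extra; with no extras
# requested, "extra" is effectively the empty string.
print(eq.evaluate({"extra": "cuda"}))  # True when 'cuda' is requested
print(eq.evaluate({"extra": ""}))      # False otherwise
print(ne.evaluate({"extra": ""}))      # True with no extras requested
```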

DWarez commented 8 months ago

@QuentinSoubeyranAqemia I also think that "extra != 'foo'" is not working as intended.

I am trying something like:

[tool.poetry.group.remote_cpu]
optional = true

[tool.poetry.group.remote_cuda]
optional = true

[tool.poetry.group.remote_mps]
optional = true

[tool.poetry.group.remote_cpu.dependencies]
torch = {version = "^2.2.0", source = "pytorch-cpu", markers = "extra=='cpu' and extra!='mps' and extra!='cuda'"}

[tool.poetry.group.remote_cuda.dependencies]
torch = {version = "^2.2.0", source = "pytorch-cu121", markers = "extra=='cuda' and extra!='mps' and extra!='cpu'"}

[tool.poetry.group.remote_mps.dependencies]
torch = {version = "^2.2.0", markers = "extra=='mps' and extra!='cuda' and extra!='cpu'"}

[tool.poetry.extras]
cpu = ["cpu"]
cuda = ["cuda"]
mps = ["mps"]

However, this seems not to work. It really looks like the extra!='foo' has no impact on the install, even if it is required to make the constraints work.

creat89 commented 8 months ago

The issue with the marker extra is well known, see https://github.com/python-poetry/poetry/issues/7748

david-waterworth commented 8 months ago

@DWarez I think the values of the extras are also supposed to be package names, aren't they? i.e.

[tool.poetry.extras]
cpu = ["torch"]
cuda = ["torch"]
mps = ["torch"]

I'm not totally sure how it's supposed to work, or if it's working as expected and we're abusing it. Also, so far the only way I've got this close to working is to add torch as a main dependency. Adding it to multiple optional groups always seems to result in poetry check reporting

Error: Cannot find dependency "torch" for extra "cpu" in main dependencies.
Error: Cannot find dependency "torch" for extra "cuda" in main dependencies.

I was thinking it would also be nice if upstream (torch) were refactored so there was a base package (i.e. torch) and optional packages for each accelerator (i.e. torch_cuda, torch_rocm etc.), so they could all be installed from the same repo (i.e. pip install torch or pip install torch[torch_cuda] etc.). I'm not at all sure this would fix the issue of being able to instruct poetry to install torch for a specific accelerator, or not install it at all, depending on certain runtime parameters, though.
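A sketch of that hypothetical upstream layout (nothing like this exists today; the extra names are invented for illustration):

```toml
# Hypothetical: a hardware-neutral base package with accelerator add-ons,
# all published to one index, so no alternate source is needed.
[tool.poetry.dependencies]
torch = { version = "^2.0", extras = ["torch_cuda"] }  # or ["torch_rocm"], or no extras for CPU
```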

DWarez commented 8 months ago

@david-waterworth I tried a lot of different tricks, including the one you just mentioned, but still I cannot make things work when trying to configure for cpu, cuda and mps. The trick described in https://github.com/python-poetry/poetry/issues/6409#issuecomment-1911735833 works; however, it seems that when defining multiple conditions in the markers, some of them (if not all) are ignored.

e.g. markers = "extra!='cuda' and extra!='mps'" doesn't work at all.