python-poetry / poetry

Python packaging and dependency management made easy
https://python-poetry.org
MIT License

Instructions for installing PyTorch #6409

Open davidgilbertson opened 2 years ago

davidgilbertson commented 2 years ago

Issue

As mentioned in issue https://github.com/python-poetry/poetry/issues/4231, there is some confusion around installing PyTorch with CUDA, but it is now somewhat resolved. It still requires a few steps, and all options have pretty serious flaws. Below are two options that 'worked' for me, on Poetry version 1.2.0.

Option 1 - wheel URLs for a specific platform

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchaudio = { url = "https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl"}
torchvision = { url = "https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl"}

Note that each subsequent poetry update will do another huge download and you'll see this message:

  • Updating torch (1.12.1+cu116 -> 1.12.1+cu116 https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
  • Updating torchaudio (0.12.1+cu116 -> 0.12.1+cu116 https://download.pytorch.org/whl/cu116/torchaudio-0.12.1%2Bcu116-cp310-cp310-win_amd64.whl)
  • Updating torchvision (0.13.1+cu116 -> 0.13.1+cu116 https://download.pytorch.org/whl/cu116/torchvision-0.13.1%2Bcu116-cp310-cp310-win_amd64.whl)

Option 2 - alternate source

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = { version = "1.12.1", source="torch"}
torchaudio = { version = "0.12.1", source="torch"}
torchvision = { version = "0.13.1", source="torch"}

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu116"
secondary = true

This seems to have worked (although I already had the packages installed), but it reports errors like Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/. I think the packages get installed anyway; maybe a better message would be "Can't access pillow at 'https://download.pytorch.org/whl/cu116', falling back to PyPI".

Also, if you later go on to do, say poetry add pandas (a completely unrelated library) you'll get a wall of messages like:

Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pandas/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/numpy/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pillow/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/requests/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/typing-extensions/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/certifi/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/urllib3/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/idna/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/charset-normalizer/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/python-dateutil/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/pytz/
Source (torch): Authorization error accessing https://download.pytorch.org/whl/cu116/six/

This happens with or without secondary = true in the source config.

Maintainers: please feel free to edit the text of this if I've got something wrong.

dimbleby commented 2 years ago

Failure to cache during resolution is covered by #2415; Poetry's insistence on checking all sources for all packages is discussed at #5984.

variolam commented 2 years ago

Solution 1 seems infeasible when working in a team with machines on different operating systems, due to the need to provide the complete URL of the wheel, including the operating system and exact version number.

Solution 2 seems to work, but it results in downloading every single PyTorch wheel that can be found, independent of the operating system. I'm running Windows and the download looks like this:

(screenshot of the wheel downloads omitted)

A single installation takes around 15-20 minutes at ~250 Mbps.

neersighted commented 2 years ago

Poetry will always download wheels for every platform when you install -- this is because there is no other way to get package metadata from a repository using PEP 503's API.
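For illustration, a PEP 503 "simple" index page is just an HTML list of file links; it carries no dependency metadata at all. A minimal sketch (the sample HTML below is a hand-written assumption, abbreviated from what the real index serves):

```python
from html.parser import HTMLParser

# Hand-written stand-in for a PEP 503 simple index page: nothing but
# anchor tags naming downloadable files.
SAMPLE_PAGE = """
<html><body>
<a href="/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl">torch-1.12.1+cu116-cp310-cp310-win_amd64.whl</a>
<a href="/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-linux_x86_64.whl">torch-1.12.1+cu116-cp310-cp310-linux_x86_64.whl</a>
</body></html>
"""

class SimpleIndexParser(HTMLParser):
    """Collects the file names listed on a PEP 503 'simple' index page."""
    def __init__(self):
        super().__init__()
        self.files = []

    def handle_data(self, data):
        if data.strip().endswith(".whl"):
            self.files.append(data.strip())

parser = SimpleIndexParser()
parser.feed(SAMPLE_PAGE)
print(parser.files)
```

Because the page only names files, a resolver that needs each package's dependency list has no choice but to download the archives themselves.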

Queuecumber commented 2 years ago

Can you elaborate a little on what metadata is needed and why downloading every conceivable version of a package yields that metadata? As mentioned, this leads to a ~20 minute install for one package.

radoering commented 2 years ago

Solution 1 seems infeasible when working in a team with machines on different operating systems, due to the need of providing the complete URL of the wheel including operating system and exact version number.

It's not convenient, but it should be feasible with multiple constraints dependencies.
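A hedged sketch of what that could look like, combining Option 1's wheel URLs with per-platform markers (the file names and markers here are assumptions to adapt to your own platforms and CUDA version):

```toml
[tool.poetry.dependencies]
python = "~3.10"
# One dependency, several constraints selected by environment markers,
# each pointing at the wheel built for that platform.
torch = [
    { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-win_amd64.whl", markers = "sys_platform == 'win32'" },
    { url = "https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp310-cp310-linux_x86_64.whl", markers = "sys_platform == 'linux'" },
]
```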

neersighted commented 2 years ago

Poetry requires the package's core metadata aka the METADATA file (most critically this includes dependencies), as well as the bdist/sdist itself for hashing purposes. Note that PEP 658 is a standard for serving the METADATA file that is implementable by third-party repositories, and PEP 691 specifies a (potentially) richer JSON API (including hashes) that third-party repositories could likely implement.

However, Poetry is unlikely to grow support for these new APIs until PyPI does, and I think third party repos are unlikely to implement it before PyPI. Eventually support for these APIs will allow for feature and performance parity in Poetry between PyPI and third-party repositories. Until then, we are stuck with the legacy HTML API, which requires us to download every package when generating a lock file for the first time.

After your cache is warm you will not need to download again, and on other platforms you will only download the necessary files as the metadata is captured in the lock file.
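To make the "core metadata" concrete: a wheel is a zip archive, and its dependencies are the Requires-Dist lines of the .dist-info/METADATA file inside it. A minimal sketch using a fabricated in-memory wheel (the "demo" package is hypothetical):

```python
import io
import zipfile

# Build a tiny fake wheel in memory; real wheels carry the same
# dist-info/METADATA layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr(
        "demo-1.0.dist-info/METADATA",
        "Metadata-Version: 2.1\n"
        "Name: demo\n"
        "Version: 1.0\n"
        "Requires-Dist: numpy (>=1.23)\n"
        "Requires-Dist: typing-extensions\n",
    )

# Reading the dependency list requires the archive contents -- which is
# why a resolver must fetch the file when the index serves nothing else.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    metadata = zf.read("demo-1.0.dist-info/METADATA").decode()

requires = [
    line.split(":", 1)[1].strip()
    for line in metadata.splitlines()
    if line.startswith("Requires-Dist:")
]
print(requires)
```

PEP 658 addresses exactly this by letting the index serve that METADATA file on its own, without the rest of the archive.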

Queuecumber commented 2 years ago

What I'm not understanding is that poetry knows I'm on Linux with python 3.8 but it still downloads

https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp37-cp37-win_amd64.whl

Or does that wheel not contain the core metadata that is needed?

Queuecumber commented 2 years ago

Also there seems to be a second problem going on here unless I've misunderstood the documentation

I have this in my pyproject.toml

[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true

i.e. secondary = true

Yet poetry is asking that repository for every package I try to install. I thought from the documentation that secondary meant it would go to pypi for any package unless specifically asked to go to that custom repository.

neersighted commented 2 years ago

What I'm not understanding is that poetry knows I'm on Linux with python 3.8 but it still downloads

https://download.pytorch.org/whl/cu116/torch-1.12.1%2Bcu116-cp37-cp37-win_amd64.whl

Or does that wheel not contain the core metadata that is needed?

Poetry constructs a universal lock file -- we write hashes to the lock file for all supported platforms. Thus on the first machine you generate a lock file, you will download a wheel for every supported platform. There is no way to write hashes to the lock file for those foreign/other platform versions without downloading them first.

If you want to reduce the scope of this a bit, you can tighten your Python constraint. There is a prototype of a new feature at #4956 (though it needs resurrection, design, and testing work) to add arbitrary markers to let a project reduce its supported platforms as an opt-in.
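For example, tightening the constraint might look like this (a sketch; the exact bounds depend on your project):

```toml
[tool.poetry.dependencies]
# Narrowing "^3.10" (which allows 3.10, 3.11, ...) to a single minor
# version shrinks the set of wheels Poetry must lock and hash.
python = ">=3.10,<3.11"
```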

Also there seems to be a second problem going on here unless I've misunderstood the documentation

I have this in my pyproject.toml

[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true

i.e. secondary = true

Yet poetry is asking that repository for every package I try to install. I thought from the documentation that secondary meant it would go to pypi for any package unless specifically asked to go to that custom repository.

I think it might be you misreading -- that is the intended and documented behavior. There is a proposal to introduce new repository types at https://github.com/python-poetry/poetry/pull/5984#issuecomment-1237245571 as the current secondary behavior covers multiple use cases poorly, while being ideal for none of them.

Queuecumber commented 2 years ago

Poetry constructs a universal lock file

OK this makes sense now, thanks for the explanation, looking forward to that PR hopefully being merged eventually

I think it might be you misreading

I was misreading I see that this is intended behavior

the current secondary behavior covers multiple use cases poorly

I agree with that and I hope that these new repository types can be implemented

ZetiMente commented 1 year ago

Can't wait for option 2 to have good performance !

neersighted commented 1 year ago

Please :+1: on issues instead of commenting me too -- it keeps the notifications down and still shows interest. Thanks!

voegtlel commented 1 year ago

Probably a follow-up issue on the second option: If I do poetry install on that, I get

Installing dependencies from lock file

Package operations: 50 installs, 0 updates, 0 removals

  • Installing wrapt (1.12.1): Failed

  KeyringLocked

  Failed to unlock the collection!

  at ~/.local/share/pypoetry/venv/lib/python3.10/site-packages/keyring/backends/SecretService.py:67 in get_preferred_collection
       63│             raise InitError("Failed to create the collection: %s." % e)
       64│         if collection.is_locked():
       65│             collection.unlock()
       66│             if collection.is_locked():  # User dismissed the prompt
    →  67│                 raise KeyringLocked("Failed to unlock the collection!")
       68│         return collection
       69│ 
       70│     def unlock(self, item):
       71│         if hasattr(item, 'unlock'):

I guess it tries to load that from the secondary repo as well, and expects to use the keyring due to the unauthorized thing?

neersighted commented 1 year ago

That's #1917 -- our use of keyring hits surprisingly many system configurations in which hard errors occur, and it needs some work.

felix-ht commented 1 year ago

@neersighted at least PyPI seems to have started work on the new JSON API

neersighted commented 1 year ago

Indeed, Poetry 1.2.2 relies on the new PEP 691 support. However, PEP 658 is the real blocker for better performance in third-party repos -- there is a long-running PR blocked on review and a rather combative contributor, but otherwise no major progress on that front.

MaKaNu commented 1 year ago

Could you add a link to the blocked PR? I switched today to method 1 because method 2 took ages. Seems the metadata servers are slow today... Dependency resolution taking up to 4000 seconds for method 1 is also insane. And then it failed because I accidentally copied 1.12.0 instead of 1.12.1 for the Windows release. I really like the idea of Poetry, but this needs huge improvement.

felix-ht commented 1 year ago

We use neither of these approaches. My hacky solution is to install the base version of PyTorch (say 1.12.1) with Poetry and then install the specific versions needed for each machine with pip in a Makefile command.

As the GPU versions have the same dependencies as the base version, this should be OK.

The big downsides are

The big upside is that it is very easy to create make targets for different machines, and that it's pretty fast (very important for CI/CD). Example:

[tool.poetry.dependencies]
python = "^3.10"
numpy = "^1.23.2"
torch = "1.12.1"
torchvision = "0.13.1"

install_cu116:
    poetry install
    poetry run pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html

timothyjlaurent commented 1 year ago

install_cu116:
  poetry install
  poetry run pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html

this has essentially been our approach, but in Dockerfiles, gated by a build arg

ARG TORCH_ARCH="cpu"
#ARG TORCH_ARCH="cu113"

RUN  poetry install -vvv --no-root --no-dev \
    && pip install -U wheel torch==1.12.1+${TORCH_ARCH} torchvision==0.13.1+${TORCH_ARCH} -f https://download.pytorch.org/whl/torch_stable.html \
    && pip uninstall poetry -y \
    && rm -rf ~/.config/pypoetry \
    && rm -rf /root/.cache/pip

This also allows installing the CPU version, which is a smaller package that lacks the CUDA libraries bundled in the normal PyTorch package from PyPI. This helps slim the images down.

brochier commented 1 year ago

Hi ! Do you know if it's possible to specify two different optional versions of torch in the pyproject.toml ? I would like to use a cpu version locally and a gpu version on a distant server. You can have a look at an example in this stack overflow post.

neersighted commented 1 year ago

That is something not unlike #5222; the consensus has been that as Poetry is an interoperable tool, no functionality will be added to the core project to support this until there is a standards-based method. A plugin can certainly support this with some creativity and would be the immediate "I want Poetry to support this" use case solution in my mind.

robinbrochier commented 1 year ago

#5222

Thanks for your answer, do you have any such plugin in mind ?

neersighted commented 1 year ago

I don't have any links at hand, but building on top of Light the Torch has been discussed. But if you mean if I know anyone is working on one, no, not that I am aware of.

arthur-st commented 1 year ago

For as long as pip install torch does Just Work ™️, having to put any effort at all into installing torch via poetry will be a vote against using it in commercial ML work that requires torch or similar hardware-bound components. Assuming that one is open to consider poetry, conda, and pipenv as functionally equivalent options, of course.

neersighted commented 1 year ago

I don't see that as a Poetry problem -- the issue is that there is an unmet packaging need: no one from the ML world is working with the PyPA to define how to handle this robustly, and no one from the PyPA has an interest in solving it.

Likewise, there is no interest in Poetry in supporting idiosyncratic workarounds for a non-standard and marginally compatible ecosystem; we'll be happy to implement whatever standards-based process evolves to handle these binary packages, but in the mean time any special-casing and package-specific functionality belong in a plugin and not Poetry itself.

arthur-st commented 1 year ago

@neersighted I respect your perspective. If Poetry has no interest in supporting ML world, perhaps as a non-standard and marginal corner of the Python developer ecosystem, then its idiosyncrasies are absolutely of no concern to the project.

JacobHayes commented 1 year ago

@arthur-st poetry add torch works the exact same as pip install torch - they both install the ~~CPU version only~~ same version (CPU-only on Windows/Mac, currently CUDA 11.7 on Linux - thanks for the correction @Queuecumber). If you want a specific GPU version of torch, you need to do much more than pip install torch (you need to be on Linux, need to identify your host's CUDA version, find the right index URL, pass the flags, ...). This limitation applies to all of the lock management tools (Pipenv, pip-tools, etc.) and is only "possible" with pip because it doesn't tackle these locking/cross-env problems at all.

I'd love to see torch distribute separate packages, like torch (CPU only), torch-cu113 (adds CUDA 11.3 support to the base torch package), torch-cu116 (same for CUDA 11.6), ... and then it'd be much easier to specify platform-specific dependencies (i.e. only install torch-cu113 on Linux) and help out all of these tools (it's still tricky to ensure the host has the matching CUDA version, but it would at least reduce the problem). I understand that can be hard with the compiled dependencies, but it seems possible with the shared objects they bundle and clever version pinning.

arthur-st commented 1 year ago

@JacobHayes That doesn't quite reconcile with my experience. On Windows and Linux, you just need to refer pip to the respective official package index – easy enough to memorise even, e.g., https://download.pytorch.org/whl/cu116 for CUDA 11.6, /rocm52 for ROCm 5.2 – whereas the accelerated M1 version installs right off the PyPI with plain pip install torch.

Poetry, on the other hand, just spent the two hours of patience I had for revisiting it today on a failing poetry add torch in an empty Python 3.8 project created specifically to test that single operation. The tool alternated between not being able to find any torch packages at all and failing to find platform-specific CUDA dependencies for my M1 Mac, depending on where in the troubleshooting process I was at a given moment. It was amusing to see that it correctly identifies the need for a platform-specific dependency (even if that was for a CUDA package, of all things) while not even trying to simply install the Mac-specific version of the package itself (whose only dependency is typing_extensions, of any version). Still, the value proposition for using Poetry in commercial ML projects is not there yet, and my two cents of feedback are that improving just the docs will unlikely cut it for the ML crowd. That said, I am, of course, just a single person, could very well have held Poetry wrong, and will still be keen to revisit it during my next toolchain review.

Queuecumber commented 1 year ago

Maybe I'm misunderstanding but I think there's some confusion

Ref: https://pytorch.org/get-started/locally/

First of all, on Linux pip install torch does not install the CPU version; it installs the default CUDA version, which is currently 11.7. If you want to install a CPU version you need a different command: pip3 install torch --extra-index-url https://download.pytorch.org/whl/cpu. This requires an extra index URL, but it also identifies a different package version using the "+" convention.
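As background on the "+" convention: PEP 440 calls the part after "+" a local version label; it distinguishes builds (cpu, cu116, ...) of the same release. A small stdlib-only sketch:

```python
def split_local(version: str):
    """Split a PEP 440 version into (public, local) parts, e.g. '1.12.1+cu116'."""
    public, _, local = version.partition("+")
    return public, local or None

print(split_local("1.12.1+cu116"))  # ('1.12.1', 'cu116')
print(split_local("1.12.1"))        # ('1.12.1', None)
```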

So you could do something like

[tool.poetry.dependencies]
torch = { version = "^1.12.1+cpu", source = "torchcpu" }

[[tool.poetry.source]]
name = "torchcpu"
url = "https://download.pytorch.org/whl/cpu"
default = false
secondary = true

if you want CPU support, or if you want a specific cuda version:

torch = { version = "^1.12.1+cu116", source = "torchcu116" }

[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true

The issue being discussed on this bug report is that it's really slow and/or throws a bunch of warnings depending on which command you run. Also, I have no idea if this works on Windows, but you probably shouldn't be using Windows.

Do you know if it's possible to specify two different optional versions of torch in the pyproject.toml

I was actually pretty surprised to find that this doesn't work with dependency groups, and I think fixing this should be considered. I think something like the following should be possible:

[tool.poetry.group.gpu]
optional = true

[tool.poetry.group.gpu.dependencies]
torch = { version = "1.12.1+cu116", source = "torchcu116" }

[tool.poetry.group.cpu]
optional = true

[tool.poetry.group.cpu.dependencies]
torch = { version = "1.12.1+cpu", source = "torchcpu" }

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

[[tool.poetry.source]]
name = "torchcpu"
url = "https://download.pytorch.org/whl/cpu"
default = false
secondary = true

[[tool.poetry.source]]
name = "torchcu116"
url = "https://download.pytorch.org/whl/cu116"
default = false
secondary = true

where you can then do poetry install --with cpu for your CPU environment or poetry install --with gpu for your cu116 GPU environment. But the dependencies in all groups need to be consistent, so you get "Because test depends on both torch (1.12.1+cu116) and torch (1.12.1+cpu), version solving failed." I don't think that should be enforced for optional groups unless the user actually tries to install two groups with conflicting packages.

neersighted commented 1 year ago

Mutually exclusive groups is #1168 -- this is not a matter of "enforcement" but of core design and architecture of the solver; changing it has implications for correctness, maintainability, and performance, but should be possible with significant care and effort. Please keep discussion of that feature on that issue.

JacobHayes commented 1 year ago

@arthur-st I wonder if you ran into https://github.com/pytorch/pytorch/issues/88049 for v1.13. This bit me too so I can sympathize with any annoyance - for now, I pin <1.13, but I just pushed https://github.com/pytorch/pytorch/pull/88826 to fix it for future versions. 🤞 Separately, I have many more issues with tools that don't follow the standards as strictly as poetry (eg: pipenv resolving incompatible flake8 and importlib-metadata versions) and they're usually harder to debug... grass is always greener. 😁

@Queuecumber thanks for the correction on the default torch wheels on linux being CUDA 11.7 currently - I updated my OP.

arthur-st commented 1 year ago

@JacobHayes So it seemed to me, but <1.13 didn't get me any practically closer to having it installed – that was one of several things I tried.

ralbertazzi commented 1 year ago

Although I developed it for another use case, I believe this plugin should help. I'll give it a try tomorrow.

EDIT: the plugin indeed works, but all torch distributions (different platforms, different Python versions) are still downloaded. I would resort to option 1 and fix the URL caching issue. I made a proposal here: https://github.com/python-poetry/poetry/pull/7595

doctorpangloss commented 1 year ago

If there's ever a time to fix torch installation on poetry, it's now.

yingshaoxo commented 1 year ago

Same problem here.

I got a series of warning messages when installing PyTorch:

Resolving dependencies... (156.0s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/pexpect/
Resolving dependencies... (156.8s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/stack-data/
Resolving dependencies... (157.5s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/pygments/
Resolving dependencies... (158.2s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/prompt-toolkit/
Resolving dependencies... (159.0s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/pickleshare/
Resolving dependencies... (159.7s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/jedi/
Resolving dependencies... (160.4s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/backcall/
Resolving dependencies... (160.8s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/parso/
Resolving dependencies... (161.6s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/wcwidth/
Resolving dependencies... (162.4s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/pure-eval/
Resolving dependencies... (163.1s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/asttokens/
Resolving dependencies... (163.9s)Source (pytorch-cpu): Authorization error accessing https://download.pytorch.org/whl/cpu/executing/

doctorpangloss commented 1 year ago

In case anyone else is visiting this, it is not currently possible to have one pyproject.toml with accelerated torch for multiple platforms. I got this working in setup.py and it's a doozy.

yingshaoxo commented 1 year ago

In case anyone else is visiting this, it is not currently possible to have one pyproject.toml with accelerated torch for multiple platforms. I got this working in setup.py and it's a doozy.

I think installing a CPU version of PyTorch is good enough.

The problem is, even if we simply want to install a CPU version of PyTorch, it is still not easy to do cross-platform with Poetry. (It will produce errors or warnings.)

einarpersson commented 1 year ago

Too bad, this was the reason Poetry caught my attention (the promise of not having these kinds of troubles)

doctorpangloss commented 1 year ago

Here is the setup.py solution I authored:

https://github.com/comfyanonymous/ComfyUI/blob/fa019a82049c7279fce5de66bda06dcf77abd058/setup.py

The bones of this works for any project trying to tackle the Pytorch Issue

As you can see, this PR wasn't merged anyway.

Python is a special ecosystem.

ralbertazzi commented 1 year ago

Hi everyone, https://github.com/python-poetry/poetry/issues/2415 has just been closed and will likely make it into the 1.5 release. That is, in the next release Option 1 (URL dependencies) suggested by @davidgilbertson should work fully cached!

As for Option 2 (separate source), another interesting PR will see the light in 1.5. You'll be able to set the torch source as explicit, so that source is only queried for packages you explicitly configure as such. Note that this solution will still download all matching wheels for every OS and Python version, which is still not ideal. To fix that, a solution to https://github.com/python-poetry/poetry/issues/4952 should be proposed.

Poetry 1.5 should greatly improve the PyTorch experience. Fingers crossed 🤞

ralbertazzi commented 1 year ago

I also encourage you to upvote and/or contribute to https://github.com/pytorch/pytorch/issues/76557 and https://github.com/pytorch/builder/issues/1347, which would make Option 2 (explicit Poetry source) work like a charm without further development on Poetry's side.

dalazx commented 1 year ago

with the latest version of poetry this seems to work for me:

[tool.poetry.dependencies]
torch = [
    {version = "^2.0.1", platform = "darwin"},
    {version = "^2.0.1", platform = "linux", source = "torch"},
    {version = "^2.0.1", platform = "win32", source = "torch"},
]
sympy = [
    {version = "^1.12", platform = "linux", extras = ["mpmath"]},
    {version = "^1.12", platform = "win32", extras = ["mpmath"]},
]

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

note that for darwin I used PyPI instead since the PyTorch source did not have macOS CPU-specific wheels.

doctorpangloss commented 1 year ago

note that for darwin I used PyPI instead since the PyTorch source did not have macOS CPU-specific wheels.

You can install pytorch using poetry.

I also encourage you to upvote and/or contribute to pytorch/pytorch#76557 and pytorch/builder#1347 which would make Option 2 (explicit Poetry source) work like a charm without further development Poetry side.

With these changes, it is still not possible to author a pyproject.toml that (1) installs the correct version of pytorch that is most accelerated for the user's platform (2) using a single, immutable command (like pip install -e .) that is the same for every platform.

ralbertazzi commented 1 year ago

it is still not possible to author a pyproject.toml that (1) installs the correct version of pytorch that is most accelerated for the user's platform

I agree that's an interesting use case, but please consider that it's not the only use case. For instance:

Therefore I believe it should not be Poetry's responsibility to make that choice for you. If anything, I much prefer the current deterministic way.

If interested, I'm sure you can hack your way around it by developing a plugin. That's what they are there for: "if you wish to alter or expand Poetry's functionality with your own", as the documentation says.

ralbertazzi commented 1 year ago

Also, posting it here for visibility: https://github.com/python-poetry/poetry/issues/8002

alexeyshockov commented 1 year ago

@dalazx, unfortunately your example does not work for me, as Poetry resolves the 2.0.1+cpu version, which is not available on PyPI.

Package operations: 0 installs, 1 update, 0 removals

  • Updating torch (2.0.1 -> 2.0.1+cpu): Failed

  RuntimeError

  Unable to find installation candidates for torch (2.0.1+cpu)

I ended up using wheels for different platforms directly (via url), but this is definitely more a hack than a solution...

dalazx commented 1 year ago

@alexeyshockov yeah, the CPU wheels are located at https://download.pytorch.org/whl/cpu which is referenced in the torch source in my example.

chunleng commented 1 year ago

@alexeyshockov yeah, the CPU wheels are located at https://download.pytorch.org/whl/cpu which is referenced in the torch source in my example.

Seems like on macOS it resolves to 2.0.1+cpu even if you don't set a source for platform = "darwin". I think Poetry resolves each package to just one source, so as of Poetry 1.5 it can't yet do both 2.0.1 for macOS and 2.0.1+cpu for the rest.

radandreicristian commented 1 year ago

with the latest version of poetry this seems to work for me:

[tool.poetry.dependencies]
torch = [
    {version = "^2.0.1", platform = "darwin"},
    {version = "^2.0.1", platform = "linux", source = "torch"},
    {version = "^2.0.1", platform = "win32", source = "torch"},
]
sympy = [
    {version = "^1.12", platform = "linux", extras = ["mpmath"]},
    {version = "^1.12", platform = "win32", extras = ["mpmath"]},
]

[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cpu"
priority = "explicit"

note that for darwin I used PyPI instead since the PyTorch source did not have macOS CPU-specific wheels.

This does not work as expected. It downloads all the binaries for all the Python versions for each specific torch version/platform combination that matches the Python version constraint in the toml file.

For my particular setup, which is py3.10, torch 1.13.1 cu117, with the config above (Linux and Windows only), it downloads

It's absurd that it has to download more than a single torch binary for my local/Docker setup, regardless of what I run on, particularly the fact that it downloads files for multiple Python versions. I shouldn't have to wait a non-trivial amount of time and use a non-trivial amount of memory to run this out of the box.

ralbertazzi commented 1 year ago

This is by design, as Poetry creates reproducible, platform-independent lockfiles. Since the torch index is not fully PEP 503 compliant, Poetry will download every distribution to compute its hash, which is then written to the lockfile. If you want to see this fixed, please upvote or contribute to these issues: https://github.com/python-poetry/poetry/issues/6409#issuecomment-1546617247
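To illustrate why the whole file is needed: the lockfile records a sha256 digest of each distribution, and that digest depends on every byte of the file. A minimal sketch (the helper name is mine, not Poetry's API):

```python
import hashlib

def lock_entry_hash(data: bytes) -> str:
    """sha256 digest in the 'sha256:<hex>' form lock files typically record."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# The digest depends on every byte of the wheel, so the file must be
# downloaded in full before its hash can be written to the lock file.
print(lock_entry_hash(b"pretend wheel bytes"))
```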