[tool.poetry.dependencies]
torch = { version = "=1.9.0+cu111", source = "pytorch" }
torchvision = { version = "=0.10.0+cu111", source = "pytorch" }
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu111/"
secondary = true
This works for me with poetry v1.2.0a1 (poetry install is so slow due to #4035, though).
This is my pyproject.toml:
[tool.poetry]
name = ""
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
[tool.poetry.dependencies]
python = "3.8.5"
numpy = "^1.21.0"
matplotlib = "^3.4.2"
torch = { version = "=1.9.0+cu111", source = "pytorch" }
torchvision = { version = "=0.10.0+cu111", source = "pytorch" }
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu111/"
secondary = true
[tool.poetry.dev-dependencies]
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
With this I get the following error (with poetry update):
SolverProblemError
Because torchvision (0.10.0+cu111) depends on torch (1.9.0)
and depends on torch (=1.9.0+cu111), torchvision is forbidden.
So, because depends on torchvision (=0.10.0+cu111), version solving failed.
And if I comment out both torch and torchvision, I still get the following error:
RepositoryError
403 Client Error: Forbidden for url: https://download.pytorch.org/whl/cu111/matplotlib/
This is a duplicate of #2543, #3855, #3306 and some others.
An ugly workaround while this is not fixed is:
[tool.poetry]
name = "test"
version = "0.1.0"
description = ""
authors = ["author <author@author.com>"]
[tool.poetry.dependencies]
python = "^3.9"
[tool.poe.tasks]
## PyTorch with CUDA 11.1. If PyTorch is imported first, importing Tensorflow will detect CUDA + cuDNN bundled with PyTorch
## Run with the command "poe force-cuda11"
## See https://github.com/python-poetry/poetry/issues/2543
force-cuda11 = "pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html"
[tool.poetry.dev-dependencies]
black = "^21.6b0"
poethepoet = "^0.10.0"
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = "poetry.core.masonry.api"
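In practice that means running poetry install first and then poetry run poe force-cuda11 (assuming poethepoet's poe entry point is available in the environment, as it is when listed in the dev-dependencies above).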
@scherzocrk when I run poetry install, torch gets rolled back: "• Updating torch (1.9.0+cu111 -> 1.9.0)"
@wangm23456 I wrote the hack above using the task runner; I have the same issue, Poetry always rolls back to regular torch. It drives me crazy...
Just tested it on v1.2.0a2, and with the secondary source poetry install is still very slow and takes more than 18 minutes on my machine.
Worse than not being able to install torchvision: for the same reason, you cannot install any ML package that depends on torch. For example, allennlp requires torch>=1.6.0,<1.11.0 and torchvision>=0.8.1,<0.12.0, so the result is:
Because no versions of allennlp match >2.8.0,<3.0.0
and allennlp (2.8.0) depends on torchvision (>=0.8.1,<0.12.0), allennlp (>=2.8.0,<3.0.0) requires torchvision (>=0.8.1,<0.12.0).
Thus, allennlp (>=2.8.0,<3.0.0) requires torch (1.7.0 || 1.7.1 || 1.8.0 || 1.8.1 || 1.9.0 || 1.9.1 || 1.10.0 || 1.10.0+cu102).
So, because disambiguation depends on both torch (1.9.1+cu111) and allennlp (^2.8.0), version solving failed.
Coming back half a year later, it seems there is still no solution for PyTorch-related packages in Poetry. Will this ever get solved? Otherwise we'll need to move to other solutions.
Yeah, about that. I want to add this functionality so my team can use Poetry. Otherwise, we'll stick to conda, which is not optimal, because Poetry plays so well with CI/CD pipelines, especially for packaging software.
Yeah, unfortunately we have now decided to move away from Poetry to good old pipenv. Let's hope there will be a solution at some point for Poetry.
@psinger here is some good news for you. I just tested poetry version 1.2.0b1 with the CPU and CUDA versions of PyTorch and both are working fine for me. Here are my dependencies:
[tool.poetry.dependencies]
python = "^3.8"
fastapi = "^0.75.0"
gunicorn = "^20.1.0"
loguru = "^0.6.0"
torch = {url = "https://download.pytorch.org/whl/cpu/torch-1.11.0%2Bcpu-cp38-cp38-linux_x86_64.whl"}
torchaudio = {url = "https://download.pytorch.org/whl/cpu/torchaudio-0.11.0%2Bcpu-cp38-cp38-linux_x86_64.whl"}
torchvision = {url = "https://download.pytorch.org/whl/cpu/torchvision-0.12.0%2Bcpu-cp38-cp38-linux_x86_64.whl"}
transformers = "^4.17.0"
@Kavan72 thanks for the update, it works.
However, the dependency resolution time is absolutely huge (600+ seconds). Would it be possible to skip dependency solving for a package, or hard-code its dependencies, so that Poetry doesn't have to fully scan a 1.5 GB package?
@Kavan72's solution is cool but unfortunately still a workaround, as you need to target a specific Python version and OS, which goes against the whole idea of Poetry.
My ugly workaround (with pip install) is not better either...
@kikohs I ended up using poethepoet as a task runner too in the end. To avoid rolling back to CPU torch, you have to install every torch-using lib with pip through the task runner as well (e.g. transformers); see the sketch below.
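A minimal sketch of what that can look like (the install-torch-libs and setup-gpu task names here are made up for illustration, and transformers stands in for any torch-dependent package):
[tool.poe.tasks]
## install CUDA torch first, then every torch-dependent library, all via pip so Poetry does not roll them back
force-cuda11 = "pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/torch_stable.html"
install-torch-libs = "pip install transformers"
setup-gpu = ["force-cuda11", "install-torch-libs"]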
This issue is blocking "unlocking the poetry" potential when it comes to leveraging Poetry in a stack that uses PyTorch and its ecosystem.
I have tried the following TOML setting:
[[tool.poetry.source]]
name = "torch"
url = "https://download.pytorch.org/whl/cu113"
secondary = true
default = false
but the default = false is not recognized for some reason and I end up getting the 403:
Error:
403 Client Error: Forbidden for url: https://download.pytorch.org/whl/cpu/mypy
As mentioned in https://github.com/python-poetry/poetry/issues/4704, this is a known issue. However, among all the possible ways to address it, this solution of using secondary sources seems to be the ideal fix for the issue in question.
As a short-term interim, I have also tried platform- and version-specific settings. This would work fine if PyTorch were my leaf dependency, but my setup involves PyTorch, Torchvision, and PyTorch Lightning. Because more dependencies rely on PyTorch, just specifying torch wheels in the TOML fails to solve the dependencies:
torch = [
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl", python=">=3.7,<3.8", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp38-cp38-linux_x86_64.whl", python=">=3.8,<3.9", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp39-cp39-linux_x86_64.whl", python=">=3.9,<3.10", markers="sys_platform == 'linux'"},
{ version = "=1.11.0", markers = "sys_platform == 'darwin' or sys_platform == 'win32'" },
]
torchvision = [
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl", python=">=3.7,<3.8", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp38-cp38-linux_x86_64.whl", python=">=3.8,<3.9", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp39-cp39-linux_x86_64.whl", python=">=3.9,<3.10", markers="sys_platform == 'linux'"},
{ version = "=0.12.0", markers = "sys_platform == 'darwin' or sys_platform == 'win32'" },
]
Error:
SolverProblemError
Because torchvision (0.12.0+cu113) depends on torch (1.11.0)
and XXXX-app depends on torch (1.11.0+cu113), torchvision is forbidden.
So, because XXXX-app depends on torchvision (0.12.0+cu113), version solving failed.
I have been in knots with this one, particularly because there are so many issues open around this issue: https://github.com/python-poetry/poetry/issues/2543 https://github.com/python-poetry/poetry/issues/4231 https://github.com/python-poetry/poetry/issues/3855 https://github.com/python-poetry/poetry/issues/2613 https://github.com/python-poetry/poetry/issues/4704 https://github.com/python-poetry/poetry/issues/2339
The only solution that works cross-platform is https://github.com/nat-n/poethepoet but that is not a great solution either (it does not line up with the lock file, does not use the same cache, and needs an additional pip run!). It would be great if we could fix this issue.
(this is a duplicate comment from https://github.com/python-poetry/poetry/issues/4704#issuecomment-1109465915, posted again as it is very relevant to this ticket)
Thanks for the detailed post @suneeta-mall. I want to try and break out targeted improvements to help the PyTorch use case be better supported. Let me try and address a few things in your post with that intent.
but the default = false is not recognized for some reason and ends up getting the 403
This is expected, in that default = false is the default for legacy sources. The way the default setting works is more like "disable PyPI", i.e. if set to true, the project disables PyPI and makes that source the "fallback" among all legacy sources (governed by secondary = true in your case). Personally, I think this needs to be reworked to make it clearer in Poetry.
Also see https://github.com/python-poetry/poetry/issues/4704#issuecomment-1111380597.
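As a rough illustration of those semantics (a sketch only; the internal-mirror name and URL below are made up):
[[tool.poetry.source]]
name = "internal-mirror"
url = "https://example.com/simple/"
default = true     ## disables PyPI and makes this source the fallback for everything
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu113"
secondary = true   ## searched in addition to the default/primary sources, with lower priority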
Regarding the local build tag resolution issue, I suspect this is the same as what is being talked about in https://github.com/python-poetry/poetry/issues/4729#issuecomment-1110930059. That, in theory at least, should fix this issue.
From the Poetry side, once the local tag solving is fixed, I suspect things will improve. If that is not the case, please do let me know.
For those working with the PyTorch community, if you can work with the PyTorch team to get the following added/fixed, your experience when using PyTorch with Poetry might be improved.
1. Add hashes to the file links on the index (#<hashname>=<hashvalue>). This can be done easily by the index admins, I believe.
2. Avoid the need to download every wheel just to read its metadata (Requires-Dist etc. from the wheel). This can be avoided too if PEP 658 gets implemented. I suspect this will need to also be supported by Poetry as well.
3. Return 404 instead of 403 for packages the index does not serve. If I understand correctly, this is because they have not set the s3:ListBucket permission for public users (assuming they use S3 for this).
PS: One could in theory generate and host the files required for 1 and 2 using a CI/CD job + vercel/fastly etc. with some packaging code if so motivated. The links can still retain the upstream file link, but with sha256 appended and a new .metadata file generated and served.
Thanks @abn for the detailed info.
I have tried poetry version 1.2.0b1 with wheel URLs, and that was no joy either. I did not get the solver error mentioned in my earlier comment, but the solving ran indefinitely: 33947.4s and counting.
I have also raised the issue with Pytorch https://github.com/pytorch/pytorch/issues/76557 in line with your recommendation.
@suneeta-mall Was that infinite resolve with a clear cache? A colleague had issues with resolve never finishing but clearing the cache improved the situation. #5442 and #5451 are targeting specific other issues we've seen with using torch from Poetry which lead to very long resolve times.
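(For anyone wanting to try that: assuming the cache names match the configured source names, the caches can be cleared with poetry cache clear pypi --all and, for a source named pytorch, poetry cache clear pytorch --all.)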
@tgolsson yeah, infinite resolve with a clear cache on a fresh docker build. Thanks for the PRs, the numbers look promising. In this case, I left the resolve running overnight just for the fun of it and it continued.
The following has been the core of my change in the TOML between the working and non-working [infinite resolve] copies, on version 1.2.0b1. On the earlier version, the not-working snippet would give me the resolve error mentioned above:
torch = [
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp37-cp37m-linux_x86_64.whl", python=">=3.7,<3.8", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp38-cp38-linux_x86_64.whl", python=">=3.8,<3.9", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torch-1.11.0%2Bcu113-cp39-cp39-linux_x86_64.whl", python=">=3.9,<3.10", markers="sys_platform == 'linux'"},
{ version = "=1.11.0", markers = "sys_platform == 'darwin' or sys_platform == 'win32'" },
]
torchvision = [
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl", python=">=3.7,<3.8", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp38-cp38-linux_x86_64.whl", python=">=3.8,<3.9", markers="sys_platform == 'linux'"},
{ url="https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp39-cp39-linux_x86_64.whl", python=">=3.9,<3.10", markers="sys_platform == 'linux'"},
{ version = "=0.12.0", markers = "sys_platform == 'darwin' or sys_platform == 'win32'" },
]
versus:
torch = "^1.11.0"
torchvision = "^0.12.0"
Note that this is largely because most torchvision wheels, including the ones on download.pytorch.org, have a dependency on "torch", while they should have a dependency on a pinned version.
E.g. https://download.pytorch.org/whl/cu113/torchvision-0.12.0%2Bcu113-cp37-cp37m-linux_x86_64.whl depends on torch==1.11.0 but should depend on torch==1.11.0+cu113.
If you build torchvision from source you can set the proper torch dependency as follows:
If you create wheels with this procedure, you can just add a dependency on torchvision in your pyproject.toml and no special tricks are needed anymore to make torch work well with Poetry. (Of course you need to make sure Poetry has access to the wheels, either on a Devpi server or in a local folder.)
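A possible way to do that, as an assumption about torchvision's build script rather than something spelled out above: set PYTORCH_VERSION=1.11.0+cu113 in the environment before building the wheel, so the generated metadata pins torch==1.11.0+cu113 instead of a bare torch.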
I was hopeful trying @PieterBlomme's suggestion of building torchvision from source as above, but still no joy...
Updating dependencies
Resolving dependencies... (623.5s)
Because torchvision (0.12.0+cu113) depends on torch (1.11.0+cu113)
and my_package depends on torch (1.11.0+cu113), torchvision is forbidden.
So, because my_package depends on torchvision (0.12.0+cu113), version solving failed.
Finally found a setup that at least is able to resolve. Using a torchvision wheel built from source as @PieterBlomme suggests, adding the secondary source for torch, and switching to poetry 0.12.
python = "^3.9"
torch = { version = "1.11.0+cu113", source = "pytorch" }
torchvision = {file = "/home/cate/git/torchvision/dist/torchvision-0.12.0+cu113-cp39-cp39-linux_x86_64.whl"}
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu113"
secondary = true
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
I've also installed 1.2.0b1, but the solution proposed by @cateseale is just too slow. The Poetry cache grows to over 15 GB of just torch cache...
I've added explicit sources for each package as pypi, but that does not work, as it still tries to download everything from the PyTorch repository.
Here's my config:
[tool.poetry.dependencies]
python = "^3.10"
fastapi = {version= '^0.75.1', source = "pypi"}
uvicorn = {version= '^0.17.6', source = "pypi"}
gunicorn = {version= '^20.1.0', source = "pypi"}
requests = {version= "^2.27.1", source = "pypi"}
torch = { version = '1.11.0+cu113', source = "pytorch" }
[tool.poetry.dev-dependencies]
pre-commit = {version= '2.18.1', source = "pypi"}
pytest = {version= '', source = "pypi"}
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu113"
secondary = true
default = false
[build-system]
requires = ["poetry-core>=1.0.0"]
build-backend = 'poetry.masonry.api'
Output:
Creating virtualenv gpu-test-2-9TtSrW0h-py3.10 in /root/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/virtualenv/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/toml/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/pyyaml/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/nodeenv/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/identify/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/cfgv/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/setuptools/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/h11/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/click/
<debug>pytorch:</debug> Authorization error accessing https://download.pytorch.org/whl/cu113/asgiref/
...
@cateseale
switching to poetry 0.12
Don't you mean poetry 1.2.0b1?
Hi there, just wanted to share my two cents, since I may have found a working solution that automatically installs the CUDA version of PyTorch based on your configuration :)
Note that this does not work by just running poetry install, but by using an additional task with poethepoet and light-the-torch. Here is a sample configuration:
[tool.poetry.dependencies]
torch = "*"
poethepoet = "*"
[tool.poe.tasks]
install-ltt = "python3 -m pip install light-the-torch"
run-ltt = "python3 -m light_the_torch install --upgrade torch torchaudio torchvision"
autoinstall-torch-cuda = ["install-ltt", "run-ltt"]
Instructions:
poetry install
poetry run poe autoinstall-torch-cuda
I think the authorization issues should probably be fixed once https://github.com/python-poetry/poetry/pull/5442 is merged
The problem originally reported here is fixed as of https://github.com/python-poetry/poetry-core/pull/433/, i.e. this resolves just fine:
torch = {url = "https://download.pytorch.org/whl/cu111/torch-1.9.0%2Bcu111-cp38-cp38-linux_x86_64.whl"}
torchvision = {url = "https://download.pytorch.org/whl/cu111/torchvision-0.10.0%2Bcu111-cp38-cp38-linux_x86_64.whl"}
This is a long and confusing thread and I am unsure what other things might have become muddled into it: I suggest closing this (on the grounds that the actually reported problem is solved) and raising new tickets if others are needed.
In case it's helpful, here are some experiments I ran on two of my machines (Ubuntu 20.04.4). I've found that Config 1 works with 1.2.0b2 but not 1.2.0b3 (regression?). Config 2 is identical to Config 1 except it replaces the exact version requirement with a caret requirement.
In case it's helpful ...
It isn't! One of the following is true:
Either way, this issue should be closed.
You're welcome to open a new issue if you'd like and close this one if you can. I shared my experience for those who, like me, have been struggling to get PyTorch and related libraries working with released versions of Poetry and need a working solution today.
@dimbleby and @abn perhaps what would be helpful for this page is some definitive statement answering the question on everyone's mind: is it at all possible to install PyTorch with CUDA using Poetry? A pyproject.toml or the right commands to copy/paste would be great. If it's only possible with a certain version, it would be great to know which version. I see various commits linked, but it's not clear in which versions of Poetry those will take effect, and there seem to be a few different issues at play.
(Side note: I tried updating to 1.2.0 and now Poetry doesn't work at all because of this bug identified during preview.)
As I've said a couple of times, this issue should be closed. Partly because the thread is long and confusing and going nowhere but most importantly because the problem that it was raised to describe has been fixed.
If you are seeing a problem, please describe it in a new issue, with a way to reproduce it.
I'm going to have to agree -- the original issue is solved and there's discussion of lots of different tangential issues in the thread. I'm going to suggest @davidgilbertson start a new issue for documentation/guidance on best practices for using Pytorch (I'll try to remember to create one if he doesn't get around to it), and close this for now. If you have general questions about using Pytorch with Poetry, a discussion, Discord, or the comments on a documentation issue are likely all appropriate venues. If you are encountering issues with the packages directly (e.g. using a direct URL dep), please open a new issue with a pyproject.toml and reproduction steps.
@neersighted done! https://github.com/python-poetry/poetry/issues/6409
I am using macOS with an M1 chip. I met the same issue and tried similar solutions from the thread:
[tool.poetry.dependencies]
torch = {version = "1.13.0", source = "pytorch"}
[[tool.poetry.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl"
default = false
secondary = true
but no luck.
I ended up with:
[tool.poetry.dependencies]
python = "3.10.x"
torch = {url = "https://download.pytorch.org/whl/cpu/torch-1.13.0-cp310-none-macosx_11_0_arm64.whl"}
You can find all versions at either one of
(No support for Python 3.11 on macOS with M1 chip (arm64) at the time of writing this post.)
I am using a Mac for local development and Docker for publishing. The following allowed me to use a single pyproject file for both; hope it will help someone:
torch = [{markers = "sys_platform == 'macos'", url = "https://download.pytorch.org/whl/cpu/torch-1.13.0-cp310-none-macosx_11_0_arm64.whl"},
{markers = "sys_platform == 'linux'", url="https://download.pytorch.org/whl/torch-1.13.0-cp310-cp310-manylinux2014_aarch64.whl"}]
python version 3.10.6
Thanks @chanansh for that answer. However, to make it install successfully on my Mac with an M1 Pro chip, I had to change "sys_platform == 'macos'" to "sys_platform == 'darwin'".
Also, note that it would fail to install on both the Docker container and the Mac if I used Python version 3.11.0, but using Python version 3.10.9 worked.
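For reference, the earlier snippet with the corrected marker could look like this (a sketch reusing the same wheel URLs, not a verified configuration):
torch = [{markers = "sys_platform == 'darwin'", url = "https://download.pytorch.org/whl/cpu/torch-1.13.0-cp310-none-macosx_11_0_arm64.whl"},
{markers = "sys_platform == 'linux'", url = "https://download.pytorch.org/whl/torch-1.13.0-cp310-cp310-manylinux2014_aarch64.whl"}]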
darwin is more accurate, I think. I posted a summary at https://stackoverflow.com/a/74794784/2000548
I wish Poetry could be smarter about PyTorch in the future so that we do not need to set these manually 😃
Here is a copy:
[tool.poetry.dependencies]
python = "3.10.x"
torch = [
{markers = "sys_platform == 'darwin' and platform_machine == 'arm64'", url = "https://files.pythonhosted.org/packages/79/b3/eaea3fc35d0466b9dae1e3f9db08467939347b3aaa53c0fd81953032db33/torch-1.13.0-cp310-none-macosx_11_0_arm64.whl"},
{markers = "sys_platform == 'darwin' and platform_machine == 'x86_64'", url = "https://files.pythonhosted.org/packages/b6/79/ead6840368f294497591af143980372ff956fc4c982c457a8b5610a5a1f3/torch-1.13.0-cp310-none-macosx_10_9_x86_64.whl"},
{markers = "sys_platform == 'linux'", url="https://files.pythonhosted.org/packages/5c/61/b0303b8810c1300e75e8e665d043f6c2b272a4da60e9cc33416cde8edb76/torch-1.13.0-cp310-cp310-manylinux2014_aarch64.whl"}
]
arm64 is for macOS with an M1/M2 chip; x86_64 is for macOS with an Intel chip.
Use platform_machine markers if you are targeting multiple architectures (in the demo above, Linux is using the aarch64 wheel).
You can find all wheel URLs at either one of
You can find your current platform and architecture by using these Python commands:
> python
>>> import sys
>>> sys.platform
'darwin'
>>> import platform
>>> platform.machine()
'arm64'
You can find the list of sys.platform values here.
An update: after upgrading to the latest Poetry 1.3.1, I can simply use this now:
[tool.poetry.dependencies]
python = "3.10.x"
torch = "1.13.1"
It succeeded on both my MacBook Pro with M1 chip and Linux (Ubuntu) in the pipeline. (I did regenerate the poetry.lock file to make sure it actually works.)
Here is the pyproject.toml.
For the Linux (Ubuntu) pipeline part, you can see this, which shows it succeeding:
@Hongbo-Miao I tried upgrading to Poetry 1.3.1, editing my pyproject.toml file as described, and then running poetry update torch. It works fine on my M1 Mac, but when I try installing inside a Linux docker container running on this same M1 Mac, I get a bunch of errors like:
RuntimeError
#11 1.629
#11 1.629 Unable to find installation candidates for nvidia-cublas-cu11 (11.10.3.66)
#11 1.629
#11 1.629 at /usr/local/lib/python3.10/site-packages/poetry/installation/chooser.py:105 in choose_for
#11 1.637 101│
#11 1.637 102│ links.append(link)
#11 1.637 103│
#11 1.637 104│ if not links:
#11 1.638 → 105│ raise RuntimeError(f"Unable to find installation candidates for {package}")
#11 1.638 106│
#11 1.639 107│ # Get the best link
#11 1.640 108│ chosen = max(links, key=lambda link: self._sort_key(package, link))
#11 1.640 109│
With poetry==1.3.2 on Python 3.9.15 in the docker image python:3.9.15-slim on a Mac M2 I'm getting
ERROR: torch-1.13.1+cpu-cp39-cp39-linux_x86_64.whl is not a supported wheel on this platform.
Updating to poetry==1.4.0 fixed it!
EDIT: I ended up using
torch = [
{ url = "https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp39-cp39-linux_x86_64.whl", markers = "sys_platform == 'linux' and platform_machine != 'aarch64'"},
{ url = "https://download.pytorch.org/whl/cpu/torch-1.13.1%2Bcpu-cp39-cp39-linux_x86_64.whl", markers = "sys_platform == 'darwin' and platform_machine != 'arm64'"},
{ url = "https://download.pytorch.org/whl/cpu/torch-1.13.1-cp39-none-macosx_11_0_arm64.whl", markers = "sys_platform == 'darwin' and platform_machine == 'arm64'"},
{ url = "https://download.pytorch.org/whl/torch-1.13.1-cp39-cp39-manylinux2014_aarch64.whl", markers = "sys_platform == 'linux' and platform_machine == 'aarch64'"},
]
I still have this issue.
confirming issue still exists
The config below resolves in a few seconds with poetry 1.5.1, but torch is unusable due to ValueError: libcublas.so.*[0-9] not found in the system path, also referenced in this issue here.
[tool.poetry]
name = "pytorch-poetry-test"
version = "0.1.0"
description = ""
authors = ["cate <catherineseale@gmail.com>"]
readme = "README.md"
packages = [{include = "pytorch_poetry_test"}]
[tool.poetry.dependencies]
python = "^3.9"
torch = "^2.0.1"
torchvision = "^0.15.2"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
If I remove torchvision and downgrade to torch 2.0.0, this is successful and I can use torch fine; however, torchvision then can't resolve:
[tool.poetry]
name = "pytorch-poetry-test"
version = "0.1.0"
description = ""
authors = ["cate <catherineseale@gmail.com>"]
readme = "README.md"
packages = [{include = "pytorch_poetry_test"}]
[tool.poetry.dependencies]
python = "^3.9"
torch = ">=2.0.0, !=2.0.1"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Because no versions of torchvision match >0.15.2,<0.16.0
and torchvision (0.15.2) depends on torch (2.0.1), torchvision (>=0.15.2,<0.16.0) requires torch (2.0.1).
So, because pytorch-poetry-test depends on both torch (>=2.0.0, !=2.0.1) and torchvision (^0.15.2), version solving failed.
Updating from torch==2.0.0 to add both torch and torchvision back in removes a lot of CUDA stuff, and setuptools?
• Removing cmake (3.27.2)
• Removing lit (16.0.6)
• Removing nvidia-cublas-cu11 (11.10.3.66)
• Removing nvidia-cuda-cupti-cu11 (11.7.101)
• Removing nvidia-cuda-nvrtc-cu11 (11.7.99)
• Removing nvidia-cuda-runtime-cu11 (11.7.99)
• Removing nvidia-cudnn-cu11 (8.5.0.96)
• Removing nvidia-cufft-cu11 (10.9.0.58)
• Removing nvidia-curand-cu11 (10.2.10.91)
• Removing nvidia-cusolver-cu11 (11.4.0.1)
• Removing nvidia-cusparse-cu11 (11.7.4.91)
• Removing nvidia-nccl-cu11 (2.14.3)
• Removing nvidia-nvtx-cu11 (11.7.91)
• Removing setuptools (68.1.0)
• Removing triton (2.0.0)
• Removing wheel (0.41.1)
Not sure if it is relevant, but I also had issues with setuptools being removed in my anaconda+poetry environment. Updating anaconda with conda update conda solved my issue.
I think @cateseale is correct. For people who failed to install PyTorch with Poetry after May 9, 2023: this is because of the PyTorch issue at https://github.com/pytorch/pytorch/issues/100974
So instead of using
torch = "2.0.1"
torchvision = "0.15.2"
This should work:
torch = "2.0.0"
torchvision = "0.15.1"
Hopefully the next PyTorch version will fix this issue. 😃
Is this getting a fix?
Just commenting to confirm that it's still a problem.
I use a Docker container with a Torch/CUDA env that shouldn't be touched, and poetry config virtualenvs.create false for deployment.
If I specify torch = { version = "=2.0.1+cu118", source = "cuda" } with the correct source, Poetry will re-download Torch for Python 3.10 (even though Python 3.9 is explicitly specified in the TOML and there is no Python 3.10 installed), and if I add torch = { url = "https://download.pytorch.org/whl/cu118/torch-2.0.1%2Bcu118-cp39-cp39-linux_x86_64.whl"}, Poetry will also re-download Torch instead of using the currently installed version.
Is there any solution to use the pre-installed version? (I need to pin it in the pyproject.toml file to prevent other libraries from overwriting it; with pip this works fine....)
I'm confused how this remains an issue... shouldn't the newer version just take care of this?
Yes, it's broken again for me. torch = {version = ">=2.0.0, !=2.0.1"} worked fine for me (I specifically excluded 2.0.1 because that version was accidentally shipped without the nvidia dependencies, and I specify torch as an explicit dependency before any other packages that themselves require torch, such as transformers) until today.
It started failing again with the release of torch 2.1; poetry and pip install different things again.
pip install torch installs torch 2.1.0 plus all the various nvidia libs (which install the required CUDA version), but poetry install on a clean machine only installed torch 2.1.0. It should have also installed:
nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-nccl-cu12, nvidia-curand-cu12, nvidia-cufft-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, nvidia-cusparse-cu12, nvidia-cudnn-cu12 nvidia-cublas-cu12-12.1.3.1 nvidia-cuda-cupti-cu12-12.1.105 nvidia-cuda-nvrtc-cu12-12.1.105 nvidia-cuda-runtime-cu12-12.1.105 nvidia-cudnn-cu12-8.9.2.26 nvidia-cufft-cu12-11.0.2.54 nvidia-curand-cu12-10.3.2.106 nvidia-cusolver-cu12-11.4.5.107 nvidia-cusparse-cu12-12.1.0.106 nvidia-nccl-cu12-2.18.1 nvidia-nvjitlink-cu12-12.3.52 nvidia-nvtx-cu12-12.1.105 triton-2.1.0
So import torch then fails with a missing .so.
It works fine when I specifically target torch==2.0.0; I might raise a separate issue.
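For reference, the corresponding pyproject.toml entry would simply be (a sketch of the version pin, nothing else changed):
torch = "2.0.0"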
Issue
PyTorch is now PEP 503 compliant (https://github.com/pytorch/pytorch/issues/25639#issuecomment-861707149) but I still can't add torch and torchvision. I've tried using this:
but got this:
and if I try using
I end up disabling PyPI and can't download anything else...