If you do not know the root cause of the problem / bug, and wish someone to help you, please
post according to this template:
🐛 Bugs / Unexpected behaviors
Short: When trying to install pytorch3d via conda for GPUs, the build constraint it places on pytorch (pytorch==cuda*, i.e. a build string starting with "cuda") makes installation impossible.
This constraint should instead be pytorch==*cuda* (a build string that merely contains "cuda"), since pytorch does not necessarily start its build names with "cuda" but can have something before it.
Long:
I'm trying to install pytorch3d in a cluster setting, i.e. a system where the node performing the installation does not have a GPU, but GPU support is still required. To achieve this, we commonly use the trick of pinning the pytorch build to a CUDA variant while __cuda is provided via pytorch-cuda from the pytorch channel.
However, it seems that pytorch3d expresses its pytorch constraint as pytorch==cuda* (i.e. the build string must start with "cuda"), which fails because the matching pytorch builds do not start with "cuda" but only contain it in their name.
Instructions To Reproduce the Issue:
On a machine without a GPU, create the following environment file:
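A minimal sketch of such a file, reconstructed from the solver's "Looking for" output in the logs below (the environment name and channel list are assumptions; the dependency lines mirror that output, and the exact spec syntax of the original file may have differed slightly):

name: pytorch3d-gpu          # hypothetical environment name
channels:
  - pytorch
  - nvidia
  - pytorch3d                # assumed source of the pytorch3d package
  - conda-forge
dependencies:                # specs copied from the solver's "Looking for" output
  - pytorch==2.2.0[build=cuda]
  - pytorch-cuda=11.8
  - pytorch3d=[build=cuda118*]
  - torchvision
  - torchaudio
  - fvcore
  - iopath
  - pip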
Then try to create the environment. It fails with an error:
package pytorch3d-0.7.5-cuda118py310h7e791d5_2 has constraint pytorch * cuda* conflicting with pytorch-2.2.0-py3.8_cuda11.8_cudnn8.7.0_0
This should be solvable if the constraint did not require the build string to have a "cuda" prefix but merely to contain "cuda" (the particular pair reported above also has mismatched Python versions, but that is something the solver would reconcile on its own).
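For illustration, assuming standard fnmatch-style glob matching of conda build strings, the difference between the current constraint and the suggested one is:

pytorch * cuda*      (current: build string must start with "cuda"; does not match py3.8_cuda11.8_cudnn8.7.0_0)
pytorch * *cuda*     (suggested: build string only has to contain "cuda"; matches py3.8_cuda11.8_cudnn8.7.0_0)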
The exact command(s) you ran:
mamba env create -n environment.yml
What you observed (including the full logs):
nvidia/linux-64 Using cache
nvidia/noarch Using cache
pytorch/linux-64 Using cache
pytorch/noarch Using cache
conda-forge/linux-64 Using cache
conda-forge/noarch Using cache
anaconda/linux-64 Using cache
anaconda/noarch Using cache
pkgs/main/linux-64 No change
pkgs/r/linux-64 No change
pkgs/r/noarch No change
pkgs/main/noarch No change
Looking for: ['pytorch==2.2.0[build=cuda]', 'pytorch-cuda=11.8', 'pytorch3d=[build=cuda118*]', 'torchvision', 'torchaudio', 'fvcore', 'iopath', 'pip']
Could not solve for environment specs
Encountered problems while solving:
The environment can't be solved, aborting the operation
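As a sanity check (a hedged sketch; the bracketed MatchSpec syntax is assumed to be accepted by your conda/mamba version), the build-string mismatch can be reproduced directly against the pytorch channel:

conda search 'pytorch==2.2.0[build=cuda*]' -c pytorch     # expected to find nothing, since no build string starts with "cuda"
conda search 'pytorch==2.2.0[build=*cuda*]' -c pytorch    # expected to list builds such as py3.8_cuda11.8_cudnn8.7.0_0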