Closed: chebee7i closed this issue 1 year ago
I also tried a more specific version for torch: `pytorch::pytorch=1.13.0=py3.8_cuda11.6*`.
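For context, a minimal `env.yml` reproducing this attempt might look like the following (a sketch; the environment name, channel list, and Python pin are assumptions, not copied from the actual file attached below):

```yaml
# Hypothetical env.yml (illustrative sketch, not the reporter's actual file)
name: torch-cuda
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.8
  - pytorch::pytorch=1.13.0=py3.8_cuda11.6*
```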
$ micromamba create -f env.yml -vvvv |& grep pytorch
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libmamba Searching index cache file for repo 'https://conda.anaconda.org/pytorch/linux-64/repodata.json'
pytorch/linux-64 Using cache
info libmamba Searching index cache file for repo 'https://conda.anaconda.org/pytorch/noarch/repodata.json'
pytorch/noarch Using cache
info libmamba Reading cache files '/home/username/micromamba/pkgs/cache/ee0ed9e9.*' for repo index 'https://conda.anaconda.org/pytorch/linux-64'
info libmamba Reading cache files '/home/username/micromamba/pkgs/cache/edb1952f.*' for repo index 'https://conda.anaconda.org/pytorch/noarch'
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libmamba Parsing MatchSpec pytorch::pytorch=1.13.0=py3.8_cuda11.6*
info libsolv job: install pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0
info libsolv pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_1 (assertion)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_0 (assertion)
info libsolv installing pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 (assertion)
info libsolv propagate decision -2325: !pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv propagate decision -2324: !pytorch-cuda-11.6-h867d48c_0 [2324] Conflict.level1
info libsolv !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1) Install.level1
info libsolv pytorch-cuda-11.6-h867d48c_0 [2324] (w2) Conflict.level1
info libsolv pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1) Install.level1
info libsolv !pytorch-cuda-11.6-h867d48c_0 [2324] (w1) Conflict.level1
info libsolv !pytorch-cuda-11.6-h867d48c_1 [2325] (w1) Conflict.level1
info libsolv conflicting pytorch-cuda-11.6-h867d48c_1 (assertion)
info libsolv conflicting pytorch-cuda-11.6-h867d48c_0 (assertion)
info libsolv propagate decision -2325: !pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv propagate decision -2324: !pytorch-cuda-11.6-h867d48c_0 [2324] Conflict.level1
info libsolv !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] (w1)
info libsolv pytorch-cuda-11.6-h867d48c_0 [2324] (w2) Conflict.level1
info libsolv pytorch-cuda-11.6-h867d48c_1 [2325] Conflict.level1
info libsolv -> decided to conflict pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0
info libsolv propagate decision -1078: !pytorch-1.13.0-py3.8_cuda11.6_cudnn8.3.2_0 [1078] Conflict.level1
- nothing provides cuda 11.6.* needed by pytorch-cuda-11.6-h867d48c_0
Great report!
Want to try the new error messages? See https://github.com/mamba-org/mamba/issues/2078
https://github.com/mamba-org/mamba/issues/2078#issuecomment-1368020953
The relevant bit that is new seems to be:
=================================== Experimental messages (new) ====================================
critical libmamba Invalid dependency info: <NULL>
Is this saying that `pytorch::pytorch=1.13.0=py3.8_cuda11.6*` is an invalid package specification? The "dependency info" part makes me think not.
Does installing `cuda 11.6.*` work? Is the bug that it should read `cuda 11.6`?
Note that `micromamba search` only lists `cudatoolkit` and not `cuda`.
I think you need the `nvidia` channel: https://anaconda.org/nvidia/cuda
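Concretely, that would mean adding `nvidia` to the channel list in `env.yml`, for example (a sketch; everything besides the `nvidia` channel and the pytorch spec is an assumption):

```yaml
# Hypothetical env.yml with the nvidia channel added
name: torch-cuda
channels:
  - pytorch
  - nvidia        # provides the `cuda` package that pytorch-cuda depends on
  - conda-forge
dependencies:
  - pytorch::pytorch=1.13.0=py3.8_cuda11.6*
```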
Interesting. That does solve the issue:
However, note that when I install with `pytorch::pytorch=*=*cuda*`, the successfully built environment has only `cudatoolkit` and not `cuda`. Also, I have never needed to add the `nvidia` channel with `conda` (or with the successfully built `mamba` environment).
So it seems like there's a discrepancy here that still needs an explanation. Any thoughts?
Not really. If the `pytorch-cuda` package depends on `cuda`, then it depends on `cuda`. It could be that in the past they did things differently ... or that you got the `cuda` package from somewhere else (e.g. the defaults channel?).
I can say confidently that my previous environments did not have the cuda package explicitly installed. So maybe it's just a requirements change.
Troubleshooting docs
Search tried in issue tracker: pytorch cuda
Latest version of Mamba
Tried in Conda? Reproducible with Conda using the experimental solver.
Describe your issue
I am having issues specifying a version of pytorch (1.13) while also ensuring that I get the CUDA version. See the pasted info in the other form fields. The essential error occurs when specifying `pytorch::pytorch=1.13.*=*cuda*`. How can I pin pytorch to 1.13 and force cuda?
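For what it's worth, the `*cuda*` part of such a spec is just a glob over build strings. A rough Python illustration, using `fnmatch` as a stand-in for mamba's actual MatchSpec matching and made-up example build strings:

```python
from fnmatch import fnmatch

# Example build strings like those published on the pytorch channel
# (illustrative only; the real channel has many more variants).
builds = [
    "py3.8_cuda11.6_cudnn8.3.2_0",
    "py3.8_cpu_0",
    "py3.10_cuda11.7_cudnn8.5.0_0",
]

# A spec like pytorch=1.13.*=*cuda* keeps only candidates whose
# build string matches the glob "*cuda*", filtering out CPU builds.
cuda_builds = [b for b in builds if fnmatch(b, "*cuda*")]
print(cuda_builds)
```

The solver then still has to satisfy the dependencies of whichever matching build it picks, which is where the `pytorch-cuda` → `cuda` requirement comes in.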
The packages look to be available:
When I try a similar install using `conda`, I get a similar error. Note that I am building on a host that does not have a GPU... this is for later installation from a lock file on a host that does have a GPU.
When I do not pin the version and use `pytorch::pytorch=*=*cuda*` instead, the install succeeds, but it obviously isn't guaranteed to give the package version that I want. In this case, it gives 1.12.1.

mamba info / micromamba info
Logs
environment.yml
~/.condarc