When I run a toy example on my computer with PyTorch 1.9.0, it gives the right answer but also prints the following warning:
C:\Users\iTom\Desktop\pytorch-lasso\lasso\linear\utils.py:36: UserWarning: torch.cholesky is deprecated in favor of torch.linalg.cholesky and will be removed in a future PyTorch release.
L = torch.cholesky(A)
should be replaced with
L = torch.linalg.cholesky(A)
and
U = torch.cholesky(A, upper=True)
should be replaced with
U = torch.linalg.cholesky(A.transpose(-2, -1).conj()).transpose(-2, -1).conj() (Triggered internally at ..\aten\src\ATen\native\BatchLinearAlgebra.cpp:1284.)
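A minimal sketch of the migration the warning describes, using a small symmetric positive-definite matrix I made up for illustration (for a real symmetric input, the upper-factor replacement reduces to the transpose of the lower factor):

```python
import torch

# Small symmetric positive-definite matrix, purely for illustration.
A = torch.tensor([[4.0, 2.0],
                  [2.0, 3.0]])

# Old (deprecated) calls:
#   L = torch.cholesky(A)
#   U = torch.cholesky(A, upper=True)

# New API, as the warning suggests:
L = torch.linalg.cholesky(A)  # lower factor, A = L @ L.T
U = torch.linalg.cholesky(A.transpose(-2, -1).conj()).transpose(-2, -1).conj()  # upper factor

assert torch.allclose(L @ L.transpose(-2, -1), A)
assert torch.allclose(U.transpose(-2, -1) @ U, A)
```

Both assertions hold, so the new calls are drop-in replacements here.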
However, when I test the same example with CUDA on the server, inside a PyTorch 1.2 Docker container, it raises the following error:
Traceback (most recent call last):
File "test.py", line 2, in <module>
from lasso.linear import dict_learning, sparse_encode
File "/home/tom/codes/tmp.completer/lasso/__init__.py", line 1, in <module>
from . import linear, nonlinear, conv2d
File "/home/tom/codes/tmp.completer/lasso/nonlinear/__init__.py", line 3, in <module>
from .split_bregman import split_bregman_nl
File "/home/tom/codes/tmp.completer/lasso/nonlinear/split_bregman.py", line 5, in <module>
from torch._vmap_internals import _vmap
ModuleNotFoundError: No module named 'torch._vmap_internals'
Commenting out dict_learning in test.py (leaving only sparse_encode) and rerunning gives the same ModuleNotFoundError.
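For context, torch._vmap_internals is a private module that only exists in newer PyTorch releases (roughly 1.7+), so it is simply absent in a 1.2 container, and upgrading PyTorch is the clean fix. As a sketch, the import could also be guarded with a fallback; the fallback _vmap below is my own naive illustration (a Python loop over the batch dimension), not PyTorch's implementation:

```python
import torch

try:
    # Private helper, only present in newer PyTorch releases (~1.7+).
    from torch._vmap_internals import _vmap
except ImportError:
    # Illustrative fallback: map fn over the leading (batch) dimension
    # with an explicit Python loop. Much slower, but keeps the module
    # importable on old PyTorch versions.
    def _vmap(fn):
        def batched(*args):
            outs = [fn(*(a[i] for a in args)) for i in range(args[0].shape[0])]
            return torch.stack(outs)
        return batched

# Example: square each row of a (3, 2) tensor.
square_rows = _vmap(lambda t: t * t)
y = square_rows(torch.arange(6.0).reshape(3, 2))
```

Whether this fallback is numerically and semantically adequate depends on how split_bregman_nl actually uses _vmap, so treat it as a stopgap sketch only.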