cn0xroot opened this issue 2 years ago
This is happening on my GTX 1650 as well. It looks like something is overshooting the card's maximum memory across the board.
File "C:\Users\Mars\Documents\animegan2-pytorch\test.py", line 89, in <module>
test(args)
File "C:\Users\Mars\Documents\animegan2-pytorch\test.py", line 47, in test
out = net(input, args.upsample_align).squeeze(0).permute(1, 2, 0).cpu().numpy()
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Mars\Documents\animegan2-pytorch\model.py", line 106, in forward
out = self.block_e(out)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\container.py", line 141, in forward
input = module(input)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "C:\Users\Mars\miniconda3\envs\animegan\lib\site-packages\torch\nn\modules\conv.py", line 442, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 4.68 GiB (GPU 0; 4.00 GiB total capacity; 1.32 GiB already allocated; 294.20 MiB free; 2.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
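The error message itself points at the allocator's max_split_size_mb knob. A minimal sketch of setting it through the PYTORCH_CUDA_ALLOC_CONF environment variable (the 128 MiB value is just an illustrative choice; this only helps with fragmentation, not with a single allocation that is simply bigger than the card):

```python
import os

# Must be set before the CUDA allocator is initialized, i.e. before
# importing torch in this process (or export it in the shell instead).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # noqa: E402  # imported after setting the variable on purpose
```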
This is happening on my RTX 6000 as well. If I run it on the CPU, it is very slow.
Same on an RTX 2080 Ti. It eats a lot of memory.
Same on RTX 3060.
On a single 2080 Ti, a 640x640 input is OK and uses 8~9 GiB of memory. For a larger image, run it on the CPU; it takes a few seconds. A rough sketch of that fallback is below.
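Assuming you are calling the model from your own script, the device choice could look like this (the 640x640 threshold is taken from the comment above, not from the repo; if test.py exposes a device flag, passing cpu there has the same effect):

```python
import torch

def pick_device(width: int, height: int, max_gpu_pixels: int = 640 * 640) -> torch.device:
    """Use CUDA only when it is available and the image is small enough,
    otherwise fall back to the CPU (slower, but not limited by VRAM)."""
    if torch.cuda.is_available() and width * height <= max_gpu_pixels:
        return torch.device("cuda")
    return torch.device("cpu")

# Usage with a PIL image: device = pick_device(*img.size)
```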
Same on RTX 3080.
I guess the image is so large that it needs that much GPU memory.
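If that is the cause, downscaling the input before inference should confirm it; a small sketch with Pillow (the 720 px cap is an arbitrary example, not a value from the repo):

```python
from PIL import Image

def load_image_capped(path: str, max_side: int = 720) -> Image.Image:
    """Load an image and shrink it so its longer side is at most max_side.
    Activation memory grows roughly with the pixel count, so halving each
    side cuts the memory needed by the conv layers to about a quarter."""
    img = Image.open(path).convert("RGB")
    w, h = img.size
    scale = max_side / max(w, h)
    if scale < 1.0:
        img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    return img
```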
model loaded: ./weights/paprika.pt
Traceback (most recent call last):
File "test.py", line 92, in <module>
test(args)
File "test.py", line 48, in test
out = net(image.to(device), args.upsample_align).cpu()
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/init3/Tools/animegan2-pytorch/model.py", line 106, in forward
out = self.block_e(out)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 141, in forward
input = module(input)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py", line 443, in _conv_forward
self.padding, self.dilation, self.groups)
RuntimeError: CUDA out of memory. Tried to allocate 12.74 GiB (GPU 0; 10.76 GiB total capacity; 1.19 GiB already allocated; 7.09 GiB free; 2.52 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
input: samples/inputs/1.jpg