Closed wingdi closed 3 years ago
Hello!
I have the same problem with an NVIDIA GeForce RTX 2060 with 6 GB of GDDR6 VRAM.
[MY SOLUTION] My solution was to run it on the CPU (my machine has 16 GB of DDR4-2400 SDRAM). To do that, I changed in apps/recon.py:
- this line: cuda = torch.device('cuda:%d' % opt.gpu_id if torch.cuda.is_available() else 'cpu')
- to this line: cuda = torch.device('cpu')
It takes a while to process the image, but at least it works!
Hope it works for you too!
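The edit above replaces the automatic CUDA/CPU selection with a hard-coded CPU device. A minimal sketch of that selection logic, with the fallback factored into a plain function (`select_device`, `gpu_id`, `cuda_available`, and `force_cpu` are illustrative names, not part of recon.py):

```python
# Sketch of the device-selection logic edited in apps/recon.py.
# `select_device` is an illustrative helper, not a function from the repo.

def select_device(gpu_id: int, cuda_available: bool, force_cpu: bool = False) -> str:
    """Return the device string that would be passed to torch.device().

    The original line in recon.py is equivalent to:
        'cuda:%d' % gpu_id if cuda_available else 'cpu'
    The workaround described in this thread amounts to force_cpu=True.
    """
    if force_cpu or not cuda_available:
        return 'cpu'
    return 'cuda:%d' % gpu_id

# With PyTorch installed, the result feeds straight into torch.device():
#   cuda = torch.device(select_device(opt.gpu_id, torch.cuda.is_available()))
```

Forcing the CPU this way trades the 6 GB VRAM limit for system RAM, which is why processing gets slower but no longer runs out of memory.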
It works for me too. My machine also has 16 GB of RAM.
It helps a lot! Thanks very much!
Working for me too! It seems a minimum GPU VRAM of 7 GB should be specified in the readme, since that appears to be the minimum needed for it to run. Maybe the issue should be closed? @wingdi
ok ~
Env: PyTorch 1.7.0, CUDA 11.0, RTX 2060, Windows 10.
I get this error: RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 6.00 GiB total capacity; 3.87 GiB already allocated; 187.62 MiB free; 4.01 GiB reserved in total by PyTorch)
Is there any method to solve this problem?