wzl2611 opened 5 years ago
We use im2col as part of the internal computation of our layers, and that requires a relatively large memory allocation (which is subsequently released). It can be helpful to profile your peak memory usage.
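The im2col overhead mentioned above can be estimated directly: unrolling a C×H×W input for a k×k kernel materializes a buffer of roughly C·k²·H·W elements, i.e. k² times the size of the input itself (on the GPU, `torch.cuda.max_memory_allocated()` will show this peak). A minimal sketch of the arithmetic; the function name and shapes are illustrative, not from the repo:

```python
def im2col_bytes(c, h, w, k, dtype_bytes=4):
    """Approximate size of the im2col buffer for a c x h x w input with a
    k x k kernel, stride 1 and 'same' padding: every output position
    stores a full c*k*k patch, so the buffer is k*k times the input."""
    return c * k * k * h * w * dtype_bytes

# A 64-channel 512x512 float32 feature map with a 5x5 kernel:
inp = 64 * 512 * 512 * 4                 # input tensor: 64 MiB
buf = im2col_bytes(64, 512, 512, 5)
print(buf // inp)                        # -> 25 (k*k blow-up)
print(f"{buf / 2**30:.1f} GiB")          # -> 1.6 GiB for the temporary buffer
```

Even though the buffer is released after the layer runs, the allocation must succeed at its peak, which is what the OOM below is hitting.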
How did you end up fixing memory issues?
```
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/arc-wzl2611/mEMC-Net-master/main1.py", line 373, in <module>
    main()
  File "/home/arc-wzl2611/mEMC-Net-master/main1.py", line 341, in main
    log_test = test(model, test_loader, device, last_epoch, init_lr, args.loss, perf_measures, args)
  File "/home/arc-wzl2611/mEMC-Net-master/main1.py", line 102, in test
    output = apply_model(model, lres, guide, args.factor)
  File "/home/arc-wzl2611/mEMC-Net-master/main1.py", line 26, in apply_model
    out = net(lres, guide)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/arc-wzl2611/mEMC-Net-master/models.py", line 250, in forward
    x = self.up_convts[i](x, guide_cur)
  File "/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/arc-wzl2611/mEMC-Net-master/pac.py", line 786, in forward
    self.output_padding, self.dilation, self.shared_filters, self.native_impl)
  File "/home/arc-wzl2611/mEMC-Net-master/pac.py", line 498, in pacconv_transpose2d
    shared_filters)
  File "/home/arc-wzl2611/mEMC-Net-master/pac.py", line 252, in forward
    output = torch.einsum('ijklmn,jokl->iomn', (in_mul_k, weight))
  File "/usr/local/lib/python3.5/dist-packages/torch/functional.py", line 243, in einsum
    return torch._C._VariableFunctions.einsum(equation, operands)
RuntimeError: CUDA error: out of memory
```
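The blow-up is visible in the failing einsum itself: the first operand `in_mul_k` carries subscripts `ijklmn` (batch, in-channels, kernel-h, kernel-w, out-h, out-w), so it holds k² copies of every spatial position. A rough size check, with purely illustrative shapes (the actual shapes depend on the model and input):

```python
import math

# Hypothetical shapes for torch.einsum('ijklmn,jokl->iomn', in_mul_k, weight):
# batch=1, 64 input channels, 5x5 kernel, 256x256 output (illustrative only).
in_mul_k_shape = (1, 64, 5, 5, 256, 256)
feature_map_shape = (1, 64, 256, 256)

# Element-count ratio between the einsum operand and a plain feature map.
blowup = math.prod(in_mul_k_shape) / math.prod(feature_map_shape)
print(blowup)  # -> 25.0
```

Because the operand scales with output height × width, halving the test-time input resolution cuts it by 4×, which is why tiling or downscaling inputs at evaluation time (and wrapping inference in `torch.no_grad()`) are common workarounds.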