dmitrivainbrand opened this issue 6 years ago
Also, it seems that the weight update is missing: there is no training, just fprop and bprop in a loop. Probably not important for a memory-saving example, but I'd suggest adding it for completeness.
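For completeness, here is a minimal sketch of what a full training iteration could look like; the model, data, and optimizer below are hypothetical stand-ins rather than the benchmark's own (its loop currently stops after loss.backward()):

```python
import torch

# Hypothetical stand-ins for the benchmark's model and data.
model = torch.nn.Linear(16, 4)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for _ in range(10):
    inputs = torch.randn(8, 16)
    targets = torch.randint(0, 4, (8,))

    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()  # the weight update the benchmark loop currently omits
```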
I get the same error with PyTorch 1.0.0 (along with an OOM error on the VNet model, which seems odd given that saving memory is exactly what gradient checkpointing is supposed to do):
...
Optimized resnet (18): (1541782.471 usecs gpu) (1541732.073 usecs cpu)
Optimized resnet (19): (1544267.822 usecs gpu) (1544241.190 usecs cpu)
.test_memory_optimized.py:119: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
nn.init.kaiming_normal(m.weight)
/home/gwern/src/pytorch_memonger/models/optimized/vnet_new.py:133: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
out = self.softmax(out)
E/home/gwern/.local/lib/python3.6/site-packages/torch/nn/modules/rnn.py:46: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.2 and num_layers=1
"num_layers={}".format(dropout, num_layers))
E
======================================================================
ERROR: test_vnet_optim (__main__.TestMemoryOptimized)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_memory_optimized.py", line 162, in test_vnet_optim
loss.backward()
File "/home/gwern/.local/lib/python3.6/site-packages/torch/tensor.py", line 102, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/gwern/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 1024.00 MiB (GPU 0; 10.91 GiB total capacity; 8.33 GiB already allocated; 1013.19 MiB free; 848.53 MiB cached)
======================================================================
ERROR: test_wlm_optim (__main__.TestMemoryOptimized)
----------------------------------------------------------------------
Traceback (most recent call last):
File "test_memory_optimized.py", line 214, in test_wlm_optim
hidden = self.repackage_hidden(hidden)
File "test_memory_optimized.py", line 178, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in <genexpr>
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in <genexpr>
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in <genexpr>
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in <genexpr>
return tuple(self.repackage_hidden(v) for v in h)
File "test_memory_optimized.py", line 178, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "/home/gwern/.local/lib/python3.6/site-packages/torch/tensor.py", line 422, in __iter__
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor
----------------------------------------------------------------------
Ran 4 tests in 68.831s
FAILED (errors=2)
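(For context: the "optimized" models are meant to save memory by checkpointing, i.e. dropping intermediate activations in the forward pass and recomputing them during backward, roughly as in the minimal torch.utils.checkpoint sketch below, with a hypothetical block standing in for the repo's model segments. That is why an OOM in test_vnet_optim is surprising.)

```python
import torch
from torch.utils.checkpoint import checkpoint

# Hypothetical block; in the repo the checkpointed segments are parts of VNet/ResNet/etc.
block = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
)

x = torch.randn(8, 512, requires_grad=True)
y = checkpoint(block, x)  # activations inside `block` are recomputed during backward
y.sum().backward()
```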
Also, there is an error in the README, where it says
# for checkpointed
python test_memory_optimized.py
# for baseline
python test_memory_optimized.py
presumably the second should actually be python test_memory_baseline.py? That command, however, also errors out:
$ python test_memory_baseline.py
Traceback (most recent call last):
File "test_memory_baseline.py", line 14, in <module>
import models.baseline.vnet as vnet_baseline
File "/home/gwern/src/pytorch_memonger/models/baseline/vnet.py", line 6, in <module>
import torch.utils.checkpoint_new as checkpoint_new
ModuleNotFoundError: No module named 'torch.utils.checkpoint_new'
I am working with a 0.4.0a0 version and the WLM test is failing in the repackage_hidden() function. The error message is copied below. I think it has something to do with the fact that the Variable API has been deprecated: Variable functionality still works, but type() returns torch.Tensor and not Variable. I changed repackage_hidden() to:

    def repackage_hidden(self, h):
        """Wraps hidden states in new Variables, to detach them from their history."""
        if type(h) == Variable:

and it seems to work. Could you please check the issue and validate the fix?
======================================================================
ERROR: test_wlm_baseline (__main__.TestMemoryBaseline)
.....
.....
.....
File "test_memory_baseline.py", line 219, in repackage_hidden
return tuple(self.repackage_hidden(v) for v in h)
File "/opt/conda/envs/pytorch-py3.6/lib/python3.6/site-packages/torch/tensor.py", line 351, in __iter__
raise TypeError('iteration over a 0-d tensor')
TypeError: iteration over a 0-d tensor
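A sketch of a 0.4-compatible repackage_hidden, written here as a standalone function (in the tests it is a method of the test class). This assumes the intended fix is to detach tensors rather than re-wrap Variables, since the snippet above is cut off; on PyTorch >= 0.4 Variable and Tensor are merged, so the old type(h) == Variable check never matches and the recursion eventually tries to iterate over a 0-d tensor:

```python
import torch

def repackage_hidden(h):
    """Detach hidden states from their history (PyTorch >= 0.4)."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    return tuple(repackage_hidden(v) for v in h)
```

The isinstance check also covers values created through the deprecated Variable wrapper, since those are plain Tensors on 0.4 and later.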