yjxiong / tsn-pytorch

Temporal Segment Networks (TSN) in PyTorch
BSD 2-Clause "Simplified" License

RuntimeError: cuda runtime error (2) : out of memory #39

Closed Nandan91 closed 6 years ago

Nandan91 commented 6 years ago

While testing the RGBDiff model with the command

    python test_models.py ucf101 RGBDiff /media/sda/nandan/data/ucf101_rgb_val_split_1.txt ucf101_bninception__rgbdiff_checkpoint.pth.tar --arch BNInception --save_scores SCORE_UCF101_1_RGBDIFF --workers=2

I'm getting this error:

    Traceback (most recent call last):
      File "test_models.py", line 130, in <module>
        rst = eval_video((i, data, label))
      File "test_models.py", line 117, in eval_video
        rst = net(input_var).data.cpu().numpy().copy()
      File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 357, in __call__
        result = self.forward(*input, **kwargs)
      File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 73, in forward
        outputs = self.parallel_apply(replicas, inputs, kwargs)
      File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/data_parallel.py", line 83, in parallel_apply
        return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
      File "/home/nandan/anaconda2/lib/python2.7/site-packages/torch/nn/parallel/parallel_apply.py", line 67, in parallel_apply
        raise output
    RuntimeError: cuda runtime error (2) : out of memory at /opt/conda/conda-bld/pytorch_1518238409320/work/torch/lib/THC/generic/THCStorage.cu:58

I'm using two K40 GPUs, each with 4742 MiB of global memory.
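
Side note for anyone comparing setups: here is a quick way to confirm per-device memory from inside PyTorch. This snippet is just an illustration; torch.cuda.get_device_properties requires PyTorch >= 0.4, so on the 0.3-era install shown in the traceback, nvidia-smi is the way to check.

    import torch

    # List each visible CUDA device with its total global memory.
    # torch.cuda.get_device_properties exists in PyTorch >= 0.4;
    # on older installs, use `nvidia-smi` from the shell instead.
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print("GPU %d: %s, %.0f MiB" % (i, props.name, props.total_memory / 1024.0 ** 2))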

Nandan91 commented 6 years ago

@yjxiong : I found that the runtime memory problem can be solved either by reducing the --test_crops value or by reducing the --test_segments value. My question is: which one should I prefer, i.e., which one won't hurt test accuracy?
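
For anyone else weighing the two flags: at test time, all crops and segments of a video are stacked into one batch for the forward pass, so activation memory grows roughly with the product test_crops × test_segments. A back-of-the-envelope sketch (assuming the repo's defaults of 25 segments and 10 crops; the helper function is purely illustrative):

    # Illustrative sketch: the effective per-video batch that reaches the
    # GPU in a single forward pass is the product of the two test-time flags.
    def effective_batch(test_crops, test_segments):
        return test_crops * test_segments

    print(effective_batch(10, 25))  # 250 -- the usual 10-crop / 25-segment protocol
    print(effective_batch(1, 25))   # 25  -- single center crop, ~10x less activation memory
    print(effective_batch(10, 8))   # 80  -- fewer segments instead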

wj320 commented 6 years ago

I've run into the same problem. How did you solve it? @Nandan532189

yjxiong commented 6 years ago

Reducing test_crops is the preferred option. It does not lead to a drastic decrease in accuracy.
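
Concretely, that means rerunning the test command with a lower crop count, e.g. --test_crops 1, which test_models.py maps to a single center crop (same paths as in the original command):

    python test_models.py ucf101 RGBDiff /media/sda/nandan/data/ucf101_rgb_val_split_1.txt \
        ucf101_bninception__rgbdiff_checkpoint.pth.tar --arch BNInception \
        --save_scores SCORE_UCF101_1_RGBDIFF --workers=2 --test_crops 1

This trades the 10-crop protocol for one crop per segment, which per the comment above costs little accuracy.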