v-iashin / MDVC

PyTorch implementation of Multi-modal Dense Video Captioning (CVPR 2020 Workshops)
https://v-iashin.github.io/mdvc

RuntimeError: CUDA out of memory. #23

Open taruntiwarihp opened 3 years ago

taruntiwarihp commented 3 years ago

```
(mdvc) root@ever:~/MDVC# python main.py --device_ids 0
log_path: ./log/0606173845
model_checkpoint_path: ./log/0606173845
Preparing dataset for train
Preparing dataset for val_1
Preparing dataset for val_2
Preparing dataset for val_1
using SubsAudioVideoGeneratorConcatLinearDoutLinear
initialization: xavier
Param Num: 178749320
17:42:32
train (0):   0%| | 1/1221 [00:01<39:18, 1.93s/it]
Traceback (most recent call last):
  File "main.py", line 573, in <module>
    main(cfg)
  File "main.py", line 276, in main
    cfg.modality, cfg.use_categories
  File "/root/MDVC/epoch_loop/run_epoch.py", line 308, in training_loop
    pred = model(feature_stacks, caption_idx, masks)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 150, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/MDVC/model/transformer.py", line 328, in forward
    memory_video = self.encoder_video(src_video, src_mask)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/MDVC/model/transformer.py", line 202, in forward
    x = layer(x, src_mask)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/MDVC/model/transformer.py", line 189, in forward
    x = self.res_layers[0](x, sublayer0)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/MDVC/model/transformer.py", line 152, in forward
    res = sublayer(res)
  File "/root/MDVC/model/transformer.py", line 186, in <lambda>
    sublayer0 = lambda x: self.self_att(x, x, x, src_mask)
  File "/root/miniconda3/envs/mdvc/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/MDVC/model/transformer.py", line 137, in forward
    att = attention(Q, K, V, mask)  # (B, H, seq_len, d_k)
  File "/root/MDVC/model/transformer.py", line 101, in attention
    sm_input = sm_input.masked_fill(mask == 0, -float('inf'))
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.94 GiB total capacity; 3.43 GiB already allocated; 12.94 MiB free; 71.07 MiB cached)
```

v-iashin commented 3 years ago

The default set of hyper-parameters requires a 12 GB GPU (1080Ti/2080Ti).
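For intuition on why the error is raised inside `attention`: the tensor being masked there has shape `(B, H, seq_len, seq_len)`, so its size grows quadratically with the sequence length and linearly with batch size and head count. The snippet below is a back-of-the-envelope estimate only; the batch size, head count, and sequence length are illustrative placeholders, not MDVC's actual defaults.

```python
# Rough estimate of the memory held by ONE float32 attention-score
# tensor of shape (B, H, seq_len, seq_len) -- the tensor that
# masked_fill operates on in the traceback above.
# All concrete numbers below are hypothetical, not MDVC's defaults.

def attention_scores_bytes(batch, heads, seq_len, bytes_per_el=4):
    """Bytes occupied by a single (B, H, L, L) attention matrix."""
    return batch * heads * seq_len * seq_len * bytes_per_el

# Placeholder values: batch 28, 8 heads, 300-step feature sequence.
mib = attention_scores_bytes(28, 8, 300) / 2**20
print(f'{mib:.1f} MiB per attention matrix')  # -> 76.9 MiB per attention matrix
```

Every encoder/decoder layer holds such tensors (plus gradients during training), so a few GiB of activations accumulate quickly on long sequences.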

taruntiwarihp commented 3 years ago

@v-iashin Is there any way to reduce the GPU memory requirement through the hyper-parameters? I only have a 4 GB GPU. Thanks.
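(Editorial note: I haven't checked which of MDVC's config options control memory, so the usual generic PyTorch levers are: a smaller batch size, shorter input sequences, fewer/narrower layers, or gradient accumulation to keep the effective batch size while shrinking the per-step footprint. The sketch below shows plain gradient accumulation on a toy model; the model, sizes, and `accum_steps` are illustrative, not MDVC code.)

```python
import torch
import torch.nn as nn

# Generic gradient-accumulation sketch (NOT MDVC-specific): run several
# small micro-batches and call optimizer.step() once, so peak activation
# memory corresponds to the micro-batch, not the full logical batch.
torch.manual_seed(0)
model = nn.Linear(8, 2)                     # toy stand-in for the captioning model
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

accum_steps = 4                             # 1 logical batch = 4 micro-batches
opt.zero_grad()
for _ in range(accum_steps):
    x = torch.randn(4, 8)                   # micro-batch of 4 instead of 16
    y = torch.randn(4, 2)
    loss = loss_fn(model(x), y) / accum_steps   # average over micro-batches
    loss.backward()                         # grads accumulate in .grad
opt.step()                                  # single update for the logical batch
```

The same idea applies to any training loop: only the forward/backward of one micro-batch is live on the GPU at a time, at the cost of proportionally more forward passes per update.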