thunlp / PEVL

Source code for EMNLP 2022 paper “PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models”
MIT License

Runtime error when I run run_vcr_train.py #10

Closed huangsiyong closed 2 years ago

huangsiyong commented 2 years ago

When I run run_vcr_train.py on four 2080 Ti GPUs, I get:

 File "run_vcr_train.py", line 253, in main                                                                                                                                                                                                             
    config, args.training_mode, args, vcr_val_q2a_loader, vcr_val_qa2r_loader)                                                                                                                                                                           
  File "run_vcr_train.py", line 109, in train                                                                                                                                                                                                            
    loss_ita, loss_itm = model(images, text, alpha, itm_labels, mode='finetuning')                                                                                                                                                                       
  File "/anaconda3/envs/pevl/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl                                                                                                                           
    result = self.forward(*input, **kwargs)                                                                                                                                                                                                              
  File "/anaconda3/envs/pevl/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 692, in forward                                                                                                                        
    if self.reducer._rebuild_buckets():                                        
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one.  This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing t
he keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).

In my opinion, the parameters of the momentum model are not used in producing the loss. How should I deal with this?
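As far as I can tell, the momentum model is only updated by an exponential-moving-average step under torch.no_grad(), something like the sketch below (names here are hypothetical), so no gradients ever reach its parameters and DDP flags them as unused:

```python
import torch

@torch.no_grad()
def momentum_update(model, momentum_model, m=0.995):
    # EMA step: the momentum parameters track the online parameters but never
    # receive gradients, so DDP sees them as "unused" when the loss is built.
    for param, param_m in zip(model.parameters(), momentum_model.parameters()):
        param_m.data = param_m.data * m + param.data * (1.0 - m)
```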

Also, is it true that the only difference between the fine-tuning code and the pre-training code on the VCR task is the removal of the MLM and soft losses?

Looking forward to your help, thanks!

qyc-98 commented 2 years ago

Hi, please check my update to run_vcr_train.py; you can set args.find_unused_parameters=True to avoid the error.
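For reference, the flag only needs to reach the DDP constructor. A minimal sketch, assuming the model has already been moved to the local device and `args.gpu` holds the local rank:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# With unused-parameter detection enabled, DDP skips gradient reduction for
# weights (e.g. the momentum encoder) that do not contribute to the loss.
model = DDP(model,
            device_ids=[args.gpu],
            find_unused_parameters=args.find_unused_parameters)
```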

For the second question: yes, we only use the ITA and ITM losses for fine-tuning.
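Matching the call in your traceback, the relevant lines of the fine-tuning loop reduce to roughly this sketch (equal weighting of the two terms is an assumption here, and the surrounding loop variables come from the training code):

```python
# Fine-tuning objective: image-text alignment (contrastive) + image-text
# matching, simply summed; no MLM or soft-label term.
loss_ita, loss_itm = model(images, text, alpha, itm_labels, mode='finetuning')
loss = loss_ita + loss_itm
loss.backward()
```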

huangsiyong commented 2 years ago

Got it! Thanks!