wangxiang1230 / OadTR

Code for our ICCV 2021 paper "OadTR: Online Action Detection with Transformers".

Runtime error on training #16

Open · nishanthrachakonda opened this issue 2 years ago

nishanthrachakonda commented 2 years ago

I am getting the following error when I run the OadTR training command:

```
python main.py --num_layers 3 --decoder_layers 5 --enc_layers 64 --output_dir models/en_3_decoder_5_lr_drop_1
```

```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable).
```

Could you let me know if I am missing something while running this command?
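
For reference, workaround (1) from the error message corresponds to the snippet below. This is a minimal, self-contained sketch with a stand-in model and a single-process `gloo` group so it runs as-is; none of these names come from `main.py`:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn

# Self-contained single-process group so the sketch runs as-is.
os.environ.setdefault('MASTER_ADDR', '127.0.0.1')
os.environ.setdefault('MASTER_PORT', '29500')
dist.init_process_group(backend='gloo', rank=0, world_size=1)

# Stand-in for the OadTR network (illustrative only).
model = nn.Linear(10, 2)

# Workaround (1): tell DDP to tolerate parameters that receive no gradient
# in a given iteration. This adds an extra graph traversal per step but
# avoids the "Expected to have finished reduction" error.
model = torch.nn.parallel.DistributedDataParallel(
    model,
    find_unused_parameters=True,
)

dist.destroy_process_group()
```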

wangxiang1230 commented 2 years ago

> `torch.nn.parallel.DistributedDataParallel`

There is no need to train with distributed multi-GPU. Turn off distributed multi-card training; the `torch.nn.parallel.DistributedDataParallel()` wrapper is redundant and can be removed.
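
A minimal sketch of that change, with a stand-in model and a hypothetical `distributed` flag (neither is taken from `main.py`): the model is simply left unwrapped when distributed training is not requested.

```python
import torch
import torch.nn as nn

# Stand-in for the OadTR network (illustrative only).
model = nn.Linear(10, 2)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Hypothetical flag: wrap in DistributedDataParallel only when distributed
# training is actually requested. For single-GPU training, leave it False
# and train the plain model, which avoids the reduction error entirely.
distributed = False
if distributed:
    model = torch.nn.parallel.DistributedDataParallel(model)
```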