dvlab-research / Video-P2P

Video-P2P: Video Editing with Cross-attention Control
https://video-p2p.github.io/

run_tuning.py cannot work #4

Closed · FunnyClown closed this issue 1 year ago

FunnyClown commented 1 year ago

```
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel, and by making sure all forward function outputs participate in calculating loss. If you already have done the above, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable).

Parameters which did not receive grad for rank 0: down_blocks.2.attentions.0.transformer_blocks.0.attn1.to_q.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn_temp.to_out.0.bias, down_blocks.1.attentions.1.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn2.to_q.weight, down_blocks.1.attentions.1.transformer_blocks.0.attn1.to_q.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn_temp.to_out.0.bias, down_blocks.1.attentions.0.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_q.weight, down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn_temp.to_out.0.bias, down_blocks.0.attentions.0.transformer_blocks.0.attn1.to_q.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn2.to_q.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.0.attentions.0.transformer_blocks.0.attn_temp.to_out.0.bias, down_blocks.0.attentions.1.transformer_blocks.0.attn1.to_q.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn2.to_q.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.0.attentions.1.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn2.to_q.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.2.attentions.0.transformer_blocks.0.attn_temp.to_out.0.bias, down_blocks.2.attentions.1.transformer_blocks.0.attn1.to_q.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn2.to_q.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn_temp.to_q.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn_temp.to_k.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn_temp.to_v.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn_temp.to_out.0.weight, down_blocks.2.attentions.1.transformer_blocks.0.attn_temp.to_out.0.bias

Parameter indices which did not receive grad for rank 0: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41
```
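For reference, the workaround the traceback suggests can be applied where the trainer constructs its training wrapper. Below is a minimal sketch, assuming the script wraps the model with HuggingFace Accelerate (as Tune-A-Video-style trainers typically do); the exact wiring in run_tuning.py may differ:

```python
# Minimal sketch (not the repo's actual code): pass find_unused_parameters=True
# through Accelerate's DDP kwargs handler so DDP skips gradient reduction for
# parameters that did not contribute to the loss (e.g. the attention
# projections listed in the traceback).
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```

Note that this only silences the DDP check; if those parameters are meant to be trained, the underlying cause (their outputs not reaching the loss) still needs investigating.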

ShaoTengLiu commented 1 year ago

I haven't encountered this error before. Could you share your environment and GPU type? You can export the environment with `conda env export > env.yml`.
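Alongside the conda export, a small Python snippet like the following prints the other requested details (standard interpreter and PyTorch calls only, nothing repo-specific):

```python
# Print the interpreter, PyTorch, CUDA, and GPU details for the bug report.
import sys
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__, "| cuda:", torch.version.cuda)
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```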

ShaoTengLiu commented 1 year ago

I think this is caused by an environment difference. I can help you debug if you provide more information.

I will temporarily close this issue. You are welcome to reopen it if you still have this problem.