williamyang1991 / DualStyleGAN

[CVPR 2022] Pastiche Master: Exemplar-Based High-Resolution Portrait Style Transfer

RuntimeError: input must be contiguous #83

[Open] YiingWei opened this issue 1 year ago

YiingWei commented 1 year ago

When I run Stage 3 of Progressive Fine-Tuning (Fine-Tune DualStyleGAN on Target Domain) with the command

```
python3 -m torch.distributed.launch --nproc_per_node=2 --master_port=8765 finetune_dualstylegan.py --iter 1500 --batch 1 --ckpt ./checkpoint/generator-pretrain-003000.pt --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 --L2_reg_loss 0.015 --augment cartoon
```

it fails with the following error:

```
Traceback (most recent call last):
  File "finetune_dualstylegan.py", line 539, in <module>
    train(args, loader, generator, discriminator, g_optim, d_optim, g_ema, instyles, Simgs, exstyles, vggloss, id_loss, device)
  File "finetune_dualstylegan.py", line 227, in train
    r1_loss = d_r1_loss(real_pred, real_img)
  File "/root/DualStyleGAN-main/util.py", line 72, in d_r1_loss
    outputs=real_pred.sum(), inputs=real_img, create_graph=True
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py", line 204, in grad
    inputs, allow_unused)
  File "/usr/local/lib/python3.6/dist-packages/torch/autograd/function.py", line 89, in apply
    return self._forward_cls.backward(self, *args)  # type: ignore
  File "/root/DualStyleGAN-main/model/stylegan/op/upfirdn2d.py", line 143, in backward
    ctx.out_size,
  File "/root/DualStyleGAN-main/model/stylegan/op/upfirdn2d.py", line 42, in forward
    g_pad_y1,
RuntimeError: input must be contiguous
```

How can this be resolved?
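For reference, the call that fails is the R1 gradient penalty in util.py. Judging from the traceback, it matches the standard stylegan2-pytorch implementation that DualStyleGAN builds on (a sketch for context, not copied verbatim from this repo):

```python
from torch import autograd

def d_r1_loss(real_pred, real_img):
    # R1 regularization: penalize the gradient of the discriminator's
    # prediction with respect to the real images. create_graph=True sets
    # up the double-backward pass through upfirdn2d seen in the traceback.
    grad_real, = autograd.grad(
        outputs=real_pred.sum(), inputs=real_img, create_graph=True
    )
    grad_penalty = grad_real.pow(2).reshape(grad_real.shape[0], -1).sum(1).mean()
    return grad_penalty
```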

tlsdmswn01 commented 8 months ago

I encountered a similar issue. How about running it with a batch size greater than 1? That resolved the problem for me!
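For example, the original command with --batch raised from 1 to 4 (4 is an arbitrary choice here; pick whatever fits your GPU memory):

```
python3 -m torch.distributed.launch --nproc_per_node=2 --master_port=8765 \
    finetune_dualstylegan.py --iter 1500 --batch 4 \
    --ckpt ./checkpoint/generator-pretrain-003000.pt \
    --style_loss 0.25 --CX_loss 0.25 --perc_loss 1 --id_loss 1 \
    --L2_reg_loss 0.015 --augment cartoon
```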


williamyang1991 commented 8 months ago

Here is a solution: https://github.com/williamyang1991/DualStyleGAN/issues/39#issuecomment-1205487616
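For readers hitting this later: in stylegan2-pytorch-derived code, this error is typically fixed by making the tensor contiguous before it is handed to the compiled CUDA kernel in model/stylegan/op/upfirdn2d.py. A rough sketch of such a patch, assuming the vendored code follows rosinality's stylegan2-pytorch (I have not quoted the linked comment verbatim; check it for the exact fix):

```python
# Inside UpFirDn2dBackward.forward in model/stylegan/op/upfirdn2d.py
# (the "line 42" frame in the traceback). Hypothetical patch sketch:
# call .contiguous() on grad_output before the compiled CUDA op,
# which rejects non-contiguous inputs.
grad_input = upfirdn2d_op.upfirdn2d(
    grad_output.contiguous(),  # added: ensure a contiguous memory layout
    grad_kernel,
    down_x, down_y, up_x, up_y,
    g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1,
)
```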