NVIDIA / vid2vid

Pytorch implementation of our method for high-resolution (e.g. 2048x1024) photorealistic video-to-video translation.

The return tensors of CompositeGenerator don't match the tensors in train.py #57

Closed donghaoye closed 6 years ago

donghaoye commented 6 years ago

network.py line 199:
return img_final, flow, weight, img_raw, img_feat, flow_feat, img_fg_feat

train.py line 102: fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last = modelG(input_A, input_B, inst_A, fake_B_last)

tcwang0509 commented 6 years ago

Why do they need to match? modelG calls the generator network, but is not the network itself.
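In other words, modelG is a wrapper object whose forward() calls the generator network, then repacks and augments the network's raw outputs before handing them back to train.py, so the two tuples need not line up one-to-one. Below is a minimal, hypothetical sketch of that wrapper pattern; the class names and tensor plumbing are illustrative assumptions, not the actual vid2vid code.

```python
# Sketch of the wrapper pattern: the model object (modelG) owns the generator
# network, but its forward() reorders the network's raw outputs before
# returning them to the training loop. Names are illustrative only.
import torch
import torch.nn as nn


class CompositeGeneratorSketch(nn.Module):
    """Stand-in for the generator network defined in networks.py."""

    def forward(self, x):
        # Placeholder tensors; the real network computes these.
        img_final = x
        flow = torch.zeros_like(x)
        weight = torch.ones_like(x)
        img_raw = x
        img_feat = flow_feat = img_fg_feat = None
        # Order of the *network's* return tuple.
        return img_final, flow, weight, img_raw, img_feat, flow_feat, img_fg_feat


class ModelGSketch(nn.Module):
    """Stand-in for the model wrapper that train.py actually calls."""

    def __init__(self):
        super().__init__()
        self.netG = CompositeGeneratorSketch()

    def forward(self, input_A, input_B, inst_A, fake_B_last):
        # Call the generator network ...
        img_final, flow, weight, img_raw, *_ = self.netG(input_A)
        # ... then repack the results in the order the training loop expects.
        fake_B, fake_B_raw = img_final, img_raw
        real_A, real_Bp = input_A, input_B
        fake_B_last = fake_B.detach()
        return fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last


if __name__ == "__main__":
    modelG = ModelGSketch()
    x = torch.randn(1, 3, 4, 4)
    outputs = modelG(x, x, None, None)
    print(len(outputs))  # 7 values, in the ordering train.py unpacks
```

So the tuple unpacked in train.py reflects whatever modelG's forward() chooses to return, not the raw signature of the generator network.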

donghaoye commented 6 years ago

The 2nd return value of modelG is flow, but in train.py you unpack it as fake_B_raw.

network.py line 199: return img_final, flow, weight, img_raw, img_feat, flow_feat, img_fg_feat

train.py line 102: fake_B, fake_B_raw, flow, weight, real_A, real_Bp, fake_B_last = modelG(input_A, input_B, inst_A, fake_B_last)

donghaoye commented 6 years ago

Sorry. I get it.