sangyun884 / HR-VITON

Official PyTorch implementation for the paper High-Resolution Virtual Try-On with Misalignment and Occlusion-Handled Conditions (ECCV 2022).

The reason behind the warp block? #22

Open stereomatchingkiss opened 1 year ago

stereomatchingkiss commented 1 year ago

In networks.py, lines 133-135:

flow_norm = torch.cat([flow[:, :, :, 0:1] / ((iW/2 - 1.0) / 2.0), flow[:, :, :, 1:2] / ((iH/2 - 1.0) / 2.0)], 3)
warped_T1 = F.grid_sample(T1, flow_norm + grid, padding_mode='border')
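For reference, here is a minimal runnable sketch of these two lines in isolation. The tensor shapes, the `affine_grid` base grid, and the `align_corners=True` choice are my guesses for illustration, not taken from networks.py:

```python
import torch
import torch.nn.functional as F

# Assumed toy sizes; the real shapes come from the network, not from here.
N, C, iH, iW = 1, 3, 8, 8
T1 = torch.randn(N, C, iH, iW)          # feature map to be warped

# Base sampling grid in grid_sample's normalized coordinates, [-1, 1]
# (an identity affine transform samples exactly at the input pixel centers).
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, (N, C, iH, iW), align_corners=True)  # (N, iH, iW, 2)

# Flow in pixel units; zero flow should leave T1 unchanged.
flow = torch.zeros(N, iH, iW, 2)

# The two lines from networks.py: normalize the flow, add the base grid.
flow_norm = torch.cat([flow[:, :, :, 0:1] / ((iW / 2 - 1.0) / 2.0),
                       flow[:, :, :, 1:2] / ((iH / 2 - 1.0) / 2.0)], 3)
warped_T1 = F.grid_sample(T1, flow_norm + grid, padding_mode='border',
                          align_corners=True)

print(torch.allclose(warped_T1, T1, atol=1e-5))  # True: zero flow is the identity warp
```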

Why do you need to normalize the flow values? Why do you add flow_norm to grid? If we skip the normalization and/or the addition of the grid, what is the impact on the final results? Is this warp block described in any paper? I read your paper, but it does not give much explanation of this warp block (the W symbol in Fig. 10).

Thanks

Gzzgz commented 1 year ago

I have the same problem. Do you know what the expression `flow[:, :, :, 0:1] / ((iW/2 - 1.0) / 2.0)` means?

stereomatchingkiss commented 1 year ago

My guess: `F.grid_sample` expects sampling coordinates in the normalized range [-1, 1], and the base `grid` is already in that range. The flow is predicted in pixel units, so each component has to be divided by half of (width - 1) or (height - 1) to land in the same normalized units; the `iW/2` suggests the flow values are in half-resolution pixels.

Because the grid values are normalized, the flow needs to be normalized too before the two are summed.

Edit: just a guess, not sure whether this is correct.
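One way to check the guess: in `grid_sample`'s convention, a pixel offset `dx` at width `W` corresponds to `dx / ((W - 1) / 2)` in normalized coordinates. The small sketch below (shapes and the base grid are my own assumptions, and it uses full-resolution pixels, i.e. `(W - 1)/2` rather than the repo's `(iW/2 - 1)/2`) shifts a 1x5 row by one pixel:

```python
import torch
import torch.nn.functional as F

N, C, H, W = 1, 1, 1, 5
T1 = torch.arange(float(W)).view(N, C, H, W)   # values [0, 1, 2, 3, 4]

# Identity base grid in normalized coordinates, [-1, 1].
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
grid = F.affine_grid(theta, (N, C, H, W), align_corners=True)

# A flow of +1 pixel along x, normalized with dx / ((W - 1) / 2).
flow = torch.zeros(N, H, W, 2)
flow[..., 0] = 1.0
flow_norm = flow.clone()
flow_norm[..., 0] = flow[..., 0] / ((W - 1.0) / 2.0)

out = F.grid_sample(T1, flow_norm + grid, padding_mode='border',
                    align_corners=True)
# Shifted by exactly one pixel; the last sample is clamped by 'border'.
print(out.flatten().tolist())  # [1.0, 2.0, 3.0, 4.0, 4.0]
```

Without the division, a flow value of 1.0 would jump half the image width instead of one pixel; without adding `grid`, every output pixel would sample near the image center instead of near its own location.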

stereomatchingkiss commented 1 year ago

> I have the same problem. Do you know what the expression `flow[:, :, :, 0:1] / ((iW/2 - 1.0) / 2.0)` means?

A good explanation