cuiaiyu / dressing-in-order

(ICCV'21) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik
https://cuiaiyu.github.io/dressing-in-order

Pretrained weights for flownet.pt #19

Closed sujeongcha closed 2 years ago

sujeongcha commented 2 years ago

Hi Aiyu,

Thanks for publishing such interesting work!

I was playing with demo.ipynb but got stuck at the point where we need to supply the path to the pretrained flownet. The pre-trained models you shared contain only three files: latest_net_Flow.pth, latest_net_E_attr.pth, and latest_net_G.pth. Are we supposed to use latest_net_Flow.pth for opt.flownet_path?

I tried it this way but got results inconsistent with yours. For example, this is the output for Layering - Single: [output screenshot attached]

cf. I used img.zip (low-resolution images) and DIOR_64.

cuiaiyu commented 2 years ago

For flownet: Yes, for the demo, please use latest_net_Flow.pth for opt.flownet_path (or you can set opt.flownet_path='' as well, because the saved checkpoints will be loaded later anyway); see the sketch below. For training, please set opt.flownet_path to the pre-trained flownet.
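For reference, a minimal sketch of the two demo options (the checkpoint directory below is just an assumed layout; adjust it to wherever you unpacked the weights):

```python
# Option A: point opt.flownet_path at the released flow checkpoint.
# "checkpoints/DIOR_64" is an assumed directory layout.
opt.flownet_path = "checkpoints/DIOR_64/latest_net_Flow.pth"

# Option B (demo only): leave it empty; the saved checkpoints are
# loaded afterwards and overwrite the flownet weights anyway.
opt.flownet_path = ""
```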

For the inconsistent results: The input image (the second one) looks skinnier than it should. I guess your low-resolution images are 256x256, which were preprocessed by padding 40px on the left and right sides of the 256x176 images? Maybe you can try removing the padding from the images, roughly as sketched below.
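A minimal sketch of cropping the padding off with Pillow, assuming the 256x256 images have 40px of padding on each side (the directory names are hypothetical):

```python
from pathlib import Path
from PIL import Image

src_dir = Path("img_256")      # hypothetical: folder of 256x256 padded images
dst_dir = Path("img_256x176")  # hypothetical output folder
dst_dir.mkdir(parents=True, exist_ok=True)

for path in src_dir.glob("*.jpg"):
    img = Image.open(path)
    # Drop 40px from the left and right: 256 - 2*40 = 176 wide.
    img.crop((40, 0, 216, 256)).save(dst_dir / path.name)  # (left, top, right, bottom)
```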

sujeongcha commented 2 years ago

Hi Aiyu,

Thanks for pointing that out. I added one more transformation, self.crop = transforms.CenterCrop((256, 176)), when loading the dataset to remove the padding. It's working fine now. For anyone else hitting this, a rough sketch is below.
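A sketch of where the crop sits in the loading path; the class and method names are hypothetical stand-ins for the repo's dataset code, and only the CenterCrop line itself is the actual fix:

```python
import torchvision.transforms as transforms
from PIL import Image

class PaddedImageLoader:  # hypothetical stand-in for the repo's dataset class
    def __init__(self):
        # Undo the 40px left/right padding: 256x256 -> 256x176.
        self.crop = transforms.CenterCrop((256, 176))
        self.to_tensor = transforms.ToTensor()

    def load(self, path):
        img = Image.open(path).convert("RGB")
        return self.to_tensor(self.crop(img))
```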

Closing this issue. Thanks for the prompt reply :)