cuiaiyu / dressing-in-order

(ICCV'21) Official code of "Dressing in Order: Recurrent Person Image Generation for Pose Transfer, Virtual Try-on and Outfit Editing" by Aiyu Cui, Daniel McKee and Svetlana Lazebnik
https://cuiaiyu.github.io/dressing-in-order

Results of demo output #13

Closed fanchunpeng closed 2 years ago

fanchunpeng commented 2 years ago

The results of the demo output are as follows: [Image 1] [Image 2] [Image 3] [Image 4] Hello, how can I improve the above results?

  1. The semantic parsing model used here has no mask information for the neck. What do you suggest doing about the neck mask?

  2. The code distinguishes hair from hat, and the hat is treated as background. Would it be better to combine the hat and hair information?

  3. The resolution of the results is low and the images are not clear. Is there any way to upscale the output to improve the resolution?

  4. In the test results, the face information is inconsistent. Would you recommend adding facial information when changing the hair?

cuiaiyu commented 2 years ago
  1. Unfortunately, the neck mask is a data issue: the LIP human parsing label set doesn't have a 'neck' class. If you really want a neck mask, you could run another human parser whose label set includes 'neck'.
  2. In the DeepFashion dataset, 'hat' appears very rarely, so we didn't pick it up as a garment. We'd also be interested to know what would happen if you combined hat with hair or treated hat as a separate garment. Let us know if you try this. ;-)
  3. The resolutions in your results look lower than they should be. Please double-check that your data input/output is set up correctly (perhaps there is some compression on your image grid). The trained model supports 256x176 images and should render clear results.
  4. Better regularization of the face representation is always a good topic for further investigation.
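If you want to experiment with point 2, a minimal sketch of merging the hat label into hair in a parse map. The label ids below are hypothetical placeholders, not the actual ids used by this repo's parser; check which ids your human parser emits before using this.

```python
import numpy as np

# Hypothetical label ids for illustration only; verify against your parser.
HAT_ID, HAIR_ID = 1, 2

def merge_hat_into_hair(parse_map: np.ndarray) -> np.ndarray:
    """Return a copy of the parse map with hat pixels relabeled as hair."""
    merged = parse_map.copy()
    merged[merged == HAT_ID] = HAIR_ID
    return merged

# Tiny example map (0 = background):
parse = np.array([[0, 1, 1],
                  [2, 2, 0]])
merged = merge_hat_into_hair(parse)  # hat pixels (1) become hair (2)
```

The same relabeling idea would also let you remap a third-party parser's 'neck' class onto whichever label this pipeline expects.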
fanchunpeng commented 2 years ago

The provided pretrained model Dior_64 gives the above results; it isn't the best model, is it? I want to increase the resolution and sharpness, but 256x176 feels relatively low. I hope to improve it to 512x352. Do you have any suggestions?

cuiaiyu commented 2 years ago

Since we only support and test 256x176, if you want good results at 512x352, you might look at the HD virtual try-on works and combine them with our recurrent pipeline.
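As a quick comparison point before wiring in an HD try-on model, one could naively upscale the 256x176 output; a sketch with Pillow (bicubic interpolation adds no real detail, it only resamples, and the output filename here is hypothetical):

```python
from PIL import Image

# Naive baseline: bicubic 2x upscale of a 256x176 (H x W) output to 512x352.
# Unlike an HD try-on model, this adds no new detail; it is only a reference.
def upscale_2x(im: Image.Image) -> Image.Image:
    w, h = im.size  # PIL reports (width, height)
    return im.resize((w * 2, h * 2), Image.BICUBIC)

small = Image.new("RGB", (176, 256))  # stand-in for a generated output
big = upscale_2x(small)
big.save("output_2x.png")  # hypothetical path; PNG is lossless, JPEG can blur
```

Saving as PNG also rules out the compression issue mentioned earlier in the thread.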

lschaupp commented 2 years ago

Would it be hard to retrain the model for higher resolutions?