lgqfhwy / LM-VTON

The official pytorch implementation of LM-VTON : "Toward Realistic Virtual Try-on Through Landmark-guided Shape Matching"

Bad results for the second stage of try-on images #3

Open Amazingren opened 3 years ago

Amazingren commented 3 years ago

Hi @lgqfhwy, thanks for your impressive work. I have tried to reproduce your results on the VITON dataset (referred to as Zalando in your paper). For the first stage, the warped cloth results look okay; however, the second-stage results are severely degraded. After finishing the dataset and environment preparation, here is what I did:

I believe I have set all the data paths correctly. As a result, I got the following try-on results: (result images attached)

These results are quite bad. I want to figure out whether my training procedure is correct, or whether you have any suggestions for this situation.

I am looking forward to your reply! Many thanks!

Amazingren commented 3 years ago

Also, I wonder whether it is correct not to add the warped-mask loss for the VITON dataset:

CUDA_VISIBLE_DEVICES=2 python ../viton_origin_refined_train_cloth_points.py --name RefinedGMM \
                        --datamode train \
                        --gpu_ids 0 \
                        --stage GMM \
                        --model OneRefinedGMM \
                        --keep_step 200000 \
                        --decay_step 200000 \
                        --tensorboard_dir ../tensorboard_results/tensorboard_densepose_add_point_add_vgg_warped_mask_loss_One_model_refined_gmm \
                        --checkpoint_dir ../cp_vton_viton_results/checkpoint_densepose_add_point_add_vgg_warped_mask_loss_One_model_refined_gmm \
                        --dataroot /data0/bren/projects/try-on/LM-VTON/data/viton_data/viton_resize \
                        --add_point_loss \
                        --add_vgg_loss
                        # --add_warped_mask_loss
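For reference, a warped-mask loss in CP-VTON-style pipelines is typically just an L1 penalty between the mask of the warped cloth and the cloth region parsed from the person image. The sketch below is a minimal, hypothetical version of what `--add_warped_mask_loss` might enable; the function name and exact formulation are assumptions, not the repo's actual implementation.

```python
import torch
import torch.nn.functional as F

def warped_mask_loss(warped_cloth_mask: torch.Tensor,
                     target_cloth_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical warped-mask loss: mean L1 distance between the cloth
    mask produced by the geometric matching (GMM) stage and the cloth
    region parsed from the target person image. Both inputs are expected
    as (N, 1, H, W) tensors with values in [0, 1]."""
    return F.l1_loss(warped_cloth_mask, target_cloth_mask)

# toy check: identical masks yield zero loss
m = torch.ones(1, 1, 4, 4)
print(warped_mask_loss(m, m).item())  # 0.0
```

Whether this term helps on VITON likely depends on the quality of the parsed cloth masks; if the parsing is noisy, the extra supervision can hurt the warp rather than help it.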
lgqfhwy commented 3 years ago

@Amazingren Sorry for the late reply. In the paper we only compare against VITON, CP-VTON, etc., where we achieve good performance, but we cannot guarantee that all results are perfect. For the result images you posted, I suspect you need to adjust some of the parameters you listed; those parameters are tuned for the MPV dataset. I recommend reproducing the results on the MPV dataset first and comparing them with CP-VTON and other published results. In the end, we only improved a little over CP-VTON and other methods; if you are looking for perfect results, you may need to retrain the model from scratch. Thank you for your attention, and if you have any problems, feel free to ask.