wty-ustc / HairCLIP

[CVPR 2022] HairCLIP: Design Your Hair by Text and Reference Image
GNU Lesser General Public License v2.1

The generated image is quite different from the reference image #9

Closed 1273545169 closed 2 years ago

1273545169 commented 2 years ago

I tested the model and found that the hairstyle of the generated image is quite different from that of the reference image. Here is my test script; the reference image is selected from the CelebAMask-HQ dataset. Is there a problem with my test process?

```
python scripts/inference.py \
  --exp_dir=../outputs/0321/ \
  --checkpoint_path=../pretrained_models/hairclip.pt \
  --latents_test_path=../pretrained_models/test_faces.pt \
  --editing_type=both \
  --input_type=image_image \
  --color_ref_img_test_path=../input/16 \
  --hairstyle_ref_img_test_path=../input/16 \
  --num_of_ref_img 1
```

[image attachment: test result]
wty-ustc commented 2 years ago

As stated in the limitations of our paper, the hairstyle transfer embedding is provided by CLIP's image encoder, which may not be expressive enough to characterize the fine-grained structure of a hairstyle, so the results may sometimes be unsatisfactory. You can try other reference images, train HairCLIP specifically for hairstyle transfer, or add an optimization strategy.
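For context, one way to realize the "optimization strategy" mentioned above is to refine the edited latent code by gradient descent so that the generated image moves closer to the reference image in CLIP embedding space. The sketch below is only an illustration under assumed names: `generator` stands for a pretrained StyleGAN2 generator with a `synthesis(w)` call, and `w_edited` for the W+ latent produced by HairCLIP; neither the function names nor the loss weights come from this repository.

```python
# Hypothetical CLIP-guided refinement of an edited latent code.
# `generator` (StyleGAN2-like, with .synthesis(w)) and `w_edited` are assumptions.
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def clip_embed(images):
    # CLIP expects 224x224 inputs; proper CLIP normalization is omitted for brevity.
    images = torch.nn.functional.interpolate(images, size=224, mode="bilinear")
    return clip_model.encode_image(images)

def refine_latent(generator, w_edited, ref_img, steps=50, lr=0.01):
    w = w_edited.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    with torch.no_grad():
        ref_feat = clip_embed(ref_img)  # target embedding of the reference image
    for _ in range(steps):
        img = generator.synthesis(w)  # assumed StyleGAN2-style synthesis call
        # Pull the generated image toward the reference in CLIP space...
        loss = 1 - torch.nn.functional.cosine_similarity(clip_embed(img), ref_feat).mean()
        # ...while staying close to HairCLIP's original edit to preserve identity.
        loss = loss + 0.1 * ((w - w_edited) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return w.detach()
```

A global CLIP image loss like this cannot isolate the hair region; in practice one would also mask the loss to the hair (e.g., with a face parser) and tune the regularization weight.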

1273545169 commented 2 years ago

Thank you so much