Closed (1273545169) · 2 years ago
As noted in the limitations section of our paper, the hairstyle transfer embedding comes from CLIP's image encoder, which may not capture the fine-grained structure of hairstyles well, so the results can sometimes be unsatisfactory. You can try other reference images, train HairCLIP specifically for hairstyle transfer, or add optimization strategies.
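A hypothetical sketch of the "optimization strategies" idea: after the feed-forward HairCLIP edit, iteratively refine the latent code so that the hairstyle embedding of the generated image moves closer to the reference image's embedding. The names here (`embed`, `refine_latent`) are illustrative stand-ins, not part of the HairCLIP codebase, and the encoder is a toy linear map so the loop stays self-contained; a real version would backpropagate through the generator and CLIP's image encoder.

```python
def embed(latent):
    # Stand-in for "generate an image from the latent, then apply the
    # CLIP image encoder". Here: a toy linear map x -> 2x + 1.
    return [2.0 * x + 1.0 for x in latent]

def refine_latent(latent, ref_embedding, steps=200, lr=0.01):
    """Gradient descent on 0.5 * ||embed(latent) - ref_embedding||^2."""
    for _ in range(steps):
        diff = [e - r for e, r in zip(embed(latent), ref_embedding)]
        # For the toy encoder, d/dx of 0.5*(2x + 1 - r)^2 = 2*(2x + 1 - r).
        latent = [x - lr * 2.0 * d for x, d in zip(latent, diff)]
    return latent

ref = [0.5, -1.0, 3.0]   # hypothetical reference hairstyle embedding
w = [0.0, 0.0, 0.0]      # initial latent from the feed-forward edit
w = refine_latent(w, ref)
```

With a real CLIP encoder the loss would be computed in `torch` with `requires_grad=True` on the latent, but the refinement loop has the same shape.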
Thank you so much
I ran a test and found that the hairstyle of the generated image differs noticeably from that of the reference image. Here is my test script; the reference image is taken from the CelebAMask-HQ dataset. Is there a problem with my test process?
```shell
python scripts/inference.py \
    --exp_dir=../outputs/0321/ \
    --checkpoint_path=../pretrained_models/hairclip.pt \
    --latents_test_path=../pretrained_models/test_faces.pt \
    --editing_type=both \
    --input_type=image_image \
    --color_ref_img_test_path=../input/16 \
    --hairstyle_ref_img_test_path=../input/16 \
    --num_of_ref_img 1
```