wty-ustc / HairCLIP

[CVPR 2022] HairCLIP: Design Your Hair by Text and Reference Image
GNU Lesser General Public License v2.1
541 stars 68 forks

about color_ref_img_in_domain_path #24

Open Kai-0515 opened 2 years ago

Kai-0515 commented 2 years ago

Hello, thanks for your excellent work. I have a question about `color_ref_img_in_domain_path`. I finished pre-training with the arguments `--hairstyle_manipulation_prob=0 --color_manipulation_prob=1 --both_manipulation_prob=0 --hairstyle_text_manipulation_prob=0.5 --color_text_manipulation_prob=1`. How should I set `color_ref_img_in_domain_path`? Should that path be `logs/images_train`? I got the error below, and I don't know where to find these files. Looking forward to your reply.

The error is: FileNotFoundError: [Errno 2] No such file or directory: '/home/code/HairCLIP/logs/images_train/red hair/02951.jpg'
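The traceback suggests that `color_ref_img_in_domain_path` should point to a folder with one sub-directory per hair-color description, each holding images named after the CelebA-HQ files (e.g. `red hair/02951.jpg`). Here is a minimal sketch to inspect such a layout, with the `logs/images_train` path assumed from the error message:

```python
import os

# Layout implied by the error (paths assumed from the traceback):
#   <color_ref_img_in_domain_path>/<color description>/<image id>.jpg
#   e.g. logs/images_train/red hair/02951.jpg
root = "logs/images_train"

if not os.path.isdir(root):
    print(f"{root} does not exist yet; these images have to be generated first (see the answer below).")
else:
    for color_dir in sorted(os.listdir(root)):
        images = os.listdir(os.path.join(root, color_dir))
        print(f"{color_dir}: {len(images)} images")
```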

Zlin0530 commented 2 years ago

Hello, I have encountered the same problem. Have you solved it?

Zlin0530 commented 1 year ago

Thank you very much for your reply. I still don't quite understand what you mean. Where do I generate these images from? From the CelebA-HQ dataset?

wty-ustc commented 1 year ago

You can first train a HairCLIP that only edits the hair color, and then use it to generate these images.
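In other words, the in-domain color reference images are generated by the model itself. Below is a minimal sketch of that generation step, assuming the latent codes come from `pretrained_models/train_faces.pt` and using a hypothetical `edit_hair_color()` wrapper around the trained color-only mapper (the real inference call lives in the repo's scripts):

```python
import os

import torch
from PIL import Image

# Sketch only: build logs/images_train/<color description>/<index>.jpg with a
# HairCLIP mapper that was trained for text-driven color editing only.
# edit_hair_color() is a hypothetical wrapper around that mapper + the StyleGAN2
# generator; replace it with the repo's actual inference code.
COLOR_DESCRIPTIONS = ["red hair", "blond hair", "black hair", "brown hair"]  # assumed list
OUT_DIR = "logs/images_train"

def edit_hair_color(latent, text_description):
    """Hypothetical: edit one W+ latent code toward the given color description
    and return the decoded image as an HxWx3 uint8 numpy array."""
    raise NotImplementedError("plug in the trained color-only HairCLIP here")

# train_faces.pt is assumed to hold the e4e-inverted latents of the training faces.
latents = torch.load("../pretrained_models/train_faces.pt")

for color in COLOR_DESCRIPTIONS:
    os.makedirs(os.path.join(OUT_DIR, color), exist_ok=True)

for idx, latent in enumerate(latents):
    for color in COLOR_DESCRIPTIONS:
        edited = edit_hair_color(latent, color)
        Image.fromarray(edited).save(os.path.join(OUT_DIR, color, f"{idx:05d}.jpg"))
```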

Zlin0530 commented 1 year ago

> You can first train a HairCLIP that only edits the hair color, and then use it to generate these images.

Thanks for your reply; it's great work! If I understand correctly, you mean to train a HairCLIP that edits hair colour by text, and then use this model to edit the images in the CelebA-HQ dataset to generate the corresponding hair-colour images. Is that right?

wty-ustc commented 1 year ago

Absolutely right.

Zlin0530 commented 1 year ago

Thanks very much

vie131313 commented 1 year ago

Thanks for your reply and your stunning work. I would like to ask whether there is anything in this work that could still be improved; I am looking for a direction to think along. I would really appreciate any advice or learning direction you could give me. Thank you very much!

xuzhi0413 commented 1 year ago


Hi, can I ask you some questions? I want to know what `--hairstyle_ref_img_train_path`, `--hairstyle_ref_img_test_path`, `--color_ref_img_train_path`, and `--color_ref_img_test_path` mean, and where to find these pictures. I set `--latents_train_path=../pretrained_models/train_faces.pt` and `--latents_test_path=../pretrained_models/test_faces.pt`. Why do these options need to be set?
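For what it's worth, here is a rough sketch of what these arguments are generally expected to point to, based on the paths mentioned in this thread: the `*_ref_img_*` options name folders of real face photos used as hairstyle/color reference conditions, while `train_faces.pt` / `test_faces.pt` hold the e4e-inverted latent codes that the mapper edits. All concrete paths below are assumptions for illustration:

```python
import os

import torch

# All paths below are assumptions for illustration, not the repo's defaults.
# latents_*_path: e4e-inverted W+ latent codes of the train / test faces.
latents_train = torch.load("../pretrained_models/train_faces.pt")
latents_test = torch.load("../pretrained_models/test_faces.pt")
print("train latents:", tuple(latents_train.shape))  # typically (N, 18, 512)
print("test latents:", tuple(latents_test.shape))

# *_ref_img_*_path: folders of real face photos (e.g. aligned crops from
# CelebA-HQ) that serve as hairstyle / color reference conditions during
# training and testing.
for name in ("hairstyle_ref_img_train", "hairstyle_ref_img_test",
             "color_ref_img_train", "color_ref_img_test"):
    folder = os.path.join("..", "ref_images", name)  # hypothetical location
    if os.path.isdir(folder):
        print(f"{name}: {len(os.listdir(folder))} reference images")
    else:
        print(f"{name}: folder not prepared yet ({folder})")
```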