fahadshamshad / Clip2Protect

[CVPR 2023] Official repository of paper titled "CLIP2Protect: Protecting Facial Privacy using Text-Guided Makeup via Adversarial Latent Search".
https://fahadshamshad.github.io/Clip2Protect/

Some questions about the LFW experiment #4

Closed: lizi123321123 closed this issue 10 months ago

lizi123321123 commented 1 year ago

Sorry to bother you. I looked through your public code, and it is very well written. Following your process, I selected 6 images from the LFW dataset and obtained a latent code for each of them, as well as the concatenated latent codes (obtaining a latent code per image seemed to require modifying e4e's code). I then fed these into your code for fine-tuning and generation, but the protected images that came out, with their red lips, look very strange and differ greatly from the original images, and the inverted images produced in the first stage are not very sharp. Could you tell me where this problem lies?
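For reference, the latent-stacking step described above might look like the following minimal sketch. The file paths, the (18, 512) W+ latent shape, and the single stacked .pt output are assumptions for illustration, not the repository's confirmed interface:

```python
# Hypothetical sketch: stack per-image e4e W+ latents into one tensor.
# The (18, 512) shape is standard for StyleGAN2 at 1024x1024, but the
# exact format CLIP2Protect expects is an assumption here.
import torch

latent_files = [f"latents/img_{i}.pt" for i in range(6)]  # hypothetical paths

latents = []
for path in latent_files:
    w = torch.load(path)   # expected shape: (18, 512) or (1, 18, 512)
    if w.dim() == 3:       # drop a leading batch dimension if present
        w = w.squeeze(0)
    latents.append(w)

batch = torch.stack(latents, dim=0)  # shape: (6, 18, 512)
torch.save(batch, "latents/lfw_batch.pt")
```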

lizi123321123 commented 1 year ago

Sorry, I found that modifying e4e's code is in fact not needed to obtain a latent code for each image. However, the protected images from the LFW dataset still look strange.

fahadshamshad commented 1 year ago

Thanks for your interest in our work and for the detailed description of the steps you took. Please make sure that you apply the StyleGAN preprocessing (face alignment) to the LFW images before inverting them into the latent space. This preprocessing is not required for the CelebA-HQ dataset, as those images are already aligned. While our approach still delivers a high protection rate compared to baseline methods, the visual results on LFW may not match those on CelebA-HQ, given the dataset's more challenging nature.

Please let us know if you have any further questions or concerns.
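For readers who hit the same mismatch: the StyleGAN preprocessing mentioned above is typically the FFHQ-style face alignment. The sketch below approximates that recipe with dlib's 68-point landmark predictor; the predictor file path, the example input path, and the simplified single-transform crop are assumptions, and the official repositories ship a more complete script:

```python
# Simplified FFHQ-style face alignment sketch (an approximation of the
# StyleGAN preprocessing, not the repository's exact script).
# Requires: pip install dlib pillow numpy, plus the landmark model file
# "shape_predictor_68_face_landmarks.dat" (path below is an assumption).
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def align_face(path, output_size=1024):
    pil_img = Image.open(path).convert("RGB")
    img = np.array(pil_img)
    faces = detector(img, 1)
    if not faces:
        raise RuntimeError(f"no face detected in {path}")
    lm = np.array([(p.x, p.y) for p in predictor(img, faces[0]).parts()])

    # Eye centers and mouth center from the 68-point landmark layout.
    eye_left = lm[36:42].mean(axis=0)
    eye_right = lm[42:48].mean(axis=0)
    mouth_avg = (lm[48] + lm[54]) * 0.5

    # Oriented crop rectangle, following the FFHQ alignment recipe.
    eye_avg = (eye_left + eye_right) * 0.5
    eye_to_eye = eye_right - eye_left
    eye_to_mouth = mouth_avg - eye_avg
    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
    x /= np.hypot(*x)
    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
    y = np.flipud(x) * [-1, 1]
    c = eye_avg + eye_to_mouth * 0.1
    quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y])

    # Map the oriented quad onto a square output image.
    return pil_img.transform((output_size, output_size), Image.QUAD,
                             quad.flatten().tolist(), Image.BILINEAR)

# aligned = align_face("lfw/Some_Person/Some_Person_0001.jpg")  # hypothetical path
# aligned.save("aligned/Some_Person_0001.png")
```

Running the LFW images through an alignment step like this before e4e inversion should remove most of the pose and crop mismatch, which is a common cause of the blurry first-stage inversions described above.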