mit-han-lab / anycost-gan

[CVPR 2021] Anycost GANs for Interactive Image Synthesis and Editing
https://hanlab.mit.edu/projects/anycost-gan/
MIT License

Custom image editing #11

Closed · zhanghongyong123456 closed this 2 years ago

zhanghongyong123456 commented 3 years ago

Question 1: How do I generate the latent code for a custom image when doing custom image editing?
Question 2: When editing attributes of a custom image, can I use all 40 attributes without retraining? I see that demo.py only uses eight attributes.

tonylins commented 3 years ago

Hi,

  1. To project a custom image, first perform face alignment with tools/align_face.py, and then use tools/project.py to project the aligned image into the latent space.
  2. Yes, you can use any of the 40 attributes by changing the attributes set at https://github.com/mit-han-lab/anycost-gan/blob/5be666daf0eed6189e792a3381c285c749bb4b1e/demo.py#L206 (see the sketch below). Note that some attributes may not lead to good editing results.
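For concreteness, here is a minimal sketch of how an edit over a projected latent might look once steps 1 and 2 are done. The file names, the attribute key, and the way the direction vectors are stored are assumptions for illustration; only the latent-plus-scaled-direction arithmetic mirrors what demo.py does, so check the repo's actual loading code before reusing this.

```python
# Hedged sketch: apply one of the 40 attribute directions to a projected W+ latent.
# File names and the 'Smiling' key are placeholders, not the repo's actual paths.
import torch

# latent code saved by tools/project.py for the aligned photo, e.g. shape [1, n_styles, 512]
latent = torch.load('projected_latent.pt')

# per-attribute direction vectors (assumed here to be a dict keyed by attribute name)
boundaries = torch.load('attribute_boundaries.pt')
direction = boundaries['Smiling']            # swap in any of the 40 attribute keys

strength = 2.0                               # sign and magnitude control the edit
edited_latent = latent + strength * direction.view(1, 1, -1)

# `edited_latent` can then be fed to the generator the same way demo.py renders its edits
torch.save(edited_latent, 'edited_latent.pt')
```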
zhanghongyong123456 commented 3 years ago

First, thank you very much for your prompt reply. Yes, I tried all of the attribute edits, and some of them did not work as well as they should. In addition, I am puzzled about which of the three models produces the most realistic results when generating latent codes. In practice I can only use two of the models to generate latent codes, anycost-car-config-f and anycost-ffhq-config-f. Some time ago I tried using StyleGAN2 to obtain latent codes for real images, but the resulting images were too unrealistic.

tonylins commented 2 years ago

Hi, we do not have a very solid benchmark for the image inversion part, since it is not the focus of this paper. However, we do notice that the FFHQ (face) dataset is easier to project, while the car dataset is more difficult. You can also refer to other work that specifically studies the image projection process.
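As a rough substitute for a benchmark, one way to sanity-check a single projection is to compare the aligned input against the image regenerated from the projected latent with a perceptual metric such as LPIPS. This is a minimal sketch, not something from the repo; the file names are placeholders and it assumes the lpips pip package is installed.

```python
# Quick reconstruction check for a projected image; file names below are placeholders.
import lpips
import torch
from PIL import Image
import torchvision.transforms.functional as TF

def load_as_tensor(path):
    # load an RGB image and rescale it to [-1, 1], the range LPIPS expects
    img = Image.open(path).convert('RGB')
    return TF.to_tensor(img).unsqueeze(0) * 2 - 1

original = load_as_tensor('aligned_face.png')       # output of tools/align_face.py
reconstruction = load_as_tensor('projection.png')   # image rendered from the projected latent

loss_fn = lpips.LPIPS(net='alex')                   # perceptual distance; lower means a closer match
with torch.no_grad():
    dist = loss_fn(original, reconstruction)
print(f'LPIPS distance: {dist.item():.4f}')
```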

I will close the issue for now. Feel free to reopen.