Closed dadaxxxx closed 1 year ago
We directly use the editing vectors (https://github.com/genforce/interfacegan/tree/master/boundaries, https://github.com/zhujiapeng/LowRankGAN/tree/master/directions/ffhq1024) found by other papers. Please refer to our paper and their papers for the details.
But I find that editing_w_age.pt is not the same as stylegan_ffhq_age_w_boundary.npy or stylegan_celebahq_age_w_boundary.npy?
While trying to train the video editing model, I ran into the same question. The editing_w vectors this repo provides (such as age and hair) have shape [2,18,512], while the boundary vectors from InterFaceGAN have shape [1,512]. Could you explain what needs to be done at this stage?
I just replicate the [1,512] vector 18 times to obtain an [18,512] one. Different layers control different styles: the last few layers control color and textures, while the first few layers control pose. So if you just want to change the expression without changing the color/textures or the pose, you can set those layers to 0:
directions = torch.tensor(np.load('./directions/expression.npy')).reshape(1,1,512).repeat(1,18,1).float().to(device) * 2
directions[:,0:3] = 0 # don't want to change pose
directions[:,7:] = 0 # don't want to change color or textures
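To make the replicate-and-mask step above concrete, here is a minimal NumPy sketch. The boundary array stands in for a loaded InterFaceGAN .npy file, and `w_plus` and the edit strength are hypothetical placeholders, not the repo's actual API:

```python
import numpy as np

# Stand-in for a [1, 512] InterFaceGAN boundary,
# e.g. np.load('stylegan_ffhq_age_w_boundary.npy')
boundary = np.random.RandomState(0).randn(1, 512).astype(np.float32)

# Replicate the single direction across all 18 style layers -> [1, 18, 512]
directions = np.repeat(boundary.reshape(1, 1, 512), 18, axis=1)

# Zero out the layers we don't want to touch:
directions[:, 0:3] = 0  # first few layers control pose
directions[:, 7:] = 0   # later layers control color/textures

# Apply the edit to a W+ latent code (hypothetical zeros here) with strength 2
w_plus = np.zeros((1, 18, 512), dtype=np.float32)
w_edited = w_plus + 2.0 * directions
```

Only layers 3-6 of `w_edited` are moved along the boundary direction; the pose and color/texture layers are left unchanged.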
What is the function of editing_w? Is it trained from data of different ages?
Or is styleganex_edit_age.pt trained from data of different ages?
How is the age change learned, and what is the principle behind it? If I want to train a model on my own data of different ages, what should I do?