Closed: IQ17 closed this issue 4 years ago.
How about setting --latent_space_type=w, since you are using a boundary from the W space?
Yes, I tried both --latent_space_type=w and --latent_space_type=W, but the outputs are no longer faces.
I used stylegan_ffhq to generate images, and the generated images look great, so I think the model itself is correct.
That is because the current codebase randomly samples w codes from a Gaussian distribution instead of the actual W distribution. There are basically two solutions:
(1) Run generate_data.py first, which will save w.npy in the output folder. Then use the -i option in edit.py to load the generated w codes.
(2) See HiGAN, which provides a more robust generator that samples codes from the actual distribution.
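For context, solution (1) boils down to saving real W-space codes and then shifting them along the boundary's unit normal, which is the linear edit edit.py performs. A minimal NumPy sketch with placeholder random data (real w codes would come from generate_data.py's saved w.npy, and a real boundary from the .npy files in boundaries/):

```python
import numpy as np

# Placeholder data: pretend these are W codes sampled by generate_data.py.
rng = np.random.default_rng(0)
w_codes = rng.standard_normal((4, 512))
np.save("w.npy", w_codes)  # generate_data.py stores codes in a file like this

# edit.py (with the -i option) loads the codes and moves each one along
# the boundary's unit normal; the step size controls the attribute change.
w_loaded = np.load("w.npy")
boundary = rng.standard_normal((1, 512))
boundary /= np.linalg.norm(boundary)        # unit normal of the hyperplane

alphas = np.linspace(-3.0, 3.0, 5)          # interpolation steps
# Broadcast: (4, 1, 512) + (5, 512) -> (4, 5, 512): 5 edited versions per code.
edited = w_loaded[:, None, :] + alphas[:, None] * boundary
print(edited.shape)  # (4, 5, 512)
```

The key point is that w_loaded holds codes drawn from the generator's actual W distribution, rather than fresh Gaussian samples, so the edited codes stay on the face manifold.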
Thanks! Using solution (1), the results are good!
Hi, thanks for the paper and the results are impressive!
I tested the code with the "stylegan_ffhq" model and either "stylegan_ffhq_pose_boundary.npy" or "stylegan_ffhq_pose_w_boundary.npy", using the default settings, but the results are not very good.
The person's identity, age, and even gender change along with the pose. As for "stylegan_ffhq_pose_w_boundary.npy", the pose changes are more or less negligible.
python edit.py -m stylegan_ffhq -o results/stylegan_ffhq_pose_w_boundary -b ./boundaries/stylegan_ffhq_pose_w_boundary.npy -n 10
Is there anything that I have to adjust?