oneThousand1000 / Facial-Structure-Editing-of-Portrait-Images-via-Latent-Space-Classifications

(SIGGRAPH 2021) Coarse-to-Fine: Facial Structure Editing of Portrait Images via Latent Space Classifications.
GNU General Public License v3.0

How can I generate the {name}_wp.npy data with the StyleGAN2 projector? Does this project include a script for it? #1

Closed wslyyy closed 2 years ago

wslyyy commented 2 years ago

As the title says.

oneThousand1000 commented 2 years ago

Please see https://github.com/NVlabs/stylegan2/blob/master/run_generator.py or https://github.com/danielroich/PTI/tree/main/training/projectors

wslyyy commented 2 years ago

How long does warping an image take on the GPU? On the CPU it takes 13 seconds per image; on the GPU, each image takes 4 seconds in my tests. Is this normal?

wslyyy commented 2 years ago

Sorry, I'm a beginner. I used the Gs.pth and vgg16.pth you provided and then ran project_real_images in run_projector.py from this project, but I got an error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x64 and 1728x1). I traced it to ./styleGAN2_model/stylegan2_pytorch/stylegan2/external_models/lpips.py, line 91: dist += linear(torch.mean((_x0 - _x1) ** 2, dim=[-1, -2])). Can you tell me how to generate {name}_wp.npy from my own aligned face image, not a random face?

oneThousand1000 commented 2 years ago

Hi! Please try using the pretrained stylegan2 and lpips models from https://github.com/NVlabs/stylegan2, and follow the projection guidance at https://github.com/NVlabs/stylegan2#projecting-images-to-latent-space

oneThousand1000 commented 2 years ago

The Gs.pth and vgg16.pth I provided may not be compatible with the code in https://github.com/NVlabs/stylegan2.

wslyyy commented 2 years ago

Thanks! But I am using styleGAN2_model/stylegan2_pytorch from your project directory. Are the Gs.pth and vgg16.pth you provided not compatible with that code?

oneThousand1000 commented 2 years ago

The pretrained model is compatible with styleGAN2_model/stylegan2_pytorch in my project, but it possibly cannot be loaded by the official stylegan2 repo at https://github.com/NVlabs/stylegan2. (I haven't checked the compatibility, so you'd better run the whole official stylegan2 project to get the wp latent code.)
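
To illustrate the difference, here is a rough sketch only: the two codebases use different serialization formats, so the checkpoints are not interchangeable. The models.load helper below is my assumption about the stylegan2_pytorch port's loading API, so verify it against the port's own scripts:

```python
import pickle
import dnnlib.tflib as tflib
from stylegan2 import models  # the styleGAN2_model/stylegan2_pytorch port

# Official NVlabs/stylegan2 (TensorFlow): networks are pickled TF graphs.
tflib.init_tf()
with open('stylegan2-ffhq-config-f.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f, encoding='latin1')

# The PyTorch port instead serializes its own torch modules, so Gs.pth
# can only be loaded by the port's helper, not by the official repo.
G = models.load('Gs.pth')
```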

wslyyy commented 2 years ago

Hi! I followed the projection guidance at https://github.com/NVlabs/stylegan2#projecting-images-to-latent-space. First, I ran:

```
python3 dataset_tool.py create_from_images ~/datasets/my-custom-dataset ~/my-custom-images
```

Then I ran:

```
python3 run_projector.py project-real-images --network=gdrive:networks/stylegan2-ffhq-config-f.pkl --dataset=my-custom-dataset --data-dir=~/datasets
```

But I got:

```
Local submit - run_dir: results/00009-project-real-images
dnnlib: Running run_projector.project_real_images() on localhost...
Loading networks from "gdrive:networks/stylegan2-ffhq-config-f.pkl"...
Setting up TensorFlow plugin "fused_bias_act.cu": Preprocessing... Loading... Done.
Setting up TensorFlow plugin "upfirdn_2d.cu": Preprocessing... Loading... Done.
Loading images from "my-custom-dataset"...
Traceback (most recent call last):
  File "run_projector.py", line 146, in <module>
    main()
  File "run_projector.py", line 141, in main
    dnnlib.submit_run(sc, func_name_map[subcmd], **kwargs)
  File "/root/stylegan2/dnnlib/submission/submit.py", line 343, in submit_run
    return farm.submit(submit_config, host_run_dir)
  File "/root/stylegan2/dnnlib/submission/internal/local.py", line 22, in submit
    return run_wrapper(submit_config)
  File "/root/stylegan2/dnnlib/submission/submit.py", line 280, in run_wrapper
    run_func_obj(**submit_config.run_func_kwargs)
  File "/root/stylegan2/run_projector.py", line 62, in project_real_images
    dataset_obj = dataset.load_dataset(data_dir=data_dir, tfrecord_dir=dataset_name, max_label_size=0, repeat=False, shuffle_mb=0)
  File "/root/stylegan2/training/dataset.py", line 192, in load_dataset
    dataset = dnnlib.util.get_obj_by_name(class_name)(**kwargs)
  File "/root/stylegan2/training/dataset.py", line 53, in __init__
    assert os.path.isdir(self.tfrecord_dir)
AssertionError
```

Should I download the FFHQ dataset instead of generating a custom dataset?

oneThousand1000 commented 2 years ago

Hi, please skip the "Preparing datasets" step; it is only used to prepare the training dataset (FFHQ) for stylegan2.

The dataset_obj at line 62 in run_projector.py is used to load your target image. You can directly read the target image instead of using dataset.load_dataset, for example:

Read the target image (resize to 1024x1024, convert BGR to RGB, add a batch dimension, reorder to NCHW, and scale to [-1, 1]):

```python
import cv2
import numpy as np

images = cv2.resize(cv2.imread(path), (1024, 1024))[:, :, ::-1][np.newaxis].transpose(0, 3, 1, 2) / 255 * 2 - 1
```

Then call project_image:

```python
project_image(proj, targets=images, png_prefix=dnnlib.make_run_dir_path('image%04d-' % image_idx), num_snapshots=num_snapshots)
```
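
Putting it together, the modified flow could look like the sketch below. It uses the projector.Projector API from the official repo's projector.py; the function name and file paths are just illustrative:

```python
import cv2
import numpy as np

import pretrained_networks
import projector  # both from the official NVlabs/stylegan2 repo


def project_single_image(network_pkl, image_path, out_npy):
    # Load the pretrained generator (this also initializes TensorFlow).
    _G, _D, Gs = pretrained_networks.load_networks(network_pkl)

    proj = projector.Projector()
    proj.set_network(Gs)

    # Preprocess the aligned face as above: NCHW, RGB, range [-1, 1].
    images = cv2.resize(cv2.imread(image_path), (1024, 1024))[:, :, ::-1][np.newaxis].transpose(0, 3, 1, 2) / 255 * 2 - 1

    # Iterative optimization (proj.num_steps = 1000 by default).
    proj.start(images)
    while proj.get_cur_step() < proj.num_steps:
        proj.step()

    # Save the W+ latent code, shape [1, 18, 512], as the {name}_wp.npy file.
    np.save(out_npy, proj.get_dlatents())


project_single_image('gdrive:networks/stylegan2-ffhq-config-f.pkl', 'face.png', 'face_wp.npy')
```
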
wslyyy commented 2 years ago

Thanks! It works! I successfully got xxx_wp.npy, but it runs too slowly: each image takes several minutes on the GPU machine. During the run it prints 0 / 1000 ... and the following warning:

```
2022-08-15 06:43:20.744993: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
```
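
Side note: the projector's runtime scales with its step count, and the official implementation defaults to 1000 optimization steps per image. If speed matters more than inversion fidelity, the count can simply be lowered before starting the optimization, for example:

```python
proj = projector.Projector()
proj.num_steps = 300  # default is 1000; fewer steps run faster but give a rougher inversion
proj.set_network(Gs)
```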

wslyyy commented 2 years ago

Hi, in order to speed up GAN inversion of real faces, I intend to replace the official iterative-optimization method with encoder inference and a hypernetwork, such as pSp (https://arxiv.org/pdf/2008.00951.pdf) or HyperStyle (https://arxiv.org/abs/2111.15666). Have you done any research in this area? Could you give me some advice? Thank you very much!

oneThousand1000 commented 2 years ago

Hi, I recommend using encoder4editing, as this encoder can keep the inversions close to the regions of latent space that StyleGAN was originally trained on. I think eladrich/pixel2style2pixel would also work. But I haven't tried applying encoder4editing or other encoders to our method, so the inversions may cause some unexpected problems (misalignment, overall color changes, etc.). I used encoder4editing for image inversion in another paper of mine and proposed a blending method to integrate the newly generated image into the original image. For more information, please refer to http://www.cad.zju.edu.cn/home/jin/cvpr2022/cvpr2022.htm.
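
For anyone who lands here later, below is a rough sketch of the usual encoder4editing inference pattern. The checkpoint name, the models.psp import path, and the forward signature are assumptions based on the e4e repo, so verify them against that repo's scripts/inference.py:

```python
from argparse import Namespace

import numpy as np
import torch
from PIL import Image
from torchvision import transforms

from models.psp import pSp  # from the encoder4editing repo

# Load the pretrained e4e FFHQ encoder checkpoint.
ckpt_path = 'e4e_ffhq_encode.pt'
ckpt = torch.load(ckpt_path, map_location='cpu')
opts = Namespace(**{**ckpt['opts'], 'checkpoint_path': ckpt_path, 'device': 'cuda'})
net = pSp(opts).eval().cuda()

# e4e expects a 256x256 aligned face normalized to [-1, 1].
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
img = preprocess(Image.open('face.png').convert('RGB')).unsqueeze(0).cuda()

# One forward pass replaces the 1000-step optimization;
# latents has shape [1, 18, 512] (the W+ code).
with torch.no_grad():
    _, latents = net(img, randomize_noise=False, return_latents=True)

np.save('face_wp.npy', latents.cpu().numpy())
```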