rosinality / stylegan2-pytorch

Implementation of Analyzing and Improving the Image Quality of StyleGAN (StyleGAN 2) in PyTorch
MIT License

Converting to NVLabs stylegan2-pytorch or stylegan2-pytorch-ada #277

Open sarmientoj24 opened 2 years ago

sarmientoj24 commented 2 years ago

Is there a script for converting the weights to NVLabs stylegan2-pytorch or stylegan2-pytorch-ada?

rosinality commented 2 years ago

There isn't one currently.

sarmientoj24 commented 2 years ago

How different is the Generator structure from the stylegan2-pytorch or stylegan2-pytorch-ada?

rosinality commented 2 years ago

It should be the same. I think it could be directly convertible if the keys match.

sarmientoj24 commented 2 years ago

But the NVLabs SG2 has this SynthesisNetwork and MappingNetwork which can be seen here

        self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
        self.num_ws = self.synthesis.num_ws
        self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)

Could you indicate which parts of your SG2 architecture correspond to the SynthesisNetwork and MappingNetwork?

sarmientoj24 commented 2 years ago

Can you advise me on how to convert the Generator part from your code to the NVLabs one?

rosinality commented 2 years ago

MappingNetwork corresponds to Generator.style, and SynthesisNetwork corresponds to the rest of the generator. You can match the keys in order; refer to convert_weight.py, as the official PyTorch implementation is similar to the TensorFlow one.
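Matching keys in order can be sketched roughly like this (copy_in_order and the prefixes are hypothetical names for illustration; the actual prefixes depend on the two checkpoints, and shapes should be checked before copying):

```python
import torch

def copy_in_order(src_state, dst_state, src_prefix, dst_prefix):
    """Pair up keys under the given prefixes in order and copy tensors.

    Relies on both state dicts listing layers in the same depth order,
    which holds for the mapping network's stack of FC layers.
    """
    src_keys = [k for k in src_state if k.startswith(src_prefix)]
    dst_keys = [k for k in dst_state if k.startswith(dst_prefix)]
    assert len(src_keys) == len(dst_keys), "layer counts must match"
    out = dict(dst_state)
    for s, d in zip(src_keys, dst_keys):
        # a shape mismatch means the ordered pairing is wrong
        assert src_state[s].shape == out[d].shape, (s, d)
        out[d] = src_state[s].clone()
    return out
```

This is only viable because both mapping networks are plain stacks of fully connected layers; for the synthesis network the names have to be paired explicitly.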

sarmientoj24 commented 2 years ago

I have already matched the MappingNetwork, although it took me some time.

On the SynthesisNetwork, I can see this affine FC layer here

self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)

Do you know what this is?

To add, there is this difference:
Your implementation's one SynthesisLayer

'convs.1.conv.weight',
'convs.1.conv.modulation.weight',
'convs.1.conv.modulation.bias',
'convs.1.noise.weight',
'convs.1.activate.bias',
'noises.noise_1',

which is equivalent to theirs

'synthesis.b16.conv1.weight',
'synthesis.b16.conv1.noise_strength',
'synthesis.b16.conv1.bias',
'synthesis.b16.conv1.resample_filter',
'synthesis.b16.conv1.noise_const',
'synthesis.b16.conv1.affine.weight',
'synthesis.b16.conv1.affine.bias',

Any idea on the counterparts?

rosinality commented 2 years ago

affine corresponds to modulation. noise.weight and the noise buffers correspond to noise_strength and noise_const.
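Putting the correspondences above together, a per-layer rename table might look like this (layer_key_map is a hypothetical helper; the converter still has to pair up layer indices with block resolutions, and resample_filter has no learned counterpart since it is a fixed blur kernel):

```python
def layer_key_map(i, res):
    """Map this repo's conv keys to the official synthesis.b{res}.conv1 keys."""
    src = f"convs.{i}"
    dst = f"synthesis.b{res}.conv1"
    return {
        f"{src}.conv.weight":            f"{dst}.weight",
        f"{src}.conv.modulation.weight": f"{dst}.affine.weight",
        f"{src}.conv.modulation.bias":   f"{dst}.affine.bias",
        f"{src}.noise.weight":           f"{dst}.noise_strength",
        f"{src}.activate.bias":          f"{dst}.bias",
        f"noises.noise_{i}":             f"{dst}.noise_const",
    }
```

Note the weight tensors may still need reshaping or transposing depending on direction; the table only covers the naming.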

bayndrysf commented 2 years ago

> MappingNetwork corresponds to Generator.style, and SynthesisNetwork corresponds to the rest of the generator. You can match the keys in order; refer to convert_weight.py, as the official PyTorch implementation is similar to the TensorFlow one.

What do G.synthesis.num_ws and G.synthesis.block_resolutions correspond to?

rosinality commented 2 years ago

@bayndrysf I think those are constants that are not required in this implementation.
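For reference, both values are simple functions of the output resolution rather than learned state; in this repo the w count is exposed as Generator.n_latent. A sketch of the arithmetic (function names are illustrative):

```python
import math

def num_ws(img_resolution):
    """Number of w vectors the synthesis network consumes (n_latent here)."""
    log_size = int(math.log2(img_resolution))
    return log_size * 2 - 2  # two layers per block, minus one for the 4x4 base

def block_resolutions(img_resolution):
    """Resolutions of the synthesis blocks, from 4x4 up to the output."""
    return [2 ** i for i in range(2, int(math.log2(img_resolution)) + 1)]
```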

james-oldfield commented 2 years ago

Hi @sarmientoj24! Did you manage to put together a script that can convert these checkpoints to this repo's architecture in the end? If so, would you be kind enough to share it, please? :)

sarmientoj24 commented 2 years ago

@james-oldfield Unfortunately, I went with a different approach, but you could possibly do that. I just restructured the code to make it a bit more similar to NVLabs's version, where there are two networks and you can produce the W, W+, and S latent spaces.

usmancheema89 commented 2 years ago

@rosinality Can you kindly help me out with some issues regarding porting stylegan2-ada weights? There are some layers left over after converting the layers per your reference code.

Your model:

    convs.0.conv.blur.kernel torch.Size([4, 4])
    convs.2.conv.blur.kernel torch.Size([4, 4])
    convs.4.conv.blur.kernel torch.Size([4, 4])
    convs.6.conv.blur.kernel torch.Size([4, 4])
    convs.8.conv.blur.kernel torch.Size([4, 4])
    convs.10.conv.blur.kernel torch.Size([4, 4])
    to_rgbs.0.upsample.kernel torch.Size([4, 4])
    to_rgbs.1.upsample.kernel torch.Size([4, 4])
    to_rgbs.2.upsample.kernel torch.Size([4, 4])
    to_rgbs.3.upsample.kernel torch.Size([4, 4])
    to_rgbs.4.upsample.kernel torch.Size([4, 4])
    to_rgbs.5.upsample.kernel torch.Size([4, 4])


StyleGAN2-ada:

    synthesis.b4.resample_filter torch.Size([4, 4])
    synthesis.b4.conv1.resample_filter torch.Size([4, 4])
    synthesis.b8.resample_filter torch.Size([4, 4])
    synthesis.b8.conv0.resample_filter torch.Size([4, 4])
    synthesis.b8.conv1.resample_filter torch.Size([4, 4])
    synthesis.b16.resample_filter torch.Size([4, 4])
    synthesis.b16.conv0.resample_filter torch.Size([4, 4])
    synthesis.b16.conv1.resample_filter torch.Size([4, 4])
    synthesis.b32.resample_filter torch.Size([4, 4])
    synthesis.b32.conv0.resample_filter torch.Size([4, 4])
    synthesis.b32.conv1.resample_filter torch.Size([4, 4])
    synthesis.b64.resample_filter torch.Size([4, 4])
    synthesis.b64.conv0.resample_filter torch.Size([4, 4])
    synthesis.b64.conv1.resample_filter torch.Size([4, 4])
    synthesis.b128.resample_filter torch.Size([4, 4])
    synthesis.b128.conv0.resample_filter torch.Size([4, 4])
    synthesis.b128.conv1.resample_filter torch.Size([4, 4])
    synthesis.b256.resample_filter torch.Size([4, 4])
    synthesis.b256.conv0.resample_filter torch.Size([4, 4])
    synthesis.b256.conv1.resample_filter torch.Size([4, 4])
    mapping.w_avg torch.Size([512])
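A sketch of why the leftover filter keys can usually be left unmatched: both implementations use the same fixed low-pass filter, stored as a non-learned buffer, so it can be regenerated instead of copied. The 4x4 kernel is the normalized outer product of [1, 3, 3, 1] (function name is illustrative):

```python
import torch

def make_blur_kernel():
    """Fixed separable blur / resample filter used by both implementations."""
    k = torch.tensor([1.0, 3.0, 3.0, 1.0])
    k = torch.outer(k, k)  # 4x4 separable filter
    return k / k.sum()     # normalize so the weights sum to 1
```

The remaining leftover, mapping.w_avg, is the running average of w used for truncation; this repo computes the mean w at sampling time instead of storing it in the checkpoint, so it also has no direct counterpart.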

I would be grateful if you could kindly help me in figuring out the right mapping :)

usmancheema89 commented 2 years ago

here's my code, which might come in handy for someone in the future convert_weights.txt

garg-aayush commented 1 year ago

@rosinality or others, has anyone been able to figure out how to convert stylegan2-ada-pytorch weights to @rosinality's implementation weights?

usmancheema89 commented 1 year ago

https://github.com/yuval-alaluf/stylegan3-editing has some resources for converting StyleGAN3 to a rosinality-style generator. StyleGAN3 has ADA support, so that might be useful for you @garg-aayush

garg-aayush commented 1 year ago

@usmancheema89 Actually, we found the following script https://github.com/rosinality/stylegan2-pytorch/issues/206#issuecomment-812273460 that allows you to convert official stylegan2-ada-pytorch weights to the rosinality implementation.

I checked the script last night and it works great!

Thanks

zhanghongyong123456 commented 1 year ago

> @usmancheema89 Actually, we found the following script #206 (comment) that allows you to convert official stylegan2-ada-pytorch weights to the rosinality implementation.
>
> I checked the script last night and it works great!
>
> Thanks

Hello, I found out that the conversion only contains the g_ema parameters. What about the other parameters (the g and d parameters)?