Is there a script for converting the weights to NVLabs stylegan2-pytorch or stylegan2-pytorch-ada?
sarmientoj24 opened this issue 2 years ago
Currently this repo does not have one.
How different is the Generator structure from the stylegan2-pytorch or stylegan2-pytorch-ada ones?
It should be the same. I think it could be directly convertible if the keys match.
But the NVLabs SG2 has a SynthesisNetwork and a MappingNetwork, which can be seen here:
self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
self.num_ws = self.synthesis.num_ws
self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)
Could you indicate which parts of your SG2 architecture correspond to the SynthesisNetwork and the MappingNetwork?
Can you advise me on how to convert the Generator part from your code to the NVLabs one?
MappingNetwork corresponds to Generator.style, and SynthesisNetwork corresponds to the rest of the generator. You can match the keys in order, and you can refer to convert_weight.py, as the official PyTorch implementation is similar to the TensorFlow implementation.
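For reference, a minimal sketch of the "match keys in order" idea. The file names here are hypothetical, and it assumes both checkpoints are plain state dicts that enumerate the learned tensors in the same order; in practice some tensors also need reshapes or flips, as convert_weight.py does for the TF weights:

import torch

src = torch.load("rosinality_g_ema.pt")   # this repo's generator (hypothetical name)
dst = torch.load("nvlabs_g_ema_init.pt")  # freshly initialized NVLabs generator

converted = {}
for (src_key, src_val), dst_key in zip(src.items(), dst.keys()):
    # Element counts are compared rather than shapes, because e.g. modulated
    # conv weights carry an extra leading dimension in this repo.
    assert src_val.numel() == dst[dst_key].numel(), (src_key, dst_key)
    converted[dst_key] = src_val.reshape(dst[dst_key].shape)

torch.save(converted, "converted_nvlabs.pt")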
I have already matched the MappingNetwork, although it took me some time.
On the SynthesisNetwork, I can see this affine FC layer:
self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)
Do you know what this is?
To add, there is this difference. One layer in your implementation has these keys:
'convs.1.conv.weight',
'convs.1.conv.modulation.weight',
'convs.1.conv.modulation.bias',
'convs.1.noise.weight',
'convs.1.activate.bias',
'noises.noise_1',
which is equivalent to theirs:
'synthesis.b16.conv1.weight',
'synthesis.b16.conv1.noise_strength',
'synthesis.b16.conv1.bias',
'synthesis.b16.conv1.resample_filter',
'synthesis.b16.conv1.noise_const',
'synthesis.b16.conv1.affine.weight',
'synthesis.b16.conv1.affine.bias',
Any idea on the counterparts?
affine corresponds to modulation. noise.weight and the noise buffer correspond to noise_strength and noise_const.
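Spelling out that correspondence as a table (a sketch based on the key lists above; which convs.{i} pairs with which synthesis.b{res}.conv{j} still has to be established by matching keys in order):

# Per-layer name correspondence, NVLabs suffix -> this repo's suffix.
# noise_const is a fixed buffer that lives outside convs in this repo,
# in the Generator's noise buffers (noises.noise_{k}, k layer-dependent).
LAYER_SUFFIX_MAP = {
    "weight":         "conv.weight",
    "affine.weight":  "conv.modulation.weight",
    "affine.bias":    "conv.modulation.bias",
    "noise_strength": "noise.weight",
    "bias":           "activate.bias",
}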
What do G.synthesis.num_ws and G.synthesis.block_resolutions correspond to?
@bayndrysf I think those are constants that are not required in this implementation.
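Indeed, both are derived from the image resolution alone, so they can be reconstructed if needed. A small sketch of how the NVLabs SynthesisNetwork arrives at them; num_ws comes out equal to this repo's Generator.n_latent:

import math

def nvlabs_constants(img_resolution):
    # block_resolutions: 4, 8, ..., img_resolution (one synthesis block each)
    log2_res = int(math.log2(img_resolution))
    block_resolutions = [2 ** i for i in range(2, log2_res + 1)]
    # num_ws: one w per conv plus one for the final ToRGB, which equals
    # this repo's n_latent = log2(size) * 2 - 2
    num_ws = log2_res * 2 - 2
    return num_ws, block_resolutions

print(nvlabs_constants(256))  # (14, [4, 8, 16, 32, 64, 128, 256])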
Hi @sarmientoj24! Did you manage to put together a script that can convert these checkpoints to this repo's architecture in the end? If so, would you be kind enough to share it, please? :)
@james-oldfield Unfortunately, I went with a different approach, but you can probably do that. I just restructured the code to make it a bit more similar to NVLabs' version, where there are two networks and you can produce the W, W+, and S latent spaces.
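Something in this spirit (a minimal sketch, not the actual restructured code; it wraps this repo's Generator so that mapping and synthesis can be called separately, NVLabs-style):

import torch

class TwoStageGenerator(torch.nn.Module):
    """Wraps a rosinality Generator to expose mapping/synthesis separately."""

    def __init__(self, g):
        super().__init__()
        self.g = g  # an instance of this repo's Generator

    def mapping(self, z):
        return self.g.style(z)  # z -> w (W space)

    def synthesis(self, w):
        # A single w is broadcast to every layer inside the Generator;
        # passing one w per layer instead would correspond to W+.
        img, _ = self.g([w], input_is_latent=True)
        return img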
@rosinality Can you kindly help me out with some issues regarding porting stylegan2-ada weights? There are some layers left over after converting, as per your reference code.
Your model:
convs.0.conv.blur.kernel torch.Size([4, 4])
convs.2.conv.blur.kernel torch.Size([4, 4])
convs.4.conv.blur.kernel torch.Size([4, 4])
convs.6.conv.blur.kernel torch.Size([4, 4])
convs.8.conv.blur.kernel torch.Size([4, 4])
convs.10.conv.blur.kernel torch.Size([4, 4])
to_rgbs.0.upsample.kernel torch.Size([4, 4])
to_rgbs.1.upsample.kernel torch.Size([4, 4])
to_rgbs.2.upsample.kernel torch.Size([4, 4])
to_rgbs.3.upsample.kernel torch.Size([4, 4])
to_rgbs.4.upsample.kernel torch.Size([4, 4])
to_rgbs.5.upsample.kernel torch.Size([4, 4])

StyleGAN2-ada:
synthesis.b4.resample_filter torch.Size([4, 4])
synthesis.b4.conv1.resample_filter torch.Size([4, 4])
synthesis.b8.resample_filter torch.Size([4, 4])
synthesis.b8.conv0.resample_filter torch.Size([4, 4])
synthesis.b8.conv1.resample_filter torch.Size([4, 4])
synthesis.b16.resample_filter torch.Size([4, 4])
synthesis.b16.conv0.resample_filter torch.Size([4, 4])
synthesis.b16.conv1.resample_filter torch.Size([4, 4])
synthesis.b32.resample_filter torch.Size([4, 4])
synthesis.b32.conv0.resample_filter torch.Size([4, 4])
synthesis.b32.conv1.resample_filter torch.Size([4, 4])
synthesis.b64.resample_filter torch.Size([4, 4])
synthesis.b64.conv0.resample_filter torch.Size([4, 4])
synthesis.b64.conv1.resample_filter torch.Size([4, 4])
synthesis.b128.resample_filter torch.Size([4, 4])
synthesis.b128.conv0.resample_filter torch.Size([4, 4])
synthesis.b128.conv1.resample_filter torch.Size([4, 4])
synthesis.b256.resample_filter torch.Size([4, 4])
synthesis.b256.conv0.resample_filter torch.Size([4, 4])
synthesis.b256.conv1.resample_filter torch.Size([4, 4])
mapping.w_avg torch.Size([512])
I would be grateful if you could kindly help me figure out the right mapping :)
Here's my code, which might come in handy for someone in the future: convert_weights.txt
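Since the attachment isn't inlined here, one note on the leftover keys listed above: they are all non-learned buffers, so a conversion loop can typically skip them. A hedged sketch, with the suffix set taken from the key lists above:

# Fixed buffers rather than learned weights: the [1, 3, 3, 1]-derived
# blur/resample filters on both sides, and mapping.w_avg, the running
# average of W that NVLabs stores for truncation but this repo recomputes
# on the fly via Generator.mean_latent().
FIXED_BUFFER_SUFFIXES = ("blur.kernel", "upsample.kernel", "resample_filter")

def learned_items(state_dict):
    for key, value in state_dict.items():
        if key.endswith(FIXED_BUFFER_SUFFIXES) or key == "mapping.w_avg":
            continue  # leave these at their default initialization
        yield key, value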
@rosinality or others, has anyone been able to figure out how to convert stylegan2-ada-pytorch weights to the @rosinality implementation's weights?
https://github.com/yuval-alaluf/stylegan3-editing has some resources for converting StyleGAN3 to the rosinality-style generator. StyleGAN3 has ADA support, so that might be useful for you, @garg-aayush.
@usmancheema89 Actually, we found the following script, https://github.com/rosinality/stylegan2-pytorch/issues/206#issuecomment-812273460, that allows you to convert official stylegan2-ada-pytorch weights to the rosinality implementation.
I checked the script last night and it works great!
Thanks
Hello, I found that the conversion only covers the g_ema parameters. What about the other parameters (the g and d parameters)?