We use the weights from here to initialize our painter generator.
The pretrained weights come from a SPADE model that has only 4 upsampling layers instead of the 7 described in the original paper, a latent dimension of 1024, and 182 classes. As a result, we can only match the weights of the conv layers, not those of the MLP beta and gamma parameters.
We try several approaches:
Latent dim 512: we match the conv layers of 3 upsampling levels. This means we initialize the first upsampling levels of our model with the last upsampling levels of the pretrained model: Experiment. The water texture is too homogeneous (a repeating pattern of horizontal ripples).
Latent dim 1024: match everything we can except the conv image bias term: Experiment
Latent dim 1024: match everything we can: Experiment
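The partial matching described above can be sketched as follows in PyTorch: copy only pretrained parameters whose names and shapes agree with our generator, and skip the SPADE MLP gamma/beta layers. The key substrings "mlp_gamma" and "mlp_beta" are assumptions for illustration, not the exact checkpoint keys.

```python
# Hypothetical sketch: partially initialize a model from a pretrained
# state dict. Only parameters with a matching name AND shape are copied;
# SPADE MLP gamma/beta parameters are explicitly skipped (key names
# "mlp_gamma"/"mlp_beta" are assumed, not taken from the real checkpoint).
import torch
import torch.nn as nn


def load_matching_weights(model: nn.Module, pretrained_state: dict) -> list:
    """Copy compatible weights into `model`; return the keys that were skipped."""
    own_state = model.state_dict()
    skipped = []
    for name, param in pretrained_state.items():
        if (
            name in own_state
            and own_state[name].shape == param.shape
            and "mlp_gamma" not in name
            and "mlp_beta" not in name
        ):
            own_state[name].copy_(param)
        else:
            skipped.append(name)
    model.load_state_dict(own_state)
    return skipped
```

Inspecting the returned skipped keys after loading is a quick sanity check that only the expected parameters (MLP gamma/beta, shape mismatches from the different latent dim) were left at their random initialization.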