Open cyrilzakka opened 4 years ago
The pre-trained model vgg16_zhang_perceptual.pkl is trained on color images.
@sandhyalaxmiK That's what I thought, so I tried looking around for greyscale implementations, but to no avail. The closest thing I could figure out was to sum the weights of the first convolutional layer's kernels in the pretrained VGG-16 and go from there, but I have no clue what the implications of this are.
Edit 1: I'll attempt this and report back. Really hoping I don't have to retrain my GAN on RGB: https://github.com/RohitSaha/VGG_Imagenet_Weights_GrayScale_Images/blob/master/convert_vgg_grayscale.py
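For what it's worth, the channel-summing idea above can be sketched in PyTorch. This is a minimal illustration (using a stand-in Conv2d rather than the actual pretrained VGG-16, and names like conv_rgb / conv_grey are my own): summing the kernel weights over the input-channel axis gives a 1-channel layer that agrees exactly with the 3-channel one on grey inputs, since w_r*x + w_g*x + w_b*x = (w_r + w_g + w_b)*x.

```python
import torch
import torch.nn as nn

# Stand-in for the first conv layer of VGG-16 (vgg.features[0] in torchvision):
# weight shape (out_channels=64, in_channels=3, kH=3, kW=3).
conv_rgb = nn.Conv2d(3, 64, kernel_size=3, padding=1)

# Build a single-channel replacement and collapse the RGB kernels into one
# greyscale kernel by summing over the input-channel dimension.
conv_grey = nn.Conv2d(1, 64, kernel_size=3, padding=1)
with torch.no_grad():
    conv_grey.weight.copy_(conv_rgb.weight.sum(dim=1, keepdim=True))  # (64,1,3,3)
    conv_grey.bias.copy_(conv_rgb.bias)

# Sanity check: on a grey image (R = G = B) both layers produce the same output.
grey = torch.randn(1, 1, 32, 32)
rgb = grey.repeat(1, 3, 1, 1)
print(torch.allclose(conv_rgb(rgb), conv_grey(grey), atol=1e-5))
```

On truly greyscale data this is lossless for the first layer; the open question in this thread is whether the deeper pretrained features remain a meaningful perceptual metric for B&W images.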
I also ran into the same problem. Is there a greyscale implementation for PyTorch?
After successfully training StyleGAN 2 on a dataset of B&W images (1, 128, 128), I tried using the network to project real images into latent space. Unfortunately it seems to have been coded with RGB in mind, because I'm getting an error about the channel count (1 vs. 3), as shown below. I can't find the place in the code where the 3 channels are defined. Is there any way to modify the projector for B&W images?
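One possible workaround for the 1-vs-3 channel mismatch (an assumption on my part, not something confirmed in this thread) is to leave the projector's VGG loss untouched and instead tile the single greyscale channel into a fake RGB image before it reaches the perceptual network. A PyTorch sketch, with the helper name to_rgb being my own:

```python
import torch

def to_rgb(img):
    """Tile a greyscale batch (N, 1, H, W) into pseudo-RGB (N, 3, H, W)
    by repeating the single channel, so R = G = B everywhere."""
    assert img.shape[1] == 1, "expected a single-channel batch"
    return img.repeat(1, 3, 1, 1)

grey = torch.rand(4, 1, 128, 128)   # e.g. generator output at 128x128
rgb = to_rgb(grey)
print(rgb.shape)                    # (4, 3, 128, 128)
```

The trade-off is that the pretrained VGG then only ever sees achromatic inputs, which may or may not be a faithful perceptual distance for B&W data; the alternative is converting the VGG weights themselves to single-channel, as discussed above.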