mmittek opened this issue 6 years ago
The upscale block uses PixelShuffler after a convolution that outputs 4x the target number of filters. I understand that this is a neat way of increasing the number of coefficients and then nicely reshaping everything to bring it up one resolution step, but why this and not Conv2DTranspose?

    def upscale_ps(filters, use_norm=True):
        def block(x):
            # Convolve to 4x the target channel count...
            x = Conv2D(filters * 4, kernel_size=3, use_bias=False,
                       kernel_initializer=RandomNormal(0, 0.02),
                       padding='same')(x)
            x = LeakyReLU(0.1)(x)
            # ...then rearrange (H, W, 4*filters) into (2H, 2W, filters).
            x = PixelShuffler()(x)
            return x
        return block

Reply:

I remember trying Conv2DTranspose in the v1 model, but I don't remember why I switched back to sub-pixel convolution (PixelShuffler)... maybe because PixelShuffler is faster. I don't think the final output will be much different in quality either way, though.
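For comparison, here is a minimal sketch of what the same upscale step could look like built on a strided Conv2DTranspose instead of Conv2D + PixelShuffler. The block name `upscale_ct` and the kernel/stride choices are assumptions for illustration, not code from this repo; both variants map an (H, W, C) feature map to (2H, 2W, filters).

```python
from keras.layers import Conv2DTranspose, LeakyReLU
from keras.initializers import RandomNormal

def upscale_ct(filters):
    # Hypothetical alternative upscale block.
    def block(x):
        # A stride-2 transposed convolution doubles H and W directly,
        # so no 4x channel expansion or pixel rearrangement is needed.
        x = Conv2DTranspose(filters, kernel_size=3, strides=2,
                            use_bias=False,
                            kernel_initializer=RandomNormal(0, 0.02),
                            padding='same')(x)
        x = LeakyReLU(0.1)(x)
        return x
    return block

# Usage sketch: x = upscale_ct(256)(x)  # e.g. (8, 8, C) -> (16, 16, 256)
```

One practical difference worth noting: a transposed convolution whose kernel size is not divisible by its stride (3 with stride 2 here) can introduce checkerboard artifacts, whereas the sub-pixel approach does its spatial expansion as a plain channel-to-space reshape. The reply above, though, suggests the choice came down mainly to speed, with little expected difference in output quality.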