Justin-Tan / generative-compression

TensorFlow Implementation of Generative Adversarial Networks for Extreme Learned Image Compression
MIT License

Questions about the quantizer/encoder implementation? #23

Closed hancy16 closed 4 years ago

hancy16 commented 5 years ago

In the current implementation the quantizer maps the feature maps onto 5 centers {-2, -1, 0, 1, 2}. However, the encoder uses ReLU as its activation, so negative values can never be produced, and the centers {-2, -1} end up unused. Is there anything wrong with my understanding?
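For illustration, here is a minimal NumPy sketch (not the repo's actual TensorFlow quantizer) of nearest-center quantization, showing that non-negative ReLU outputs only ever land on the centers {0, 1, 2}:

```python
import numpy as np

centers = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])

def quantize(w, centers):
    """Map each feature value to its nearest quantization center."""
    dists = np.abs(w[..., None] - centers)          # distance to every center
    return centers[np.argmin(dists, axis=-1)]       # pick the closest one

features = np.random.randn(1000)
relu_features = np.maximum(features, 0.0)           # encoder's final ReLU

print(np.unique(quantize(features, centers)))       # all 5 centers appear
print(np.unique(quantize(relu_features, centers)))  # only {0, 1, 2} appear
```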

Justin-Tan commented 5 years ago

I think it should be effective as long as there is a convolutional layer before the first ReLU nonlinearity. You could try setting the centers to positive values and see what happens.

hancy16 commented 5 years ago

Maybe I did not express myself very well. At the end of the encoder there is a ReLU nonlinearity, so the feature maps fed into the quantizer are non-negative. As a result only three quantization levels are actually used (this can be verified in TensorBoard). In that case the actual entropy should be smaller than the (H/16) * (W/16) * C * log2(5) upper bound by a non-negligible margin, yet the authors claim this upper bound is tight. This is confusing. Looking forward to your reply.
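A rough sketch of the arithmetic, with hypothetical dimensions (H, W, C are not the repo's defaults): for a quantized feature map of spatial size H/16 x W/16 with C channels and L centers, the bound is (H/16) * (W/16) * C * log2(L) bits, and with only 3 reachable centers the achievable rate is bounded by log2(3) instead of log2(5) bits per symbol.

```python
import math

H, W, C = 768, 512, 8                      # hypothetical image size and channel depth
symbols = (H // 16) * (W // 16) * C        # number of quantized symbols

bound_5_centers = symbols * math.log2(5)   # bound claimed to be tight in the paper
bound_3_centers = symbols * math.log2(3)   # tighter bound if negatives never occur

print(f"log2(5) bound: {bound_5_centers / 8 / 1024:.1f} KiB")
print(f"log2(3) bound: {bound_3_centers / 8 / 1024:.1f} KiB")
```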

Justin-Tan commented 5 years ago

Sorry, I originally thought you were referring to the decoder...

Good point, I believe you are correct. I think the ReLU should be omitted in the final layer of the encoder for the bound to make sense! Good find.
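A minimal sketch of the suggested change (hypothetical layer names and sizes, not the repo's exact graph): keep ReLU on the earlier encoder layers but leave the final projection linear, so the feature maps fed to the quantizer can take negative values and all 5 centers {-2, -1, 0, 1, 2} become reachable.

```python
import tensorflow as tf

def encoder_head(x, C=8):
    # ... earlier encoder layers keep their ReLU activations ...
    x = tf.keras.layers.Conv2D(256, 3, padding='same', activation='relu')(x)
    # Final projection to C quantized channels: no ReLU here, so the output is
    # free to cover both the negative and positive quantization centers.
    w = tf.keras.layers.Conv2D(C, 3, padding='same', activation=None)(x)
    return w
```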