Hi, thank you for the great work.
I'm confused by the implementation of the quantization process in the 'quantization' method of the 'EntropyModel' class.
In that method, uniform noise is added to the latent y during the training stage (mode='noise'), which I understand.
But in the 'dequantize' and 'symbols' cases, a substitution with the 'mean' values is performed first, and only then is the rounding operation applied.
I think the order of these two operations should be reversed, so that the output of the last layer of the encoder (analysis network) is the same as the input of the first layer of the decoder (synthesis network).
Could you explain what I missed (a concept, or a step in the process)?
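To make the question concrete, here is a minimal NumPy sketch of the two orderings as I understand them (this is my own simplification, not the code from the repo; I'm assuming the mean handling is subtract-then-round-then-add-back):

```python
import numpy as np

y = np.array([1.2, 2.7, -0.4])    # latent from the encoder (analysis network)
mean = np.array([1.0, 2.5, 0.0])  # means predicted by the entropy model

# Ordering as I read it in the repo:
# center on the mean, round, then add the mean back.
dequantized = np.round(y - mean) + mean

# Ordering I expected: round the latent directly, so the
# encoder's output equals the decoder's input.
rounded = np.round(y)

# The two orderings generally give different values,
# which is what prompted my question.
print(dequantized)
print(rounded)
```

As the sketch shows, `round(y - mean) + mean` lands on the mean-shifted integer grid rather than on plain integers, so the decoder input differs from `round(y)` whenever the means are not integers.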
Thank you in advance.