InterDigitalInc / CompressAI

A PyTorch library and evaluation platform for end-to-end compression research
https://interdigitalinc.github.io/CompressAI/
BSD 3-Clause Clear License

A question about implementation of the quantization process for latent y #135

Closed herok97 closed 2 years ago

herok97 commented 2 years ago

Hi, thank you for the great work.

I'm confused about the implementation of the quantization process in the 'quantize' method of the 'EntropyModel' class.

In this method, uniform noise is added to the latent y during the training stage (mode='noise'); that part I understand.
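For reference, here is a minimal sketch of what I understand the 'noise' branch to do at training time (the function and variable names are mine, not the library's exact code):

```python
import torch

def quantize_noise(y: torch.Tensor) -> torch.Tensor:
    # Training-time relaxation of rounding: add i.i.d. uniform noise
    # in [-0.5, 0.5) so the operation stays differentiable.
    noise = torch.empty_like(y).uniform_(-0.5, 0.5)
    return y + noise
```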

But in the 'dequantize' and 'symbols' modes, the 'mean' values are first subtracted and then a rounding operation is applied.

I think the order of these two operations should be reversed, so that the output of the last layer of the encoder (analysis network) matches the input of the first layer of the decoder (synthesis network).
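To make the question concrete, here is a rough sketch of the flow I am describing (the function name and example values are mine, not a copy of the library code):

```python
import torch

def quantize_with_mean_offset(y: torch.Tensor, means: torch.Tensor, mode: str):
    # Mean is subtracted first, then rounding is applied.
    symbols = torch.round(y - means)
    if mode == "symbols":
        return symbols.int()    # integer symbols passed to the entropy coder
    if mode == "dequantize":
        return symbols + means  # values fed to the synthesis network
    raise ValueError(mode)

# Quick check: this is generally not the same as rounding y directly.
y = torch.tensor([1.3])
mu = torch.tensor([0.4])
print(quantize_with_mean_offset(y, mu, "dequantize"))  # 1.4 (= round(0.9) + 0.4)
print(torch.round(y))                                  # 1.0
```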

Could you explain what I missed (in concept or process)?

Thank you in advance.

herok97 commented 2 years ago

I am attaching a picture because my explanation above may be unclear.

[figure attached]

herok97 commented 2 years ago

It's resolved.