InterDigitalInc / CompressAI

A PyTorch library and evaluation platform for end-to-end compression research
https://interdigitalinc.github.io/CompressAI/
BSD 3-Clause Clear License

Quantization in `JointAutoregressiveHierarchicalPriors` #6

Closed kktsubota closed 3 years ago

kktsubota commented 3 years ago

Thank you for your great work.

I noticed that `torch.round` is used directly in the implementation of `JointAutoregressiveHierarchicalPriors`: https://github.com/InterDigitalInc/CompressAI/blob/master/compressai/models/priors.py#L549 I think the quantization should be performed through `self.gaussian_conditional`, as in the other classes that extend `CompressionModel`. More specifically, `self.gaussian_conditional._quantize` should be used.

Using `self.gaussian_conditional` would also be consistent with the existing dequantization code: https://github.com/InterDigitalInc/CompressAI/blob/master/compressai/models/priors.py#L637
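For readers unfamiliar with the quantizer being discussed: entropy models in this style typically support several quantization modes (additive uniform noise for training, rounding for inference, and integer symbols for entropy coding). The following is a minimal standalone sketch of that pattern, not CompressAI's actual `GaussianConditional._quantize` implementation; the function name, list-of-floats interface, and mode names here are illustrative assumptions.

```python
import random

def quantize(x, mode, means=None):
    """Sketch of the three common quantization modes of an entropy model.

    Hypothetical standalone version operating on a list of floats
    instead of a torch.Tensor; not the real CompressAI API.
    """
    if mode == "noise":
        # Training: additive uniform noise in [-0.5, 0.5] acts as a
        # differentiable stand-in for rounding.
        return [v + random.uniform(-0.5, 0.5) for v in x]
    means = means if means is not None else [0.0] * len(x)
    if mode == "dequantize":
        # Inference: round the residual, then add the mean back.
        return [round(v - m) + m for v, m in zip(x, means)]
    if mode == "symbols":
        # Entropy coding: integer symbols of the residual.
        return [int(round(v - m)) for v, m in zip(x, means)]
    raise ValueError(f"unknown mode: {mode}")
```

With zero means, the `"dequantize"` branch reduces to plain rounding, which is why swapping `torch.round` for the entropy model's quantizer is a consistency change rather than a behavioral one in that situation.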

jbegaint commented 3 years ago

hey, thanks for the comment!

It's equivalent in this case, but you're right that it would be more consistent to use `_quantize`. I'll update this, thanks!
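A quick sketch of the equivalence being referenced (my reading of the reply, not stated explicitly in the thread): a mean-offset quantizer `round(x - m) + m` coincides with plain `round(x)` whenever the offset is zero, which is the presumed situation at this call site.

```python
def quantize_with_mean(x, m):
    # Mean-offset rounding: subtract the mean, round, add the mean back.
    return round(x - m) + m

# With a zero mean, the offset quantizer and plain rounding agree.
for x in [1.2, -0.7, 3.4, -2.9]:
    assert quantize_with_mean(x, 0.0) == round(x)
```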