I noticed that torch.round is directly used in the implementation of JointAutoregressiveHierarchicalPriors.
https://github.com/InterDigitalInc/CompressAI/blob/master/compressai/models/priors.py#L549
I think the quantization should be performed with self.gaussian_conditional, as in the other classes that extend CompressionModel. More specifically, self.gaussian_conditional._quantize should be used.
Thank you for your great work.
Using self.gaussian_conditional._quantize here would also be consistent with the existing implementation of dequantization:
https://github.com/InterDigitalInc/CompressAI/blob/master/compressai/models/priors.py#L637
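For context, here is a minimal sketch of what the suggested quantization call effectively computes, paraphrased from the CompressAI source. The helper name quantize_sketch and the sample tensors are mine for illustration only; the real method lives on the GaussianConditional entropy model:

```python
import torch

def quantize_sketch(inputs, mode, means=None):
    # Hedged paraphrase of CompressAI's gaussian_conditional quantization
    # ("_quantize" in the version linked above): subtract the predicted
    # mean, round, then either re-add the mean ("dequantize") or cast to
    # integer symbols ("symbols").
    outputs = inputs.clone()
    if means is not None:
        outputs = outputs - means
    outputs = torch.round(outputs)
    if mode == "dequantize":
        if means is not None:
            outputs = outputs + means
        return outputs
    assert mode == "symbols"
    return outputs.int()

y = torch.tensor([[1.2, -0.7, 3.4]])
means = torch.tensor([[1.0, -1.0, 3.0]])
print(quantize_sketch(y, "symbols", means))     # integer symbols for coding
print(quantize_sketch(y, "dequantize", means))  # reconstructed values
```

The point of the issue follows from this sketch: a bare torch.round skips the mean handling that the entropy model's quantization applies, so routing both quantization and dequantization through the same method keeps the two directions symmetric.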