Closed. Googolxx closed this issue 2 years ago.
The quantization actually happens at each location in y, at this line: https://github.com/InterDigitalInc/CompressAI/blob/278f45e6f444448d3c44d973e51f9723768afa18/compressai/models/priors.py#L551

I think I understand your concern, and the implementation might be misleading: the (zero-)padded y is only used as a placeholder, but the contexts end up using y_q, so the unquantized y is never actually consumed. We could just as well start from a "zeros" tensor like in "decompress". Did I get your point? Feel free to comment if you still think there is a bug.
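To make the order of operations clearer, here is a minimal sketch of the idea, not the actual CompressAI code. The names `context_prediction`, `entropy_parameters`, `params`, and `padding` are stand-ins for the corresponding pieces of the model, and `context_prediction` is assumed to be a MaskedConv2d-style module with a `mask` buffer. The padded y only serves as a placeholder: each location is quantized before being written back, so the masked convolution only ever reads quantized neighbours.

```python
import torch
import torch.nn.functional as F


def autoregressive_compress_sketch(y, context_prediction, entropy_parameters, params, padding):
    """Illustrative only: per-location quantization during the autoregressive pass."""
    B, C, H, W = y.size()
    kernel = 2 * padding + 1
    # Padded placeholder, overwritten location by location.
    y_hat = F.pad(y, (padding, padding, padding, padding))
    masked_weight = context_prediction.weight * context_prediction.mask
    for h in range(H):
        for w in range(W):
            # Thanks to the mask, the receptive field only uses positions that
            # have already been processed (and hence already quantized).
            y_crop = y_hat[:, :, h : h + kernel, w : w + kernel]
            ctx_p = F.conv2d(y_crop, masked_weight, bias=context_prediction.bias)
            p = params[:, :, h : h + 1, w : w + 1]
            gaussian_params = entropy_parameters(torch.cat((p, ctx_p), dim=1))
            scales_hat, means_hat = gaussian_params.chunk(2, 1)
            # Quantize the current location and write the quantized value back,
            # so later context windows never see the unquantized y.
            y_center = y_crop[:, :, padding : padding + 1, padding : padding + 1]
            y_q = torch.round(y_center - means_hat) + means_hat
            y_hat[:, :, h + padding : h + padding + 1, w + padding : w + padding + 1] = y_q
    return y_hat[:, :, padding : padding + H, padding : padding + W]
```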
Closing this for now, since I don't think anything is broken. Feel free to reopen or submit a PR if a fix is needed.
In mbt2018, y is quantized to y_hat, and then y_hat (which has been quantized) is fed into the context model. However, in https://github.com/InterDigitalInc/CompressAI/blob/278f45e6f444448d3c44d973e51f9723768afa18/compressai/models/priors.py#L535-L552, y has not been quantized (which means the "y_hat" in the code is not actually quantized) and is fed directly into the context model; y is only quantized later, during encoding. In "decompress", the y_hat that is fed into the context model is quantized. Is there an error here?
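For comparison, a minimal sketch of the training-time forward order the question refers to, where quantization happens before the context model. This assumes an mbt2018-style model; `context_prediction`, `entropy_parameters`, and `params` are stand-ins for the model's submodules and the hyperprior output, and the quantization proxy (uniform noise during training, rounding otherwise) is written inline for illustration.

```python
import torch


def forward_order_sketch(y, params, context_prediction, entropy_parameters, training=True):
    """Illustrative only: in an mbt2018-style forward pass, quantization comes first."""
    if training:
        # Additive uniform noise as the differentiable proxy for rounding.
        y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
    else:
        y_hat = torch.round(y)
    # The context model only ever sees the (approximately) quantized latent.
    ctx_params = context_prediction(y_hat)
    gaussian_params = entropy_parameters(torch.cat((params, ctx_params), dim=1))
    scales_hat, means_hat = gaussian_params.chunk(2, 1)
    return y_hat, scales_hat, means_hat
```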