Closed · JPEG2021 closed this issue 3 years ago
Hi, we don't have plans to implement it soon, but we'd gladly accept a pull request :-). So far the quick experiments I've done haven't shown better results for this class of model, so it hasn't been a priority.
The `update` function computes a finite list of quantized CDFs from the scales table for faster encoding/decoding. I'm not sure how this would apply in the GMM case, since the pmf/cdf of each latent element would depend not only on the scale but also on the weights of the GMM.
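To make the dependence concrete, here is a minimal sketch (not CompressAI code; all names are illustrative) of the pmf of one quantized latent element under a Gaussian mixture: each component contributes the probability mass of the unit-width bin around the integer symbol, weighted by its mixture weight.

```python
import math

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gmm_pmf(v: int, weights, means, scales) -> float:
    """Probability mass of integer symbol v under a quantized Gaussian
    mixture: sum_k w_k * (Phi((v + 0.5 - mu_k)/s_k) - Phi((v - 0.5 - mu_k)/s_k)).
    `weights` should sum to 1. Illustrative sketch, not the library API."""
    return sum(
        w * (std_normal_cdf((v + 0.5 - m) / s) - std_normal_cdf((v - 0.5 - m) / s))
        for w, m, s in zip(weights, means, scales)
    )

# Example: a 2-component mixture; over a wide symbol range the pmf sums to ~1.
weights, means, scales = [0.7, 0.3], [0.0, 2.0], [1.0, 0.5]
total = sum(gmm_pmf(v, weights, means, scales) for v in range(-20, 21))
```

Because the pmf depends jointly on per-element weights, means, and scales, there is no single scale-indexed table to precompute, which is exactly why `update` does not carry over directly.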
As a first experiment you can skip the `update` function and modify `compress` and `decompress` to manually compute the pmf/cdf for each element. Then there might be a way to compute a table for faster processing.
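A per-element pmf still has to be turned into an integer CDF before an entropy coder can consume it. Below is a simplified, hypothetical sketch of that quantization step (the real CompressAI helper redistributes probability mass more carefully): scale the pmf to a power-of-two total, keep every bin nonzero so each symbol stays encodable, and accumulate.

```python
def pmf_to_quantized_cdf(pmf, precision: int = 16):
    """Quantize a float pmf into an integer CDF whose total is 2**precision.
    Naive sketch for illustration only, not the library implementation."""
    total = 1 << precision
    # Round each bin, clamping to at least 1 so no symbol gets zero mass.
    quantized = [max(1, round(p * total)) for p in pmf]
    # Fix the sum so the CDF ends exactly at 2**precision.
    diff = total - sum(quantized)
    # Absorb the difference into the largest bin (naive but adequate here).
    idx = max(range(len(quantized)), key=lambda i: quantized[i])
    quantized[idx] += diff
    # Build the cumulative table: cdf[0] = 0, cdf[-1] = 2**precision.
    cdf = [0]
    for q in quantized:
        cdf.append(cdf[-1] + q)
    return cdf

# Example: a toy 4-symbol pmf quantized at 16-bit precision.
cdf = pmf_to_quantized_cdf([0.1, 0.6, 0.25, 0.05], precision=16)
```

Doing this once per element inside `compress`/`decompress` is slow but correct, and is a reasonable baseline before looking for a precomputable table.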
Thank you for the answer.
Is there any way to use the current rANS implementation without `cdf`, `cdf_lengths`, and `offsets`?
I can manually compute the pmf/cdf for each element, but these variables are currently used in the `compress` and `decompress` functions.
Thanks!
Not at the moment; you'll have to write new bindings to the C API (https://github.com/InterDigitalInc/CompressAI/blob/master/third_party/ryg_rans/rans64.h). You could also use Lucas Theis's range coder instead.
I see. Thanks.
Hi, thanks for the nice work! I have some questions about GMM-based entropy models in the CompressAI framework.
Do you have any plans to implement a GMM-based entropy model?
If not, could you give me some hints on implementing it myself?
I have implemented a GMM-based entropy model for training, but not for testing (real encoding/decoding). That is, I successfully modified the `entropy_parameter` module in the `Cheng2020` model and `_likelihood()` of `GaussianConditional`, but I have no idea how to modify `update()` of `GaussianConditional`. How should I change the `update` function for real compression? Thanks.