Closed: zihanzheng-sjtu closed this issue 11 months ago.
Each of the basic models provided targets a single R-D point, using a single fixed lambda value. The lambda value is used only during training; it is ignored during evaluation.
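For reference, here is a minimal sketch of how lambda typically enters the rate-distortion objective during training, following the convention used in CompressAI's example training script. The class name, default `lmbda`, and the `num_pixels` argument are illustrative, not an exact API:

```python
import math

import torch
import torch.nn as nn


class RateDistortionLoss(nn.Module):
    """Illustrative sketch: loss = lambda * distortion + rate."""

    def __init__(self, lmbda: float = 0.01):
        super().__init__()
        self.lmbda = lmbda
        self.mse = nn.MSELoss()

    def forward(self, x_hat, x, likelihoods, num_pixels):
        # Rate term: estimated bits per pixel from the entropy model's likelihoods
        bpp = torch.log(likelihoods).sum() / (-math.log(2) * num_pixels)
        # Distortion term: reconstruction error (255**2 scaling follows
        # CompressAI's example training script for 8-bit images)
        mse = self.mse(x_hat, x)
        # lambda trades off distortion against rate; it only matters during training
        return self.lmbda * 255**2 * mse + bpp
```

A larger lambda penalizes distortion more heavily (higher quality, higher rate); a smaller lambda favors a lower bitrate.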
For dynamic rate control, I'm not sure what scheme you're using, but here are some prior works in this area that may be of interest:
Thank you for such a quick reply. After testing, I found that when I set lambda to 0, I can still recover my pre-compression data almost exactly, which is not what I expected given your explanation. When I set lambda very large, even 1000, the size of the compressed feature does not change. During training, I observed that the rate-distortion loss decreases normally and stabilizes after reaching a certain value, but the decrease comes entirely from the bpp term, not the MSE term of the distortion. In fact, the distortion loss barely changes at all.
The above problem has been solved. However, I have a new question. I fed a [1, 18, 35, 31, 31] float32 all-zero tensor (about 2.4 MB) into the entropy model for compression, but the compressed result was still about 400 KB. That does not seem like a satisfactory result. Do you have any ideas?
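(As an aside, the raw size quoted here checks out; a quick sanity check, with the tensor shape taken from the question:)

```python
import torch

# The all-zero feature tensor from the question
y = torch.zeros(1, 18, 35, 31, 31)
raw_mb = y.numel() * y.element_size() / 1e6  # 605,430 floats * 4 bytes ≈ 2.42 MB
print(raw_mb)  # ~400 KB compressed is therefore only about a 6:1 ratio
```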
My architecture is similar to the one you provided. Since my input is already a feature, I feed it in as y. I removed the channel dimension of size 1 and used the 18 dimension as the batch size. How can I solve the problem you mentioned? I have heard that CompressAI's probability table is fixed and that I may need to update it. Could that be related to my problem?
You should call `.update()` before running `compress`/`decompress`, as is done in `compressai.utils.eval_model`. This ensures that the runtime encoder/decoder use the same distributions as the ones the model was trained with.
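For a custom model that feeds features straight into the entropy model, a minimal sketch of that call order might look like the following. The channel count and shapes follow the question above (35 channels after using the 18 dimension as batch size); treat the exact shapes as illustrative:

```python
import torch
from compressai.entropy_models import EntropyBottleneck

# Entropy model over a 35-channel feature map (shape taken from the question)
eb = EntropyBottleneck(35)
# ... train with the rate-distortion loss, then:

eb.update()  # build the quantized CDF tables used by the runtime entropy coder

y = torch.zeros(18, 35, 31, 31)               # (N, C, H, W) feature batch
strings = eb.compress(y)                      # entropy-encode to byte strings
y_hat = eb.decompress(strings, y.size()[2:])  # decode back to quantized values
```

Without the `update()` call, the CDF tables the coder uses may not match the trained distributions, which can produce arbitrarily poor (or constant-size) compressed outputs.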
Thanks for your help! I had already seen the solution you gave in other issues, but after trying it, it didn't work for me. Fortunately, I solved the problem by switching the entropy coder from ans to rangecoder and changing the relevant source code, which achieved a relatively satisfactory result. Thank you again for your help! I will close this issue.
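(For anyone attempting the same switch: recent CompressAI versions expose the coder choice at the top level, so editing the source may not be necessary if the optional `range-coder` package is installed. A minimal sketch:)

```python
import compressai

# Requires the optional dependency: pip install range-coder
print(compressai.available_entropy_coders())  # e.g. ['ans', 'rangecoder']
compressai.set_entropy_coder("rangecoder")    # switch from the default ans coder
```

The standalone `range_coder` package's encoder writes directly to a file, which is likely why it asks for a filepath when used on its own; going through CompressAI's coder selection avoids handling that yourself.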
Hi, great info on this thread. How did you build the rangecoder? It keeps asking me for a filepath.
I have some features that I hope to compress and decompress through CompressAI's entropy model, with dynamic rate control. I hope to make my features low-entropy by adding a rate-distortion loss to my main program. Unfortunately, no matter how I adjust the lambda value in the rate-distortion loss, I cannot change the size of the compressed feature. This is the result even if I compress the feature directly without training. Do you have any ideas?