InterDigitalInc / CompressAI

A PyTorch library and evaluation platform for end-to-end compression research
https://interdigitalinc.github.io/CompressAI/
BSD 3-Clause Clear License

Questions about using entropy model to compress feature #267

Closed zihanzheng-sjtu closed 9 months ago

zihanzheng-sjtu commented 9 months ago


I have some features that I would like to compress and decompress with CompressAI's entropy model, ideally with dynamic rate control. My plan is to make the features low-entropy by adding a rate-distortion loss to my main training program. Unfortunately, no matter how I adjust the lambda value in the rate-distortion loss, the size of the compressed features does not change. I get the same result even when I compress the features directly without training. Do you have any ideas?
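
Roughly, my setup looks like the following (a simplified sketch with placeholder shapes; the real feature extractor is omitted): the features go through an `EntropyBottleneck`, and its likelihoods give the bpp term of the loss.

```python
# Simplified sketch of the setup (placeholder shapes; the real feature
# extractor is omitted). The features are passed through an EntropyBottleneck
# and the returned likelihoods give the rate (bpp) term of the loss.
import math
import torch
from compressai.entropy_models import EntropyBottleneck

C = 18  # number of feature channels (placeholder value)
entropy_bottleneck = EntropyBottleneck(C)

y = torch.randn(1, C, 31, 31)                 # feature tensor to compress
y_hat, y_likelihoods = entropy_bottleneck(y)  # quantized features + likelihoods

num_pixels = y.size(0) * y.size(2) * y.size(3)
bpp_loss = torch.log(y_likelihoods).sum() / (-math.log(2) * num_pixels)
```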

YodaEmbedding commented 9 months ago

The basic models provided are all intended to target a single R-D point only, using a single fixed lambda value. This lambda value is only used during training. It is ignored during evaluation.

  1. Did you train the model using the R-D loss with a single fixed lambda? Did you observe the rate/distortion changing during training? Are all the layers receiving the distortion gradients? Are all the encoder layers receiving the rate gradients?
  2. Did you try a reasonably large range of lambdas? e.g., $\lambda = 0$ should give a model with rate = 0 and 0 dB PSNR. For image compression, training with $\lambda > 0.1$ should result in a fairly high-rate, high-PSNR model. If this isn't the case, then something is wrong with the model, the criterion, or the training loop. (A sketch of a typical criterion is shown below.)
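
For reference, a typical rate-distortion criterion looks roughly like this (a sketch that mirrors the structure of the example training code; the `255**2` scaling assumes images in `[0, 1]`, so adapt the distortion term to your feature-reconstruction loss):

```python
# Sketch of a rate-distortion criterion with a single fixed lambda.
# `output` is assumed to be the dict returned by a CompressAI model forward,
# i.e. {"x_hat": ..., "likelihoods": {...}}.
import math
import torch
import torch.nn as nn

class RateDistortionLoss(nn.Module):
    def __init__(self, lmbda=1e-2):
        super().__init__()
        self.mse = nn.MSELoss()
        self.lmbda = lmbda

    def forward(self, output, target):
        N, _, H, W = target.size()
        num_pixels = N * H * W
        out = {}
        # rate: sum of -log2(likelihoods) over all latents, per pixel
        out["bpp_loss"] = sum(
            torch.log(likelihoods).sum() / (-math.log(2) * num_pixels)
            for likelihoods in output["likelihoods"].values()
        )
        out["mse_loss"] = self.mse(output["x_hat"], target)
        # the 255**2 factor is specific to images in [0, 1]; drop or rescale it for features
        out["loss"] = self.lmbda * 255**2 * out["mse_loss"] + out["bpp_loss"]
        return out
```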

For the dynamic rate control, I'm not sure what scheme you're using, but there are some prior works in this area that may be of interest.

zihanzheng-sjtu commented 9 months ago

Thank you for such a quick reply. After testing, I found that when I set lambda to 0, I can still recover my pre-compression data almost exactly, unlike what you described. When I set lambda very large, even 1000, the size of the compressed feature still does not change. During training, I observed that the rate-distortion loss decreases normally and stabilizes after reaching a certain value, but only the bpp term decreases, not the MSE term. In fact, the distortion loss barely changes at all.

zihanzheng-sjtu commented 9 months ago

The above problem has been solved. However, I have a new question. I passed a [1, 18, 35, 31, 31] all-zero float32 tensor (about 2.4 MB raw) through the entropy model for compression, but the compressed result was still about 400 KB. That does not seem like a satisfactory result. Do you have any ideas?
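
For reference, the numbers work out roughly as follows:

```python
# Quick arithmetic for the sizes mentioned above.
num_elements = 1 * 18 * 35 * 31 * 31            # 605,430 elements
raw_bytes = num_elements * 4                    # float32 -> ~2.42 MB
observed_bytes = 400 * 1024                     # ~400 KB bitstream
bits_per_element = observed_bytes * 8 / num_elements
print(f"{raw_bytes / 1e6:.2f} MB raw, {bits_per_element:.1f} bits/element")
# ~2.42 MB raw, ~5.4 bits/element -- far below the raw size, but still
# surprisingly high if the entropy model puts most probability mass near zero.
```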

YodaEmbedding commented 9 months ago
zihanzheng-sjtu commented 9 months ago

My architecture is similar to the one you gave, except that my input is already a feature, so I feed it in as y. I removed the leading dimension of size 1 and used the 18 channels as the batch size. How can I solve the problem you mentioned? I have heard that CompressAI's probability tables are fixed and apparently need to be updated. Is that related to my problem?

YodaEmbedding commented 9 months ago

You should call .update() before running compress/decompress, as is done in compressai.utils.eval_model. This ensures that the runtime encoder/decoder use the same distributions as the ones the model was trained for.
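
A minimal sketch of the call order, assuming a pretrained image model from `compressai.zoo` (the same pattern applies to a custom `CompressionModel` wrapping your own `EntropyBottleneck`):

```python
# Minimal sketch: rebuild the CDF tables with .update() before compress/decompress.
# Assumes a pretrained model from compressai.zoo; adapt to your own model.
import torch
from compressai.zoo import bmshj2018_factorized

net = bmshj2018_factorized(quality=3, pretrained=True).eval()
net.update(force=True)  # sync the entropy coder's tables with the learned distributions

x = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    out_enc = net.compress(x)   # {"strings": [...], "shape": ...}
    x_hat = net.decompress(out_enc["strings"], out_enc["shape"])["x_hat"]

num_bytes = sum(len(s) for strings in out_enc["strings"] for s in strings)
print(f"compressed size: {num_bytes} bytes")
```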

zihanzheng-sjtu commented 9 months ago

Thanks for your help! I had already seen the solution you gave in other issues, but after trying it, it didn't work for me. Fortunately, I solved the problem by switching the entropy coder from ANS to the range coder and changing the relevant source code, which gave a reasonably satisfactory result. Thank you again for your help! I will close this issue.
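
For reference, the coder switch can apparently also be done without editing the source, via the public API (a sketch, assuming the optional `range-coder` package is installed so that "rangecoder" is listed as available):

```python
# Sketch: selecting the entropy coder backend through the public API instead of
# patching the source. Assumes the optional `range-coder` package is installed,
# which makes "rangecoder" show up among the available coders.
import compressai

print(compressai.available_entropy_coders())  # e.g. ["ans", "rangecoder"]
compressai.set_entropy_coder("rangecoder")    # set before constructing the model
```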