DeepMC-DCVC / DCVC


About lambda in training #11

Open fanqiNO1 opened 2 years ago

fanqiNO1 commented 2 years ago

Because the loss is \lambda D + R, even when I use \lambda=256 with MSE loss, the \lambda D term is much larger than the R term, which makes bpp_mv_y go to 0.

How can I solve this problem?

Besides, I am following the progressive training steps, but I already run into this problem in step 1.
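For reference, here is a rough sketch of the loss I am computing, assuming a PyTorch setup where the total rate R is the sum of the four bpp terms; the function name, tensor shapes, and bpp values are just placeholders, not the repository's actual code.

```python
import torch

def rd_loss(x, x_hat, bpp_y, bpp_z, bpp_mv_y, bpp_mv_z, lmbda=256.0):
    """Rate-distortion loss lambda * D + R, with MSE distortion and the
    total rate taken as the sum of the four bpp terms (an assumption)."""
    mse = torch.mean((x - x_hat) ** 2)          # distortion D (MSE)
    rate = bpp_y + bpp_z + bpp_mv_y + bpp_mv_z  # total rate R in bits per pixel
    return lmbda * mse + rate

# Dummy example: with lambda = 256 and pixel values in [0, 1], the distortion
# term dominates the rate term early in training, which is the imbalance I see.
x = torch.rand(1, 3, 64, 64)
x_hat = torch.rand(1, 3, 64, 64)
loss = rd_loss(x, x_hat,
               bpp_y=torch.tensor(0.10), bpp_z=torch.tensor(0.01),
               bpp_mv_y=torch.tensor(0.02), bpp_mv_z=torch.tensor(0.005),
               lmbda=256.0)
print(loss.item())
```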

tzayuan commented 1 year ago

Hi, @fanqiNO1

Do you have any updates? I'm trying progressive training, but the bpp_y and bpp_mv_y losses do not decrease, and the bpp_mv_z and bpp_z losses stay at zero. Do you have any ideas? Thanks.

BRs, tzayuan