Open fanqiNO1 opened 2 years ago
Hi @fanqiNO1,
Do you have any update? I'm trying progressive training, but the bpp_y and bpp_mv_y losses won't decrease, and bpp_mv_z and bpp_z stay at zero. Do you have any ideas? Thanks.
BRs, tzayuan
Because the loss is \lambda D + R, even when I use \lambda = 256 with the MSE loss, the \lambda D term is much larger than the R term, which drives bpp_mv_y to 0.
I wonder how I can solve this problem?
Besides, I am following the progressive training steps, but I already run into this problem in step 1.
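For concreteness, here is a minimal numeric sketch of the imbalance being described (all values below are hypothetical placeholders for illustration, not taken from the actual code or training run):

```python
# Sketch of the rate-distortion loss discussed above: loss = lambda * D + R.
# Every number here is an illustrative assumption, not from the repo.

lmbda = 256.0          # the lambda value mentioned above
mse = 0.01             # hypothetical distortion (MSE) early in training

# hypothetical per-component bits-per-pixel values
bpp_y, bpp_z = 0.10, 0.01
bpp_mv_y, bpp_mv_z = 0.05, 0.005

distortion_term = lmbda * mse                        # 2.56
rate_term = bpp_y + bpp_z + bpp_mv_y + bpp_mv_z      # 0.165

loss = distortion_term + rate_term
# Because the distortion term dominates, gradient descent mostly optimizes D,
# and the small rate terms (e.g. bpp_mv_y) can be pushed toward 0.
print(distortion_term, rate_term, loss)
```

With numbers like these the distortion term is more than an order of magnitude larger than the rate term, which matches the behavior described: the optimizer has little incentive to allocate bits to the motion-vector latents.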