Open shreyaskamathkm opened 2 years ago
Hi,
Thank you for pointing this out. This might actually be a bug (it was also present when AdaBins was trained). The OneCycleLR scheduler should receive the lrs for all param groups, as it overwrites the originals!
However, this also means that the original AdaBins was actually trained with the settings intended for `same_lr`.
Hi,
Thank you so much for publishing the code. This work is excellent and has achieved state-of-the-art performance. However, when I tried to retrain the model with my training code built with PyTorch Lightning, I was unable to reach the results reported in the paper.
I think I have found a bug in the code, which may be why I could not achieve the desired result. When the `args.same_lr` flag is set to false, the encoder and decoder are supposed to have different learning rates; however, the learning rate is still the same for both. The `OneCycleLR` used in this line expects a list of lrs in order to apply different learning rates to the encoder and decoder. But because a single scalar is passed, the lr is identical for the encoder and decoder even when `args.same_lr` is set to false. When I print out `scheduler.optimizer`, I receive the following. When I replace the section with the following:
Notice that the output of `scheduler.optimizer` changes to the following. Would you please let me know whether this behavior is intentional?
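For illustration, here is a minimal sketch of the kind of change being described (the variable names such as `base_lr` and the lr ratio are illustrative, not taken from the AdaBins code): when `OneCycleLR` is given a single scalar `max_lr`, it overwrites every param group with the same schedule, whereas passing a list of max lrs, one per param group, preserves distinct encoder/decoder learning rates.

```python
import torch

# Stand-ins for the encoder and decoder of the depth model.
encoder = torch.nn.Linear(4, 4)
decoder = torch.nn.Linear(4, 4)

base_lr = 1e-4
params = [
    {"params": encoder.parameters(), "lr": base_lr / 10},  # smaller encoder lr
    {"params": decoder.parameters(), "lr": base_lr},       # full decoder lr
]
optimizer = torch.optim.AdamW(params, lr=base_lr)

# Passing a list of max lrs (one per param group) keeps the
# encoder/decoder ratio; a scalar here would collapse both
# groups onto the same schedule.
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer,
    max_lr=[base_lr / 10, base_lr],
    total_steps=100,
)

for group in optimizer.param_groups:
    print(group["lr"])  # the two groups now have different lrs
```

Printing `scheduler.optimizer` (or the param groups, as above) then shows two distinct `lr` values instead of one repeated value.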
Thank you, Best, SK