Closed ravinkohli closed 3 years ago
yeah, I think the results of the new AutoPyTorch will quite probably be even better than the old one now ;) We should port the fix from the cocktail branch as fast as possible
Especially considering that the range of T_0 is from 0 to 20
Hey, LR schedulers should be stepped either epoch-wise or batch-wise depending on their type. Currently, steps only happen batch-wise, which is incompatible with CosineAnnealingWarmRestarts, see here.
One possible solution: make `{'step': 'epochs'}` a property of each scheduler, and use it in the base trainer to decide when to make a step. I think this is an urgent fix.
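A minimal sketch of that idea, with hypothetical names (`SchedulerWrapper`, `run_training`, and the dummy scheduler are illustrative, not the actual AutoPyTorch API): each scheduler component declares the interval at which it must be stepped, and the base trainer's loop consults that flag instead of hard-coding batch-wise stepping. A torch scheduler such as `CosineAnnealingWarmRestarts` would be wrapped with `step_interval="epoch"`.

```python
class DummyScheduler:
    """Stand-in for a torch LR scheduler (e.g. CosineAnnealingWarmRestarts);
    it only counts how often step() is called."""

    def __init__(self):
        self.steps = 0

    def step(self):
        self.steps += 1


class SchedulerWrapper:
    """Pairs a scheduler with the interval at which it must step.
    This is the {'step': 'epochs'} property from the proposal, expressed
    as an attribute the trainer can query."""

    def __init__(self, scheduler, step_interval: str):
        if step_interval not in ("epoch", "batch"):
            raise ValueError(f"unknown step interval: {step_interval!r}")
        self.scheduler = scheduler
        self.step_interval = step_interval


def run_training(wrapper: SchedulerWrapper, n_epochs: int, batches_per_epoch: int):
    """Simplified trainer loop: the scheduler is stepped once per batch or
    once per epoch, depending on the wrapper's declared interval."""
    for _epoch in range(n_epochs):
        for _batch in range(batches_per_epoch):
            # ... forward / backward / optimizer.step() would go here ...
            if wrapper.step_interval == "batch":
                wrapper.scheduler.step()
        if wrapper.step_interval == "epoch":
            wrapper.scheduler.step()
```

With this in place, an epoch-wise scheduler run for 3 epochs of 4 batches is stepped exactly 3 times, while a batch-wise one is stepped 12 times.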