Hi there, I recently came across a small hiccup while using the `luz` package (kudos on the great work! 😃). When using the `reduce_on_plateau` learning rate scheduler, an exception is raised because `luz_callback_lr_scheduler` does not pass the `current_loss` to schedulers that require it.
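For context, here is a minimal sketch of the kind of setup that triggers the exception; the toy model, data, and scheduler arguments are illustrative placeholders rather than my actual code:

```r
library(torch)
library(luz)

# A tiny regression model; the architecture is incidental, it only exercises the callback.
net <- nn_module(
  initialize = function() {
    self$fc <- nn_linear(10, 1)
  },
  forward = function(x) {
    self$fc(x)
  }
)

# Random data wrapped in a dataloader.
x <- torch_randn(100, 10)
y <- torch_randn(100, 1)
dl <- dataloader(tensor_dataset(x, y), batch_size = 25)

fitted <- net |>
  setup(loss = nn_mse_loss(), optimizer = optim_adam) |>
  fit(
    dl,
    epochs = 5,
    callbacks = list(
      # The reduce-on-plateau scheduler needs the current loss when its
      # step() method is called, which the callback does not supply,
      # hence the error.
      luz_callback_lr_scheduler(lr_reduce_on_plateau, patience = 2)
    )
  )
```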
This is similar to an issue I encountered in the tabnet package (see https://github.com/mlverse/tabnet/pull/120). I've put together a corresponding fix and submitted this pull request to resolve the bug.
Please let me know if there's anything else I can provide or do to support this pull request.