Closed rfriedman22 closed 2 years ago
Hi Ryan,
Sorry for the late response! Yes I think that's an appropriate way to add it for now - do you have some working code for this already you could make a pull request with?
Thanks Kathy! I wrote a hack to do this in my own code and it works. I haven't incorporated it into the actual SDK yet -- was waiting to get confirmation that my approach seemed reasonable -- but I will work on this and make a PR soon.
I was hoping to use a different learning rate scheduler than the default implementation (reduce the learning rate by a factor of 0.8 after the validation loss plateaus, with a patience of 16 epochs). Unfortunately, it seems like adding this to the codebase might be rather hairy. My current idea is to add the scheduler information (except for the optimizer) to the config file, then pass a `_Proxy` object with that information to `TrainModel`. Then the optimizer can be bound to the proxy, and a scheduler instance can be created when `_init_train` is called. Does this seem like an appropriate approach, or is there a better design solution?

It would also be useful to add support for early stopping. This seems fairly straightforward -- all that is needed is config information about which metric should be used for early stopping and how much patience to allow. That information can be bound to `TrainModel` and used within `train_and_validate`.
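The proxy idea above could be sketched roughly like this. Everything here is hypothetical -- `SchedulerProxy`, its `bind` method, and the dummy scheduler class are illustration names, not part of the SDK; in practice `scheduler_class` would be something like `torch.optim.lr_scheduler.ReduceLROnPlateau` with `factor=0.8, patience=16`, constructed from the config:

```python
class SchedulerProxy:
    """Holds scheduler configuration until an optimizer exists.

    Hypothetical sketch: the config loader would build this object
    (everything except the optimizer), and _init_train would call
    bind() once the optimizer has been created.
    """

    def __init__(self, scheduler_class, **kwargs):
        self.scheduler_class = scheduler_class
        self.kwargs = kwargs

    def bind(self, optimizer):
        """Instantiate the real scheduler now that the optimizer is known."""
        return self.scheduler_class(optimizer, **self.kwargs)


# Stand-in for e.g. torch.optim.lr_scheduler.ReduceLROnPlateau,
# so the sketch runs without PyTorch installed.
class DummyScheduler:
    def __init__(self, optimizer, factor=0.1, patience=10):
        self.optimizer = optimizer
        self.factor = factor
        self.patience = patience


# Built from config values; no optimizer is available yet.
proxy = SchedulerProxy(DummyScheduler, factor=0.8, patience=16)

# Later, inside something like _init_train, once the optimizer exists:
optimizer = "opt"  # placeholder for a real optimizer object
scheduler = proxy.bind(optimizer)
```

This keeps the config file free of any reference to the optimizer object, which only exists at training time.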
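Early stopping could be sketched in a similarly self-contained way. Again, `EarlyStopper` and its parameters are made-up names for illustration; the `patience` and `mode` values would come from the configs, and `step()` would be called once per validation round inside something like `train_and_validate`:

```python
class EarlyStopper:
    """Signals stop after `patience` validation checks without improvement.

    Hypothetical sketch: `patience` and `mode` (min for losses,
    max for metrics like AUC) would be read from the config file.
    """

    def __init__(self, patience=10, mode="min"):
        self.patience = patience
        self.mode = mode
        self.best = None
        self.num_bad = 0

    def step(self, value):
        """Record one validation metric; return True if training should stop."""
        improved = (
            self.best is None
            or (self.mode == "min" and value < self.best)
            or (self.mode == "max" and value > self.best)
        )
        if improved:
            self.best = value
            self.num_bad = 0
        else:
            self.num_bad += 1
        return self.num_bad >= self.patience


# Simulated validation losses: improves twice, then stalls.
stopper = EarlyStopper(patience=2, mode="min")
signals = [stopper.step(v) for v in [1.0, 0.9, 0.95, 0.96]]
# signals -> [False, False, False, True]
```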