Closed · jan1854 closed this issue 3 years ago
While we are on the topic of ModelTrainer, it would be nice if the threshold for improvement could be specified when calling ModelTrainer.train(). Right now threshold is a parameter of ModelTrainer.maybe_get_best_weights(), but not of ModelTrainer.train(). Since different applications deal with different scales of evaluation scores (and relative improvements of these scores), it would be nice to have a bit more flexibility here.
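For illustration, a rough sketch of what this could look like (the improvement_threshold name and the surrounding signature are placeholders, not mbrl-lib's actual API):

```python
from typing import Dict, Optional

import torch


class ModelTrainer:  # illustrative stand-in, not the actual mbrl-lib class
    def maybe_get_best_weights(
        self,
        best_val_score: torch.Tensor,
        val_score: torch.Tensor,
        threshold: float = 0.01,
    ) -> Optional[Dict]:
        ...  # existing logic, unchanged

    def train(
        self,
        dataset_train,
        dataset_val=None,
        num_epochs: Optional[int] = None,
        improvement_threshold: float = 0.01,  # new: exposed on train()
    ):
        ...
        # Inside the training loop, the new argument would simply be forwarded:
        # best_weights = self.maybe_get_best_weights(
        #     best_val_score, val_score, threshold=improvement_threshold
        # )
```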
Good point, I never considered this particular case. Do you want to submit a pull request? You've been reporting bugs/fixes for a while, might as well get some contribution credit :)
https://github.com/facebookresearch/mbrl-lib/blob/621832fe321a427480a7ce0323caaf56212705a9/mbrl/models/model_trainer.py#L220

The above calculation of the relative improvement of the evaluation score in ModelTrainer seems to be wrong for negative evaluation scores. This can be fixed by adding a torch.abs() around the divisor.

Steps to reproduce
Observed Results
model_trainer.maybe_get_best_weights() returns None, which should indicate that the evaluation value did not improve from previous_eval_value to current_eval_value.

Expected Results
The relative improvement from previous_eval_value to current_eval_value is 900%. Thus, model_trainer.maybe_get_best_weights() should return the parameters of the model, which would indicate that the evaluation value improved.
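A sketch of the proposed one-line fix, continuing from the variables in the reproduction snippet above (the names come from this report's description, not necessarily the local variables used in maybe_get_best_weights()):

```python
# Before (sketch): a negative divisor flips the sign of the improvement ratio.
improvement = (previous_eval_value - current_eval_value) / previous_eval_value

# After (sketch): taking the absolute value of the divisor preserves the sign of
# the numerator, so a genuine improvement stays positive for negative scores too.
improvement = (previous_eval_value - current_eval_value) / torch.abs(previous_eval_value)
```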