Open arthur-thuy opened 1 year ago
That makes sense! Something like `wrapper.train_and_test_on_datasets(eval_set='val')`?
For backward compatibility, we would still keep `test` as the default.
What do you think?
That would be a good solution in my opinion!
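A hedged sketch of how such an `eval_set` argument could route metrics. This is a hypothetical helper, not baal's actual implementation; the `log_eval_metrics` name and the flat-dict storage format are assumptions made for illustration:

```python
def log_eval_metrics(metrics: dict, results: dict, eval_set: str = "test") -> dict:
    """Record evaluation results under the chosen prefix.

    eval_set="test" stays the default for backward compatibility;
    eval_set="val" keeps validation numbers from colliding with test ones.
    """
    if eval_set not in ("val", "test"):
        raise ValueError("eval_set must be 'val' or 'test'")
    for name, value in results.items():
        metrics[f"{eval_set}_{name}"] = value
    return metrics


metrics = {}
log_eval_metrics(metrics, {"loss": 0.42}, eval_set="val")  # validation pass
log_eval_metrics(metrics, {"loss": 0.37})                  # test pass (default)
# metrics now holds both entries: {'val_loss': 0.42, 'test_loss': 0.37}
```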
Is your feature request related to a problem? Please describe.
The MetricMixin class only creates "train" and "test" metrics in the `add_metric` method. This works fine when only using a training and test set. However, when also using a validation set, such as in the snippets below, this presents a problem: the true validation metrics are recorded as "test" and are later overwritten by the true test metrics, which are also recorded as "test".
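To make the collision concrete, here is a minimal stand-in for the metric store (a plain dict; `run_eval` is a hypothetical stand-in for `test_on_dataset`, not baal code):

```python
metrics = {}

def run_eval(metrics: dict, loss: float) -> None:
    # MetricMixin only knows the "train" and "test" prefixes,
    # so every evaluation lands under "test_*".
    metrics["test_loss"] = loss

run_eval(metrics, 0.42)                    # validation pass: recorded as "test"
val_loss_at_this_point = metrics["test_loss"]
run_eval(metrics, 0.37)                    # real test pass silently overwrites it
# metrics["test_loss"] is now 0.37; the 0.42 validation value is gone
```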
Describe the solution you'd like
It would be nice if the `test_on_dataset` and `train_and_test_on_datasets` functions had an argument to specify which metric prefix is written ("val" or "test").

Describe alternatives you've considered
A simple but cumbersome solution is to create a dict and copy all the "test" metrics that actually correspond to validation metrics into the dict as "val", as follows:
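A minimal sketch of that workaround, assuming the metrics are exposed as a flat dict of "test_*" entries (the dict layout and the `copy_test_as_val` helper name are assumptions, not baal API):

```python
def copy_test_as_val(metrics: dict) -> dict:
    """Snapshot the 'test_*' entries, which at this point actually hold
    validation results, under 'val_*' keys before the real test run
    overwrites them."""
    return {k.replace("test", "val", 1): v
            for k, v in metrics.items()
            if k.startswith("test")}

# After evaluating on the validation set:
val_metrics = copy_test_as_val({"test_loss": 0.42, "test_acc": 0.81})
# → {'val_loss': 0.42, 'val_acc': 0.81}
```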
Additional context /