Is your feature request related to a problem? Please describe.
I need a way to visualize how my model learns during training, i.e., to compare training loss against test loss over the course of training.
Describe the solution you'd like
An event handler (or callback) that makes the loss available at each training iteration.
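For illustration only, a hypothetical shape such an API could take. None of these members exist in ML.NET today; `OnIterationCompleted` and the `e.Iteration` / `e.TrainingLoss` / `e.ValidationLoss` properties are invented names to show the idea:

```csharp
// Hypothetical API sketch -- this event and its argument type do not exist
// in ML.NET today; they illustrate the requested feature.
var trainer = ctx.BinaryClassification.Trainers.LightGbm(options);
trainer.OnIterationCompleted += (sender, e) =>
{
    // e.Iteration, e.TrainingLoss, e.ValidationLoss are invented members.
    Console.WriteLine($"iter {e.Iteration}: train={e.TrainingLoss}, valid={e.ValidationLoss}");
};
var model = pipeline.Append(trainer).Fit(training);
```

The collected points could then be plotted as the usual train/validation loss curves.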
Describe alternatives you've considered
Training the model for x epochs, evaluating it, then continuing training in a loop. Unfortunately this does not work for all models; for example, LightGbm cannot be retrained.
```csharp
var kfold = ctx.BinaryClassification.CrossValidate(training, estimator, param.kfold);
var bestModel = kfold.OrderByDescending(p => p.Metrics.Accuracy).Select(p => p.Model).First();
var testOutput = bestModel.Transform(test);
var metrics = ctx.BinaryClassification.Evaluate(testOutput);
// This line doesn't compile: estimator is IEstimator<ITransformer> and bestModel
// is ITransformer. Not sure how you would continue training from the fitted model...
estimator = bestModel;
```
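One workaround that also covers LightGbm: since ML.NET estimators are immutable training recipes rather than resumable trainers, the learning curve can be approximated by fitting a fresh model for each iteration budget. A rough sketch, assuming `ctx` (an `MLContext`), `training`/`test` IDataViews, and a featurization pipeline `features` already exist; the iteration values are arbitrary placeholders:

```csharp
// Sketch: approximate train/test curves by refitting from scratch with a
// growing iteration budget, instead of resuming a trained model.
foreach (var iterations in new[] { 10, 25, 50, 100, 200 })
{
    var trainer = ctx.BinaryClassification.Trainers.LightGbm(
        new LightGbmBinaryTrainer.Options { NumberOfIterations = iterations });
    var model = features.Append(trainer).Fit(training);

    var trainMetrics = ctx.BinaryClassification.Evaluate(model.Transform(training));
    var testMetrics = ctx.BinaryClassification.Evaluate(model.Transform(test));
    Console.WriteLine($"{iterations} iters: train logloss={trainMetrics.LogLoss}, test logloss={testMetrics.LogLoss}");
}
```

The obvious downside is cost: total work is quadratic in the largest iteration count, which is exactly why a per-iteration callback would be preferable.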
Additional context
Ultimately, I am trying to analyze what the models I'm comparing are actually doing, and so far I haven't found any documentation or straightforward way to do that.
![image](https://github.com/dotnet/machinelearning/assets/33850520/79ca7603-a42b-4746-a8b3-76e7f2c3aa2a)