letmaik opened 4 years ago
Actually, for some reason the mse_obs list of metric observations always has size 1, corresponding to the metrics of the last epoch, and oddly the console reports this metric value for best step 0. It seems as if Keras Tuner has no full visibility into the training process, only the final result. Am I doing something wrong? Is there a setting to record metrics for each epoch? Also, are steps the same as epochs?
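For reference, here is a minimal sketch of how such observations can be read back from a finished search. The attributes used below (oracle.get_best_trials, trial.metrics.get_history, obs.step) are keras-tuner internals, and tuner is assumed to be an already-run Tuner instance; this is an illustration, not the exact snippet in question.

```python
# Sketch: reading recorded metric observations back from a finished search.
# "tuner" is assumed to be an already-run keras-tuner Tuner instance, and the
# attributes below are internal/undocumented details that may change.
best_trial = tuner.oracle.get_best_trials(num_trials=1)[0]
mse_obs = best_trial.metrics.get_history("mse")

print(len(mse_obs))      # -> 1: a single observation instead of one per epoch
print(mse_obs[0].step)   # -> 0, matching the "best step 0" reported on the console
```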
@letmaik Thanks for the issue!
Could you provide more context on your use case? It's possible that handling this via a Callback during training is the easiest way to get at that information.
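A minimal sketch of the Callback route might look like the following. The tuner.search usage is an assumption about how the search is driven, and note that, depending on the keras-tuner version, callbacks passed to search() may be copied per trial/execution, which is why the history is kept in a module-level dict rather than on the callback instance.

```python
import collections
import itertools

import tensorflow as tf

# Module-level store: run index -> list of per-epoch metric dicts.
# Kept outside the callback instance because keras-tuner may copy
# callbacks for each trial/execution.
RUN_HISTORIES = collections.defaultdict(list)
_RUN_COUNTER = itertools.count()

class EpochMetricsLogger(tf.keras.callbacks.Callback):
    """Records the metrics Keras reports at the end of every epoch."""

    def on_train_begin(self, logs=None):
        # Each call to fit() (i.e. each trial execution) gets a fresh run id.
        self._run_id = next(_RUN_COUNTER)

    def on_epoch_end(self, epoch, logs=None):
        RUN_HISTORIES[self._run_id].append(dict(logs or {}))

# Hypothetical usage with an already-constructed tuner:
# tuner.search(x_train, y_train, validation_data=(x_val, y_val),
#              epochs=10, callbacks=[EpochMetricsLogger()])
# RUN_HISTORIES[0] then holds the per-epoch metrics of the first run.
```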
My context is that I'm not only interested in the last/best metric values but also want to see what the training curve looks like for a given trial. I'm not sure how I would do this via callbacks, since I'd somehow need to keep track of which trial I'm in. It seems like this should be handled by the MetricsTracker/MetricHistory. Let me know if this doesn't make sense, though; it could be that I'm completely off here.
I'm also interested in getting the values for each epoch the tuner uses in its search. I'm interested in the variability within the data, not just the average score for each combination of hyperparameters.
I need to retrieve the per-epoch training history of the metrics for each trial. I'm aware that this is not exposed in the public API yet, but since I need it I had a go anyway:
This works, but seems quite cumbersome. Are there plans to expose the histories in a more high-level way? Something along the lines of
trial.get_history(aggregate='mean')
which simply returns a dictionary mapping metric name to a list of floats. If aggregate is not given or None,
it could return a dictionary mapping metric name to a list of lists of floats, where the outer list is per execution.
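To make the intent concrete, here is a rough sketch of what such a helper could look like today, written against keras-tuner's internal (undocumented) MetricsTracker. The names used (trial.metrics.metrics, get_history, obs.value, obs.step) are internals and an assumption on my part, so treat this purely as an illustration of the proposed return shapes.

```python
import statistics

def get_history(trial, aggregate="mean"):
    """Sketch of the proposed API: metric name -> per-epoch values for a trial."""
    result = {}
    for name in trial.metrics.metrics:                  # registered metric names
        observations = trial.metrics.get_history(name)  # one observation per recorded step
        observations = sorted(observations, key=lambda obs: obs.step)
        # Each observation holds one value per execution of the trial.
        per_step = [list(obs.value) for obs in observations]
        if aggregate == "mean":
            # One float per step/epoch, averaged over executions.
            result[name] = [statistics.mean(values) for values in per_step]
        elif aggregate is None:
            # Outer list per execution, inner list per step/epoch.
            result[name] = [list(run) for run in zip(*per_step)]
        else:
            raise ValueError(f"Unsupported aggregate: {aggregate!r}")
    return result
```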