Closed nerkulec closed 3 months ago
Is this still a problem? If so, which data/training script was used?

The `parameters.running.after_before_training_metric` should not affect what is happening here, since it is only evaluated before and after the training. If the error occurs within the training loop, it must be related to the `during_training` metric.

I tested this with my data and an example script on my machine and saw no problems, so could you provide additional information on how to reproduce the error?
When running TPE hyperparameter optimization, the validation data loss (during training) is always 0. This leads to the training always terminating after `early_stopping_epochs`, since the validation loss never improves.
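To illustrate why a constant-zero validation loss triggers this behavior, here is a minimal, generic early-stopping sketch (this is not MALA's actual implementation; the function name and structure are hypothetical, only the `early_stopping_epochs` idea is taken from the thread):

```python
def train_with_early_stopping(val_losses, early_stopping_epochs):
    """Return the number of epochs run before patience-based early stopping.

    Generic sketch: training stops once the validation loss has failed to
    improve for `early_stopping_epochs` consecutive epochs.
    """
    best = float("inf")
    epochs_without_improvement = 0
    epochs_run = 0
    for loss in val_losses:
        epochs_run += 1
        if loss < best:
            best = loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= early_stopping_epochs:
                break
    return epochs_run


# A loss that is always 0 "improves" exactly once (0 < inf) and then never
# again, so training halts right after the patience window, regardless of
# how the model is actually doing.
print(train_with_early_stopping([0.0] * 20, early_stopping_epochs=3))  # → 4

# A genuinely decreasing loss runs to completion.
print(train_with_early_stopping([5.0, 4.0, 3.0, 2.0, 1.0],
                                early_stopping_epochs=3))  # → 5
```

This matches the reported symptom: with the validation loss stuck at 0, the run always terminates `early_stopping_epochs` epochs after the first one.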