DanielYang59 closed this issue 1 year ago
Hi @sushreebarsa , thanks for following up.
I realized yesterday that this might not be an issue with Keras Tuner. Instead, the behavior seems to be expected: I was adjusting the number of Conv layers during tuning, so variance in training time between trials should be normal, if I understand correctly?
Thanks for your time and wishing you all the best.
Regards, Haoyu
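Since the number of Conv layers is itself a hyperparameter here, each trial trains a different-sized network, so per-epoch time naturally differs between trials. A back-of-envelope sketch of that effect (all costs below are illustrative assumptions, not measurements from this run):

```python
# Rough per-epoch cost model: a fixed overhead plus a per-conv-layer cost.
# Both numbers are illustrative assumptions, not measured values.
BASE_SECONDS = 10.0        # data pipeline, dense head, bookkeeping, etc.
SECONDS_PER_CONV = 6.0     # assumed extra time each conv layer adds

def epoch_seconds(num_conv_layers: int) -> float:
    """Estimated seconds per epoch for a model with the given conv depth."""
    return BASE_SECONDS + SECONDS_PER_CONV * num_conv_layers

# Layer counts a tuner might pick across different trials:
for layers in (1, 2, 3, 4):
    print(f"{layers} conv layer(s): ~{epoch_seconds(layers):.0f}s per epoch")
```

Under this toy model a 4-layer trial takes roughly twice as long per epoch as a 1-layer trial, which is the kind of trial-to-trial variance described in this issue.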
System information.
Describe the problem.
Getting the warning

WARNING:tensorflow:Callback method "on_train_batch_end" is slow compared to the batch time (batch time: 0.1608s vs "on_train_batch_end" time: 0.2945s). Check your callbacks.

even though no callbacks were set. Training is significantly slowed down, and training time varies randomly between trials.
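TensorFlow emits this warning when the time spent inside a callback hook is large relative to the batch step itself; even with no user callbacks, built-in ones (such as the progress bar and `History`) still run. A simplified sketch of that comparison (the threshold value is an assumption for illustration, not TensorFlow's exact internal constant):

```python
def is_callback_slow(batch_time: float, hook_time: float,
                     threshold: float = 0.5) -> bool:
    """Return True when a callback hook takes a large fraction of the
    batch time. The 0.5 threshold is an assumed value for illustration,
    not TensorFlow's exact internal constant."""
    return hook_time > threshold * batch_time

# The times reported in the warning from this issue: here the hook
# (0.2945s) actually exceeds the batch time itself (0.1608s).
print(is_callback_slow(batch_time=0.1608, hook_time=0.2945))  # True
```

When the hook time exceeds the batch time, as in the log above, the callback overhead roughly doubles the wall-clock cost of each step, which matches the reported slowdown.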
Describe the current behavior. Training is significantly slowed down, and training time varies significantly between trials.
Describe the expected behavior. Training speed should be stable, and no significant variance in training time is expected between epochs.
Contributing.
Standalone code to reproduce the issue.
In "hp_model", a hypermodel with eight hyperparameters is defined (should I search over so many parameters at the same time?); the complete source code is enclosed as "hp_model.py".
Source code / logs.
This is the log file for the training process: tunerlog.txt.
Here is the source code for the hypermodel and tuning process: src.zip