Currently, the output of the pipeline is saved for each epoch. This is useful, but that loss history is already kept in the model's `loss_history` variable. It is much more relevant to save the best R² per run, after the run has finished.
[x] Modify the `fit_model.py` pipeline to write one final row per value of `n_epochs`, e.g. `n_epochs: [10, 20, 30]` yields three rows, times the number of kernels / learning rates attempted.
[x] Include the other parameters in the table, i.e. learning rate / running time / others that could be convenient.
[x] Include a parameter `--outdense` in `fit_model.py` that prints all rows as implemented now, with default False. That way, if we later want to print the entries per individual epoch, we turn it to True.
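A minimal sketch of how the summary-row and `--outdense` behavior could look. The function and field names (`summarize_run`, `best_r2`, `kernel`) are assumptions for illustration, not the actual `fit_model.py` internals:

```python
import argparse

def parse_args(argv=None):
    # Hypothetical CLI: --outdense defaults to False, so only one summary
    # row per run is written; passing the flag dumps every epoch instead.
    parser = argparse.ArgumentParser()
    parser.add_argument("--outdense", action="store_true",
                        help="write one row per epoch instead of one summary row per run")
    return parser.parse_args(argv)

def summarize_run(loss_history, params, running_time):
    # Collapse a run's per-epoch loss_history into a single row:
    # the best R² plus the hyperparameters and wall-clock time.
    best_r2 = max(entry["r2"] for entry in loss_history)
    return {"best_r2": best_r2,
            "n_epochs": len(loss_history),
            "learning_rate": params["learning_rate"],
            "kernel": params["kernel"],
            "running_time": running_time}

if __name__ == "__main__":
    args = parse_args()
    history = [{"r2": 0.41}, {"r2": 0.58}, {"r2": 0.55}]
    summary = summarize_run(history,
                            {"learning_rate": 1e-3, "kernel": 3},
                            running_time=12.7)
    # With --outdense, emit all per-epoch rows; otherwise just the summary.
    rows = history if args.outdense else [summary]
```

The point of the design is that the per-epoch history stays available behind the flag, while the default output is one row per (n_epochs, kernel, learning rate) combination.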