Closed gngdb closed 9 years ago
Simplest solution to this is just to write them back into the run settings json after training. We'll be doing this first.
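A minimal sketch of the write-back step, assuming the run settings live in a plain JSON file and that storing results under a `"results"` key is acceptable (the key name and schema here are my assumptions, not the repo's actual format):

```python
import json

def write_results_to_settings(settings_path, results):
    """Merge a dict of results back into a run settings JSON file.

    The "results" key is a placeholder; the real run settings
    schema may store these values differently.
    """
    with open(settings_path) as f:
        settings = json.load(f)
    settings["results"] = results  # overwrites any previous results
    with open(settings_path, "w") as f:
        json.dump(settings, f, indent=2)
```

The existing settings are preserved and only the results entry is replaced, so repeated runs just overwrite the last scores.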
Then, we want a pandas dataframe for cross validation results which we can wrap with holoviews.
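Something like the following tidy layout would work for the cross-validation dataframe; the column names and values are invented for illustration. A one-row-per-(run, fold) table is also the shape holoviews can wrap directly (e.g. as an `hv.Dataset`):

```python
import pandas as pd

# Hypothetical cross-validation results: one row per (run, fold).
records = [
    {"run": "run_a", "fold": 0, "val_nll": 0.52},
    {"run": "run_a", "fold": 1, "val_nll": 0.48},
    {"run": "run_b", "fold": 0, "val_nll": 0.61},
    {"run": "run_b", "fold": 1, "val_nll": 0.55},
]
df = pd.DataFrame.from_records(records)

# Mean validation score per run, averaged over folds.
summary = df.groupby("run")["val_nll"].mean()
```

Keeping the per-fold rows (rather than only the means) makes it easy to plot spread across folds later.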
OK, I have this working for sklearn, but rewriting the run settings will also save a bunch of things that are automatically added to the run settings during loading (it was a convenient place to store them). You could think of this as a feature, because it tells you where the results were last run, or as a bug, because people might get confused and think they have to set all these options by hand when they're just going to get overwritten the next time the run settings are used.
Having problems getting Pylearn2 to output its validation score. Closing this will involve resolving this question from the work repository.
Found an example of reading channels in Pylearn2's print_monitor.py. Don't know why I didn't look there first.
These just get stored in notebooks; that's the only way to look at all the different things that are likely to be important (there's no point in just storing NLL values).
Results from all training runs should be stored for every run_settings file, even for repeats. The resulting database should be easy to query. Probably the easiest way will be to just have a csv and some scripts to query it for whatever we might want to know.
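A rough sketch of the csv-plus-query-scripts idea, using only the standard library; the file location, column names, and filter interface are all assumptions for illustration:

```python
import csv
from pathlib import Path

RESULTS_CSV = Path("results.csv")  # hypothetical location
FIELDS = ["run_settings", "repeat", "val_nll", "test_accuracy"]

def append_result(row, path=RESULTS_CSV):
    """Append one training run's results, writing a header if the file is new."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow(row)

def query(path=RESULTS_CSV, **filters):
    """Return all rows matching the given column=value filters.

    csv.DictReader yields strings, so filter values are compared
    after str() conversion.
    """
    with path.open(newline="") as f:
        return [r for r in csv.DictReader(f)
                if all(r[k] == str(v) for k, v in filters.items())]
```

Appending on every run (including repeats) means the csv accumulates the full history, and the query helper covers simple lookups like "all repeats for this run_settings file".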
Related to this issue in the work repo.