Closed labarababa closed 4 years ago
Thanks for opening this issue! I'm afraid I'm not understanding your question. Could you please be a bit more specific, or post some code to illustrate what you want to do?
Regarding validation predictions, all of the repo's examples show how HH creates validation sets automatically for you via the Environment kwargs `cv_type` and `cv_params`. HH also automatically makes predictions for the out-of-fold (OOF) datasets and evaluates those predictions, which you can see during Experiment logging. At the end of the Experiment/OptPro, all results (including OOF/Holdout/Test predictions) are automatically saved in the directory given to Environment's `results_path`, so you can find them there.
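Conceptually, `cv_type` names a splitter class (e.g. scikit-learn's `KFold`) and `cv_params` holds the kwargs passed to it; HH then fits on each training fold and predicts the fold that was held out. A rough sketch of that OOF loop in plain Python (the real splitting is delegated to the class named by `cv_type`; all names below are illustrative, not HH internals):

```python
# Minimal sketch of out-of-fold (OOF) prediction, mimicking what an
# Environment configured with cv_type="KFold", cv_params=dict(n_splits=5)
# arranges for you. Illustrative only -- not HyperparameterHunter code.

def kfold_indices(n_samples, n_splits):
    """Yield (train_idx, test_idx) pairs, like sklearn's KFold.split."""
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        test_set = set(test)
        train = [i for i in range(n_samples) if i not in test_set]
        yield train, test
        start += size

def oof_predictions(X, y, fit, predict, n_splits=5):
    """Fill an OOF vector: each sample is predicted by the model
    that did NOT see it during training."""
    oof = [None] * len(X)
    for train_idx, test_idx in kfold_indices(len(X), n_splits):
        model = fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        for i in test_idx:
            oof[i] = predict(model, X[i])
    return oof
```

The OOF vector produced this way is exactly what HH evaluates during Experiment logging and writes under `results_path`.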
If you're wondering about Holdout/Test datasets, rather than validation, the holdout_test_datasets_example should be helpful.
For saving models, each library has different methods of doing this, but you can make a custom `lambda_callback` to save your models. Here's a simple lambda_callback_example.
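For illustration, the saving function you would hand to such a callback can be very small. The hook names (`on_run_end`, etc.) and the attributes a `lambda_callback` receives differ between HH versions, so treat the wiring comment below as a hedged assumption and defer to the linked example; the saver itself uses only the standard library (`joblib.dump` works the same way and is preferable for large numpy-heavy estimators):

```python
import os
import pickle

def save_model(model, save_dir="saved_models", name="model"):
    """Serialize a fitted model to disk and return the file path."""
    os.makedirs(save_dir, exist_ok=True)
    path = os.path.join(save_dir, f"{name}.pkl")
    with open(path, "wb") as f:
        pickle.dump(model, f)
    return path

# Hypothetical wiring -- check the linked example for the real hook names:
# from hyperparameter_hunter import lambda_callback
# saver = lambda_callback(on_run_end=lambda model: save_model(model))
```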
As far as using your saved Experiments, their results are all stored in the directory above, so you're free to use them however you would normally use your results: Ensembling, averaging predictions, checking the Leaderboard to compare performance, etc.
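As one concrete example of reusing saved results, blending the prediction files of several Experiments is just an element-wise mean. A minimal sketch (how you load each Experiment's predictions from your `results_path` directory is up to you; the function only assumes equal-length prediction vectors):

```python
def average_predictions(prediction_sets):
    """Element-wise mean across the prediction vectors of several
    Experiments -- a simple blending ensemble. `prediction_sets` is a
    list of equal-length sequences, one per Experiment."""
    if not prediction_sets:
        raise ValueError("need at least one set of predictions")
    n_models = len(prediction_sets)
    return [sum(col) / n_models for col in zip(*prediction_sets)]
```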
Sorry I'm not quite getting your question. Please let me know if I missed anything, and thanks again for asking!
> For saving models, each library has different methods of doing this, but you can make a custom lambda_callback to save your models. Here's a simple lambda_callback_example.
That's it. So I can just write a function for saving the best model, use it together with a lambda_callback (on_run_end?), and dump my model with joblib.
Thanks for your help.
@labarababa, I've just pushed an example detailing how to make a lambda_callback for model-saving.
You can find the new example in PR #198. The broken Travis build is due to an unrelated issue.
Would you mind checking out the new example, and letting me know if that helps answer your question?
Yup, this solves the problem, and it's understandable and very detailed. A very good addition to the examples as well.
Thank you for your efforts.
Thanks for the great suggestion! I'll close this issue once it's merged. If you have any other questions or ideas, I'd love to hear them! Thanks for your time!
Closed by #198
Hello,
do you have any examples of how to use the (best) Experiments? E.g. saving the model, making predictions with a validation set, etc.
Kind Regards