An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
Describe the issue: I followed examples/tutorials/hpo_quickstart_pytorch and modified it to optimize a regression model. In each trial, I set up an epoch loop of 1000 epochs and report the intermediate loss in every iteration. Does the tuner wait until the end of each trial to evaluate its metric, or does it stop the trial earlier once it believes it has seen enough for that trial? Which document describes this?
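For context, here is a minimal sketch of my trial script, simplified to a toy linear-regression setup so it is self-contained; only the `nni.report_*` calls are the part my question is about:

```python
import nni
import torch
import torch.nn.functional as F

params = nni.get_next_parameter()  # hyperparameters chosen by the tuner

# Toy regression problem so the sketch runs standalone.
x = torch.randn(256, 8)
y = x.sum(dim=1, keepdim=True)
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=params.get('lr', 0.01))

for epoch in range(1000):
    optimizer.zero_grad()
    loss = F.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    # One intermediate result per epoch. Does anything act on these
    # before the trial finishes?
    nni.report_intermediate_result(loss.item())

# Final metric, reported only after all 1000 epochs complete.
nni.report_final_result(loss.item())
```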
Environment:
NNI version: 2.7
Training service (local|remote|pai|aml|etc): local
Configuration:
Log message:
How to reproduce it?:
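A minimal sketch of how I launch the experiment, adapted from the quickstart's Python API. The commented-out assessor line marks where I would guess early stopping is configured; whether that guess is right is exactly my question:

```python
from nni.experiment import Experiment

experiment = Experiment('local')
experiment.config.trial_command = 'python trial.py'  # the loop sketched above
experiment.config.trial_code_directory = '.'
experiment.config.search_space = {
    'lr': {'_type': 'loguniform', '_value': [1e-4, 1e-1]},
}
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args = {'optimize_mode': 'minimize'}
# I have NOT set an assessor. If early stopping of trials requires one,
# e.g. experiment.config.assessor.name = 'Medianstop', a pointer to the
# relevant doc would be appreciated.
experiment.config.max_trial_number = 10
experiment.config.trial_concurrency = 2
experiment.run(8080)
```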