I need to use a custom metric, but it doesn't seem to work. Over the whole run, no trial appears to be stopped in the middle, and I can't verify that early stopping is happening at all.
I defined the metric name as "mean_test_score", but it seems to have no effect.
My results are below.
Please help.
When I change max_iters, it works, but I have a question: what exactly does max_iters mean?
For a neural network, if epochs is set to 1000 and max_iters is set to 10, does that mean one trial is performed 10 times?
## https://docs.ray.io/en/master/tune/tutorials/tune-sklearn.html
> The early_stopping parameter allows us to terminate unpromising configurations. If early_stopping=True, TuneGridSearchCV will default to using Tune's ASHAScheduler. You can pass in a custom algorithm - see Tune's documentation on schedulers here for a full list to choose from. max_iters is the maximum number of iterations a given hyperparameter set could run for; it may run for fewer iterations if it is early stopped.
> Try running this compared to the GridSearchCV equivalent, and see the speedup for yourself!
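My understanding of the docs excerpt, as a small conceptual sketch (plain Python, not tune-sklearn's actual internals): with max_iters=10, training is split into 10 reporting steps, so 1000 epochs become roughly 100 epochs per step rather than 10 repetitions of the whole trial, and the scheduler may terminate a trial after any step. The score sequences and the stopping cutoff below are hypothetical, just to show the mechanism.

```python
def run_trial(step_scores, max_iters, stop_below):
    """Train in max_iters chunks; after each chunk the intermediate
    score is reported, and the trial is stopped early if it falls
    below a (hypothetical) scheduler cutoff."""
    completed = 0
    for i in range(max_iters):
        completed += 1
        if step_scores[i] < stop_below:
            return completed  # trial terminated mid-run
    return completed  # trial ran for all max_iters steps

# A promising trial keeps improving and runs all 10 steps
# (i.e. the full 1000 epochs, ~100 epochs per step).
good = run_trial([0.5 + 0.05 * i for i in range(10)],
                 max_iters=10, stop_below=0.4)

# An unpromising trial drops below the cutoff and is stopped after
# 3 steps (~300 of the 1000 epochs) - the "middle stop" I expected to see.
bad = run_trial([0.5, 0.45, 0.3] + [0.1] * 7,
                max_iters=10, stop_below=0.4)

print(good, bad)  # -> 10 3
```

If this reading is right, max_iters does not multiply the work; it only controls how many checkpoints the scheduler gets to make a stop/continue decision at.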