Is your feature request related to a problem? Please describe.
AutoML experiments created via mlContext.Auto().CreateRegressionExperiment() are not cancellable when .Execute is invoked. The cancellation token passed in the settings on the Create call is ignored, so a training run cannot be cancelled gracefully.
Describe the solution you'd like
Add an optional CancellationToken parameter with a default value to AutoMLExperiment.Run(), matching the signature of .RunAsync.
Pass the new token on to .RunAsync.
Update the callers to pass along their settings token. For example, in RegressionExperiment.Execute, the call to _experiment.Run() would become _experiment.Run(Settings.CancellationToken).
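The change above could look roughly like this (a sketch only, assuming AutoMLExperiment.Run() currently just blocks on RunAsync(); the actual body in ML.NET may differ):

```csharp
// Proposed overload on AutoMLExperiment: accept an optional token,
// defaulting to CancellationToken.None, and forward it to RunAsync
// so the synchronous path honors cancellation the same way.
public TrialResult Run(CancellationToken ct = default)
{
    return RunAsync(ct).ConfigureAwait(false).GetAwaiter().GetResult();
}
```

Each experiment type's Execute would then forward the token it already holds, e.g. `_experiment.Run(Settings.CancellationToken)` in RegressionExperiment.Execute. Because the parameter has a default, existing callers of Run() compile unchanged.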
Describe alternatives you've considered
I do not see any alternatives.
Additional context
I am prepared to offer a PR for this. It would wire up this functionality for RegressionExperiment, BinaryClassificationExperiment, and MulticlassClassificationExperiment, all of which suffer the same issue. For test coverage, something modeled on AutoMLExperimentTests.AutoMLExperiment_throw_timeout_exception_when_ct_is_canceled_and_no_trial_completed_Async seems appropriate; invoking the synchronous Run in an analogous test would verify that the token is passed along. For the individual experiment types, I don't see any existing tests that run these experiments via this API style, except GridSearchTest.TestGridSearchTrialRunner2; I can follow a similar pattern for each of the three experiment types.
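A hypothetical shape for the synchronous-path test (names and the expected exception type are assumptions based on the existing async test's name, not confirmed against the ML.NET test suite):

```csharp
// Mirror of the async cancellation test, but exercising the new
// synchronous Run(CancellationToken) overload: cancel before any
// trial can complete and assert the run surfaces the cancellation.
[Fact]
public void AutoMLExperiment_throws_when_ct_is_canceled_and_no_trial_completed()
{
    var cts = new CancellationTokenSource();
    cts.CancelAfter(TimeSpan.FromMilliseconds(100));

    var experiment = BuildConfiguredExperiment(); // hypothetical helper

    Assert.Throws<TimeoutException>(() => experiment.Run(cts.Token));
}
```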