Open eerhardt opened 3 years ago
There is a Try/Catch around the fitting of the pipeline in AutoML. Does this catch the MKL error? If so, AutoML will continue with the remaining trainers.
Based on that issue thread, it didn't look like the error was being caught. Maybe the error comes from loading one of the transformers rather than from the `Fit` call itself?
One thought: maybe in `PipelineSuggester.GetNextInferredPipeline` we could perform those checks and not return those pipelines at all. Or would that be too hard to check for, in which case it would be better to figure out exactly where the error happens and catch it there?
Following up from https://github.com/dotnet/machinelearning/issues/3903#issuecomment-739542538.
We should consider not failing an AutoML experiment when the dependencies necessary for MKL to load are unavailable on the current machine. We could log a warning or inform the user some other way that MKL can't be loaded. But this shouldn't block the user or force them to figure out how to exclude the problematic trainer (Ols in the case above).
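A minimal sketch of the skip-and-warn idea discussed above. The `ITrainerCandidate` interface, the trainer classes, and the loop are hypothetical stand-ins for illustration only, not ML.NET's actual AutoML internals:

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for a candidate trainer; not ML.NET's real API.
interface ITrainerCandidate
{
    string Name { get; }
    void Fit(); // May throw DllNotFoundException if a native dependency is missing.
}

class OkTrainer : ITrainerCandidate
{
    public string Name => "LightGbm";
    public void Fit() { /* trains successfully */ }
}

class MklTrainer : ITrainerCandidate
{
    public string Name => "Ols";
    // Simulates MKL's native library failing to load on this machine.
    public void Fit() => throw new DllNotFoundException("Unable to load MklImports");
}

static class Program
{
    static void Main()
    {
        var candidates = new List<ITrainerCandidate> { new OkTrainer(), new MklTrainer() };
        foreach (var trainer in candidates)
        {
            try
            {
                trainer.Fit();
                Console.WriteLine($"Trained: {trainer.Name}");
            }
            catch (DllNotFoundException ex)
            {
                // Log a warning and continue with the remaining trainers
                // instead of failing the whole experiment.
                Console.WriteLine($"Warning: skipping '{trainer.Name}': {ex.Message}");
            }
        }
    }
}
```

The same effect could alternatively be achieved earlier, by filtering unloadable trainers out of the candidate set before the experiment loop runs, which is the `GetNextInferredPipeline` option mentioned above.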