Closed sedol1339 closed 8 months ago
While you cannot have the AutoML benchmark software evaluate the predictions at the different stages, you can definitely store those predictions and write your own code to load and evaluate them. For example, auto-sklearn saves a number of models. Anything written to the subdirectory generated by output_subdir (imported via from frameworks.shared.callee import output_subdir) will be stored and available after the benchmark run. By convention we use a _save_artifacts script parameter to specify which information the integrated framework should store (in addition to the minimum that is returned by the results(...) call).
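As a rough sketch of how this could look inside a framework's exec.py (assuming output_subdir(name, config) returns a writable directory and that script parameters arrive via config.framework_params, which may differ in your benchmark version; save_stage_predictions is just an illustrative helper):

```python
import os
import pandas as pd
from frameworks.shared.callee import output_subdir

def save_stage_predictions(stage_predictions, config):
    """Write per-stage test predictions to the artifact directory so they
    survive the benchmark run and can be evaluated with your own code later.

    stage_predictions: dict mapping a stage name to an array of predictions.
    """
    # Only store extra artifacts if the user opted in via _save_artifacts.
    artifacts = config.framework_params.get('_save_artifacts', [])
    if 'stage_predictions' in artifacts:
        pred_dir = output_subdir('stage_predictions', config)  # assumed signature
        for stage, preds in stage_predictions.items():
            pd.DataFrame({'prediction': preds}).to_csv(
                os.path.join(pred_dir, f'{stage}.csv'), index=False
            )
```

After the run, the saved CSVs can be loaded and scored with whatever metric you like, independently of the benchmark's own evaluation.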
Thank you!
Hello! My model has several stages, and hence exec.py produces several predictions. I would like to evaluate them all. How can I do this? I see that run_in_venv should return only a single prediction for every test sample; is there a way to bypass this limitation?