faxu opened 4 years ago
In reality Step #2 and #3 are both optional:
best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)
is not required.
OnnxConverter.save_onnx_model(onnx_mdl, onnx_fl_path)
is not required either.
The ONNX model file is already written to the /outputs folder (if ONNX was enabled in the AutoML config), so it can also be downloaded with:
best_run.download_file('outputs/model.onnx')
You can also download other files created by AutoML:
best_run.download_file('outputs/model.pkl')
best_run.download_file('outputs/conda_env_v_1_0_0.yml')
best_run.download_file('outputs/env_dependencies.json')
best_run.download_file('outputs/scoring_file_v_1_0_0.py')
best_run.download_file('pipeline_graph.json')
When trying to get an ONNX model from AutoML, you currently need to set configuration in 3 places.
Ideally this should be controlled in 1 place, perhaps when getting the model (step 2). Step #1 should go away once we have 100% ONNX support for AutoML models, so in the short term it's OK. It's unclear why step 3 needs a separate OnnxConverter. Can this step be merged with #2? The mechanism/convention for saving an ONNX model should be the same as for saving a non-ONNX model.
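For reference, the three places referred to above, as they appear in the referenced notebook (AzureML SDK v1; the last two calls are commented out because they need a live remote_run):

```python
# Sketch of the three separate ONNX touchpoints in AzureML SDK v1 AutoML
# (based on the referenced notebook; requires a configured Workspace to run).
from azureml.train.automl import AutoMLConfig
from azureml.automl.runtime.onnx_convert import OnnxConverter

# Place #1: opt in at training time.
automl_config = AutoMLConfig(
    task="classification",
    enable_onnx_compatible_models=True,
    # ... training data, compute target, etc.
)

# Place #2: ask for the ONNX model when retrieving the best run.
# best_run, onnx_mdl = remote_run.get_output(return_onnx_model=True)

# Place #3: persist it via a separate converter class.
# OnnxConverter.save_onnx_model(onnx_mdl, "./best_model.onnx")
```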
(reference notebook: https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features/auto-ml-classification-bank-marketing-all-features.ipynb)