Currently, users train a model with Keras-MXNet, save it as an MXNet model using the save_mxnet_model() API, and use the MXNet engine for inference.
However, if we support a new API - save_onnx_model() - users could train with Keras-MXNet, export the model in ONNX format, and use any ONNX-compatible toolchain for inference, as sketched below.
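For illustration, a minimal sketch of the envisioned workflow, assuming the Keras-MXNet save_mxnet_model() API; save_onnx_model() is the proposed API and does not exist yet, and the model and file names are placeholders:

```python
import keras
from keras.models import Sequential
from keras.layers import Dense

# Build and train a Keras-MXNet model as usual.
model = Sequential()
model.add(Dense(10, activation='softmax', input_shape=(784,)))
model.compile(optimizer='sgd', loss='categorical_crossentropy')
# model.fit(x_train, y_train, ...)

# Today: export to MXNet native format, then run inference with the MXNet engine.
data_names, data_shapes = keras.models.save_mxnet_model(model=model,
                                                        prefix='my_model')

# Proposed: export directly to ONNX instead; the name and signature here are
# the proposal, not an existing API.
keras.models.save_onnx_model(model, onnx_file_path='my_model.onnx')
```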
High-level implementation details:
Update mxnet.contrib.onnx.export_model() to accept a symbol and params, or a Module object, as input (currently it expects the symbol and params files); see the first sketch after these steps.
Add a new API in keras/engine/saving.py - save_onnx_model() -> it calls the internals of save_mxnet_model() to fetch the MXNet native model details, calls mxnet.contrib.onnx.export_model(), and saves the model as ONNX; see the second sketch after these steps.
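A possible shape for the step-1 change; this is a sketch only, and the dispatch helper and its branching are assumptions, not the final design. The idea is that export_model() normalizes its inputs before running the existing conversion logic:

```python
import mxnet as mx

def _load_symbol_and_params(sym, params):
    """Sketch of the proposed input handling for export_model():
    accept file paths (current behavior), in-memory objects, or a Module."""
    if isinstance(sym, mx.module.BaseModule):
        # New: a bound Module carries both the symbol and the trained params.
        module = sym
        arg_params, aux_params = module.get_params()
        return module.symbol, {**arg_params, **aux_params}
    if isinstance(sym, str) and isinstance(params, str):
        # Current behavior: load the symbol and params from files.
        return mx.sym.load(sym), mx.nd.load(params)
    if isinstance(sym, mx.sym.Symbol) and isinstance(params, dict):
        # New: pass an in-memory symbol and params dict straight through.
        return sym, params
    raise ValueError('Expected file paths, a Symbol and params dict, or a Module')
```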
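And a sketch of the proposed step-2 API. The helper _get_mxnet_model_info() is hypothetical: save_mxnet_model() already derives this information internally, and step 2 would factor that logic out for reuse, though the exact refactoring is still open:

```python
from mxnet.contrib import onnx as onnx_mxnet

def save_onnx_model(model, onnx_file_path='model.onnx'):
    """Sketch of the proposed API in keras/engine/saving.py (not implemented)."""
    # Hypothetical helper: fetch the MXNet native symbol, params, and input
    # shapes that save_mxnet_model() computes internally today.
    symbol, params, data_shapes = _get_mxnet_model_info(model)
    # Hand the in-memory objects to the (step-1 extended) ONNX exporter.
    return onnx_mxnet.export_model(symbol, params, input_shape=data_shapes,
                                   onnx_file_path=onnx_file_path)
```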
Note:
For this, we might require protobuf and other dynamic dependencies for ONNX. We need to dive deeper into how to handle this; one possible approach is sketched below.
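One common way to handle such a dynamic dependency (a sketch of one option, not a decided approach) is to import onnx lazily, so Keras-MXNet itself does not hard-require onnx/protobuf and only callers of save_onnx_model() pay the dependency cost:

```python
def _require_onnx():
    """Lazily import onnx so it stays an optional dependency of Keras-MXNet."""
    try:
        import onnx  # pulls in protobuf transitively
        return onnx
    except ImportError:
        raise ImportError(
            'save_onnx_model() requires the onnx package and its protobuf '
            'dependency. Install them with: pip install onnx')
```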