microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai

Using onnxruntime server for model deployment #12044

Open · debjyoti003 opened this issue 2 years ago

debjyoti003 commented 2 years ago

Is there any way to save the model with the registered custom ops, so that we don't have to register them every time we load the ONNX model? Right now, every time we load the model, we need to register the custom ops. I actually need to deploy the model on ONNX Runtime Server, and currently the deployment fails with:

Load model from /location/to/the/onnx_model failed: Fatal error: StringRegexReplace is not a registered function/op

Below is the code I have to run every time to register the custom op:

import onnxruntime as ort
from onnxruntime_extensions import get_library_path  # onnxruntime-extensions ships StringRegexReplace

# Register the custom ops shared library before creating the session;
# without this, loading the model fails with the error above.
so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())
sess = ort.InferenceSession("model_name.onnx", so)
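
For completeness, a minimal sketch of calling the session once the custom ops are registered; the input name below is hypothetical, so check sess.get_inputs() for the real one:

import numpy as np

# String tensors are passed as numpy object arrays; "input_text" is a
# hypothetical input name used for illustration only.
text = np.array(["some input text"], dtype=object)
outputs = sess.run(None, {"input_text": text})
print(outputs[0])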

faxu commented 2 years ago

  1. ONNX Runtime Server is no longer supported or maintained, so it should be used only at your own risk.
  2. @pranavsharma - any suggestions for custom op registration?
debjyoti003 commented 2 years ago

@faxu Yeah, we won't use it in production; we're looking at other alternatives now.

faxu commented 2 years ago

We recommend Triton Inference Server with the ONNX Runtime (ORT) backend as a serving solution for ORT models.
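
For reference, a minimal sketch of what a Triton deployment of an ONNX model could look like. The layout and config.pbtxt fields below follow Triton's model repository conventions; the model name, tensor names, and shapes are placeholders, not taken from this thread:

model_repository/
└── my_model/
    ├── config.pbtxt
    └── 1/
        └── model.onnx

config.pbtxt:

name: "my_model"
backend: "onnxruntime"
max_batch_size: 0
input [
  {
    name: "input_text"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "output_text"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]

Note that the custom op requirement does not go away: the ONNX Runtime backend inside Triton still needs access to the onnxruntime-extensions library, so check the backend's documentation for how to make custom op libraries available to it.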