`onnx_op` defines the function kernel that runs the ONNX node at inference time. Without the `onnx_op` function body, the ORT inference session cannot find the Python function that implements the custom op node. Alternatively, you can write a C++ kernel for the node and register it in the onnxruntime-extensions DLL/shared library; that works too, but you have to build onnxruntime-extensions yourself.
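For the Python path, this means the module that defines the `@onnx_op` kernels has to be imported in the serving process before the session is created, and the extensions shared library registered on the `SessionOptions`. A minimal sketch, assuming the decorated functions live in a hypothetical module `my_custom_ops` and the model file is `model.onnx`:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

# Importing the module executes the @onnx_op decorators, which registers
# the Python kernels with onnxruntime-extensions in this process.
import my_custom_ops  # hypothetical module containing the decorated functions

so = ort.SessionOptions()
# Point ORT at the onnxruntime-extensions shared library, which routes
# nodes in the custom-op domain (ai.onnx.contrib) to the registered kernels.
so.register_custom_ops_library(get_library_path())

sess = ort.InferenceSession("model.onnx", so)
```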
Makes sense to me, thanks
I have an ONNX model composed of custom ops registered via the `@onnx_op` decorator, along the lines of the sketch below. If I try to load the model into an InferenceSession outside of the Python session in which the ops were registered (in a service that doesn't contain the `@onnx_op`-decorated methods, for example), I get an error because the session cannot resolve the custom op nodes. Is there a way to export the custom ops library after registering new ops with it via the `@onnx_op` decorator, so that I can use those custom ops without their implementations needing to be present in the inference service?