Hi ONNXRT team,
I implemented a custom op in ONNXRT and was able to run it with correct results.
That said, I implemented multiple versions of the kernel for multiple shapes (currently 4 versions for 4 different input heights), so each version has to be run separately. When I want to run a model containing multiple such ops at once, I'm having difficulty making the custom op dynamic. Is there any way I can make it dynamic?
I implemented the different versions with if-else conditions in this function: https://github.com/onnx/tutorials/blob/ae0202ea5431f67ecfac03afc9987d67581f2809/PyTorchCustomOperator/ort_custom_op/custom_op.h#L38 so the op runs for a given height. Whenever I want to run for particular dims, I pass the args here: https://github.com/onnx/tutorials/blob/ae0202ea5431f67ecfac03afc9987d67581f2809/PyTorchCustomOperator/ort_custom_op/custom_op_test.cc#L89 as
CustomOp custom_op(implem, ih)
where implem is under my control, so no worries about that, but ih depends on the height of the input tensor. So the main thing I want to do here is select the version of the custom op dynamically, based on the height of the input tensor, at run time.
I have referred to this tutorial for adding the custom op in ONNXRT: https://github.com/onnx/tutorials/tree/master/PyTorchCustomOperator
Looking forward to your reply.
Thanks!