
Regarding the dynamism for custom op in ONNXRT #269

Open Darshvino opened 2 years ago

Darshvino commented 2 years ago

Hi ONNXRT team,

I implemented a custom op in ONNXRT and was able to run it with correct results.

That said, I implemented multiple versions of the kernel for multiple shapes (currently four versions for four different input heights), and I have to run each version separately. So when I want to run a model with multiple ops at once, I have difficulty making the custom op dynamic. Is there any way I can make it dynamic?

I dispatch between the versions with if-else conditions in this function: https://github.com/onnx/tutorials/blob/ae0202ea5431f67ecfac03afc9987d67581f2809/PyTorchCustomOperator/ort_custom_op/custom_op.h#L38 so that a different kernel runs for each height:

```cpp
struct CustomOp : Ort::CustomOpBase<CustomOp, Kernel<int64_t>> {
private:
  std::string implem;
  unsigned int ih;

public:
  CustomOp(std::string implem, unsigned int ih) : implem(implem), ih(ih) {}

  void *CreateKernel(OrtApi api, const OrtKernelInfo *info) const {
    // Select the kernel variant compiled for this input height.
    if (ih == 54) {
      return new Kernel_1<int64_t>(api, info);
    } else if (ih == 50) {
      return new Kernel_2<int64_t>(api, info);
    }
    // ... (remaining height-specific branches)
  }
};
```

So, whenever I want to run for particular dims, I pass the args here: https://github.com/onnx/tutorials/blob/ae0202ea5431f67ecfac03afc9987d67581f2809/PyTorchCustomOperator/ort_custom_op/custom_op_test.cc#L89 as `CustomOp custom_op(implem, ih)`. `implem` is in my control, so no worries about that, but `ih` depends on the height of the input tensor.
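
To make the constraint concrete, here is roughly how the op gets registered in my test, following the tutorial's `custom_op_test.cc` (the domain name and values below are illustrative):

```cpp
// Sketch of the registration path (values are illustrative).
std::string implem = "ref"; // illustrative; this part is under my control
unsigned int ih = 54;       // must be fixed to the input height up front

CustomOp custom_op(implem, ih);
Ort::CustomOpDomain custom_op_domain("mydomain");
custom_op_domain.Add(&custom_op);

Ort::SessionOptions session_options;
session_options.Add(custom_op_domain);
// The session is now tied to the ih chosen above; a different input
// height currently means constructing a new CustomOp and a new session.
```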

So, the main thing I want to do here is have the custom op pick the right kernel dynamically, based on the height of the input tensor at run time.
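
Is something like the following sketch the right direction? The idea is to stop choosing the variant in `CreateKernel` and instead read the input height inside `Compute`, using the shape APIs from the tutorial, and forward to the matching implementation. `DynamicKernel` is a hypothetical name, and I am assuming NCHW layout (so `dims[2]` is the height):

```cpp
#include <vector>
#include "onnxruntime_cxx_api.h"

// Hypothetical wrapper kernel: inspects the input height in Compute() and
// forwards to the matching height-specific kernel (Kernel_1, Kernel_2, ...).
template <typename T>
struct DynamicKernel {
  DynamicKernel(OrtApi api, const OrtKernelInfo *info)
      : ort_(api), k54_(api, info), k50_(api, info) {}

  void Compute(OrtKernelContext *context) {
    // Query the actual shape of input 0 at run time.
    const OrtValue *input = ort_.KernelContext_GetInput(context, 0);
    OrtTensorTypeAndShapeInfo *shape_info = ort_.GetTensorTypeAndShape(input);
    std::vector<int64_t> dims = ort_.GetTensorShape(shape_info);
    ort_.ReleaseTensorTypeAndShapeInfo(shape_info);

    const int64_t ih = dims[2]; // assumption: NCHW, so dims[2] is the height

    if (ih == 54) {
      k54_.Compute(context);
    } else if (ih == 50) {
      k50_.Compute(context);
    }
    // ... remaining heights
  }

private:
  Ort::CustomOpApi ort_;
  Kernel_1<T> k54_; // my existing variant for height 54
  Kernel_2<T> k50_; // my existing variant for height 50
};
```

`CustomOp::CreateKernel` would then always return a `DynamicKernel<int64_t>`, and the `ih` constructor argument would no longer be needed. Does ONNXRT allow this kind of shape inspection inside `Compute`, or is there a recommended way to do it?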

I have referred to this tutorial for adding the custom op in ONNXRT: https://github.com/onnx/tutorials/tree/master/PyTorchCustomOperator

Looking forward to your reply.

Thanks!