Closed. schyun9212 closed this issue 4 years ago.

An ATen operator is emitted alongside ROIAlign, a custom operator from maskrcnn-benchmark. I registered the custom operator for export, but an unexpected additional ATen operator appeared after ROIAlign in the exported graph:
```
%109 = RoiAlign[output_height = 7, output_width = 7, sampling_ratio = 2, spatial_scale = 0.0625](%feature_2, %108, %106)
%110 = Cast[to = 1](%109)
%111 = Cast[to = 7](%101)
%112 = Constant[value = <Scalar Tensor []>]()
%113 = ATen[operator = 'index_put'](%94, %111, %110, %112)
```
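For reference, the `to` attributes of the `Cast` nodes above are values from the ONNX `TensorProto` data-type enum: 1 is FLOAT and 7 is INT64. So the graph casts the RoiAlign output to float and the index tensor to int64 before feeding them to the ATen `index_put` fallback. A minimal lookup sketch (enum values taken from the ONNX spec):

```python
# Subset of the ONNX TensorProto data-type enum (same values as onnx.TensorProto).
ONNX_DTYPE = {
    1: "FLOAT",   # 32-bit float
    6: "INT32",
    7: "INT64",   # ONNX requires index tensors to be INT64
}

def decode_cast(to: int) -> str:
    """Return the target dtype name for a Cast[to = ...] node."""
    return ONNX_DTYPE.get(to, f"UNKNOWN({to})")

print(decode_cast(1))  # FLOAT  (the pooled features, %110)
print(decode_cast(7))  # INT64  (the index tensor, %111)
```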
Reference on `index_put`: https://discuss.pytorch.org/t/torchscript-indexing-question-filling-nans/53100
Suspected cause
```python
# pooler.py
idx_in_level = torch.nonzero(levels.type(torch.int32) == level).squeeze(1)
rois_per_level = rois[idx_in_level]
result[idx_in_level] = pooler(per_level_feature, rois_per_level).to(dtype)  # <---
This issue was solved by replacing the ATen-related operator with supported operators in ffe5ded.
🐛 Bug
The exported roi_head contains an ATen operator, which is not supported in ONNX.
To Reproduce
Expected behavior
Environment

PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.48.02
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] onnx==1.6.0
[pip3] onnxruntime==1.1.0
[pip3] Pillow==6.2.2
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] Could not collect