Closed: schyun9212 closed this issue 4 years ago
The zero-valued output problem was fixed in cf74a0db5483c514d526239df2e8880bcab0b1fa. It seems that overwriting attributes on `self` during tracing is not captured by the tracer.
However, about 90% of the output values still differ.
The result produced from the ONNX-exported RPN module's output did not differ significantly from what was expected. The remaining difference seems to be caused by the difference between ONNX's NMS and PyTorch's.
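One plausible source of such NMS discrepancies is boundary and tie handling. The toy NMS below is a simplified NumPy sketch, not the actual implementation of either torchvision's `nms` or ONNX Runtime's `NonMaxSuppression`; it only shows how a single `>` vs `>=` comparison at the IoU threshold changes which boxes survive.

```python
import numpy as np

def iou(a, b):
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh, strict=True):
    """Greedy NMS; `strict` picks between `>` and `>=` suppression."""
    order = np.argsort(-scores)  # highest score first
    keep = []
    for i in order:
        suppressed = any(
            (iou(boxes[i], boxes[j]) > iou_thresh) if strict
            else (iou(boxes[i], boxes[j]) >= iou_thresh)
            for j in keep
        )
        if not suppressed:
            keep.append(i)
    return [int(k) for k in keep]

# Two boxes whose IoU is exactly the threshold (0.5):
boxes = np.array([[0, 0, 10, 10], [0, 0, 10, 5]], dtype=float)
scores = np.array([0.9, 0.8])

print(nms(boxes, scores, 0.5, strict=True))   # both kept
print(nms(boxes, scores, 0.5, strict=False))  # second box suppressed
```

Combined with float rounding near the threshold and differing tie-breaking on equal scores, two otherwise correct NMS implementations can keep different box sets, which then propagates into the downstream outputs.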
🐛 Bug
I tested two approaches: computing the input of rpn_post_processor inside the module, and passing the input as a constant. The results are the same: bbox is zero and objectness is incorrect. It seems that values are lost during inference.
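The two approaches above can be sketched as follows. The `PostProcessor` here is a hypothetical stand-in for rpn_post_processor (the real one is far more involved); the point is the isolation technique of tracing the suspect submodule both inside the full graph and alone with its inputs captured from eager mode.

```python
import torch

# Hypothetical stand-in for rpn_post_processor.
class PostProcessor(torch.nn.Module):
    def forward(self, objectness, box_regression):
        return objectness.sigmoid(), box_regression * 0.1

class Pipeline(torch.nn.Module):
    # Approach 1: the post-processor input is computed inside the graph.
    def __init__(self, post):
        super().__init__()
        self.post = post

    def forward(self, feat):
        return self.post(feat + 1.0, feat * 2.0)

post = PostProcessor()
feat = torch.randn(1, 4)

# Approach 2: capture the intermediate inputs in eager mode and trace
# the post-processor alone, feeding those captured tensors directly.
obj, reg = feat + 1.0, feat * 2.0

traced_pipeline = torch.jit.trace(Pipeline(post), feat)
traced_post = torch.jit.trace(post, (obj, reg))

ref = post(obj, reg)              # eager reference
out1 = traced_pipeline(feat)      # approach 1
out2 = traced_post(obj, reg)      # approach 2
```

If both traces agree with the eager reference, the values are being lost somewhere other than the post-processor itself, which is how getting identical (wrong) results from both approaches narrows down the search.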
To Reproduce
Expected behavior
Environment
PyTorch version: 1.3.1
Is debug build: No
CUDA used to build PyTorch: 10.1.243

OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: version 3.10.2

Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce RTX 2080 Ti
Nvidia driver version: 440.48.02
cuDNN version: Probably one of the following:
/usr/local/cuda-10.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.1/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-10.2/targets/x86_64-linux/lib/libcudnn.so.7

Versions of relevant libraries:
[pip3] numpy==1.18.1
[pip3] onnx==1.6.0
[pip3] onnxruntime==1.1.0
[pip3] Pillow==6.2.2
[pip3] torch==1.3.1
[pip3] torchvision==0.4.2
[conda] Could not collect