Closed: YuSuen closed this issue 3 years ago
An mIoU of 6.45 sounds like guessing - there must be something wrong in our pipeline or in your environment on the Jetson. In our experiments, we observed only a minimal deviation in the 3rd decimal place of the logits. Even if float16 is used, the class decisions are almost identical. However, the entire optimization process is quite sensitive to the correct versions of the related libraries. We were not able to get matching segmentations prior to JetPack 4.4 due to incompatibilities in the upsampling operations. That is why we currently stick to this version on our robots; we have not tested newer versions so far.
Which versions of PyTorch, cuDNN, TensorRT, and which ONNX opset are you using? Did you try passing the "--export_outputs" argument to inference_time_whole_model.py to check the outputs?
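As a rough sketch of what such an output check could look like once you have the exported outputs from both sides (the file format and array shapes here are assumptions for illustration, not the actual output layout of "--export_outputs"), you can compare the raw logits and the resulting class decisions:

```python
import numpy as np

def compare_logits(ref, test):
    """Compare reference (e.g. PyTorch) and test (e.g. TensorRT) logits.

    Returns the maximum absolute difference between the logits and
    whether the per-pixel argmax class decisions agree everywhere.
    Assumes shape (batch, classes, height, width).
    """
    ref = np.asarray(ref, dtype=np.float32)
    test = np.asarray(test, dtype=np.float32)
    max_abs_diff = float(np.max(np.abs(ref - test)))
    classes_match = bool(np.array_equal(ref.argmax(axis=1),
                                        test.argmax(axis=1)))
    return max_abs_diff, classes_match

# Illustrative check: identical logits except for tiny float noise,
# mimicking the expected deviation in the 3rd decimal place.
ref = np.random.RandomState(0).rand(1, 40, 8, 8).astype(np.float32)
test = ref + 1e-4
diff, match = compare_logits(ref, test)
print(diff, match)
```

If the maximum difference is large or the class decisions diverge, the problem is in the conversion pipeline rather than in the mIoU evaluation itself.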
I converted it into an engine file according to the described process and evaluated on the NYUv2 data, getting an mIoU of 6.45. I have tried multiple ways of converting the ONNX model to TensorRT, but none of them resolved the issue.