marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models
MIT License

Why is it INT64 instead of INT32 after converting to ONNX? #498

Open tms2003 opened 6 months ago

tms2003 commented 6 months ago

Why are the weights INT64 instead of INT32 after converting to ONNX? I understand that casting down during the TensorRT conversion may reduce precision, and the cast is clearly visible during the build. Does this have something to do with ONNX being 64-bit? The following warnings are displayed during the conversion:

WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
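For context, these INT64 tensors usually come from the ONNX specification itself rather than from the model's learned weights: shape- and index-related operators such as Reshape, Gather, and Slice take INT64 inputs, and PyTorch exports those constants as 64-bit. TensorRT versions before 10 do not natively support INT64, so the parser casts them down to INT32; the "clamped" warning typically refers to INT64 sentinel values (for example, the maximum int64 used as an open-ended Slice bound) and is generally harmless. Below is a minimal sketch, using the official `onnx` Python package, for checking which initializers in an exported model are actually INT64; the filename `model.onnx` is a placeholder, not a file from this repo.

```python
# Minimal sketch: list the INT64 initializers in an exported ONNX model.
# "model.onnx" is a placeholder path; substitute your own export.
import onnx
from onnx import TensorProto

model = onnx.load("model.onnx")

for init in model.graph.initializer:
    if init.data_type == TensorProto.INT64:
        # These are typically small shape/index constants (Reshape, Gather,
        # Slice), not learned weights, so the INT32 cast should not affect
        # detection accuracy.
        print(f"INT64 initializer: {init.name}, dims={list(init.dims)}")
```

On a typical YOLO export this should report only small shape/index tensors, which is why the cast-down warnings can normally be ignored.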