Why are the weights INT64 instead of INT32 after converting to ONNX?
I can understand that converting to TensorRT may reduce precision, but the downcast here seems quite conspicuous. I suspect it has something to do with ONNX using 64-bit integers?
During the conversion process, the following warnings are displayed:
WARNING: [TRT]: onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
WARNING: [TRT]: Tensor DataType is determined at build time for tensors not marked as input or output.
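One likely contributing factor, as a sketch: Python integers (and the index/shape tensors that exporters trace from them) default to 64-bit on most platforms, and the ONNX spec itself requires int64 for things like `Shape` outputs and `Gather` indices, so exported models routinely contain INT64 tensors even when the float weights are fine. A minimal numpy illustration of the 64-bit default and why TensorRT's downcast is usually lossless (the array values here are made up for illustration):

```python
import numpy as np

# On most 64-bit platforms, integer arrays default to int64 --
# the same reason traced index/shape tensors land in ONNX as INT64.
idx = np.array([0, 1, 2])   # dtype is typically int64 (platform-dependent)

# TensorRT's warning means it casts such tensors down to int32.
# As long as every value fits in 32 bits, the cast loses nothing;
# the "clamped" warning fires only for values outside the INT32 range.
down = idx.astype(np.int32)
assert np.array_equal(idx, down)  # lossless: all values fit in INT32
```

So the INT64 warnings by themselves usually do not explain an accuracy drop; only the "clamped" messages indicate values actually changed.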