cyrusbehr / YOLOv8-TensorRT-CPP

YOLOv8 TensorRT C++ Implementation
MIT License

onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32. #30

Closed · HXB-1997 closed this 1 year ago

HXB-1997 commented 1 year ago

nvidia@ubuntu:~/Desktop/HXB/11-4/YOLOv8-TensorRT-CPP/build$ ./detect_object_image --model /home/nvidia/Desktop/HXB/11-4/yolov8n_1527.onnx --input ./bus2.jpg
Searching for engine file with name: yolov8n_1527.engine.NVIDIATegraX2.fp16.1.1
Engine not found, generating. This could take a while...
onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::runtime_error'
  what():  Error: Unable to build the TensorRT engine. Try increasing TensorRT log severity to kVERBOSE (in /libs/tensorrt-cpp-api/engine.cpp).
Aborted (core dumped)

@ltetrel

HXB-1997 commented 1 year ago

Fixed it by modifying pytorch2onnx.py to export with opset 12 and simplification enabled: `model.export(format="onnx", opset=12, simplify=True)`
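
The one-liner above can be sketched as a small export script. This is a minimal sketch assuming the `ultralytics` package and a local `yolov8n.pt` checkpoint (the weights filename here is illustrative, not from the thread); the import is done lazily so the file loads even where `ultralytics` is not installed.

```python
def export_yolov8_to_onnx(weights_path: str = "yolov8n.pt") -> str:
    """Export a YOLOv8 checkpoint to ONNX and return the output path.

    opset=12 with simplify=True runs the graph through onnx-simplifier,
    which folds away shape-computation subgraphs that otherwise emit the
    INT64 constants TensorRT warns about casting down to INT32.
    """
    # Lazy import: the sketch can be defined without ultralytics present.
    from ultralytics import YOLO

    model = YOLO(weights_path)
    return model.export(format="onnx", opset=12, simplify=True)


# Usage (runs the actual export, so it needs ultralytics and the weights):
#   onnx_path = export_yolov8_to_onnx("yolov8n.pt")
#   print(onnx_path)  # e.g. "yolov8n.onnx" next to the checkpoint
```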

cyrusbehr commented 1 year ago

OK, I've updated the script.