Closed · jkjung-avt closed this issue 2 years ago
The issue you encountered with trtexec will be fixed in a future release.
As for the problem you raised on the NVIDIA Developer Forum about your own sample code: which calibrator does your YOLOEntropyCalibrator use? Is it IInt8EntropyCalibrator2? We tried trtexec with the --calib flag and hit a different error.
Could you share your sample with us to verify?
Yes, it is an IInt8EntropyCalibrator2. The full implementation is here: https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolo/calibrator.py#L87
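For readers following along, here is a minimal sketch of what such a calibrator looks like. (Hedged: the real calibrator.py linked above additionally loads and preprocesses YOLO input images; the class name, batch handling, and file names below are illustrative, not taken from the repo.)

```python
import os

import numpy as np

try:  # TensorRT/pycuda are only available on the Jetson; fall back for inspection
    import tensorrt as trt
    import pycuda.driver as cuda
    _Base = trt.IInt8EntropyCalibrator2
except ImportError:
    cuda = None
    _Base = object


class EntropyCalibrator(_Base):
    """Feeds pre-made NCHW float32 batches to TensorRT's INT8 entropy calibration."""

    def __init__(self, batches, cache_file):
        if _Base is not object:
            super().__init__()
        self.batch_size = batches[0].shape[0]
        self.batches = iter(batches)
        self.cache_file = cache_file
        self.d_input = None  # device buffer, allocated lazily on the first batch

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        batch = next(self.batches, None)
        if batch is None:
            return None  # tells TensorRT that calibration data is exhausted
        if self.d_input is None:
            self.d_input = cuda.mem_alloc(batch.nbytes)
        cuda.memcpy_htod(self.d_input, np.ascontiguousarray(batch))
        return [int(self.d_input)]

    def read_calibration_cache(self):
        # Reuse a previous calibration run if its cache file exists.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, 'rb') as f:
                return f.read()
        return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, 'wb') as f:
            f.write(cache)
```

The calibration cache is worth keeping around: once written, TensorRT skips the (slow) calibration pass on subsequent engine builds.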
Thanks for the information. Can I use https://github.com/jkjung-avt/tensorrt_demos/blob/master/yolo/onnx_to_tensorrt.py directly to verify it?
Yes, you could refer to the step-by-step guides: Demo #5: YOLOv4 and Demo #6: Using INT8 and DLA core.
Basically, you need to:
$ cd tensorrt_demos/yolo
$ wget https://raw.githubusercontent.com/AlexeyAB/darknet/master/cfg/yolov4-tiny.cfg
$ mv yolov4-tiny.cfg yolov4-tiny-416.cfg
$ cp <path>/yolov4-tiny-416.onnx .
$ python3 onnx_to_tensorrt.py -v --int8 --dla_core 0 -m yolov4-tiny-416
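Under the hood, onnx_to_tensorrt.py drives the TensorRT builder. A hedged sketch of the INT8 + DLA configuration (assumptions: the TensorRT 7/8 Python API, an explicit-batch ONNX network, and a calibrator object like the one in calibrator.py; build_dla_engine is an illustrative name, not a function from the repo):

```python
def build_dla_engine(onnx_path, calibrator, dla_core=0):
    # Imported lazily so the sketch can be read/parsed off-device.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.VERBOSE)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, 'rb') as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30
    config.set_flag(trt.BuilderFlag.INT8)
    # Layers the DLA cannot run fall back to the GPU instead of failing.
    config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
    config.int8_calibrator = calibrator
    config.default_device_type = trt.DeviceType.DLA
    config.DLA_core = dla_core
    return builder.build_engine(network, config)
```

The GPU_FALLBACK flag matters for YOLO-style models, since some layers (e.g. certain activations or plugins) are not DLA-compatible.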
Are there any updates on this? I get the same assertion error, Assertion Error in operator(): 0 (et.region->getType() == RegionType::kNVM), when trying to build a DLA engine for the trt_pose model (https://github.com/NVIDIA-AI-IOT/trt_pose). Building a GPU engine works fine.
I've tried trtexec, torch2trt, and my own TensorRT script, and they all hit the same error.
Hi everybody, I have the same problem. I have tried many times to convert my model to run on the DLA, but the conversion always fails with many errors.
Both YOLOv4-tiny and OpenPose hit these issues.
@jkjung-avt Could you try TRT 8.2/8.4 and see if the issue still exists? If it does, we will debug it. Thanks
@nvpohanh I am not able to reproduce the problem on my Jetson Xavier NX DevKit with JetPack 5.0.1 DP. I guess it's been fixed in the latest TensorRT.
So it has been fixed in the latest TRT version; thanks for checking.
I am going to close this for now. Please feel free to reopen if the issue still exists. Thanks
Description
Cannot build a "yolov4-tiny" TensorRT engine for the DLA core on Jetson Xavier NX due to the following error. (I previously reported this issue on the NVIDIA Developer Forum but did not get a response for over two weeks, so I'm re-posting it here.)
Environment
TensorRT Version: 7.1.3.4
GPU Type: Jetson Xavier NX
Nvidia Driver Version: JetPack-4.4
CUDA Version: 10.2
CUDNN Version: 8
Operating System + Version: 4.9.140-tegra
Python Version (if applicable): 3.6
Baremetal or Container (which commit + image + tag): baremetal
Relevant Files
yolov4-tiny-416.onnx
Steps To Reproduce
Download yolov4-tiny-416.onnx and try generating an INT8 TensorRT engine for DLA core 0 with "trtexec".
Note that the same "trtexec" command runs successfully if I remove the "--useDLACore=0" option (i.e. build the INT8 engine for the GPU instead of the DLA core).
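The exact trtexec command line is not preserved in this report; an invocation matching the description (INT8, DLA core 0, with GPU fallback; file names taken from this issue, other details assumed) would look something like:

$ trtexec --onnx=yolov4-tiny-416.onnx \
          --int8 \
          --useDLACore=0 \
          --allowGPUFallback \
          --saveEngine=yolov4-tiny-416.engine

As noted above, dropping --useDLACore=0 (and --allowGPUFallback) builds the same INT8 engine for the GPU without error.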