NVIDIA / TensorRT

NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
https://developer.nvidia.com/tensorrt
Apache License 2.0

Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry #1866

Closed jylink closed 1 year ago

jylink commented 2 years ago

Description

Hi, I tried to convert an ONNX model to TRT on a Jetson NX (JetPack 4.6, TRT 8.2.1, CUDA 10.2) but got an Internal Error. I googled it but could not find any clue about this error message.

FYI, the same ONNX model converts to TRT successfully on my Jetson Nano (JetPack 4.5, TRT 7.1.3, CUDA 10.2) and on a Windows PC (TRT 8.2.1, CUDA 11.0).

trt version 8.2.1.8

[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

[03/18/2022-16:54:16] [TRT] [W] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped

[03/18/2022-16:54:18] [TRT] [E] 2: [utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12653},)

Traceback (most recent call last):

  File "tools/export_trt.py", line 77, in <module>

    f.write(engine.serialize())

AttributeError: 'NoneType' object has no attribute 'serialize'
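
(For reference, the AttributeError at the end is a secondary symptom: the engine build failed, so build_engine returned None. Below is a minimal sketch of an export flow that surfaces the builder error instead, using the standard TensorRT 8.x Python API; the paths are placeholders and this is not the actual tools/export_trt.py.)

# Hedged sketch, not the actual tools/export_trt.py: build the engine and
# fail loudly instead of hitting AttributeError on a None engine.
import tensorrt as trt

ONNX_PATH = "model.onnx"    # placeholder
ENGINE_PATH = "model.trt"   # placeholder

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB; keep modest on Jetson

engine = builder.build_engine(network, config)  # returns None when the build fails
if engine is None:
    raise RuntimeError("Engine build failed, see the TensorRT log above")

with open(ENGINE_PATH, "wb") as f:
    f.write(engine.serialize())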

Environment

TensorRT Version: 8.2.1.8
NVIDIA GPU: Jetson NX (JetPack 4.6)
NVIDIA Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System:
Python Version (if applicable):
Tensorflow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if so, version):

ttyio commented 2 years ago

Hello @jylink, this seems to be a bug in TRT. Could you provide us with the ONNX model for debugging? Thanks!

jylink commented 2 years ago

Hello @jylink, this seems to be a bug in TRT. Could you provide us with the ONNX model for debugging? Thanks!

https://github.com/jylink/tmp/blob/main/ace-8-best.onnx

ttyio commented 2 years ago

Thanks @jylink, the fix will be available in the next JetPack release.

zsw360720347 commented 2 years ago

Thanks @jylink, the fix will be available in the next JetPack release.

So, which JetPack release will fix this bug? I have the same problem as the author, and my Jetson NX is on JetPack 4.6.1.

zsw360720347 commented 2 years ago


Hi, have you solved the problem?

Kailthen commented 2 years ago

Same error when running the official ONNX model: /usr/src/tensorrt/bin/trtexec --onnx=./data/resnet50/ResNet50.onnx

[utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 32}, 25535},)

nvpohanh commented 2 years ago

This should have been fixed in TRT 8.4 GA.

zsw360720347 commented 2 years ago

TRT 8.4 does not support JetPack 4.6.1? So I still need to update JetPack? Mine is 4.6.1 now.

https://github.com/NVIDIA/TensorRT/tree/release/8.4#prerequisites

zerollzeng commented 2 years ago

TRT 8.4 should be in JetPack 5.0, which will be released soon.

Zephyr69 commented 2 years ago

Is there a workaround for this problem on JetPack 4.6.1? I'm having the exact same issue, and migrating the entire project to Ubuntu 20.04 just for TRT is really not an option.

Zephyr69 commented 2 years ago

Building TensorRT from source resulted in the same issue too.

zerollzeng commented 2 years ago

This is indeed a bug; I think upgrading to JetPack 5.0 is the only option.

Zephyr69 commented 2 years ago

@zerollzeng Is there some way to downgrade to Jetpack 4.5.x?

zerollzeng commented 2 years ago

Reflash it? But I think you will have this issue in 4.5 too, and there might not even be a JP4.5 image for this device.

Zephyr69 commented 2 years ago

@jylink On my NX 8GB RAM eMMC module with JetPack 4.6 and TensorRT 8.0.1.6, it works fine. But on the 16GB RAM module with the same software config as yours, it doesn't work. I haven't tried to downgrade the 16GB module, though.

Which NX module is yours?

whaosoft commented 1 year ago

Jetson NX (JetPack 4.6.2), TensorRT 8.2.1.8, CUDA 10.2, CUDNN 8.2.1. Same error when running the official ONNX model: /usr/src/tensorrt/bin/trtexec --onnx=./data/resnet50/ResNet50.onnx

[utils.cpp::checkMemLimit::380] Error Code 2: Internal Error (Assertion upperBound != 0 failed. Unknown embedded device detected. Please update the table with the entry: {{1794, 6, 16}, 12660},)

rida-xavor commented 1 year ago

@jylink On my NX 8GB RAM eMMC module with JetPack 4.6 and TensorRT 8.0.1.6, it works fine. But on the 16GB RAM module with the same software config as yours, it doesn't work. I haven't tried to downgrade the 16GB module, though.

Which NX module is yours?

Were you able to resolve this issue? I am having the same problem.

Zephyr69 commented 1 year ago

@jylink On my NX 8GB RAM eMMC module with JetPack 4.6 and TensorRT 8.0.1.6, it works fine. But on the 16GB RAM module with the same software config as yours, it doesn't work. I haven't tried to downgrade the 16GB module, though. Which NX module is yours?

Were you able to resolve this issue? I am having the same problem.

No, it turned out the 16GB RAM module cannot be downgraded. Since there seems to be no more support for this, you must choose the 8GB RAM module and not the 16GB RAM module if you are sticking to Ubuntu 18.04.

zerollzeng commented 1 year ago

Hi guys, we just released the TensorRT_8.2.1.9_Patch_for_Jetpack4.6_Jetson_NX_16GB.tar.gz for this issue, please see https://developer.nvidia.com/embedded/linux-tegra-r3272

Audrey528 commented 1 year ago

@zerollzeng For a Jetson NX 16GB with JetPack 4.6.1 and TensorRT 8.2.1.8, how can I solve this problem? Must I upgrade to JetPack 5.0? I need your help. Thanks very much.

zerollzeng commented 1 year ago

Just replace TRT with the above package; there is also a readme on how to install it. Please uninstall the pre-installed one first.
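
A quick way to confirm the patched build is the one actually being picked up (a hedged check, assuming the Python bindings from the patch package were installed as well):

# Hedged check: after applying the NX 16GB patch, this should report 8.2.1.9
import tensorrt as trt
print(trt.__version__)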

ttyio commented 1 year ago

Closing since there has been no activity for more than 3 weeks. Please reopen if you still have questions, thanks!

mzacri commented 6 months ago

Hi everyone,

I'm having the same issue with a newer configuration:

Hardware configuration
Jetson: Orin AGX 32GB
Board: Custom board MIC-733-AO
GPU: 1792-core NVIDIA Ampere GPU with 56 Tensor Cores

Software configuration
JetPack: 5.1 (R35 release, REVISION: 2.1)
TensorRT: 8.5.2.2-1+cuda11.4
Torch: 2.1.0a0+41361538.nv23.6

Issue: [screenshot attachment: export_trt_issue]

Explanation: I have trained detection .pt weights, which I tested without TensorRT and they work. Next, I converted these weights to an ONNX model without issues. But when I try to convert the ONNX model to a TRT engine, I get this error.
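
For what it's worth, a minimal check of the exported ONNX graph independent of TensorRT can rule out the conversion step itself (a sketch; "model.onnx" is a placeholder for the exported model):

# Hedged diagnostic sketch: verify the exported ONNX graph is valid
# independently of TensorRT. "model.onnx" is a placeholder path.
import onnx

model = onnx.load("model.onnx")
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
print("ONNX graph check passed; opset:",
      [imp.version for imp in model.opset_import])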

I would be grateful for your help!

zerollzeng commented 6 months ago

@mzacri Could you please try the latest JetPack?

mzacri commented 6 months ago

Hi @zerollzeng,

Thanks for your response. I will give it a shot and come back to you with results.

Regards