harishkool opened this issue 6 months ago
I tried:
/usr/src/tensorrt/bin/trtexec --onnx=model_gn.onnx --shapes=input:32x3x32x32 --saveEngine=model_gn.engine --exportProfile=model_gn.json --best --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun
Same error:
[05/10/2024-12:14:41] [E] Error[10]: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)
[05/10/2024-12:14:41] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[05/10/2024-12:14:41] [E] Engine could not be created from network
[05/10/2024-12:14:41] [E] Building engine failed
[05/10/2024-12:14:41] [E] Failed to create engine from model or file.
[05/10/2024-12:14:41] [E] Engine set up failed
You can find the verbose log here https://drive.google.com/file/d/17o5k7_1ZPEd_iNScTUOKKRsa167VWjWs/view?usp=drive_link.
Have you checked whether your conv layer meets the DLA support conditions? For the layer support and restrictions that apply when running on DLA, see https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla-lay-supp-rest
Alternatively, you can update to the latest version of TensorRT.
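As a rough illustration of that per-layer check (a sketch only, not from this thread): the TensorRT Python API exposes IBuilderConfig.can_run_on_DLA, which reports whether the builder believes a given layer can be placed on a DLA core. The model path, input name, and the choice of FP16 below are assumptions based on the trtexec command above.

```python
# Sketch: report per-layer DLA capability before building the engine.
# Assumes TensorRT 8.x Python bindings and model_gn.onnx in the working directory.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("model_gn.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.default_device_type = trt.DeviceType.DLA
config.DLA_core = 0
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)
config.set_flag(trt.BuilderFlag.FP16)  # DLA requires FP16 or INT8 precision

# Print which layers the builder considers DLA-capable with this config.
# (The result can still differ from the final placement chosen at build time.)
for i in range(network.num_layers):
    layer = network.get_layer(i)
    print(f"{layer.name}: DLA-capable = {config.can_run_on_DLA(layer)}")
```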
I took the example model from the Jetson DLA tutorial (https://github.com/NVIDIA-AI-IOT/jetson_dla_tutorial), which is supposed to be supported on DLA.
Any updates on this issue?
Description
The TensorRT engine build failed with the following error:
Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)
Environment
TensorRT Version: 8.5.2
NVIDIA GPU: Volta GPU
CUDA Version: 11.4
CUDNN Version: 8.6
Operating System: Ubuntu 20
Platform: Jetson Xavier NX
Relevant Files
Model link: https://drive.google.com/file/d/1K5kQxR0IR-SGF6Ry1V44R-bmfwF4NPPx/view?usp=sharing
Steps To Reproduce
https://github.com/NVIDIA-AI-IOT/jetson_dla_tutorial
/usr/src/tensorrt/bin/trtexec --onnx=model_gn.onnx --shapes=input:32x3x32x32 --saveEngine=model_gn.engine --exportProfile=model_gn.json --int8 --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun
Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)
Have you tried the latest release?: N/A
Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): Yes
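For reference, a minimal sketch of that ONNX Runtime sanity check without polygraphy, assuming the input tensor is named "input" with shape 32x3x32x32 as in the trtexec command above:

```python
# Sketch: run the ONNX model with ONNX Runtime on a random input to confirm
# it executes outside TensorRT. Input name/shape are assumptions from the
# trtexec --shapes argument.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model_gn.onnx",
                               providers=["CPUExecutionProvider"])
dummy = np.random.rand(32, 3, 32, 32).astype(np.float32)
outputs = session.run(None, {"input": dummy})
print([o.shape for o in outputs])
```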