NVIDIA-AI-IOT / deepstream_lpr_app

Sample app code for LPR deployment on DeepStream

TLT CONVERTER #1

necatiCelik opened this issue 3 years ago

necatiCelik commented 3 years ago

Hi, I get an error while running tlt-converter. Even after changing "-p" to "-d", the command

./tlt-converter -k nvidia_tlt -d image_input,1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine

fails with:

terminate called after throwing an instance of 'std::invalid_argument'
  what(): stoi
Aborted (core dumped)

Can you help?
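A note on the flags, as background rather than a confirmed diagnosis: -d expects plain per-input dimensions (e.g. 3,48,96), while -p takes an optimization profile of the form <input_name>,<min_shape>,<opt_shape>,<max_shape>. Passing the profile string to -d most likely makes the converter try to parse "image_input" as an integer, which is where the std::stoi exception comes from. A sketch of the documented dynamic-shape invocation, assuming a tlt-converter build with -p support:

```sh
# Dynamic-shape conversion as documented for the TLT LPRNet model.
# -k: model encoding key
# -p: optimization profile, <input_name>,<min_shape>,<opt_shape>,<max_shape>
# -t: precision, -e: output engine path
./tlt-converter -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    ./us_lprnet_baseline18_deployable.etlt \
    -t fp16 \
    -e lpr_us_onnx_b16.engine
```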

mhaj98 commented 3 years ago

You need to remove "image_input,", i.e. it becomes:

./tlt-converter -k nvidia_tlt -p 1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine

mhaj98 commented 3 years ago

However, when I do this I run into another issue:

[ERROR] UffParser: Could not parse MetaGraph from /tmp/file8doOYv
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine

I am using TensorRT 7.0 and CUDA 10.2. What's the issue then?
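One possible explanation, offered as an assumption: the LPRNet .etlt wraps an ONNX graph, and a UffParser error suggests the converter is falling back to the UFF path, which usually points to a tlt-converter binary built for the wrong TensorRT/CUDA combination rather than to a wrong key. A quick way to check which TensorRT version is actually installed on a Debian-based system:

```sh
# List installed TensorRT runtime/parser packages; the ONNX-based .etlt
# models need a tlt-converter built against a matching, sufficiently
# recent TensorRT release.
dpkg -l | grep -i nvinfer
```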

necatiCelik commented 3 years ago

> You need to remove "image_input,", i.e. it becomes: ./tlt-converter -k nvidia_tlt -p 1x3x48x96,4x3x48x96,16x3x48x96 ./us_lprnet_baseline18_deployable.etlt -t fp16 -e lpr_us_onnx_b16.engine

That's true, thanks. But when I do it, I get the same error you did:

[ERROR] UffParser: Could not parse MetaGraph from /tmp/filea498W4
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] Network must have at least one output
[ERROR] Network validation failed.
[ERROR] Unable to create engine
Segmentation fault (core dumped)

ClaudioCampuzano commented 3 years ago

I have the same problem on x86 with an NVIDIA GPU.

123Vincent2018 commented 2 years ago

I ran this command:

./tlt-converter -k nvidia_tlt -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 /Workspacelpr_app/models/LP/LPR/ch_lprnet_baseline18_deployable.etlt -t fp16 -e models/LP/LPR/lpr_ch_onnx_b16.engine

and got the following output:

[WARNING] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32. (repeated 9 times)
[WARNING] Tensor DataType is determined at build time for tensors not marked as input or output.
[WARNING] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[WARNING] Calling isShapeTensor before the entire network is constructed may result in an inaccurate result. (repeated 5 times)
[INFO] Detected input dimensions from the model: (-1, 3, 48, 96)
[INFO] Model has dynamic shape. Setting up optimization profiles.
[INFO] Using optimization profile min shape: (1, 3, 48, 96) for input: image_input
[INFO] Using optimization profile opt shape: (4, 3, 48, 96) for input: image_input
[INFO] Using optimization profile max shape: (16, 3, 48, 96) for input: image_input
[INFO] Detected 1 inputs and 2 output network tensors.

It looks like the conversion succeeded, but no .engine file shows up in the target directory. How can that happen? Is something wrong?
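Two things worth checking, offered as guesses rather than a confirmed fix: the -e path here is relative (models/LP/LPR/...), so the engine would be written relative to whatever directory the command was run from, not necessarily the directory being checked; and serialization can fail if that directory does not exist, since the converter is not expected to create missing directories. A sketch using absolute paths (taken from the command above and possibly needing adjustment):

```sh
# Create the output directory first and pass an absolute path to -e,
# so the engine cannot silently end up relative to a different CWD.
mkdir -p /Workspacelpr_app/models/LP/LPR
./tlt-converter -k nvidia_tlt \
    -p image_input,1x3x48x96,4x3x48x96,16x3x48x96 \
    /Workspacelpr_app/models/LP/LPR/ch_lprnet_baseline18_deployable.etlt \
    -t fp16 \
    -e /Workspacelpr_app/models/LP/LPR/lpr_ch_onnx_b16.engine
```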