NVIDIA-ISAAC-ROS / isaac_ros_dnn_stereo_depth

NVIDIA-accelerated, deep learned stereo disparity estimation
https://developer.nvidia.com/isaac-ros-gems
Apache License 2.0

Error of converting ess model via tao converter #11

Zion-Go closed this issue 1 year ago

Zion-Go commented 1 year ago

Hello!

When I followed the usage in the README and tried to run this: /opt/nvidia/tao/tao-converter -k ess -t fp16 -e /.. I got the error: no input dimension given. When I added the argument -d 3,576,960, for instance, I got a bunch of errors:

[INFO] [MemUsageChange] Init CUDA: CPU +564, GPU +0, now: CPU 1929, GPU 277 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +516, GPU +116, now: CPU 2498, GPU 393 (MiB)
[WARNING] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[WARNING] The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
[ERROR] UffParser: Unsupported number of graph 0
[ERROR] Failed to parse the model, please check the encoding key to make sure it's correct
[ERROR] 4: [network.cpp::validate::2738] Error Code 4: Internal Error (Network must have at least one output)
[ERROR] Unable to create engine
Segmentation fault (core dumped)

Can I ask whether the ESS model is still supported by tao-converter?

Thank you!

jaiveersinghNV commented 1 year ago

Could you please provide the full output from the first run of tao-converter, before you specified the input dimensions?

Assuming that the ess.etlt file has been correctly downloaded, TAO Converter should be able to read the model file and correctly identify the input dimensions automatically.
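For reference, the conversion command in the README has roughly the following shape; the engine path, model path, and output tensor names below are placeholders for illustration, so take the exact values from the README for your release:

/opt/nvidia/tao/tao-converter \
    -k ess \
    -t fp16 \
    -e <path to write the generated ess.engine> \
    -o <output tensor names listed in the README> \
    <path to the downloaded ess.etlt>

Note that no -d flag should be needed, since the input dimensions are read from the model itself, as described above.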

The relevant snippet of the logs should look something like this:

[INFO] [MemUsageChange] Init CUDA: CPU +565, GPU +0, now: CPU 577, GPU 264 (MiB)
[INFO] [MemUsageChange] Init builder kernel library: CPU +517, GPU +116, now: CPU 1146, GPU 380 (MiB)
[WARNING] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[INFO] ----------------------------------------------------------------
[INFO] Input filename:   /tmp/filecItCOA
[INFO] ONNX IR version:  0.0.7
[INFO] Opset version:    13
[INFO] Producer name:    pytorch
[INFO] Producer version: 1.10
[INFO] Domain:           
[INFO] Model version:    0
[INFO] Doc string:       
[INFO] ----------------------------------------------------------------
[WARNING] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Detected input dimensions from the model: (1, 3, 576, 960)
[INFO] Model has no dynamic shape.
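As a side note, the lazy loading warning that appears in both logs is unrelated to the failure; if you want to silence it, the CUDA_MODULE_LOADING environment variable referenced in the warning can be set before running the converter, for example:

export CUDA_MODULE_LOADING=LAZY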
Zion-Go commented 1 year ago

Sure! I have ess.etlt here. [Screenshot from 2023-07-31 08-56-57]

And here is the full output from before I specified the input dimensions. [Screenshot from 2023-07-31 08-59-12]

swapnesh-wani-nvidia commented 1 year ago

I think the error is caused by an incorrect path to the model file. From the screenshot, I can see that you cloned isaac_ros_dnn_stereo_disparity to /workspaces/isaac_ros-dev/isaac_ros_dnn_stereo_disparity, which is missing the src directory that appears in the value of the -e flag of the command you are running.
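For example, one way to bring the layout back in line with what the README expects (the paths below mirror the screenshot and the README's assumed workspace layout, so treat them as an illustration):

# The README assumes the repository lives under the workspace's src directory:
ls /workspaces/isaac_ros-dev/src/isaac_ros_dnn_stereo_disparity

# If the clone ended up one level too high, move it under src:
mkdir -p /workspaces/isaac_ros-dev/src
mv /workspaces/isaac_ros-dev/isaac_ros_dnn_stereo_disparity /workspaces/isaac_ros-dev/src/

# ...and then re-run tao-converter with the -e path from the README.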

Zion-Go commented 1 year ago

Yes, you are right. I linked my folders incorrectly... Thank you very much for your time.