dusty-nv / ros_deep_learning

Deep learning inference nodes for ROS / ROS2 with support for NVIDIA Jetson and TensorRT

Error: ROS node can't find model files! #32

Closed: JeyP4 closed this issue 4 years ago

JeyP4 commented 4 years ago

Hello, I followed the instructions for ros_deep_learning.

I installed jetson-inference to a non-default location with the following command:

    cmake -DCMAKE_INSTALL_PREFIX:PATH=~/jetson-inference ../

Then I appended the required paths:

    export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/nano/jetson-inference/include
    export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:/home/nano/jetson-inference/share/jetson-utils/cmake:/home/nano/jetson-inference/share/jetson-inference/cmake
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/nano/jetson-inference/lib
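(Side note: rosrun only sees variables from the shell it is launched in, so a minimal sketch of persisting these, assuming bash is the login shell:)

    # append the same exports to ~/.bashrc so new terminals inherit them
    cat >> ~/.bashrc <<'EOF'
    export CPLUS_INCLUDE_PATH=$CPLUS_INCLUDE_PATH:/home/nano/jetson-inference/include
    export CMAKE_PREFIX_PATH=$CMAKE_PREFIX_PATH:/home/nano/jetson-inference/share/jetson-utils/cmake:/home/nano/jetson-inference/share/jetson-inference/cmake
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/nano/jetson-inference/lib
    EOF
    source ~/.bashrc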

Testing jetson-inference directly succeeds:

    ./detectnet-console images/street.jpg street.jpg

Problem: the ROS node can't find the model files:

    nano@nano:~$ rosrun ros_deep_learning detectnet /imagenet/image_in:=/image_publisher/image_raw _model_name:=ssd-mobilenet-v2

    detectNet -- loading detection network model from:
              -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
              -- input_blob   'Input'
              -- output_blob  'NMS'
              -- output_count 'NMS_1'
              -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
              -- threshold    0.500000
              -- batch_size   1

    [TRT] TensorRT version 6.0.1
    [TRT] loading NVIDIA plugins...
    [TRT] Plugin Creator registration succeeded - GridAnchor_TRT
    [TRT] Plugin Creator registration succeeded - GridAnchorRect_TRT
    [TRT] Plugin Creator registration succeeded - NMS_TRT
    [TRT] Plugin Creator registration succeeded - Reorg_TRT
    [TRT] Plugin Creator registration succeeded - Region_TRT
    [TRT] Plugin Creator registration succeeded - Clip_TRT
    [TRT] Plugin Creator registration succeeded - LReLU_TRT
    [TRT] Plugin Creator registration succeeded - PriorBox_TRT
    [TRT] Plugin Creator registration succeeded - Normalize_TRT
    [TRT] Plugin Creator registration succeeded - RPROI_TRT
    [TRT] Plugin Creator registration succeeded - BatchedNMS_TRT
    [TRT] Could not register plugin creator: FlattenConcat_TRT in namespace:
    [TRT] completed loading NVIDIA plugins.
    [TRT] detected model format - UFF (extension '.uff')
    [TRT] desired precision specified for GPU: FASTEST
    [TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
    [TRT] native precisions detected for GPU: FP32, FP16
    [TRT] selecting fastest native precision for GPU: FP16
    [TRT] attempting to open engine cache file .1.1.GPU.FP16.engine
    [TRT] cache file not found, profiling network model on device GPU

    error: model file 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff' was not found.
           if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

    detectNet -- failed to initialize.
    [ERROR] [1583696440.892525823]: failed to load detectNet model
    nano@nano:~$

dusty-nv commented 4 years ago

Hi @JeyP4, can you check if you can find this file on your system:

    /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff

If it's not there, perhaps you need to do a sudo make install under <jetson-inference>/build or use the Model Downloader tool to download that model again.
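For example (a sketch; JETSON_INFERENCE below is a placeholder for wherever the repo is checked out):

    # check whether the model landed in the default install location
    ls -lh /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff

    # if it's missing, re-run the Model Downloader and reinstall
    JETSON_INFERENCE=/path/to/jetson-inference   # placeholder path
    cd "$JETSON_INFERENCE/tools" && ./download-models.sh
    cd "$JETSON_INFERENCE/build" && sudo make install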

JeyP4 commented 4 years ago

To fix everything, I repeated the build, this time installing to the default location:

cmake -DCMAKE_INSTALL_PREFIX:PATH=/usr/local ../
make -j4
sudo make install

Now it is able to find the models.
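For anyone hitting the same thing: the relative path networks/SSD-Mobilenet-v2/... in the log appears to resolve against the install prefix, which is why the /usr/local install works. A quick sanity check (a sketch, assuming the default /usr/local prefix):

    # verify the built-in model files are where the loader expects them
    ls /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff \
       /usr/local/bin/networks/SSD-Mobilenet-v2/ssd_coco_labels.txt

    # the node should now start without the 'model file was not found' error
    rosrun ros_deep_learning detectnet /imagenet/image_in:=/image_publisher/image_raw _model_name:=ssd-mobilenet-v2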

Thank you for your prompt help.