dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

my-detection.py "'ssd_mobilenet_v2_coco.uff' was not found." error #540

Closed canozcivelek closed 1 year ago

canozcivelek commented 4 years ago

Hi, I've been working with the amazing jetson-inference project for several weeks now, and it was working great until I suddenly started receiving an error about the pretrained model file. I can confirm that the ssd_mobilenet_v2_coco.uff file exists under the networks directory. I'm on a Jetson Nano with a compatible Logitech USB webcam.
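
For reference, my script is essentially the my-detection.py from the Hello AI World tutorial, reconstructed here (assuming the webcam enumerates as /dev/video0; adjust as needed):

    import jetson.inference
    import jetson.utils

    # load the pretrained SSD-Mobilenet-v2 detector -- this is the line that fails
    net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

    # open the USB webcam (assuming it shows up as /dev/video0)
    camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")
    display = jetson.utils.glDisplay()

    while display.IsOpen():
        img, width, height = camera.CaptureRGBA()    # grab a frame from the camera
        detections = net.Detect(img, width, height)  # run detection on the frame
        display.RenderOnce(img, width, height)       # draw the frame with overlay
        display.SetTitle("Object Detection | {:.0f} FPS".format(net.GetNetworkFPS()))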

The exact console output is as follows:

    jetson.inference.__init__.py
    jetson.inference -- initializing Python 3.6 bindings...
    jetson.inference -- registering module types...
    jetson.inference -- done registering module types
    jetson.inference -- done Python 3.6 binding initialization
    jetson.utils.__init__.py
    jetson.utils -- initializing Python 3.6 bindings...
    jetson.utils -- registering module functions...
    jetson.utils -- done registering module functions
    jetson.utils -- registering module types...
    jetson.utils -- done registering module types
    jetson.utils -- done Python 3.6 binding initialization
    jetson.inference -- PyTensorNet_New()
    jetson.inference -- PyDetectNet_Init()
    jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'

    detectNet -- loading detection network model from:
              -- model        networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
              -- input_blob   'Input'
              -- output_blob  'NMS'
              -- output_count 'NMS_1'
              -- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
              -- threshold    0.500000
              -- batch_size   1

    [TRT]  TensorRT version 5.1.6
    [TRT]  loading NVIDIA plugins...
    [TRT]  Plugin Creator registration succeeded - GridAnchor_TRT
    [TRT]  Plugin Creator registration succeeded - NMS_TRT
    [TRT]  Plugin Creator registration succeeded - Reorg_TRT
    [TRT]  Plugin Creator registration succeeded - Region_TRT
    [TRT]  Plugin Creator registration succeeded - Clip_TRT
    [TRT]  Plugin Creator registration succeeded - LReLU_TRT
    [TRT]  Plugin Creator registration succeeded - PriorBox_TRT
    [TRT]  Plugin Creator registration succeeded - Normalize_TRT
    [TRT]  Plugin Creator registration succeeded - RPROI_TRT
    [TRT]  Plugin Creator registration succeeded - BatchedNMS_TRT
    [TRT]  completed loading NVIDIA plugins.
    [TRT]  detected model format - UFF (extension '.uff')
    [TRT]  desired precision specified for GPU: FASTEST
    [TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
    [TRT]  native precisions detected for GPU: FP32, FP16
    [TRT]  selecting fastest native precision for GPU: FP16
    [TRT]  attempting to open engine cache file .1.1.GPU.FP16.engine
    [TRT]  cache file not found, profiling network model on device GPU

    error:  model file 'networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff' was not found.
            if loading a built-in model, maybe it wasn't downloaded before.

    Run the Model Downloader tool again and select it for download:

       $ cd <jetson-inference>/tools
       $ ./download-models.sh

    detectNet -- failed to initialize.
    jetson.inference -- detectNet failed to load built-in network 'ssd-mobilenet-v2'
    PyTensorNet_Dealloc()
    Traceback (most recent call last):
      File "detect.py", line 4, in <module>
        net = jetson.inference.detectNet('ssd-mobilenet-v2', threshold=0.5)
    Exception: jetson.inference -- detectNet failed to load network
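
For what it's worth, this is how I verified that the model file is on disk (paths assume the default source layout, with models under <jetson-inference>/data/networks):

       $ ls <jetson-inference>/data/networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff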

I cannot seem to find a solution and hope to hear from you soon.

Thank you very much!

irfwas commented 4 years ago

I am getting the same error. Have you managed to solve it, @canozcivelek?

huansu commented 3 years ago

Hey! You can download it here: https://github.com/dusty-nv/jetson-inference/releases. Choose the model you need and unzip it under data/networks.
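
For example, for the model in this issue (assuming the archive on the releases page is named SSD-Mobilenet-v2.tar.gz and was saved to ~/Downloads):

       $ cd <jetson-inference>/data/networks
       $ tar -xzvf ~/Downloads/SSD-Mobilenet-v2.tar.gz

After extracting, the .uff and label files should sit in a SSD-Mobilenet-v2 subdirectory, matching the path the loader prints in the log above.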