dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License

issue running imagenet-console.py #369

Closed JCP13 closed 5 years ago

JCP13 commented 5 years ago

Greetings,

I am getting an error when running imagenet-console.py. Please see the error below.

jetson-inference/build/aarch64/bin$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg
jetson.inference.__init__.py
Traceback (most recent call last):
  File "./imagenet-console.py", line 24, in <module>
    import jetson.inference
  File "/usr/lib/python2.7/dist-packages/jetson/inference/__init__.py", line 4, in <module>
    from jetson_inference_python import *
ImportError: libjetson-utils.so: cannot open shared object file: No such file or directory

Also, all of the Python scripts are giving errors, but all of the C++ examples are working as expected.

Any help would be greatly appreciated.

dusty-nv commented 5 years ago

Hi there, did you run "sudo make install" before trying this?

Could you also try running "sudo ldconfig"?
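The rough sequence would be something like this (assuming the default Hello AI World build location; the comments are just a sketch of what each step does):

$ cd jetson-inference/build     # or wherever you configured the build
$ make
$ sudo make install             # should install libjetson-utils.so / libjetson-inference.so and the Python bindings
$ sudo ldconfig                 # refresh the shared-library cache so the newly installed .so files can be found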

Thanks, Dusty



JCP13 commented 5 years ago

Thank you for your reply. Yes, I did run "sudo make install" without any errors that I could see.

$ cd jetson-inference/build     # omit if pwd is already /build from above
$ make
$ sudo make install

As for "sudo ldconfig", should I run it in "jetson-inference/build"?

dusty-nv commented 5 years ago

I don't think it matters where you run ldconfig from, but try running it from build/aarch64/bin/.
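Something like this should also show whether the loader can see the library afterwards (the grep pattern is just an example):

$ cd jetson-inference/build/aarch64/bin
$ sudo ldconfig
$ ldconfig -p | grep jetson     # should list libjetson-utils.so and libjetson-inference.so once the cache is updated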



JCP13 commented 5 years ago

That did it! Thank you, Dusty!! :)

JCP13 commented 5 years ago

Hi Dusty,

Sorry, I am running into another error, please see below.

./detectnet-camera --camera=/dev/video1 --width=640 --height=480
[gstreamer] initialized gstreamer, version 1.14.4.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera /dev/video1
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video1 ! video/x-raw, width=(int)640, height=(int)480, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video1

detectnet-camera:  successfully initialized camera device
    width: 640  height: 480  depth: 24 (bpp)

detectNet -- loading detection network model from:
          -- prototxt     networks/ped-100/deploy.prototxt
          -- model        networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels networks/ped-100/class_labels.txt
          -- threshold    0.500000
          -- batch_size   1

[TRT]  TensorRT version 5.0.6
[TRT]  loading NVIDIA plugins...
[TRT]  completed loading NVIDIA plugins.
[TRT]  detected model format - caffe (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU: FP32, FP16
[TRT]  selecting fastest native precision for GPU: FP16
[TRT]  attempting to open engine cache file networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]  loading network profile from engine cache... networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]  device GPU, networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT]  device GPU, CUDA engine context initialized with 3 bindings
[TRT]  binding -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 (CHANNEL) -- dim #1 512 (SPATIAL) -- dim #2 1024 (SPATIAL)
[TRT]  binding -- index 1 -- name 'coverage' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1 (CHANNEL) -- dim #1 32 (SPATIAL) -- dim #2 64 (SPATIAL)
[TRT]  binding -- index 2 -- name 'bboxes' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 4 (CHANNEL) -- dim #1 32 (SPATIAL) -- dim #2 64 (SPATIAL)
[TRT]  binding to input 0 data  binding index: 0
[TRT]  binding to input 0 data  dims (b=1 c=3 h=512 w=1024) size=6291456
[TRT]  binding to output 0 coverage  binding index: 1
[TRT]  binding to output 0 coverage  dims (b=1 c=1 h=32 w=64) size=8192
[TRT]  binding to output 1 bboxes  binding index: 2
[TRT]  binding to output 1 bboxes  dims (b=1 c=4 h=32 w=64) size=32768
device GPU, networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet -- number object classes: 1
detectNet -- maximum bounding boxes: 0
detectnet-camera: failed to load detectNet model

It fails with the same error in both Python and C++. Any thoughts?

dusty-nv commented 5 years ago

Hi @JCP13, this was a temporary bug that I fixed in https://github.com/dusty-nv/jetson-inference/commit/f98f6b401f996b92788b883cba6ea9582568b813, so try pulling from master and re-compiling, or re-cloning the repo.
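Roughly something like this, assuming you built in the default jetson-inference/build directory:

$ cd jetson-inference
$ git pull origin master
$ cd build
$ cmake ../                     # re-run cmake in case the build files changed
$ make
$ sudo make install
$ sudo ldconfig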

JCP13 commented 5 years ago

Thank you Dusty, it is now working.

Is there a reason why I cannot load "SSD_MOBILENET_V1"?

$ ./detectnet-camera.py --network=SSD_MOBILENET_V1 --camera=/dev/video1

Thank you.

edthezombie commented 5 years ago

Had the same issue as the originator. Ran "sudo ldconfig" in jetson-inference/build/aarch64/bin and was able to successfully run imagenet-console.py.

JCP13 commented 5 years ago

@dusty-nv, my bad; as it turns out, the network name needed to be all lowercase: ssd_mobilenet_v1.
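For reference, the command that worked (same camera device as before):

$ ./detectnet-camera.py --network=ssd_mobilenet_v1 --camera=/dev/video1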

csmanu commented 4 years ago

When I run the following inside the "jetson-inference/build/aarch64/bin" folder, I get the error below:

$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg

Traceback (most recent call last):
  File "/home/user_name/my_jetson_py3env/lib/python3.6/site.py", line 67, in <module>
    import os
  File "/home/user_name/my_jetson_py3env/lib/python3.6/os.py", line 409
    yield from walk(new_path, topdown, onerror, followlinks)
             ^
SyntaxError: invalid syntax

But earlier, the same command used to run fine, as shown below:

$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg

[screenshot: inference_googlenet_object_classification]

Appreciate the help.

csmanu commented 4 years ago

Hi @dusty-nv, I was able to run it by prefixing the command with 'sudo':

$ sudo ./imagenet-console.py --network=googlenet images/orange_0.jpg output_0.jpg

May I please know what has to be done to get it running without sudo, as before? Please note that the issue exists only when running the Python code; the C++ examples work as normal.

Any help is appreciated. Thanks in advance.
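For what it's worth, a rough way to compare the interpreter and environment with and without sudo would be something like this (the paths and variables here are generic examples, not specific to my setup):

$ head -1 ./imagenet-console.py          # which interpreter the script's shebang asks for
$ which python && python --version       # interpreter picked up as a normal user
$ sudo which python                      # interpreter picked up under sudo
$ echo "$VIRTUAL_ENV" "$PYTHONPATH"      # whether a virtualenv is overriding the library paths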