Closed: JCP13 closed this issue 5 years ago.
Hi there, did you run "sudo make install" before trying this?
Could you also try running "sudo ldconfig"?
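For anyone hitting the same ImportError, a quick way to verify that `sudo make install` plus `sudo ldconfig` actually made the library visible is to ask the dynamic linker directly from Python. A minimal sketch (the library name comes from the error message in this thread):

```python
import ctypes.util

def shared_lib_visible(name):
    """Ask the dynamic linker whether lib<name> is resolvable (None = not found)."""
    return ctypes.util.find_library(name)

# After `sudo make install` and `sudo ldconfig`, this should print a soname
# like 'libjetson-utils.so'; None means the linker still can't find it.
print(shared_lib_visible("jetson-utils"))
```

If this prints None after installing, the library was installed somewhere outside the linker's configured search paths.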
Thanks, Dusty
From: JCP_13 notifications@github.com Sent: Sunday, July 21, 2019 6:27:12 PM To: dusty-nv/jetson-inference jetson-inference@noreply.github.com Cc: Subscribed subscribed@noreply.github.com Subject: [dusty-nv/jetson-inference] issue running imagenet-console.py (#369)
Greetings,
I am having an error when running imagenet-console.py. Please see the error below.
jetson-inference/build/aarch64/bin$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg
jetson.inference.__init__.py
Traceback (most recent call last):
  File "./imagenet-console.py", line 24, in <module>
    import jetson.inference
  File "/usr/lib/python2.7/dist-packages/jetson/inference/__init__.py", line 4, in <module>
    from jetson_inference_python import *
ImportError: libjetson-utils.so: cannot open shared object file: No such file or directory
Also, all the Python scripts are giving errors, but all the C++ examples are working as expected.
Any help will greatly be appreciated.
Thank you for your reply. Yes, I did run "sudo make install" without any errors that I could see.
$ cd jetson-inference/build    # omit if pwd is already /build from above
$ make
$ sudo make install
As for "sudo ldconfig", should I run it in "jetson-inference/build"?
I don't think it matters where you run ldconfig from, but try running it from build/aarch64/bin/.
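To expand on why the current directory doesn't matter: ldconfig rebuilds the linker cache from the directories listed in /etc/ld.so.conf and /etc/ld.so.conf.d/, not from wherever it was launched. A minimal sketch that parses those config files (simplified on purpose; `include` directives are ignored):

```python
import glob
import pathlib

def linker_config_dirs(conf_dir="/etc/ld.so.conf.d"):
    """Collect the directories the dynamic linker is configured to search.
    Simplified sketch: reads *.conf files, skips comments and blank lines."""
    dirs = []
    for conf in sorted(glob.glob(f"{conf_dir}/*.conf")):
        for line in pathlib.Path(conf).read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                dirs.append(line)
    return dirs
```

If the directory where the jetson libraries were installed shows up in this list, a plain `sudo ldconfig` run from any directory will register a freshly installed libjetson-utils.so.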
That did it! Thank you, Dusty! :)
Hi Dusty,
Sorry, I am running into another error, please see below.
./detectnet-camera --camera=/dev/video1 --width=640 --height=480
[gstreamer] initialized gstreamer, version 1.14.4.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVARGUS, camera /dev/video1
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video1 ! video/x-raw, width=(int)640, height=(int)480, format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2, camera /dev/video1
detectnet-camera: successfully initialized camera device
    width:  640
   height:  480
    depth:  24 (bpp)
detectNet -- loading detection network model from:
          -- prototxt     networks/ped-100/deploy.prototxt
          -- model        networks/ped-100/snapshot_iter_70800.caffemodel
          -- input_blob   'data'
          -- output_cvg   'coverage'
          -- output_bbox  'bboxes'
          -- mean_pixel   0.000000
          -- mean_binary  NULL
          -- class_labels networks/ped-100/class_labels.txt
          -- threshold    0.500000
          -- batch_size   1
[TRT]  TensorRT version 5.0.6
[TRT]  loading NVIDIA plugins...
[TRT]  completed loading NVIDIA plugins.
[TRT]  detected model format - caffe (extension '.caffemodel')
[TRT]  desired precision specified for GPU: FASTEST
[TRT]  requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT]  native precisions detected for GPU: FP32, FP16
[TRT]  selecting fastest native precision for GPU: FP16
[TRT]  attempting to open engine cache file networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]  loading network profile from engine cache... networks/ped-100/snapshot_iter_70800.caffemodel.1.1.GPU.FP16.engine
[TRT]  device GPU, networks/ped-100/snapshot_iter_70800.caffemodel loaded
[TRT]  device GPU, CUDA engine context initialized with 3 bindings
[TRT]  binding -- index 0 -- name 'data' -- type FP32 -- in/out INPUT -- # dims 3 -- dim #0 3 (CHANNEL) -- dim #1 512 (SPATIAL) -- dim #2 1024 (SPATIAL)
[TRT]  binding -- index 1 -- name 'coverage' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 1 (CHANNEL) -- dim #1 32 (SPATIAL) -- dim #2 64 (SPATIAL)
[TRT]  binding -- index 2 -- name 'bboxes' -- type FP32 -- in/out OUTPUT -- # dims 3 -- dim #0 4 (CHANNEL) -- dim #1 32 (SPATIAL) -- dim #2 64 (SPATIAL)
[TRT]  binding to input 0 data  binding index: 0
[TRT]  binding to input 0 data  dims (b=1 c=3 h=512 w=1024) size=6291456
[TRT]  binding to output 0 coverage  binding index: 1
[TRT]  binding to output 0 coverage  dims (b=1 c=1 h=32 w=64) size=8192
[TRT]  binding to output 1 bboxes  binding index: 2
[TRT]  binding to output 1 bboxes  dims (b=1 c=4 h=32 w=64) size=32768
device GPU, networks/ped-100/snapshot_iter_70800.caffemodel initialized.
detectNet -- number object classes:  1
detectNet -- maximum bounding boxes: 0
detectnet-camera:  failed to load detectNet model
It fails with the same error in both Python and C++. Any thoughts?
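For what it's worth, the log above ends with "maximum bounding boxes: 0" right before the failure, which suggests the loader computed an invalid box count from the network metadata. A hypothetical guard (my own sketch, not the actual jetson-inference code) that fails fast on this kind of bad metadata:

```python
def validate_detector_outputs(num_classes, max_boxes):
    """Sanity-check parsed detection-network metadata before building the detector.
    Hypothetical guard mirroring the 'maximum bounding boxes: 0' symptom above."""
    if num_classes < 1:
        raise ValueError(f"invalid number of object classes: {num_classes}")
    if max_boxes < 1:
        raise ValueError(f"invalid maximum bounding boxes: {max_boxes}")
    return True
```

A check like this turns a vague "failed to load model" into an error that names the bad value directly.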
Hi @JCP13, this was a temporary bug that I fixed in https://github.com/dusty-nv/jetson-inference/commit/f98f6b401f996b92788b883cba6ea9582568b813, so try pulling from master and re-compiling, or re-cloning the repo.
Thank you, Dusty. It is now working.
Is there a reason why I cannot load "SSD_MOBILENET_V1"?

./detectnet-camera.py --network=SSD_MOBILENET_V1 --camera=/dev/video1
Thank you.
Had the same issue as the originator. Ran "sudo ldconfig" in jetson-inference/build/aarch64/bin and was able to successfully run imagenet-console.py.
@dusty-nv, my bad, as it turns out ssd_mobilenet_v1 needed to be all lower case.
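A plausible explanation for the casing requirement (an assumption on my part, not verified against the jetson-inference source) is that the network name is compared case-sensitively against a table of lowercase keys, so "SSD_MOBILENET_V1" misses while the lowercase spelling hits. Normalizing the name on lookup would make both work; the network table below is illustrative only:

```python
# Hypothetical table of known network names (lowercase, dash-separated).
KNOWN_NETWORKS = {"ssd-mobilenet-v1", "ssd-mobilenet-v2", "googlenet", "ped-100"}

def resolve_network(name):
    """Normalize a user-supplied network name before lookup (hypothetical helper).
    Returns the canonical key, or None if the network is unknown."""
    key = name.strip().lower().replace("_", "-")
    return key if key in KNOWN_NETWORKS else None
```

With this kind of normalization, resolve_network("SSD_MOBILENET_V1") and resolve_network("ssd-mobilenet-v1") resolve to the same entry.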
When I run, $./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg inside the "jetson-inference/build/aarch64/bin" folder, I get the following error.
Traceback (most recent call last):
  File "/home/user_name/my_jetson_py3env/lib/python3.6/site.py", line 67, in
But earlier,

$ ./imagenet-console.py --network=googlenet orange_0.jpg output_0.jpg

used to run without errors.
Appreciate help.
Hi @dusty-nv, I was able to run it by prefixing with 'sudo', as:
sudo ./imagenet-console.py --network=googlenet images/orange_0.jpg output_0.jpg
May I please know what has to be done to get it running without sudo, as before? Please note that the issue exists only when running the Python scripts; the C++ examples work as normal.
Any help is appreciated. Thanks in advance.
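A script that only runs under sudo often points at file permissions or ownership on the virtualenv or the installed bindings (for example, if they were created while running as root). One way to narrow it down is to check whether the normal user can read the relevant paths; a minimal sketch, where the paths are examples and should be replaced with the ones from your own traceback:

```python
import os

def readable_by_me(path):
    """True if the current (non-root) user can read the path.
    A False here for an installed module would explain sudo-only behavior."""
    return os.access(path, os.R_OK)

# Example paths to probe -- substitute the ones from your own traceback:
for p in ("/usr/lib/python2.7/dist-packages/jetson",
          os.path.expanduser("~/my_jetson_py3env")):
    print(p, readable_by_me(p))
```

If any path prints False, fixing ownership or read permissions on it (rather than running with sudo) is the usual remedy.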