Closed — S4WRXTTCS closed this issue 1 year ago
It seems to fail on:
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: streaming stopped, reason not-negotiated (-4)
Here is the complete log:
nvidia@jetson-0423018055236:~/jetson-inference/build/aarch64/bin$ ./imagenet-camera googlenet
imagenet-camera
  args (2):  0 [./imagenet-camera]  1 [googlenet]
[gstreamer] initialized gstreamer, version 1.14.1.0
[gstreamer] gstCamera attempting to initialize with GST_SOURCE_NVCAMERA
[gstreamer] gstCamera pipeline string:
v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink
[gstreamer] gstCamera successfully initialized with GST_SOURCE_V4L2
imagenet-camera:  successfully initialized video device
  width: 1280  height: 720  depth: 24 (bpp)
imageNet -- loading classification network model from:
  -- prototxt     networks/googlenet.prototxt
  -- model        networks/bvlc_googlenet.caffemodel
  -- class_labels networks/ilsvrc12_synset_words.txt
  -- input_blob   'data'
  -- output_blob  'prob'
  -- batch_size   2
[TRT] TensorRT version 5.0.0
[TRT] attempting to open cache file networks/bvlc_googlenet.caffemodel.2.tensorcache
[TRT] loading network profile from cache... networks/bvlc_googlenet.caffemodel.2.tensorcache
[TRT] platform has FP16 support.
[TRT] networks/bvlc_googlenet.caffemodel loaded
[TRT] CUDA engine context initialized with 2 bindings
[TRT] networks/bvlc_googlenet.caffemodel input binding index: 0
[TRT] networks/bvlc_googlenet.caffemodel input dims (b=2 c=3 h=224 w=224) size=1204224
[cuda] cudaAllocMapped 1204224 bytes, CPU 0x21c05e000 GPU 0x21c05e000
[TRT] networks/bvlc_googlenet.caffemodel output 0 prob binding index: 1
[TRT] networks/bvlc_googlenet.caffemodel output 0 prob dims (b=2 c=1000 h=1 w=1) size=8000
[cuda] cudaAllocMapped 8000 bytes, CPU 0x21c25e000 GPU 0x21c25e000
networks/bvlc_googlenet.caffemodel initialized.
[TRT] networks/bvlc_googlenet.caffemodel loaded
imageNet -- loaded 1000 class info entries
networks/bvlc_googlenet.caffemodel initialized.
default X screen 0: 1920 x 1080
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x21c45e000 GPU 0x21c45e000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x21c260000 GPU 0x21c260000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstCamera onEOS
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
imagenet-camera: camera open for streaming
imagenet-camera:  failed to capture frame
imagenet-camera:  failed to convert from NV12 to RGBA
imageNet::Classify( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4)imgRGBA, make_float2(0.0f, 255.0f), (float4)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda]    invalid device pointer (error 17) (hex 0x11)
[cuda]    /home/nvidia/jetson-inference/imagenet-camera/imagenet-camera.cpp:193
[cuda] registered 14745600 byte openGL texture for interop access (1280x720)
imagenet-camera:  failed to capture frame
imagenet-camera:  failed to convert from NV12 to RGBA
imageNet::Classify( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4)imgRGBA, make_float2(0.0f, 255.0f), (float4)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda]    invalid device pointer (error 17) (hex 0x11)
[cuda]    /home/nvidia/jetson-inference/imagenet-camera/imagenet-camera.cpp:193
Just want to add that I tested the camera successfully with the following GStreamer pipeline:
gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=640, height=480, format=(string)YUY2" ! xvimagesink -e
Thanks @S4WRXTTCS, can you see if changing gstCamera.cpp:362 to the following works in jetson-inference?
ss << "format=YUY2 ! videoconvert ! video/x-raw, format=RGB ! videoconvert !";
Thanks. That fixes the problem as far as I can tell.
The only issues now seem to be that it's only showing a frame rate of 10 FPS, and that it incorrectly says alexnet instead of googlenet.
I originally had it as RGB because that gave the best framerate/latency for USB cameras, but it seems RGB is no longer in the camera's formats list.
Is it saying alexnet in the title bar or in the console output? If you re-clone, the title bar should be updated; it now shows “TensorRT build | FP16/FP32 | FPS”.
It's saying alexnet in the titlebar. In the console output it's correctly showing googlenet.
I think that's an artifact from the default title from older code. The title bar should be updated here: https://github.com/dusty-nv/jetson-inference/blob/3b072b6fd741a940543b5560a9bc7d21374e7ff1/detectnet-camera/detectnet-camera.cpp#L230
When I merge some changes from the dev branch and implement detection of the Xavier/TX1/TX2 chip ID, I can select the YUY2 format instead of RGB on Xavier in the V4L2 path of gstCamera.
Just a quick note about the frame rate.
It's not an issue with jetson-inference; that's simply the camera's frame rate at 1280x720.
Apparently I'm going to need a faster camera. :-)
Hello, I am experiencing this same issue, and I tried changing RGB to YUY2 in gstCamera.cpp:362 as per dusty-nv#267. However, that took me a step backwards, because the console output now reads 'imagenet-camera: failed to initialize video device'. Before the change, the camera would initialize as above but the console box showed no video output. I think the camera isn't actually streaming, because the blue lights around the Logitech C920 don't light up the way they do when I use Cheese or this command: gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=640, height=480, format=(string)YUY2" ! xvimagesink -e
As I understand it, jetson-inference supports the Jetson Xavier (with JetPack 4.0 DP) as of the latest commit (81b84d7). But when I try imagenet-camera, I'm unable to get any video from a Logitech C920 web camera.
I initially tried setting the DEFAULT_CAMERA connection to 0 when I had the C920 as /dev/video0, and I've also tried it with a setting of 1 when I had two USB UVC cameras attached to the system.
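For anyone else trying this, the `DEFAULT_CAMERA` macro lives near the top of imagenet-camera.cpp; from memory it looks roughly like this (check the actual file, as the exact comment text may differ):

```cpp
// -1 selects the onboard MIPI CSI camera;
// 0 or higher selects /dev/video<N> (a V4L2 USB camera such as the C920)
#define DEFAULT_CAMERA 0
```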
In previous experiences with the Jetson TX1/TX2 it sometimes required max performance mode to prevent USB suspend issues. I tried that on the Xavier using the jetson_clocks.sh script, but it didn't make any difference in whether the video worked or not.
I tried Cheese, the webcam app, and that worked fine, so the camera and board combination seems to work.
I used to be able to use the GTK UVC video viewer on the TX1/TX2, but that isn't working either; it crashes whenever I have a camera connected.