dusty-nv / jetson-inference

Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
https://developer.nvidia.com/embedded/twodaystoademo
MIT License
7.85k stars 2.98k forks

failed to capture frame / convert from NV12 to RGBA #184

Closed Sijoma closed 1 year ago

Sijoma commented 6 years ago

Hey,

I am trying to use the ZED stereo camera in combination with detectnet-camera. DEFAULT_CAMERA is set to 0 so that it uses the V4L2 pipeline rather than the onboard camera. However, I get the following error. It also does not work with my webcam. I am using a GTX 1070 in my laptop and added this to my CMakeLists.txt: -gencode arch=compute_61,code=sm_61.

detectnet-camera args (2): 0 [./detectnet-camera] 1 [pednet]

[gstreamer] initialized gstreamer, version 1.8.3.0
[gstreamer] gstreamer decoder pipeline string: v4l2src device=/dev/video0 ! video/x-raw, width=(int)1280, height=(int)720, format=RGB ! videoconvert ! video/x-raw, format=RGB ! videoconvert ! appsink name=mysink

detectnet-camera: successfully initialized video device
width: 1280
height: 720
depth: 24 (bpp)

detectNet -- loading detection network model from:
-- prototxt networks/ped-100/deploy.prototxt
-- model networks/ped-100/snapshot_iter_70800.caffemodel
-- input_blob 'data'
-- output_cvg 'coverage'
-- output_bbox 'bboxes'
-- mean_pixel 0.000000
-- threshold 0.500000
-- batch_size 2

[GIE] TensorRT version 3.0, build 3001
[GIE] attempting to open cache file networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] loading network profile from cache... networks/ped-100/snapshot_iter_70800.caffemodel.2.tensorcache
[GIE] platform does not have FP16 support.
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel loaded
[GIE] CUDA engine context initialized with 3 bindings
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input binding index: 0
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel input dims (b=2 c=3 h=512 w=1024) size=12582912
[cuda] cudaAllocMapped 12582912 bytes, CPU 0x10217c00000 GPU 0x10217c00000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage binding index: 1
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 0 coverage dims (b=2 c=1 h=32 w=64) size=16384
[cuda] cudaAllocMapped 16384 bytes, CPU 0x10218800000 GPU 0x10218800000
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes binding index: 2
[GIE] networks/ped-100/snapshot_iter_70800.caffemodel output 1 bboxes dims (b=2 c=4 h=32 w=64) size=65536
[cuda] cudaAllocMapped 65536 bytes, CPU 0x10218a00000 GPU 0x10218a00000
networks/ped-100/snapshot_iter_70800.caffemodel initialized.
[cuda] cudaAllocMapped 16 bytes, CPU 0x1020a600200 GPU 0x1020a600200
maximum bounding boxes: 8192
[cuda] cudaAllocMapped 131072 bytes, CPU 0x10218c00000 GPU 0x10218c00000
[cuda] cudaAllocMapped 32768 bytes, CPU 0x10218a10000 GPU 0x10218a10000
default X screen 0: 1920 x 1080
[OpenGL] glDisplay display window initialized
[OpenGL] creating 1280x720 texture
loaded image fontmapA.png (256 x 512) 2097152 bytes
[cuda] cudaAllocMapped 2097152 bytes, CPU 0x10218e00000 GPU 0x10218e00000
[cuda] cudaAllocMapped 8192 bytes, CPU 0x10218804000 GPU 0x10218804000
[gstreamer] gstreamer transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> videoconvert0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> videoconvert0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer msg new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> videoconvert0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer msg stream-start ==> pipeline0
[gstreamer] gstreamer decoder onEOS
[gstreamer] gstreamer v4l2src0 ERROR Internal data flow error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: streaming task paused, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink

detectnet-camera: camera open for streaming

detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4)imgRGBA, make_float2(0.0f, 255.0f), (float4)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /mnt/Uni/WS1718/DepthSensing/SAVEFROMUBUNTU/jetson-inference/detectnet-camera/detectnet-camera.cpp:247
[cuda] registered 14745600 byte openGL texture for interop access (1280x720)

detectnet-camera: failed to capture frame
detectnet-camera: failed to convert from NV12 to RGBA
detectNet::Detect( 0x(nil), 1280, 720 ) -> invalid parameters
[cuda] cudaNormalizeRGBA((float4)imgRGBA, make_float2(0.0f, 255.0f), (float4)imgRGBA, make_float2(0.0f, 1.0f), camera->GetWidth(), camera->GetHeight())
[cuda] invalid device pointer (error 17) (hex 0x11)
[cuda] /mnt/Uni/WS1718/DepthSensing/SAVEFROMUBUNTU/jetson-inference/detectnet-camera/detectnet-camera.cpp:247

dusty-nv commented 6 years ago

Hi Simon, it looks like you need to modify the gstCamera::ConvertRGBA() function, which is currently set up for NV12. It converts the frame to FP32 floating-point RGBA. You will want to make sure your camera's format gets handled there.


Sijoma commented 6 years ago

Hey Dusty, thank you for your quick reply. My camera is in YUYV format, and I'm unsure what you mean exactly. Is there another conversion function already implemented other than ConvertRGBA()? Or should I modify ConvertRGBA()?

I found this issue from a year ago, where they also discussed using the ZED camera. However, it did not resolve my issue: https://github.com/dusty-nv/jetson-inference/issues/5

On another note: I just noticed that my webcam started working.

dusty-nv commented 6 years ago

The contents of ConvertRGBA() need to be modified for your colorspace (YUYV in this case). There is a CUDA function for this, I believe in cudaYUV.h, that you can replace the NV12 function call with. Hope that helps!


kanakiyab commented 6 years ago

Hi Dusty,

I am wondering why you have not written a kernel for converting I420 to RGB/RGBA, more from a conceptual perspective than anything.

I tried to write a kernel following your code samples, but I see some artifacts. I have started a thread on the dev forums. Please have a look and comment if possible.

Thanks.

eanmikale commented 4 years ago

@Jason_Fenwick @dusty-nv - We have tried to use the original ZED for inference, imageNet in particular. Could you be more specific about how to modify the cudaYUV.h file? Or do you still recommend GStreamer instead? Also, when attempting to run two DNNs in parallel in the background using imageNet, launched from an executable desktop file, we get two windows: one running the first DNN, while the second window is blank. When adding a semicolon after the first executable and then the path to the second, only one exec runs at a time, with the second appearing only after the first stream is cut. We'd like to split the GPU between two DNNs, while making 4 CPU cores available for each execution. So: one launch, two DNNs, and two windows for inference. Thank you!