Closed neilyoung closed 3 years ago
The crash does not happen if I operate the Jetson in 5W mode. So maybe a combination of power supply and overheating...
I've now attached a fan and run the Jetson in 5W mode. There shouldn't be any heat problems anymore. The power supply is able to provide at least 12 W. All USB cameras are attached to a powered USB hub.
The object detection runs at 16 fps total, about 5 fps for each camera.
However, the entire solution crashes very quickly...
That being said, at the moment the Jetson Nano with its GPU cannot compete with an RPi 4B plus a Coral TPU. That combination not only performs at > 22 fps with three cams, it is also rock stable :(
I think I found the reason for the crash: I'm now using two networks of the same kind and feed them separately from each cam, instead of just one network. That doesn't crash anymore. The fan is still attached (I guess it won't work without it), but the power supply is still the Raspberry Pi plug with 5V/2.5A. Both windows display 24 fps, but I don't believe it.
Power mode back to 10 W
At least it doesn't crash anymore. Hurray...
import jetson.inference
import jetson.utils

net1 = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
net2 = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera1 = jetson.utils.videoSource("/dev/video0")    # '/dev/video0' for V4L2
camera2 = jetson.utils.videoSource("/dev/video1")    # '/dev/video1' for V4L2
display1 = jetson.utils.videoOutput("display://0")   # 'my_video.mp4' for file
display2 = jetson.utils.videoOutput("display://1")

while display1.IsStreaming():
    img1 = camera1.Capture()
    detections1 = net1.Detect(img1)
    display1.Render(img1)
    display1.SetStatus("Object Detection | Network {:.0f} FPS".format(net1.GetNetworkFPS()))

    img2 = camera2.Capture()
    detections2 = net2.Detect(img2)
    display2.Render(img2)
    display2.SetStatus("Object Detection | Network {:.0f} FPS".format(net2.GetNetworkFPS()))
If it doesn't happen in 5W mode, and if when it does happen the Nano completely shuts off, then it is very likely a power supply issue. I recommend upgrading your power supply to a 5V/4A barrel jack adapter.
@dusty-nv It didn't happen again in 2-cam mode with the changes mentioned in my last post (two nets). But it is constantly happening with 3 cams. I will consider purchasing a better power plug. Thanks
Hi! I bought the "LEICKE 5V 4A" power supply, and it works perfectly.
Yes, I will get mine on Friday. BTW: Is the problem you mentioned with inference and RTSP somehow related to mine here https://github.com/dusty-nv/jetson-inference/issues/885?
Hi! I've run your code on my Jetson, and it gives me 24 fps for each window, but that's the network speed. To see the display frame rate I used display1.GetFrameRate(), and that gives me 5 fps for each window.
Here's the code:
import jetson.inference
import jetson.utils

net1 = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
#net2 = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera1 = jetson.utils.videoSource("/dev/video0")    # '/dev/video0' for V4L2
camera2 = jetson.utils.videoSource("/dev/video1")    # '/dev/video1' for V4L2
display1 = jetson.utils.videoOutput("display://0")   # 'my_video.mp4' for file
display2 = jetson.utils.videoOutput("display://1")

while display1.IsStreaming():
    img1 = camera1.Capture()
    detections1 = net1.Detect(img1)
    display1.Render(img1)
    display1.SetStatus("Object Detection | Display {:.0f} FPS".format(display1.GetFrameRate()))

    img2 = camera2.Capture()
    detections2 = net1.Detect(img2)
    display2.Render(img2)
    display2.SetStatus("Object Detection | Network {:.0f} FPS".format(net1.GetNetworkFPS()))
I've also modified your code a bit to use OpenCV and GStreamer instead of the jetson.utils methods.
For me it runs at 20 fps on one camera and about 6 fps on the second one, detection included. I've edited this code super fast, just for a comparison, so it's not the best way to achieve multi-camera image processing.
The reason for this choice is that I find jetson-inference very buggy and not so well documented; on the other hand, I'm used to working with OpenCV in my personal projects.
It's been only a month since I got a Jetson Nano, so I'm still figuring out how it works :)
Btw, here is the code for OpenCV + GStreamer that I used. Please note that I'm also still figuring out how GStreamer works, but I know that it is very capable and NVIDIA DeepStream uses it, so I'm considering learning DeepStream as well. That library seems to be the best option for this kind of work on the Jetson, except for the fact that hardly anybody uses it. https://gist.github.com/FedericoLanzani/dc0dcb3c82c00f718766bca346a03720
Bye!
Yes, I will get mine on Friday. BTW: Is the problem you mentioned with inference and RTSP somehow related to mine here #885?
Mhm, I don't think it is somehow related, but I see that you found a solution.
In fact the contributor of this project found one, but he also could not reproduce the issue...
I will provide my results with 2 cams, maybe with 3, on Friday. But I can at least comment on the 24 or 15 fps claim the network gives: in fact one can see that the real fps is way lower for each separate cam.
On Nano with SSD-Mobilenet-v2 (the 90-class COCO model), you will get around ~24 FPS total for the network. With a higher batch size you could get more FPS, but my code is setup for batch_size=1. The other Jetson devices also get higher FPS of course.
Also, retraining the model with only the classes you want will greatly improve the FPS. Most folks do not need the full 90 classes from the COCO model - that is a lot of classes for a detection model. Instead you could pick and choose, as in this part of the tutorial: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
Thanks for your comment, Dusty. What exactly is this batch size and how could that be increased?
The batch size means how many images are processed at once. Since you have N cameras, in theory you could use a batch size of N. It's not easily changeable in my code though, as it requires the pre/post-processing to be set up for batching too, and most users of my library only need batch=1. The TensorRT samples show batching, but I'm not sure those explicitly do SSD-Mobilenet.
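To make the batch-size idea concrete, here is a generic NumPy sketch (not the jetson-inference API; the NCHW layout and the 300x300 SSD input size are just the usual convention):

```python
import numpy as np

def make_batch(frames):
    """Stack per-camera frames along a new leading axis -> one NCHW batch."""
    return np.stack(frames, axis=0)

# Three cameras, each delivering one preprocessed 300x300 RGB frame (CHW layout):
frames = [np.zeros((3, 300, 300), dtype=np.float32) for _ in range(3)]
batch = make_batch(frames)
print(batch.shape)  # -> (3, 3, 300, 300): one engine execution instead of three at batch_size=1
```

The pre/post-processing caveat above is exactly this stacking (and the corresponding per-image split of the detection outputs afterwards).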
Also DeepStream can handle batching automatically and has optimized object detection models. The Transfer Learning Toolkit can be used to prune the models and make them faster, and run them with DeepStream.
Hi dusty, thank you for your time. I've tried to implement DeepStream in my project, but very few people seem to be using it and I didn't find courses or guides that teach how to use it; the documentation is also a bit confusing to me. I've also had problems using nvidia-docker (on Ubuntu). Surely DeepStream seems VERY interesting and with a lot of potential for my projects, but I found it complex to learn. I'd be happy if you could suggest some courses or good guides that teach how DeepStream works and how it can be implemented.
DeepStream has many users, they just tend to be for production deployment and using multiple camera streams. Check out the DeepStream forums and these Python samples:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps
Hi Dusty, this sounds interesting. Unfortunately I don't currently have the time to do a retraining. Would you by chance have a retrained model (e.g. persons only) for a quick test and assessment?
Got the 20 W power plug. Same issue. Crashes with 3 USB cams.
When you say crashes, do you mean that the Nano powers off? Or that the app crashes to the desktop? If the latter, what error/exception is thrown?
I don't have the person-only model, but there is one with DeepStream/TLT.
Yes, the videos freeze for a second then it all goes down and the entire machine reboots.
OK, thanks for checking for the model. Would it be possible to point me to that DeepStream model? Right now the Jetson is going to lose the race against the Coral for performance and stability. I will give it at least another week before I give up, since I think your solution has the better future.
Shutting off entirely means it's likely a hardware problem, either power or USB related. Hopefully the previous kernel log is saved and you can check that; otherwise you can capture the kernel log on another machine via the debug UART while it's running.
This page has the PeopleNet model -
https://ngc.nvidia.com/catalog/models/nvidia:tlt_peoplenet
See the section Instructions to deploy these models with DeepStream
I was running dmesg -wH in an SSH console, to no avail. Need to find the kernel log, though. Found a reference to a serial adapter setup. I think I have such a dongle on the shelf. Will try...
https://www.jetsonhacks.com/2019/04/19/jetson-nano-serial-console/
The situation is absolutely reproducible. It starts, it runs (sometimes it doesn't reach image display state), it freezes, it reboots.
The crash dump is in the first few lines of this log; the rest is just the reboot. This is the log dumped from the screen terminal app on macOS Big Sur 11.1.
I suppose it will remain a mystery... :(
It could be a problem with the USB. I will get 3 identical new USB cams tomorrow, let's see what happens then.
Is there any good explanation for this? I'm running your my-detection.py sample with just one USB camera. While the network reports 24 fps, the display frame rate is only 15. From the visual impression I can confirm these 15 fps.
EDIT: The camera gives 60 fps if I drop the inference.
It could be a problem with the USB. I will get 3 identical new USB cams tomorrow, let's see what happens then.
You might want to re-arrange how you are plugging them in, trying a different hub/etc., to see if that is related.
While the network reports 24 fps, the display frame rate is only 15. From the visible impression I can confirm these 15 fps.
There is other processing outside of the network, such as drawing the bounding-box overlays, rendering with OpenGL, etc. In particular I think it's the OpenGL synchronization that is slowing it down; I need to dig into this more. I think if you run it headless (i.e. streaming out via RTP) the overall FPS may be higher.
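The gap between the two numbers can be put in a simple per-frame time budget: stages run sequentially, so their times add up, and the overhead is whatever is left after inference. A small back-of-the-envelope sketch (the 24 and 15 fps figures are from this thread; attributing all the remainder to capture/overlay/OpenGL sync is an assumption):

```python
def total_fps(stage_times_ms):
    """Stages run sequentially per frame, so their times add up."""
    return 1000.0 / sum(stage_times_ms)

infer_ms = 1000.0 / 24           # ~41.7 ms: the 24 FPS network-only figure
end_to_end_ms = 1000.0 / 15      # ~66.7 ms: the 15 FPS observed on screen
overhead_ms = end_to_end_ms - infer_ms   # time left for capture, overlay, OpenGL sync
print(round(overhead_ms, 1))                          # -> 25.0
print(round(total_fps([infer_ms, overhead_ms]), 1))   # -> 15.0
```

So removing roughly 25 ms of non-inference work per frame is what a headless run would have to achieve to approach the network-only rate.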
OK, makes sense. I will create a headless version tomorrow, just monitoring the detections and the fps achieved without drawing. My production approach is headless anyway.
BTW, from your experience: would one single USB 3 hub be capable of taking the load of 3 cams? Well, the hub of course, but the uplink too?
Anyway, I will get three USB cams of one kind tomorrow and will test again then.
It depends on the cameras and their bandwidth, I think. Have you tried plugging them into the individual USB ports on the Nano?
Usually each one is plugged into a different port. This crashes.
OK, with the 3 new USB 2 cams there is no crash anymore, but the frame rate is poor: about 6 fps after inference, with a somewhat suboptimal capture process and in headless mode. An additional display costs about 1 fps per camera. But the input is 640x480 MJPEG only...
At least it doesn't crash anymore, and I will pin my hopes on another model then.
With 1280 x 720 as input the whole process does not even start, because gstv4l2src_decide_allocation complains about "Buffer pool allocation failed". The camera does not support anything greater than 640 x 480 and smaller than HD.
This contradicts the claim of 24 fps for the inference engine. In practice I only see 18, max 19. I suppose this number is hardcoded somewhere...
But the low frame rate could also have another explanation: I cannot achieve more than 11 fps RAW from the cameras on the Jetson. On other machines the behaviour is also strange: sometimes 30 fps is no problem, but sometimes it is also only about 12 fps.
Would you mind giving this little test a chance? It uses OpenCV (so it runs out of the box on a Jetson Nano). It opens the specified cam (index 0 by default) and captures 200 frames. It then prints out the achieved frame rate.
My result is a very stable 11.2 fps at 640x480 on all cameras. On a couple of other devices, including an RPi 4, I'm achieving at least 20 fps...
https://gist.github.com/neilyoung/58b0ab82532ba1c388c3d60edf7c40ad
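The core of such a measurement is small. This is a self-contained sketch of the same idea, with the cv2.VideoCapture read swapped for a stub grabber so the timing logic is visible (the gist above is the real test):

```python
import time

def measure_fps(grab_frame, n_frames=200):
    """Grab n_frames and return the achieved frame rate (frames per second)."""
    start = time.perf_counter()
    for _ in range(n_frames):
        grab_frame()
    elapsed = time.perf_counter() - start
    return n_frames / elapsed

# Stub grabber standing in for cv2.VideoCapture.read(): a source that
# delivers a frame roughly every 5 ms should measure somewhere near 200 fps
fps = measure_fps(lambda: time.sleep(0.005), n_frames=50)
print(round(fps))
```

Note the measured rate includes whatever blocking the grabber does, which is exactly why it reflects the camera's real delivery rate rather than a nominal one.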
The 24 FPS is for inferencing only, with SSD-Mobilenet-v2 on Nano (the 90-class COCO model). It doesn't include overhead from camera capture or display.
If your camera supports a lower resolution, you could try that to see if it improves the capture FPS, because the image gets downsampled to 300x300 anyway for SSD-Mobilenet.
It doesn't appear that you are using hardware MJPEG decoding with GStreamer, but rather CPU-only decode. Also, for multiple cameras, typically each would have its own capture thread (maybe they already do through OpenCV) and then be processed in another inference thread.
video-viewer v4l2:///dev/video0 --input-codec=raw
confirms my measurements. With or without --input-codec=raw: the same poor frame rate on just a single cam, 11 fps.
Are you sure it isn't defaulting to raw? You could try running it with --input-codec=mjpeg and check the gstreamer pipeline it prints out in the logs that it uses.
You can also try enabling HW decode by changing this back to nvjpegdec:
Then re-run make and sudo make install
I switched it from HW mjpeg decode to CPU decode because some legacy mjpeg cameras are non-conformant. Newer H264 cameras tend to work more smoothly than the MJPEG ones.
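The switch described above boils down to which decoder element lands in the GStreamer pipeline string. A hypothetical sketch (jpegdec and nvjpegdec are real GStreamer element names, but the exact pipeline jetson-utils builds may differ):

```python
def mjpeg_pipeline(device="/dev/video0", width=640, height=480, use_hw=False):
    # CPU path uses the software jpegdec element; the Jetson HW path swaps in nvjpegdec
    decoder = "nvjpegdec" if use_hw else "jpegdec"
    return (f"v4l2src device={device} ! "
            f"image/jpeg, width={width}, height={height} ! "
            f"{decoder} ! videoconvert")

print(mjpeg_pipeline())             # software decode on the CPU
print(mjpeg_pipeline(use_hw=True))  # hardware-accelerated decode
```

Comparing this against the pipeline that video-viewer prints in its logs is a quick way to see which decoder is actually in use.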
Are you sure it isn't defaulting to raw? You could try running it with --input-codec=mjpeg and check the gstreamer pipeline it prints out in the logs that it uses.
Yes, it is already set to mjpeg. I open the cameras like so:
cameraN = jetson.utils.videoSource("/dev/video0", argv=["--input-codec=mjpeg", "--input-width=640", "--input-height=480"])
I can see that it works, since the default resolution is an HD format and it reacts to my 640 x 480 setting.
video-viewer confirms the low input frame rate (see my posting above).
The 24FPS is for inferencing only, with SSD-Mobilenet-v2 on Nano (the 90-class COCO model).
Sure. That was clear to me.
It doesn't include overhead from camera capture or display. If your camera supports lower resolution, you could try that to see if it improves the capture FPS,
No changes at 320 x 240.
It doesn't appear that you are using hardware MJPEG decoding with GStreamer, rather CPU-only decode.
At least I believe that I do :)
Also for multiple cameras typically each would have their own thread (maybe they already do through OpenCV) and then processed in another inference thread.
Thanks for reminding me. Will give that a try.
You can also try enabling HW decode by changing this back to nvjpegdec: https://github.com/dusty-nv/jetson-utils/blob/833fc7998e34d852672277730a11aeed90024959/camera/gstCamera.cpp#L181 Then re-run make and sudo make install I switched it from HW mjpeg decode to CPU decode because some legacy mjpeg cameras are non-conformant. Newer H264 cameras tend to work more smoothly than the MJPEG ones.
mkdir build && cd build && cmake .. && make && sudo make install
Correct?
And since you have asked: the gstreamer log just shows a small warning regarding jpegdec0 (don't have it on top of my head right now, will provide it later).
Yes, although you may already have a jetson-inference/build dir from building the project previously
I think now I need some help to restore the previous version. I didn't have the jetson-utils project on this machine before. Now two things are not going well:
1) The deserialization of a model takes 6 times as long (5 s before, 30 s now). This might not have to do with the change, but rather with the fact that I patched the utils.
2) (python3:12320): CRITICAL: 17:47:29.693: gst_adapter_push: assertion 'GST_IS_BUFFER (buf)' failed
Segmentation fault (core dumped)
I'm trying to revert to the original version by reverting the change. I hope this will at least remove the assertion. BTW: did I build a debug version, or is RELEASE the default (as expected)?
No, even if I revert that, the assertion remains. Time for a new SD setup from scratch?
Just in case it sheds some light on the scene: this is the status now with a self-compiled jetson-utils, but without the suggested change, so just the repo cloned and built. What happens if I start my test app now is this:
python3 first_attempt.py
jetson.inference -- detectNet loading build-in network 'ssd-mobilenet-v2'
detectNet -- loading detection network model from:
-- model networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
-- input_blob 'Input'
-- output_blob 'NMS'
-- output_count 'NMS_1'
-- class_labels networks/SSD-Mobilenet-v2/ssd_coco_labels.txt
-- threshold 0.500000
-- batch_size 1
[TRT] TensorRT version 7.1.3
[TRT] loading NVIDIA plugins...
[TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[TRT] Registered plugin creator - ::NMS_TRT version 1
[TRT] Registered plugin creator - ::Reorg_TRT version 1
[TRT] Registered plugin creator - ::Region_TRT version 1
[TRT] Registered plugin creator - ::Clip_TRT version 1
[TRT] Registered plugin creator - ::LReLU_TRT version 1
[TRT] Registered plugin creator - ::PriorBox_TRT version 1
[TRT] Registered plugin creator - ::Normalize_TRT version 1
[TRT] Registered plugin creator - ::RPROI_TRT version 1
[TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[TRT] Could not register plugin creator - ::FlattenConcat_TRT version 1
[TRT] Registered plugin creator - ::CropAndResize version 1
[TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[TRT] Registered plugin creator - ::Proposal version 1
[TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[TRT] Registered plugin creator - ::Split version 1
[TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[TRT] detected model format - UFF (extension '.uff')
[TRT] desired precision specified for GPU: FASTEST
[TRT] requested fasted precision for device GPU without providing valid calibrator, disabling INT8
[TRT] native precisions detected for GPU: FP32, FP16
[TRT] selecting fastest native precision for GPU: FP16
[TRT] attempting to open engine cache file networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT] loading network plan from engine cache... networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff.1.1.7103.GPU.FP16.engine
[TRT] device GPU, loaded networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff
[TRT] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[TRT] Deserialize required 5625205 microseconds.
[TRT]
[TRT] CUDA engine context initialized on device GPU:
[TRT] -- layers 116
[TRT] -- maxBatchSize 1
[TRT] -- workspace 0
[TRT] -- deviceMemory 40755712
[TRT] -- bindings 3
[TRT] binding 0
-- index 0
-- name 'Input'
-- type FP32
-- in/out INPUT
-- # dims 3
-- dim #0 3 (SPATIAL)
-- dim #1 300 (SPATIAL)
-- dim #2 300 (SPATIAL)
[TRT] binding 1
-- index 1
-- name 'NMS'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 100 (SPATIAL)
-- dim #2 7 (SPATIAL)
[TRT] binding 2
-- index 2
-- name 'NMS_1'
-- type FP32
-- in/out OUTPUT
-- # dims 3
-- dim #0 1 (SPATIAL)
-- dim #1 1 (SPATIAL)
-- dim #2 1 (SPATIAL)
[TRT]
[TRT] binding to input 0 Input binding index: 0
[TRT] binding to input 0 Input dims (b=1 c=3 h=300 w=300) size=1080000
[TRT] binding to output 0 NMS binding index: 1
[TRT] binding to output 0 NMS dims (b=1 c=1 h=100 w=7) size=2800
[TRT] binding to output 1 NMS_1 binding index: 2
[TRT] binding to output 1 NMS_1 dims (b=1 c=1 h=1 w=1) size=4
[TRT]
[TRT] device GPU, networks/SSD-Mobilenet-v2/ssd_mobilenet_v2_coco.uff initialized.
[TRT] W = 7 H = 100 C = 1
[TRT] detectNet -- maximum bounding boxes: 100
[TRT] detectNet -- loaded 91 class info entries
[TRT] detectNet -- number of object classes: 91
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video2, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.4, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.2, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 14 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [6] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile: codec=mjpeg format=unknown width=640 height=480
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)640, height=(int)480 ! nvjpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 640
-- height: 480
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video1
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video2, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.4, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 14 caps for v4l2 device /dev/video1
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [6] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile: codec=mjpeg format=unknown width=640 height=480
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video1 ! image/jpeg, width=(int)640, height=(int)480 ! nvjpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video1
[video] created gstCamera from v4l2:///dev/video1
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video1
- protocol: v4l2
- location: /dev/video1
- port: 1
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 640
-- height: 480
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video2
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video2, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.4, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 14 caps for v4l2 device /dev/video2
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [6] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile: codec=mjpeg format=unknown width=640 height=480
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video2 ! image/jpeg, width=(int)640, height=(int)480 ! nvjpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video2
[video] created gstCamera from v4l2:///dev/video2
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video2
- protocol: v4l2
- location: /dev/video2
- port: 2
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 640
-- height: 480
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://1
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://1
- protocol: display
- location: 1
- port: 1
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://2
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://2
- protocol: display
- location: 2
- port: 2
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
Starting...
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvjpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvjpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvjpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1008 vs 460807)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)jpeg, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=640 height=480 size=460807
RingBuffer -- allocated 4 buffers (460807 bytes each, 1843228 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (921600 bytes each, 3686400 bytes total)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter3
[gstreamer] gstreamer changed state from NULL to READY ==> nvjpegdec1
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter2
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src1
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter3
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvjpegdec1
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter2
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src1
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline1
[gstreamer] gstreamer message new-clock ==> pipeline1
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter3
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvjpegdec1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src1
[gstreamer] gstreamer message stream-start ==> pipeline1
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1008 vs 460807)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)jpeg, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=640 height=480 size=460807
RingBuffer -- allocated 4 buffers (460807 bytes each, 1843228 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline1
RingBuffer -- allocated 4 buffers (921600 bytes each, 3686400 bytes total)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter5
[gstreamer] gstreamer changed state from NULL to READY ==> nvjpegdec2
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter4
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src2
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline2
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter5
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvjpegdec2
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter4
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src2
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline2
[gstreamer] gstreamer message new-clock ==> pipeline2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter5
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvjpegdec2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter4
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src2
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message stream-start ==> pipeline2
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1008 vs 460807)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)640, height=(int)480, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)jpeg, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=640 height=480 size=460807
RingBuffer -- allocated 4 buffers (460807 bytes each, 1843228 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline2
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline2
RingBuffer -- allocated 4 buffers (921600 bytes each, 3686400 bytes total)
** (python3:7221): CRITICAL **: 18:00:03.219: gst_adapter_push: assertion 'GST_IS_BUFFER (buf)' failed
Segmentation fault (core dumped)
neil@jetson:~/jetson-inference/build/aarch64/bin$
Rebuilt it with `-DCMAKE_BUILD_TYPE=Release`. Assertion gone.
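For the record, the Release rebuild amounts to something like this (a sketch assuming the standard jetson-inference build layout under the home directory; adjust paths to your checkout):

```shell
# Rebuild jetson-inference with a Release configuration (paths are assumptions)
cd ~/jetson-inference/build
cmake -DCMAKE_BUILD_TYPE=Release ../
make -j$(nproc)
sudo make install
sudo ldconfig
```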
OK, I have now tested @FedericoLanzani's recipe: https://gist.github.com/FedericoLanzani/dc0dcb3c82c00f718766bca346a03720
1) I can open two cameras and achieve about 21 fps (!) with inference.
2) I cannot open three cameras at 640 x 480, because I run into memory problems.
[WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (1757) handleMessage OpenCV | GStreamer warning: Embedded video playback halted; module v4l2src2 reported: Failed to allocate required memory.
3) I think the main difference is that Frederico really opens the cam in video/x-raw mode. I have the suspicion that these cameras have a hardware problem with MJPEG compression.
This is the pipeline Frederico uses:
camSet1 = f'v4l2src device={video_source1} ! video/x-raw, width=640, height=480, framerate=30/1 ! videoconvert ! appsink'
This works faster, but needs more memory.
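For comparison, here is a minimal sketch of how the two pipeline variants discussed in this thread can be built and opened with OpenCV. The `make_pipeline` helper is my own illustration (not Frederico's exact code), and it assumes an OpenCV build with GStreamer support:

```python
# Sketch: build the raw vs. MJPEG GStreamer pipeline strings discussed above.
# make_pipeline() is a hypothetical helper, shown for illustration only.
def make_pipeline(device, use_mjpeg=False, width=640, height=480, fps=30):
    if use_mjpeg:
        # MJPEG from the camera, decoded on the CPU (works with three cams,
        # but adds latency from the jpegdec step)
        return (f"v4l2src device={device} ! "
                f"image/jpeg, width={width}, height={height}, framerate={fps}/1 ! "
                f"jpegdec ! videoconvert ! appsink")
    # Raw frames from the camera -- faster here, but needs more USB
    # bandwidth and memory, so three cams may fail to open
    return (f"v4l2src device={device} ! "
            f"video/x-raw, width={width}, height={height}, framerate={fps}/1 ! "
            f"videoconvert ! appsink")

if __name__ == "__main__":
    try:
        import cv2  # requires OpenCV compiled with GStreamer support
        cap = cv2.VideoCapture(make_pipeline("/dev/video0"), cv2.CAP_GSTREAMER)
        ok, frame = cap.read()
        print("frame captured:", ok)
    except ImportError:
        print(make_pipeline("/dev/video0"))
```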
@FedericoLanzani With this pipeline I can achieve about 20 fps for three cameras (I can open three cameras now):
camSet1 = f'v4l2src device={video_source1} ! image/jpeg, width=640, height=480, framerate=30/1 ! jpegdec ! videoconvert ! appsink'
but the latency is too high: about 1.5 seconds.
@dusty-nv Hi Dustin, would it be possible to share the GStreamer pipeline you are using, or to tell me how I could see it? I'm trying to understand why these two pipelines provide more fps than using the videoSource object of your library:
// This one does not work with three cams on my 4GB Nano (memory not sufficient), but with two cams it runs at 15 fps
camSet1 = f'v4l2src device={video_source1} ! video/x-raw, width=640, height=480, framerate=30/1 ! videoconvert ! appsink'
// This one works with three cams and produces 20 fps on each of the cams
camSet1 = f'v4l2src device={video_source1} ! image/jpeg, width=640, height=480, framerate=30/1 ! jpegdec ! videoconvert ! appsink'
With your videoSource I'm achieving 5 fps on each cam, and I don't really get it.
When you run video-viewer, it will print out the GStreamer pipeline that it uses as it is creating the videoSource object.
Here it is with one camera. The display shows 11 fps, which is consistent across all these cams, and it doesn't explain the higher fps with the OpenCV code:
neil@jetson:~/jetson-inference/build/aarch64/bin$ video-viewer v4l2:///dev/video0
[gstreamer] initialized gstreamer, version 1.14.5.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video2, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.4, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video1, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.3, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found v4l2 device: HBV HD CAMERA
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HBV\ HD\ CAMERA", v4l2.device.bus_info=(string)usb-70090000.xusb-2.2, v4l2.device.version=(uint)264588, v4l2.device.capabilities=(uint)2216689665, v4l2.device.device_caps=(uint)69206017;
[gstreamer] gstCamera -- found 14 caps for v4l2 device /dev/video0
[gstreamer] [0] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [1] video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [2] video/x-raw, format=(string)YUY2, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [3] video/x-raw, format=(string)YUY2, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [4] video/x-raw, format=(string)YUY2, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [5] video/x-raw, format=(string)YUY2, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [6] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [7] image/jpeg, width=(int)1920, height=(int)1080, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [8] image/jpeg, width=(int)1280, height=(int)720, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [9] image/jpeg, width=(int)640, height=(int)480, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [10] image/jpeg, width=(int)352, height=(int)288, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [11] image/jpeg, width=(int)320, height=(int)240, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [12] image/jpeg, width=(int)176, height=(int)144, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] [13] image/jpeg, width=(int)160, height=(int)120, pixel-aspect-ratio=(fraction)1/1, framerate=(fraction){ 30/1, 25/1, 20/1, 15/1, 10/1, 5/1 };
[gstreamer] gstCamera -- selected device profile: codec=mjpeg format=unknown width=1280 height=720
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 ! image/jpeg, width=(int)1280, height=(int)720 ! jpegdec ! video/x-raw ! appsink name=mysink
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[video] created gstCamera from v4l2:///dev/video0
------------------------------------------------
gstCamera video options:
------------------------------------------------
-- URI: v4l2:///dev/video0
- protocol: v4l2
- location: /dev/video0
-- deviceType: v4l2
-- ioType: input
-- codec: mjpeg
-- width: 1280
-- height: 720
-- frameRate: 30.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video] created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
-- URI: display://0
- protocol: display
- location: 0
-- deviceType: display
-- ioType: output
-- codec: raw
-- width: 1920
-- height: 1080
-- frameRate: 0.000000
-- bitRate: 0
-- numBuffers: 4
-- zeroCopy: true
-- flipMethod: none
-- loop: 0
------------------------------------------------
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> jpegdec0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> jpegdec0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> jpegdec0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (1382400 vs 1382407)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)1280, height=(int)720, interlace-mode=(string)progressive, multiview-mode=(string)mono, multiview-flags=(GstVideoMultiviewFlagsSet)0:ffffffff:/right-view-first/left-flipped/left-flopped/right-flipped/right-flopped/half-aspect/mixed-mono, pixel-aspect-ratio=(fraction)1/1, chroma-site=(string)mpeg2, colorimetry=(string)1:4:0:0, framerate=(fraction)30/1
[gstreamer] gstCamera -- recieved first frame, codec=mjpeg format=i420 width=1280 height=720 size=1382407
RingBuffer -- allocated 4 buffers (1382407 bytes each, 5529628 bytes total)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (2764800 bytes each, 11059200 bytes total)
video-viewer: captured 1 frames (1280 x 720)
[OpenGL] glDisplay -- set the window size to 1280x720
[OpenGL] creating 1280x720 texture (GL_RGB8 format, 2764800 bytes)
[cuda] registered openGL texture for interop access (1280x720, GL_RGB8, 2764800 bytes)
[gstreamer] gstreamer message qos ==> jpegdec0
[gstreamer] gstreamer message qos ==> jpegdec0
video-viewer: captured 2 frames (1280 x 720)
video-viewer: captured 3 frames (1280 x 720)
video-viewer: captured 4 frames (1280 x 720)
Wait. With 640 x 480 selected as options it starts again at 11 fps, then goes up to 31, then drops down again... Man, any explanation for this behaviour? If it runs at 30 fps the latency is near zero.
Sometimes a trace "gstreamer message qos ==> v4l2src0" appears, but this seems to have no impact...
BTW: I noticed this fps toggling already on my MacBook... Strange...
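One thing worth checking when the fps toggles like this: `net.GetNetworkFPS()` reflects inference time, not the end-to-end capture rate, so measuring the loop rate yourself can show whether the camera or the network is the bottleneck. A minimal sketch (the `FpsMeter` helper is my own, not part of jetson-inference):

```python
import time
from collections import deque

class FpsMeter:
    """Rolling-average FPS meter; call tick() once per captured frame."""
    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self):
        self.stamps.append(time.monotonic())

    def fps(self):
        # Need at least two timestamps to compute a rate.
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

In the capture loop you would call `meter.tick()` right after `camera.Capture()` and print `meter.fps()` next to `net.GetNetworkFPS()`; if the two numbers diverge, the stalls are in capture/display rather than in the network.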
Hi,
I took this sample code and used it with `/dev/video`, a Logitech C920. I was using a 5V 2.5A Raspberry Pi power supply via micro USB. No fan.
Generally this worked fine at a reported 24 fps. The heat sink got pretty hot, though; the reported AO temperature after the test was about 68 degrees.
Then I plugged in another USB cam (ESP 1080) and simply duplicated the code.
This is for sure not optimal, but it worked: I got two windows, one per cam. The reported FPS was still 24, but the motion in the displayed images showed me that those 24 fps were split evenly, so each cam got about 12 fps.
But... it crashed after several minutes. The Jetson switched off. Completely.
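Duplicating the code still runs both cameras sequentially in one loop, so each iteration waits on two captures and two inferences. An alternative structure is one thread per camera; the sketch below uses dummy stand-ins for the jetson.inference calls so it runs anywhere, and whether two detectNet instances actually run cleanly from two Python threads on the Nano is something I'd verify separately.

```python
import threading
import time

def camera_loop(capture, detect, render, stop_event, frame_log):
    # Per-camera loop: capture a frame, run detection on it, render it,
    # and repeat until asked to stop.
    while not stop_event.is_set():
        img = capture()
        detections = detect(img)
        render(img, detections)
        frame_log.append(1)

# Dummy stand-ins so the sketch is self-contained; on the Jetson these
# would be camera.Capture(), net.Detect() and display.Render().
def fake_capture():
    time.sleep(0.001)          # pretend to wait for a frame
    return "frame"

stop = threading.Event()
counts = ([], [])              # one frame log per camera
threads = [threading.Thread(target=camera_loop,
                            args=(fake_capture, lambda img: [],
                                  lambda img, dets: None, stop, log))
           for log in counts]
for t in threads:
    t.start()
time.sleep(0.05)               # let both loops run briefly
stop.set()
for t in threads:
    t.join()
print(len(counts[0]) > 0 and len(counts[1]) > 0)   # → True
```

This only sketches the structure; the GIL and GPU contention could still serialize the two inference calls in practice.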
I first thought it might be an undervoltage problem, but there is no trace of that. In two SSH windows I observed the output of `dmesg -wH` and `journalctl`. No problem with the power supply was indicated. But after a while both windows showed this:
Jan 04 10:21:14 jetson kernel: FAN rising trip_level:1 cur_temp:51000 trip_temps[2]:61000
It seems to me that the Jetson is getting warm and tries to switch on the fan :) In a third SSH window I observed the output of `tegrastats`. This is what appeared until the sudden death:
I can't see any remarkable overheating there either...
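To correlate the temperatures with the crash time more precisely, the tegrastats lines can be parsed into numbers. The sample line below is made up but follows the `name@tempC` format tegrastats typically prints on a Nano; the exact fields vary by L4T version.

```python
import re

# Extract "name@tempC" tokens from one tegrastats line into a dict.
TEMP_RE = re.compile(r"(\w+)@([\d.]+)C\b")

def parse_temps(line):
    return {name: float(val) for name, val in TEMP_RE.findall(line)}

sample = ("RAM 1980/3956MB CPU [45%@1428] "
          "PLL@37C CPU@39.5C GPU@38.5C AO@46C thermal@38.75C")
print(parse_temps(sample))
# → {'PLL': 37.0, 'CPU': 39.5, 'GPU': 38.5, 'AO': 46.0, 'thermal': 38.75}
```

Plotting the `AO` or `thermal` value per second right up to the power-off would show whether a thermal trip is plausible or whether the board dies at an unremarkable temperature.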
I'm absolutely not sure what causes the (reproducible) crashes. I will attach a fan for the next attempts, though.
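For the next attempts it may also help to persist the monitoring output to disk, since a sudden power-off loses whatever was only in the SSH scrollback. A small helper for that (the function name and log path are my own choices):

```python
import datetime
import subprocess

def log_with_timestamps(cmd, logfile):
    # Run cmd, prefix each stdout line with a wall-clock timestamp and
    # append it to logfile immediately (flush after every line), so the
    # trace survives a sudden power-off.
    with open(logfile, "a") as log:
        with subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True) as proc:
            for line in proc.stdout:
                stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
                log.write(f"{stamp} {line}")
                log.flush()

# On the Jetson, for example:
# log_with_timestamps(["tegrastats"], "/home/nano/tegrastats.log")
```

After the crash, the last timestamped line pins down the moment of death; the kernel side of the story is then available via `journalctl -b -1 -k` (previous boot), provided journald is configured with persistent storage.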
Any further ideas on how to approach the problem?