MattsProjects / pylon_gstreamer

A robust integration of Basler's Pylon API with GStreamer. Delivers applications as ready-to-run standalone compiled executables (gst-launch-1.0 is not needed). Designed for reliability and easy access to performance optimizations. Note: This is not a plugin. It is an integration using GStreamer's GstAppSrc element.
Apache License 2.0

[Question] How to utilize with multiple cameras? #5

Closed: balos1 closed this issue 4 years ago

balos1 commented 6 years ago

I am trying to use several Basler cameras as the data sources for a pipeline. I noticed that in a recent commit you added unique names for the GStreamer elements based on the camera serial number, so that multiple cameras can be used in the pipeline. How do you envision using the CInstantCameraAppSrc in a multi-camera situation? Were you thinking of binning them?

MattsProjects commented 6 years ago

Hi Cody, thanks for noticing! Yes, another user also asked about multiple cameras, so I'm exploring it. I have a sample drawn up that takes two cameras (just by creating two CInstantCameraAppSrc instances) and brings them through one pipeline containing a videomixer, videobox, and videosink. The result is two cameras displaying their images side by side in the same window. The catch is that the 'videomixer' element is very slow; it can be slower than the cameras supply images, so frames get dropped, or there is at least some lag between acquisition and display. There are better ways though, like using a separate pipeline for each camera, etc.

What's your application? While working on this the other day, I kept in mind that depending on the application, it might be better to skip GStreamer and access the camera directly from the pylon API.

PS: I will do some polishing of that multicam sample I drafted and push a commit in the next couple of days so you can have a look. Thanks again! -matt
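Roughly, the shape of it is this (a sketch only, not the actual sample; videotestsrc stands in for the two camera appsrc's so the wiring can be tried without hardware):

```cpp
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    // First image lands at x=0; the second is shifted right by a transparent
    // 640px videobox border, so the two appear side by side after mixing.
    GstElement *pipeline = gst_parse_launch(
        "videomixer name=mix ! videoconvert ! autovideosink "
        "videotestsrc ! video/x-raw,width=640,height=480 ! mix. "
        "videotestsrc pattern=ball ! video/x-raw,width=640,height=480 ! "
        "videobox border-alpha=0 left=-640 ! mix.",
        NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Run until an error or end-of-stream message arrives on the bus.
    GstBus *bus = gst_element_get_bus(pipeline);
    gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```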

MattsProjects commented 6 years ago

Hi Cody, have a look at the latest commit. I added the two-camera sample. I'm curious to know how it works for you!

balos1 commented 6 years ago

Hi Matt, I will give that demo a shot to see how it works, but latency and frame drops will be an issue for me, because I am trying to feed a neural network from the cameras to do classification in real time. I was thinking about running multiple pipelines, but I am not sure how I would do that, since running multiple instances of the neural network would likely cause a serious performance drop. I was thinking of creating a pipeline like this:

cam1 ---\                                  /---> queue --> camfilter id=1 --> udpsink/appsink                              
         \                                / 
          appsrc-->neural_network-->tee---
         /                                \
cam2 ---/                                  \---> queue --> camfilter id=2 --> udpsink/appsink
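In launch-string form, the idea would be something like this (neural_network and camfilter are placeholders for custom elements I would have to write; they are not stock GStreamer plugins, and the appsrc would be fed interleaved frames from both cameras by the application):

```cpp
// Placeholder sketch of the diagram above; the custom elements are hypothetical.
GstElement *pipeline = gst_parse_launch(
    "appsrc name=camsrc is-live=true format=time ! neural_network ! tee name=t "
    "t. ! queue ! camfilter id=1 ! udpsink host=127.0.0.1 port=5000 "
    "t. ! queue ! camfilter id=2 ! udpsink host=127.0.0.1 port=5001",
    NULL);
```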

Also, I have taken your code and ported it to something that could be worked into a CInstantCameraArrayAppSrc. It interleaves the camera feeds into a single data flow, roughly like the sketch below. After I clean it up I'll create a PR, if you're open to it.
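The core of it is roughly this (a minimal sketch using the stock pylon CInstantCameraArray and a plain GstAppSrc, not the cleaned-up code I would PR):

```cpp
#include <pylon/PylonIncludes.h>
#include <gst/app/gstappsrc.h>

using namespace Pylon;

void GrabAndInterleave(GstAppSrc *appsrc)
{
    PylonInitialize();

    CTlFactory &factory = CTlFactory::GetInstance();
    DeviceInfoList_t devices;
    factory.EnumerateDevices(devices);

    CInstantCameraArray cameras(devices.size());
    for (size_t i = 0; i < cameras.GetSize(); ++i)
        cameras[i].Attach(factory.CreateDevice(devices[i]));

    // Note: on an array this starts grabbing on every attached camera.
    cameras.StartGrabbing(GrabStrategy_LatestImageOnly);

    CGrabResultPtr result;
    while (cameras.IsGrabbing())
    {
        // RetrieveResult on the array returns the next finished grab from
        // whichever camera produced it, so the feeds come out interleaved.
        cameras.RetrieveResult(5000, result, TimeoutHandling_ThrowException);
        if (!result->GrabSucceeded())
            continue;

        // The array sets the camera context to the camera's index by default,
        // which identifies the source of each frame.
        size_t camIndex = result->GetCameraContext();

        gsize size = result->GetPayloadSize();
        GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
        gst_buffer_fill(buf, 0, result->GetBuffer(), size);

        // Tag the buffer with the camera index (here via the offset field) so
        // a downstream element could demultiplex the streams again.
        GST_BUFFER_OFFSET(buf) = camIndex;

        gst_app_src_push_buffer(appsrc, buf); // takes ownership of buf
    }
}
```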

MattsProjects commented 6 years ago

Sounds good! Always happy to have a look. The array is nice, but there is a quirk or two that comes with the cost of convenience (i.e., cameras[i].StartGrabbing() starts grabbing on all cameras in the array, not just one. A little counterintuitive.)

I know I struggled with latency, frame drops, and "realtime" at first when making this two-camera sample. Choosing the GStreamer plugins and tweaking them turned out to be pretty important; otherwise you live with either dropped frames or lag/latency. I should mention an error in my previous comment: it wasn't that "videomixer" was slower than the camera, it was that it was slower than other elements in the pipeline.

Basically, at any level you're either using buffer queues to avoid dropped frames, or not using them to avoid latency. My appsrc sets pylon to "LatestImageOnly", which means there's only one buffer at the driver level. If it's not retrieved before the next frame comes in, it's dropped. This is good though, as it means that when the pipeline requests an image from the source, it's not getting an old one from a pylon FIFO. It's 'realtime'. But what I saw was that the pipeline was requesting images from the appsrc faster than the downstream videomixer element could process them, so the final display dropped frames all the time. Adding a queue element before it prevented the drops, but by buffering frames, so the display had horrible lag/latency (just like if I had used pylon's one-by-one FIFO strategy and a stack of buffers at that level).

In the end I got decent realtime, all-frames performance by using the 'compositor' plugin instead of videomixer, setting sync=false on the videosink, and dropping the videobox plugin in favor of compositor's sink pads to position the images in the window (see the sketch below). So yeah, it took some trial and error to get it working pretty evenly on an i7 laptop. But then again, I haven't played with the serious syncing features of GStreamer (timestamps, clocks, etc.), and I didn't do any hardcore tweaking of the pylon code in the appsrc (I figure it already looks complex, and I didn't want to confuse users more than I needed to :)).
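For reference, the shape that worked looks something like this (again with videotestsrc in place of the camera sources; the pad-property syntax needs a reasonably recent GStreamer):

```cpp
// Same two-source layout as before, but with 'compositor' and pad positioning.
// sink_1::xpos=640 places the second image to the right (no videobox needed);
// sync=false on the sink trades clock sync for lower display latency.
GstElement *pipeline = gst_parse_launch(
    "compositor name=comp sink_1::xpos=640 ! videoconvert ! "
    "autovideosink sync=false "
    "videotestsrc ! video/x-raw,width=640,height=480 ! comp.sink_0 "
    "videotestsrc pattern=ball ! video/x-raw,width=640,height=480 ! comp.sink_1",
    NULL);
gst_element_set_state(pipeline, GST_STATE_PLAYING);
```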

As for the neural network application, I'm a novice there, to be honest. But if GStreamer gives you trouble, you might want to look into using pylon directly to interface with your library. There's much more control available than what I have coded into my GStreamer source: you can boost the priority of the low-level 'grab engine', run the higher-level 'grab loop' (RetrieveResult()) automatically in a background thread (see the sketch below), and even grab images directly into memory allocated by another application to save a memcpy (see the buffer factory sample). And if you're using color, most cameras can do debayering onboard and output YUV or RGB natively (in my GStreamer source I pretty much just convert whatever comes in to RGB, which definitely steals CPU resources if the camera is set to raw Bayer output).
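A minimal sketch of that background grab loop idea, using only stock pylon (the "RGB8" node name is a guess and varies by model, so it is left commented out):

```cpp
#include <pylon/PylonIncludes.h>
#include <chrono>
#include <iostream>
#include <thread>

using namespace Pylon;

// Called by pylon's own grab loop thread for every finished frame; this is
// where the frame could be handed straight to the classifier.
class CFrameHandler : public CImageEventHandler
{
public:
    virtual void OnImageGrabbed(CInstantCamera & /*camera*/,
                                const CGrabResultPtr &result)
    {
        if (result->GrabSucceeded())
        {
            // Feed result->GetBuffer() / result->GetPayloadSize() to the
            // neural network here.
        }
    }
};

int main()
{
    PylonInitialize();
    try
    {
        CInstantCamera camera(CTlFactory::GetInstance().CreateFirstDevice());
        camera.RegisterImageEventHandler(new CFrameHandler,
                                         RegistrationMode_Append,
                                         Cleanup_Delete);
        camera.Open();

        // If the camera supports it, ask for RGB output so debayering happens
        // on-camera instead of on the host (node name varies by model):
        // GenApi::CEnumerationPtr(camera.GetNodeMap().GetNode("PixelFormat"))
        //     ->FromString("RGB8");

        // LatestImageOnly keeps a single driver buffer (realtime, drops stale
        // frames); the grab loop runs in a pylon-provided background thread,
        // so RetrieveResult is never called here.
        camera.StartGrabbing(GrabStrategy_LatestImageOnly,
                             GrabLoop_ProvidedByInstantCamera);

        // Grab in the background for a while; the handler does all the work.
        std::this_thread::sleep_for(std::chrono::seconds(10));
        camera.StopGrabbing();
    }
    catch (const GenericException &e)
    {
        std::cerr << e.GetDescription() << std::endl;
    }
    PylonTerminate();
    return 0;
}
```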