MattsProjects / pylon_gstreamer

A robust integration of Basler's Pylon API with GStreamer. Delivers applications as ready-to-run standalone compiled executables (gst-launch-1.0 is not needed). Designed for reliability and easy access to performance optimizations. Note: This is not a plugin. It is an integration using GStreamer's GstAppSrc element.
Apache License 2.0

Q) How to use it for image processing #10

Closed jahwanoh closed 2 years ago

jahwanoh commented 4 years ago

Thank you for your contribution. I'm developing a system with multiple Basler USB3.0 cameras and an Nvidia Jetson Xavier. I've tried using only the pylon SDK, but for some reason the grab speed becomes really slow when I add image-processing and video-writing threads, so I'm trying GStreamer-based grabbing instead.

I've checked the sample code here, but now I'm wondering how I can get an image from the camera. Should I just use the cameras[k].RetrieveResult( 10000, ptrGrabResults[k], TimeoutHandling_ThrowException ); method? I'm worried I'll get the same slow grabbing speed.

Or can I use something like the following?

```cpp
VideoCapture capture("v4l2src device=/dev/video0 ! video/x-raw,width=1920,height=1080,format=(string)UYVY ! nvvidconv ! video/x-raw(memory:NVMM),width=1920,height=1080,format=(string)I420 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink");
```

jahwanoh commented 4 years ago

Or maybe I can add a while loop here (https://github.com/MattsProjects/pylon_gstreamer/blob/f978c65e0bca5323542925bb79f0a92418edd7e3/Samples/simplegrab/simplegrab.cpp#L208) with the class function retrieve_image()?

MattsProjects commented 4 years ago

Hi jahwanoh, thank you for the interest here! Personally I'd recommend trying to troubleshoot the original issue first, because the GStreamer API can be quite confusing and not so easy to jump into quickly. I would also check that the image processing you want to do can be done with GStreamer elements/plugins. If you want to encode the image stream as h.264, GStreamer is very useful (especially since companies like Nvidia write GStreamer plugins that specifically take advantage of their hardware capabilities, like for encoding a stream). But if you want to do blob analysis, etc., I'm not sure such plugins have been written (GStreamer is more oriented toward streaming video like movies than toward machine vision).

If you opt to go the GStreamer route, there are two approaches:

  1. Use gst-launch-1.0 to make some command line pipelines, and then run them from shell scripts.
  2. Use the GStreamer API to make a complete application.

#1 is the more common usage of GStreamer because it is easier, but the GStreamer team advises against using gst-launch for anything more than testing pipelines and debugging.

#2 is more difficult but perhaps the more "proper" way. Here you write a C or C++ application using the GStreamer API.
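As a minimal sketch of approach #1 (assuming GStreamer 1.x is installed; videotestsrc stands in for a real camera source element, so this runs with no camera attached):

```shell
#!/bin/sh
# Approach #1: a pipeline run from a shell script via gst-launch-1.0.
# videotestsrc generates a synthetic test image; for a real camera you
# would substitute a camera source plugin here.
gst-launch-1.0 videotestsrc num-buffers=100 \
    ! video/x-raw,width=640,height=480 \
    ! videoconvert \
    ! autovideosink
```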

The difference is that #1 requires a GStreamer "plugin" to access the camera. Just as the videotestsrc plugin provides a test image, a plugin written for Basler cameras would provide the camera image. This is what people like zingmars are writing (his is quite good). The advantage is that you can easily use gst-launch with it. The disadvantage is that to access camera features, it's up to the plugin developer to expose them through GStreamer properties, which makes it a large task for the developer.

Since plugins already exist for Basler cameras, my project addresses the case in #2. Here I wrap the Basler pylon API with GStreamer's AppSrc element. This means that in the final application, you can access camera features directly with pylon, and still get the image through GStreamer pipelines. The disadvantage, though, is again that the GStreamer API can be confusing. Also, I have not done much work with multiple cameras and GStreamer, so I don't know too much about the pitfalls to look out for. There is a "two camera compositor" sample in there though that I've gotten to work. It displays two cameras side by side on the screen.
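For a rough idea of what that side-by-side layout looks like as a pipeline, here is a hedged gst-launch approximation (the real sample pushes camera frames in through appsrc; this sketch uses two videotestsrc elements and the standard compositor element so it runs with no hardware):

```shell
# Sketch only: compositor places sink_1 at x=640 so the two 640-wide
# streams appear side by side. The actual sample feeds frames via
# appsrc/CInstantCameraAppSrc instead of videotestsrc.
gst-launch-1.0 compositor name=mix sink_1::xpos=640 \
    ! videoconvert ! autovideosink \
    videotestsrc ! video/x-raw,width=640,height=480 ! mix.sink_0 \
    videotestsrc pattern=ball ! video/x-raw,width=640,height=480 ! mix.sink_1
```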

I hope this helps!

PS: Basler cameras are not UVC compliant, so you will not see them as /dev/video0 like a webcam.

jahwanoh commented 4 years ago

Thank you for the comment. I was wondering if it is possible to get an image in C++ code with this pylon_gstreamer library. My pipeline is as below:

pylon_gstreamer (frames from camera) --> OpenCV or NPP image processing --> GStreamer (encoding)

Does it make sense?

I've tested with zingmars' pylonsrc, and it works well with a command such as:

```shell
gst-launch-1.0 -v pylonsrc camera=1 fps=25 ! bayer2rgb ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh265enc ! matroskamux ! filesink location=test_1_265.mp4 -e
```

MattsProjects commented 4 years ago

Hi jahwanoh, I'm not too familiar with the specific elements, but you can try your pipeline by running:

```shell
demopylongstreamer -parse "gst-launch-1.0 -v pylonsrc camera=1 fps=25 ! bayer2rgb ! nvvidconv ! 'video/x-raw(memory:NVMM), format=NV12' ! omxh265enc ! matroskamux ! filesink location=test_1_265.mp4 -e"
```

It will replace zingmars' pylonsrc with the CInstantCameraAppSrc just to test.