alliedvision / VimbaPython

Old Allied Vision Vimba Python API. The successor to this API is VmbPy
BSD 2-Clause "Simplified" License

Unstable grabbing with SW triggering #143

Closed MhdKAT closed 1 year ago

MhdKAT commented 1 year ago

Hi, I am running into a problem of camera instability when I try to use two cameras in software (SW) triggering mode. The cameras I am using are two Alviums, an 1800 U-240c and an 1800 U-500c. I am using the following code snippet to grab a frame each time I get a SW trigger signal. It generally works, but fails sporadically with a Frame Incomplete error. Is this a HW-related problem, or am I misusing the API?

import time

from vimba import Camera, Frame, FrameStatus, PersistType, PixelFormat, Vimba

class FrameHandler:
    def __init__(self, handler):
        super(FrameHandler, self).__init__()
        self.handler = handler

    def __call__(self, cam: Camera, frame: Frame):
        if frame.get_status() == FrameStatus.Complete:
            # convert frame to opencv format
            t0 = time.time()
            frame.convert_pixel_format(PixelFormat.Bgr8)
            print("conversion time took", time.time()-t0)
            # push frame to next stage
            self.handler(frame.as_opencv_image())
        else:
            print(f'ERR -- FrameHandler incomplete error')
            self.handler(None)
        cam.queue_frame(frame)
        print("Frame grabbed !")

class VimbaCamera(BaseCamera):
    def __init__(self, id, info, settings):
        super(VimbaCamera, self).__init__(id, info, settings)
        self.camera = None
        self.config_file = settings["config_file"]

        with Vimba.get_instance() as vimba:
            try:
                print("camera id  =", self.info["id"])

                self.camera = vimba.get_camera_by_id(self.info["id"])
                print("camera successfully created!")
            except Exception as e:
                print(f'ERR -- Camera ({self.info["id"]}) creation error: {e}')

    def setup(self):
        with Vimba.get_instance() as vimba:
            with self.camera:
                print("setting up the camera")
                try:
                    self.camera.load_settings(self.config_file, PersistType.All)
                    print("Loading camera settings from config file")
                except Exception as e:
                    print(e)
                self.camera.TriggerSelector.set('FrameStart')
                self.camera.TriggerActivation.set('RisingEdge')
                self.camera.TriggerSource.set('Software')
                self.camera.TriggerMode.set('On')

    def open(self):
        if not self.is_open:
            try:
                #setup camera 
                self.setup()
                self.is_open = True
            except Exception as e:
                print(f'ERR -- Camera open() error: {e}')

    def close(self):
        if self.is_open:
            try:
                self.camera.stop_streaming()
                self.is_open = False
            except Exception as e:
                print(f'ERR -- Camera close() error: {e}')

    def trigger(self):
        t0 = time.time()
        with Vimba.get_instance() as vimba:
            with self.camera:
                print("context invocation took ", time.time()-t0)
                _handler = FrameHandler(self.handler)
                self.camera.start_streaming(handler=_handler)
                print("trigger issued")
                self.camera.TriggerSoftware.run()
                time.sleep(1)
Teresa-AlliedVision commented 1 year ago

Hello, I suspect a hardware or settings issue. The code looks good and if it generally works well, then I don't see a problem. However, regarding the frame status, you assume that every frame that is not logged as complete is automatically incomplete (this is most likely the case, but they can also be dropped, invalid or too small). You can read out the status (complete, incomplete, too small, invalid) from the frame object and infer dropped frames from the frame IDs.

**Frame IDs and dropped frames**: Frame IDs of one camera are always consecutive. If a number is skipped between two frames, then a frame was dropped and the frame receiver was not triggered. Most likely the frame was already sorted out in the transport layer.
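This bookkeeping can be sketched in plain Python (a hypothetical helper, not part of the VimbaPython API): collect the IDs seen by the frame callback, and any gap marks a dropped frame.

```python
def missing_frame_ids(seen_ids):
    """Return the IDs skipped in an otherwise consecutive sequence of frame IDs.

    Frame IDs from one camera are consecutive, so every gap between two
    successive observed IDs corresponds to a frame that never reached the
    application (e.g. sorted out in the transport layer).
    """
    missing = []
    for prev, cur in zip(seen_ids, seen_ids[1:]):
        missing.extend(range(prev + 1, cur))
    return missing

# Example: IDs 3 and 5 never arrived.
print(missing_frame_ids([1, 2, 4, 6, 7]))  # -> [3, 5]
```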

**Incomplete and dropped frames**: If a frame is not complete, that is most likely due to networking issues, especially if you have several cameras that share the bandwidth: either a CPU that can't process the data quickly enough, a non-optimized NIC, and/or a switch with very little internal buffer. To remedy that, you can reduce the DeviceLinkThroughputLimit of the camera, which is the datarate of the camera in bytes. The software trigger limits the framerate, but if the DLTL is not limited, the camera sends data at its maximum datarate. This can be too much if you don't have a dedicated USB card in a PCIe slot. What can also happen if you don't have a dedicated USB card for the camera (i.e. all USB traffic goes through the same card): other USB traffic gets priority and camera data is lost -> incomplete frame(s). This depends very much on your system and USB card. In extreme cases you can get dropped and incomplete frames when, for example, a USB mouse is moved or another USB device is plugged in.
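The effect of capping DeviceLinkThroughputLimit can be sanity-checked with simple arithmetic (a rough sketch that ignores USB protocol overhead; the 5 MP resolution and 8-bit raw format below are assumptions for illustration only):

```python
def max_fps_for_link_limit(width, height, bytes_per_pixel, link_limit_bytes_per_s):
    """Upper bound on the frame rate once the camera's datarate is capped."""
    frame_bytes = width * height * bytes_per_pixel
    return link_limit_bytes_per_s / frame_bytes

# Assumed 2592x1944 sensor streaming an 8-bit raw format, capped at 200 MB/s:
fps = max_fps_for_link_limit(2592, 1944, 1, 200_000_000)
print(round(fps, 1))  # -> 39.7
```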

**Optimizing the system and bandwidth for USB camera traffic**: This is best summed up in the USB Camera User Guide under the section Performance and Troubleshooting.

Cheers, Teresa

MhdKAT commented 1 year ago

Thank you for your response. Our hardware is a Jetson AGX Orin and we applied all the power optimizations. We also did the optimizations related to DeviceLinkThroughputLimit and set usbfs to a higher value than the Linux default. We think this helped get the cameras working in VimbaViewer, because the default settings didn't allow streaming from both cameras in the viewer, even for test purposes. The problem is that we actually had to raise the time.sleep(1) after the trigger to 4 s to get the frames; otherwise incomplete frames happened more often, and when we reduce the sleep or remove it completely, the frame handler callback is never called. Can you explain why the time.sleep() is needed and how it affects the functionality? When the frames are not complete, dmesg outputs these errors:

[76857.024991] tegra-xusb 3610000.xhci: Looking for event-dma 0000007ffff672f0 trb-start 0000007ffff67300 trb-end 0000007ffff67330 seg-start 0000007ffff67000 seg-end 0000007ffff67ff0
[76950.279618] tegra-xusb 3610000.xhci: WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.
[76956.017549] tegra-xusb 3610000.xhci: bad transfer trb length 65484 in event trb
[76956.018001] tegra-xusb 3610000.xhci: ERROR Transfer event TRB DMA ptr not part of current TD ep_index 10 comp_code 1
[76956.018429] tegra-xusb 3610000.xhci: Looking for event-dma 0000007ffff67260 trb-start 0000007ffff67270 trb-end 0000007ffff672a0 seg-start 0000007ffff67000 seg-end 0000007ffff67ff0
Teresa-AlliedVision commented 1 year ago

Can you tell me what your DeviceLinkThroughputLimit is? The error messages look like the USB host is not able to transfer the data. If you have 2 USB cameras on the Orin, then each DeviceLinkThroughputLimit should be set to less than half of the maximum bandwidth of the camera, because two cameras need a much higher overhead. 200 000 000 out of 450 000 000 is a good benchmark; also make sure that DeviceLinkThroughputLimitMode is set to "On", otherwise the camera will stream at the full datarate.

The frame handler not being called means that the frame was dropped, so it didn't get from the transport layer to the application. You don't do a whole lot in the frame handler, so the frame should be back in the queue fast, but you can still try adding more frames to the queue. Frames usually get dropped when they are either missing a lot of data or there is no space for them in the queue.

At what rate do you run the software trigger, and what is the exposure time of the camera? If the sensor is still exposing when the trigger arrives, then the trigger will just be skipped. That's just one thing to keep in mind if you have set the exposure to automatic.
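The buffer behavior can be pictured with a small toy model (purely illustrative, not the actual VimbaPython transport layer): a frame that arrives while all `buffer_count` slots are occupied never reaches the handler.

```python
from collections import deque

class FrameQueueModel:
    """Toy model of the acquisition buffer: buffer_count slots; a frame
    arriving while every slot is full is dropped before the handler sees it."""

    def __init__(self, buffer_count):
        self.slots = deque(maxlen=buffer_count)  # frames waiting for the handler
        self.dropped = 0

    def frame_arrives(self, frame_id):
        if len(self.slots) == self.slots.maxlen:
            self.dropped += 1  # no free buffer -> frame is lost in transport
            return False
        self.slots.append(frame_id)
        return True

    def handler_done(self):
        # The frame callback requeues the frame (cam.queue_frame), freeing a slot.
        if self.slots:
            self.slots.popleft()
```

With 2 slots, a third frame arriving before the handler finishes is dropped; once the handler requeues a frame, arrivals succeed again.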

MhdKAT commented 1 year ago

So DeviceLinkThroughputLimit is 200 000 000 on the 1800 U-500c and 50 000 000 on the 1800 U-240c. There is actually no problem with the 1800 U-240c; the dropped-frame problem only happens with the other one. The exposure time is 1061108.52 µs on the 1800 U-500c. Software triggering happens each time a frame has been grabbed and further processed, so there is no risk of over-triggering or illegal triggering. Back to this part of the code: isn't it problematic to open the camera context, recreate the handler, and call start_streaming every time we have to trigger the camera?

    def trigger(self):
        t0 = time.time()
        with Vimba.get_instance() as vimba:
            with self.camera:
                print("context invocation took ", time.time()-t0)
                _handler = FrameHandler(self.handler)
                self.camera.start_streaming(handler=_handler)
                print("trigger issued")
                self.camera.TriggerSoftware.run()
                time.sleep(1)

Also, one thing to take into consideration: we are using 8-meter-long cables. Is there a way to find out in software whether the cameras are receiving enough voltage?

Teresa-AlliedVision commented 1 year ago

Regarding the code, yeah, I didn't spot that this is not really ideal. Short version: only start streaming once per camera, and keep it running until you stop streaming (e.g. to change settings).

The basic concept is this: you tell the camera to start_streaming and, through the arguments, assign it the handler and define how big the queue/buffer_count should be (i.e. how much space there is to store frames before they get to the handler). That needs to be done once: `camera.start_streaming(handler=your_frame_handler, buffer_count=10)`. Then you send the software trigger to tell the camera to actually send a frame. The camera has already been assigned a handler and a buffer, so the frame will be caught/grabbed there. At the end of the program, it is recommended to stop streaming.

If you want to manage the context yourself, Niklas has done a very good writeup on this issue here: #116

NiklasKroeger-AlliedVision commented 1 year ago

What @Teresa-AlliedVision said is true. The general idea should be to simply start the streaming once and then keep the stream running. I would advise against managing your camera context manually, as it essentially disables the safeguards that the with context provides and that we try to leverage in our Camera implementation.

You already set up your device for software-triggering single frames while keeping a constant stream running (in VimbaCamera.setup):

self.camera.TriggerSelector.set('FrameStart')
self.camera.TriggerActivation.set('RisingEdge')
self.camera.TriggerSource.set('Software')
self.camera.TriggerMode.set('On')

With this configuration you simply need to keep the stream running, and for every frame you want to trigger execute the software trigger feature. This will cause the camera to record a single frame and wait for another trigger.

I had to take some guesses as to how you are actually using the class you provided above. The implementation below keeps the stream running on a dedicated thread and only issues the software trigger from trigger():

import vimba
import time
import threading

class BaseCamera:
    def __init__(self, id, info, settings) -> None:
        # Just a placeholder to get the example to execute
        self.id = id
        self.info = info
        self.settings = settings

        self.is_open = False

    def handler(self, img):
        # just a placeholder to show that it works
        print(img[0,0])

class FrameHandler:
    def __init__(self, handler):
        super(FrameHandler, self).__init__()
        self.handler = handler

    def __call__(self, cam: vimba.Camera, frame: vimba.Frame):
        if frame.get_status() == vimba.FrameStatus.Complete:
            # convert frame to opencv format
            t0 = time.time()
            frame.convert_pixel_format(vimba.PixelFormat.Bgr8)
            print("conversion time took", time.time()-t0)
            # push frame to next stage
            self.handler(frame.as_opencv_image())
        else:
            print(f'ERR -- FrameHandler incomplete error')
            self.handler(None)
        cam.queue_frame(frame)
        print("Frame grabbed !")

class VimbaCamera(BaseCamera):
    def __init__(self, id, info, settings):
        super(VimbaCamera, self).__init__(id, info, settings)
        self.camera = None
        self.config_file = settings["config_file"]
        self._streaming_thread = None
        self._stop_streaming_event = threading.Event()

        with vimba.Vimba.get_instance() as vmb:
            try:
                print("camera id  =", self.info["id"])

                self.camera = vmb.get_camera_by_id(self.info["id"])
                print("camera successfully created!")
            except Exception as e:
                print(f'ERR -- Camera ({self.info["id"]}) creation error: {e}')

    def setup(self):
        with vimba.Vimba.get_instance():
            with self.camera:
                print("setting up the camera")
                try:
                    self.camera.load_settings(self.config_file, vimba.PersistType.All)
                    print("Loading camera settings from config file")
                except Exception as e:
                    print(e)
                self.camera.TriggerSelector.set('FrameStart')
                self.camera.TriggerActivation.set('RisingEdge')
                self.camera.TriggerSource.set('Software')
                self.camera.TriggerMode.set('On')

    def open(self):
        if not self.is_open:
            try:
                # setup camera
                self.setup()
                self.is_open = True
            except Exception as e:
                print(f'ERR -- Camera open() error: {e}')

    def close(self):
        if self.is_open:
            try:
                # Tell the streaming thread to stop and wait until it is finished
                self._stop_streaming_event.set()
                if self._streaming_thread is not None:
                    self._streaming_thread.join()
                self.is_open = False
            except Exception as e:
                print(f'ERR -- Camera close() error: {e}')

    def _stream_frames(self):
        with vimba.Vimba.get_instance():
            with self.camera:
                self._stop_streaming_event.clear()
                _handler = FrameHandler(self.handler)
                self.camera.start_streaming(_handler)
                self._stop_streaming_event.wait()
                self.camera.stop_streaming()

    def trigger(self):
        t0 = time.time()
        with vimba.Vimba.get_instance():
            with self.camera:
                if self._streaming_thread is None:
                    self._streaming_thread = threading.Thread(target=self._stream_frames)
                    self._streaming_thread.start()
                print("context invocation took ", time.time()-t0)
                self.camera.TriggerSoftware.run()
                print("trigger issued")

if __name__ == "__main__":
    cam = VimbaCamera(id='THIS IS NOT USED?',
                      info={'id': '<your-device-id>'},
                      settings={'config_file': 'WE DO NOT CARE FOR THE EXAMPLE'})
    cam.open()
    for i in range(10):
        # On the first call to trigger the streaming thread will start. After that the camera
        # will remain in streaming mode until `cam.close` is called.
        cam.trigger()
        # We need to wait a bit here to make sure the frame we requested is actually done recording.
        # If we just execute without sleep, 10 triggers will be executed while the first frame is
        # recorded and only one frame will be received.
        time.sleep(1)
    # This will stop the stream
    cam.close()

While writing this example for you I ran into an issue where the frame callback was only executed once, even though 10 software triggers were executed. This happened because the software triggers were sent too quickly (that is why I added the time.sleep(1) in the usage example at the bottom).

Perhaps you are seeing a similar problem. If you execute a trigger, the camera will start recording a frame and transmit it. During (some of) that time, the camera is not able to process additional triggers. While the pixels are being exposed, additional trigger signals will have no effect because the camera of course cannot record more than one image at a time; it first has to finish the image that is currently being recorded. One way to work around this would be, for example, to add an event that indicates a trigger has been sent, and reset the event in the frame callback. Then, before sending another trigger, make sure that the event is unset. That way you can be sure that you are not "over-triggering" the camera by accident. It would of course add some trigger delay that may not be acceptable to you; this is something you will have to decide based on your requirements.
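The event-based gating described above can be sketched like this (a hypothetical helper, not part of the VimbaPython API; `send_trigger` would wrap `camera.TriggerSoftware.run()` and `frame_done` would be called from the frame callback):

```python
import threading

class TriggerGate:
    """Refuses a new software trigger until the previous frame has arrived."""

    def __init__(self):
        self._idle = threading.Event()
        self._idle.set()  # no trigger in flight at start

    def try_trigger(self, send_trigger):
        """Send a trigger only if no frame is pending; return whether it was sent."""
        if not self._idle.is_set():
            return False  # previous frame not delivered yet -> would over-trigger
        self._idle.clear()
        send_trigger()  # e.g. camera.TriggerSoftware.run()
        return True

    def frame_done(self):
        """Call from the frame callback once the triggered frame has arrived."""
        self._idle.set()
```

This trades trigger latency for the guarantee that triggers are never issued while a frame is still being exposed or transferred.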

I hope this helps. If you want to take another look at how frame streaming might work with multiple threads, you could also check out the FrameProducer class in the multithreading_opencv.py example.

MhdKAT commented 1 year ago

@Teresa-AlliedVision, @NiklasKroeger-AlliedVision, thank you for your valuable help. @NiklasKroeger-AlliedVision, your guesses were actually right! It was a bit confusing coming from Basler pypylon where, AFAIK, the streaming thread is implemented under the hood within their "grab loop".