alliedvision / VimbaPython

Old Allied Vision Vimba Python API. The successor to this API is VmbPy
BSD 2-Clause "Simplified" License

Frames arriving jumbled together (i.e. stripes/patches in images) #123

Closed i-jey closed 1 year ago

i-jey commented 2 years ago

Acquiring images asynchronously, I'm running into an issue where frame data appears to be mixed up with subsequent/previous frames. In the example photos below, there are parts of these frames which appear to come from the subsequent frame. This issue appears to happen the most when there are bright/dark transitions between frames. Unfortunately for our eventual use case, that is going to be nearly every frame!

I'm doing a check for the frame status to ensure it's complete - so I am a little puzzled as to what's causing this behaviour. FYI, this is an AVT Alvium 1800 U-319m mono bareboard connected to a Raspberry Pi 4 over USB 3.0.

Initial frame: [image]

Frame with patch: [image]

Last frame: [image]

I've opted to use the AVT camera without a context manager as it fits our design case better (we use a couple of different camera models, all of which need the same API wrapped around them so that they can be drop-in replacements).
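For context, driving the context managers by hand seems workable as long as every `__enter__` is paired with a matching `__exit__`. A minimal sketch of that pattern with a stand-in resource (`DummyResource` is hypothetical, not part of Vimba):

```python
import sys

class DummyResource:
    """Stand-in for Vimba.get_instance() / a Camera object."""
    def __enter__(self):
        self.open = True
        return self

    def __exit__(self, exc_type, exc, tb):
        self.open = False
        return False  # do not swallow exceptions

res = DummyResource().__enter__()   # manual enter, as in AVTCamera.__init__
try:
    pass  # ... use the resource ...
finally:
    res.__exit__(*sys.exc_info())   # manual exit, as in deactivateCamera
```

`contextlib.ExitStack` is a tidier alternative for holding several contexts open across method calls, since it guarantees the paired exits even on error.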

Here's the _frame_handler:

def _frame_handler(self, cam, frame):
    if self.queue.full():
        self.queue.get()
    if frame.get_status() == vimba.FrameStatus.Complete:
        self.queue.put(frame.as_numpy_ndarray())
    cam.queue_frame(frame)

and here's the rest of the "AVTCamera" wrapper:

class AVTCamera:
    def __init__(self):
        self.vimba = Vimba.get_instance().__enter__()
        self.queue = queue.Queue(maxsize=1)
        self.connect()
        self.minExposure_ms, self.maxExposure_ms = self.getExposureBoundsMilliseconds()

    def _get_camera(self):
        with Vimba.get_instance() as vimba:
            cams = vimba.get_all_cameras()
            return cams[0]

    def _camera_setup(self):
        self.camera.ExposureAuto.set("Off")
        self.camera.ExposureTime.set(500)
        self.setBinning(bin_factor=2)
        self.camera.set_pixel_format(vimba.PixelFormat.Mono8)

    def connect(self) -> None:
        self.camera = self._get_camera()
        self.camera.__enter__()
        self._camera_setup()

    def deactivateCamera(self) -> None:
        self.stopAcquisition()
        self.vimba.__exit__(*sys.exc_info())

    def _frame_handler(self, cam, frame):
        if self.queue.full():
            self.queue.get()
        if frame.get_status() == vimba.FrameStatus.Complete:
            self.queue.put(frame.as_numpy_ndarray())
        cam.queue_frame(frame)

    def _flush_queue(self):
        with self.queue.mutex:
            self.queue.queue.clear()

    def startAcquisition(self) -> None:
        if not self.camera.is_streaming():
            self.camera.start_streaming(self._frame_handler)

    def stopAcquisition(self) -> None:
        if self.camera.is_streaming():
            self.camera.stop_streaming()

    def yieldImages(self):
        if not self.camera.is_streaming():
            self._flush_queue()
            self.startAcquisition()

        while self.camera.is_streaming():
            yield self.queue.get()[:, :, 0]

    def setBinning(self, mode: str="Average", bin_factor=1):
        while self.camera.is_streaming():
            self.camera.stop_streaming()

        self.camera.BinningHorizontalMode.set(mode)
        self.camera.BinningVerticalMode.set(mode)
        self.camera.BinningHorizontal.set(bin_factor)
        self.camera.BinningVertical.set(bin_factor)

        # For some reason, setting the binning mode only changes the maximum image width/height, and not the current
        # image width/height. So they must be set manually. (I figured this out by looking at the Vimba Viewer and noticing
        # that the max height/width were changed when adjusting binning factor, but not the current image height/width)
        self.camera.Width.set(self.camera.WidthMax.get())
        self.camera.Height.set(self.camera.HeightMax.get())

    def getBinning(self):
        return self.camera.BinningHorizontal.get()

    def _getTemperature(self):
        try:
            return self.camera.DeviceTemperature.get()
        except Exception:
            print("Could not get the device temperature using DeviceTemperature.")
            raise

    def _setExposureTimeMilliseconds(self, value_ms: int):
        try:
            self.camera.ExposureTime.set(value_ms * 1000)
        except Exception:
            print("Could not set exposure time using ExposureTime.set().")
            raise

    def _getCurrentExposureMilliseconds(self):
        try:
            return self.camera.ExposureTime.get() / 1000
        except Exception:
            print("Could not get the current ExposureTime.")
            return None

    def getExposureBoundsMilliseconds(self):
        try:
            minExposure_ms = self.camera.ExposureAutoMin.get() / 1000
            maxExposure_ms = self.camera.ExposureAutoMax.get() / 1000
            return [minExposure_ms, maxExposure_ms]
        except Exception as e:
            print(e)
            print("Could not get exposure bounds using ExposureAutoMin / ExposureAutoMax.")
            raise

    @property
    def exposureTime_ms(self):
        return self._exposureTime_ms

    @exposureTime_ms.setter
    def exposureTime_ms(self, value_ms: int):
        if self.minExposure_ms < value_ms < self.maxExposure_ms:
            try:
                self._setExposureTimeMilliseconds(value_ms)
                exposureFromCamera = self._getCurrentExposureMilliseconds()
                self._exposureTime_ms = exposureFromCamera
                print(f"Exposure time set to {exposureFromCamera} ms.")
            except Exception:
                print("Failed to set exposure.")
        else:
            raise ValueError(
                f"Exposure {value_ms} ms is outside the bounds "
                f"[{self.minExposure_ms}, {self.maxExposure_ms}] ms."
            )

Any insight or advice would be greatly appreciated! I worry I've overlooked something simple while circumventing the original context-manager use-case.

I saw this issue on Pymba (a Python API predating AVT's official API) that seemed relevant; however, the proposed solution at the end of that thread was simply to ignore frames with frame.data.receiveStatus == -1, which I believe is what the frame.get_status() == vimba.FrameStatus.Complete check should be doing anyway?

nordeh commented 2 years ago

A corrupted frame even with FrameStatus=Complete does sound strange. Does it happen in VimbaViewer too? Assuming a performance issue, I'd suggest slowing things down a bit by slightly decreasing DeviceLinkThroughputLimit. If the Alvium 1800 U-319m itself is still suspected, please check for a firmware update first of all: https://www.alliedvision.com/en/support/firmware-downloads/
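In case it helps anyone trying this suggestion: a small sketch of lowering the limit by a fixed fraction. The `reduced_throughput_limit` helper is mine, not part of Vimba; applying it through the attribute-style feature access used elsewhere in this thread assumes the DeviceLinkThroughputLimit feature exists on your model/firmware.

```python
def reduced_throughput_limit(current_bps: int, factor: float = 0.8) -> int:
    """Return the current link throughput limit lowered to `factor` of its value."""
    return int(current_bps * factor)

# With a camera open, this would be applied via the same attribute-style
# feature access shown elsewhere in this thread (assumption: the feature is
# present on this model/firmware):
#
#   limit = cam.DeviceLinkThroughputLimit.get()
#   cam.DeviceLinkThroughputLimit.set(reduced_throughput_limit(limit))

print(reduced_throughput_limit(200_000_000))  # 160000000
```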

i-jey commented 1 year ago

A resolution!

This was my fault: this problematic behaviour, and a guard against it, are laid out clearly in the multithreading_opencv.py example (here).

The relevant section is here:

def __call__(self, cam: Camera, frame: Frame):
    # This method is executed within VimbaC context. All incoming frames
    # are reused for later frame acquisition. If a frame shall be queued, the
    # frame must be copied and the copy must be sent, otherwise the acquired
    # frame will be overridden as soon as the frame is reused.
    if frame.get_status() == FrameStatus.Complete:

        if not self.frame_queue.full():
            frame_cpy = copy.deepcopy(frame)
            try_put_frame(self.frame_queue, cam, frame_cpy)

For my code above, I simply updated the _frame_handler to the following:

def _frame_handler(self, cam, frame):
    try:
        self.queue.get_nowait()
    except queue.Empty:
        pass
    if frame.get_status() == vimba.FrameStatus.Complete:
        frame_deep_copy = np.copy(frame.as_numpy_ndarray()[:, :, 0])
        self.queue.put(frame_deep_copy)
    cam.queue_frame(frame)
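As an aside, the get_nowait()/put() pair in the handler implements a drop-oldest, single-slot queue. In isolation the pattern behaves like this (pure Python, no camera required; `put_latest` is just a name for the pattern):

```python
import queue

q = queue.Queue(maxsize=1)

def put_latest(q, item):
    """Keep only the newest item: evict the old one if the slot is full."""
    try:
        q.get_nowait()
    except queue.Empty:
        pass
    q.put(item)

for frame_id in range(5):
    put_latest(q, frame_id)

print(q.get())  # 4 -- only the most recent item survives
```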

Because the frame object is reused, multithreading (which is how I was handling frames in my application) runs the risk of the image data being partially overwritten while you are still working with it. Creating a deep copy of the frame data ensures the camera cannot overwrite it.
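The hazard can be reproduced without any camera: a numpy array that is a view of a reused buffer changes when the buffer is refilled, while np.copy detaches the data. The buffer here is a stand-in for the driver-owned frame memory, which (as I understand it) is what as_numpy_ndarray() is backed by:

```python
import numpy as np

# Stand-in for the driver-owned frame buffer that is reused between frames.
buffer = np.zeros((4, 4), dtype=np.uint8)

view = buffer[:, :]          # analogous to queuing as_numpy_ndarray() directly
snapshot = np.copy(buffer)   # analogous to the deep copy in the fixed handler

buffer[:] = 255              # "driver" refills the buffer with the next frame

print(int(view[0, 0]))      # 255 -- the queued view was silently overwritten
print(int(snapshot[0, 0]))  # 0   -- the copy is unaffected
```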

The DeviceLinkThroughputLimit parameter does not need to be changed!

Cheers.