alliedvision / VimbaPython

Old Allied Vision Vimba Python API. The successor to this API is VmbPy.
BSD 2-Clause "Simplified" License

Action command stops working after 5 frames #50

Closed alexandruradovici closed 3 years ago

alexandruradovici commented 3 years ago

I have a setup with 8 cameras, connected in two groups of 4, each group connected to a different switch and to a separate network interface.

The system has MTU 9000 on each of the two network interfaces.

Running the following code receives the first 5 frames and then ignores the action command. Am I doing something wrong?

Adjusting the packet size (GVSPAdjustPacketSize) does not work for me, so I set StreamBytesPerSecond manually.

from vimba import *
import threading
import time

device_key = 1
group_key = 1
group_mask = 1

def get_command_sender(interface_id):
    # If the given interface_id is ALL, the ActionCommand shall be sent from all Ethernet Interfaces.
    # This is achieved by running the ActionCommand on the Vimba instance.
    if interface_id == 'ALL':
        return Vimba.get_instance()

    with Vimba.get_instance() as vimba:
        # A specific Interface was given. Look it up via the given Interface id and verify that
        # it is an Ethernet Interface. The ActionCommand will only be sent from this Interface.
        try:
            inter = vimba.get_interface_by_id(interface_id)

        except VimbaInterfaceError:
            abort('Failed to access Interface {}. Abort.'.format(interface_id))

        if inter.get_type() != InterfaceType.Ethernet:
            abort('Given Interface {} is no Ethernet Interface. Abort.'.format(interface_id))

    return inter

class ImageReader(threading.Thread):
    def __init__ (self, cam: Camera):
        threading.Thread.__init__(self)
        self.cam = cam

    def frame_handler(self, cam: Camera, frame: Frame):
        print (frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received.'.format(frame.get_id()), flush=True)

    # cam.queue_frame(frame)

    def run(self):
        with self.cam as cam:
            print (cam)

            cam.StreamBytesPerSecond.set (20_000_000)

            # for frame in cam.get_frame_generator(limit=2, timeout_ms=3000):
            #     print (f"{cam} frame {frame}")

            cam.TriggerSelector.set('FrameStart')
            cam.TriggerSource.set('Action0')
            cam.TriggerMode.set('On')
            cam.ActionDeviceKey.set(device_key)
            cam.ActionGroupKey.set(group_key)
            cam.ActionGroupMask.set(group_mask)

            cam.start_streaming(self.frame_handler)

            while True:
                time.sleep (1)

            # sender.ActionDeviceKey.set(device_key)
            # sender.ActionGroupKey.set(group_key)
            # sender.ActionGroupMask.set(group_mask)
            # sender.ActionCommand.run()

            # cam.stop_streaming()

if __name__ == '__main__':
    readers = []
    vimba = Vimba.get_instance()
    with vimba:
        for cam in vimba.get_all_cameras():
            reader = ImageReader(cam)
            readers.append (reader)

        for reader in readers:
            reader.start ()

        time.sleep (2)
        interface_id = "ALL"
        sender = get_command_sender(interface_id)

        while True:
            sender.ActionDeviceKey.set(device_key)
            sender.ActionGroupKey.set(group_key)
            sender.ActionGroupMask.set(group_mask)
            sender.ActionCommand.run()
            print ("Command sent")
            time.sleep (2)

        for reader in readers:
            reader.join ()

NiklasKroeger-AlliedVision commented 3 years ago

Thank you for the full code example!

I noticed that you are not re-queuing your frames in the ImageReader.frame_handler method. My guess is that this is why you stop seeing new frames: the actions are probably still triggered, but they no longer result in images being received because there is no free frame left to transfer the image data into.

By default, the Camera.start_streaming method uses a buffer_count of 5, which means that 5 frames are allocated and queued to transfer image data from the camera. Each time a new image is sent from the camera to the computer, one frame is taken from the transfer queue and the registered frame handler is called with it. It is then the responsibility of the frame handler to place the frame back into the transfer queue for future acquisitions. If no frames are available in the transfer queue, new images cannot be received.
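
As a side note, and only as a hedged sketch on my part: if five buffers ever turn out to be too few for your frame rate, you can also request more buffers when starting the stream, e.g.

    cam.start_streaming(self.frame_handler, buffer_count=10)  # 10 is just an illustrative value

The re-queuing inside the frame handler is still required in any case.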

You currently have the correct code for re-queuing the frame commented out. Please try to adjust your frame_handler to the following and see if that fixes your problem:

    def frame_handler(self, cam: Camera, frame: Frame):
        print (frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received.'.format(frame.get_id()), flush=True)
        cam.queue_frame(frame)

If that does not fix the problem, we would need to take a closer look to see if the action command is actually sent out via your Ethernet device. For this it would probably be best to capture the traffic on that device with Wireshark and search for the corresponding packet. But let us get into that only if the problem is not fixed by re-queuing your used frames.

alexandruradovici commented 3 years ago

Thank you, this seems to solve my problem. Another issue I am facing is the following: I am trying to convert the frame to an image and save it to disk using PIL.

    def frame_handler(self, cam: Camera, frame: Frame):
        print (frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received ({}x{}x{}) .'.format(frame.get_id(), frame.get_width(), frame.get_height(), frame.get_pixel_format()), flush=True)
            try:
                buffer = frame.get_buffer()
                numpy_image = np.ndarray(buffer=buffer,
                             dtype=np.uint8,
                             shape=(frame.get_height(),frame.get_width(),frame.get_pixel_format()))
                # numpy_image = frame.as_numpy_ndarray()
                img = Image.fromarray (numpy_image)
                # img = img.rotate (camera.rotation, expand = True)
                # with io.BytesIO() as output:
                #     img.save(output, format="PNG", compress_level = 1)
                #     img_data = output.getvalue()
                # self.file_writer.write_file(loop.call_soon_threadsafe (write_files.put_nowait, ({
                #             "filename": f"{self.index}_{frame.get_id()}.png",
                #             "image": img_data,
                #         })))
            except Exception as e:
                print (e)
        cam.queue_frame(frame)

I am getting "buffer is too small for requested array". Is it possible that the received frame is incomplete? Also, the resolution seems to be 5328x3040 instead of 5120x3120.

NiklasKroeger-AlliedVision commented 3 years ago

Hmmm. I am not sure why you are getting that error. I have only ever used the frame.as_numpy_ndarray() method to get a frame as a numpy array. What benefit do you see in building the array yourself from frame.get_buffer()?

If you are worried about unnecessary copies of data being created, that does not happen: as_numpy_ndarray() essentially does the same thing you did and reuses the already filled image buffer. It additionally ensures that the frame format is one that can be used as a numpy array. So perhaps that is actually the problem you are experiencing. What pixel format are your frames recorded in?
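
For illustration only (a sketch, assuming your pixel format is one that numpy and PIL can represent directly), the relevant lines of your handler would then shrink to:

    numpy_image = frame.as_numpy_ndarray()        # shape and dtype are derived from the frame's pixel format
    img = Image.fromarray(numpy_image.squeeze())  # drop a trailing 1-channel axis for PIL, if present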

Is it possible that the received frame is incomplete?

You can check the transfer status of your received frame by taking a look at Frame.get_status(). The resolution of the frame and the size of the buffer should not be affected by the transfer state.

If you want to perform an additional sanity check of your received data, you can take a look at the Camera.Width and Camera.Height features to see the values the camera reports and compare them to your Frame.get_width() and Frame.get_height() values. However, I would assume these values match, and I currently do not know where the unexpected image dimensions could come from (except for the possibly incompatible pixel format already mentioned).
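
A minimal sketch of such a sanity check, assuming it runs inside the opened camera context and that your camera exposes the standard Width and Height features:

    # Compare what the camera reports with what the received frame carries.
    print('Camera reports {}x{}'.format(cam.Width.get(), cam.Height.get()))
    print('Frame reports {}x{}'.format(frame.get_width(), frame.get_height()))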

alexandruradovici commented 3 years ago

I think I might have found the problem. It seems that the camera delivers 8 bits per pixel; I was assuming it is RGB (24 bits).

EDIT: Is there any way to convert the image to RGB?

NiklasKroeger-AlliedVision commented 3 years ago

What camera model are you using? Ideally you would simply set the color format you want to use on the camera and receive images in the correct format right away.

Here is a small code snippet that shows you how to get a list of supported pixel formats from your camera and how to change the setting:

import vimba

if __name__ == "__main__":
    with vimba.Vimba.get_instance() as vmb:
        cams = vmb.get_all_cameras()
        print("found the following cams: {}\nUsing this one: {}".format(list(map(str, cams)), str(cams[0])))
        with cams[0] as cam:
            pixel_formats = cam.get_pixel_formats()
            print("Camera reported these supported formats: {}".format(pixel_formats))

            print("Currently set format: {}".format(str(cam.get_pixel_format())))
            # Change the pixelformat to the one I want to use (exact value depends on your use case and what your camera supports)
            cam.set_pixel_format(vimba.PixelFormat.Rgb8)
            print("new format: {}".format(str(cam.get_pixel_format())))

For the camera I have connected right now (an Alvium U-500c), I get the following output.

found the following cams: ['Camera(id=DEV_1AB22D01C0C8)']
Using this one: Camera(id=DEV_1AB22D01C0C8)
Camera reported these supported formats: (PixelFormat.Mono8, PixelFormat.Mono10, PixelFormat.Mono10p, PixelFormat.BayerGR8, PixelFormat.BayerGR10, PixelFormat.BayerGR10p, PixelFormat.Rgb8, PixelFormat.Bgr8, PixelFormat.YCbCr411_8_CbYYCrYY, PixelFormat.YCbCr422_8_CbYCrY, PixelFormat.YCbCr8_CbYCr)
Currently set format: Mono8
new format: Rgb8

If your camera does not support RGB directly, we would need to use the pixel format transformations supported by VimbaPython, but the way suggested above is preferable because it saves some processing.

NiklasKroeger-AlliedVision commented 3 years ago

A small example of how to use the pixel format transformation ability of VimbaPython can be found in the VimbaPython manual PDF that is provided with your installation of Vimba (usually placed at C:\Program Files\Allied Vision\Vimba_X.Y\VimbaPython\Documentation on Windows, or wherever you extracted Vimba to on Linux).

Please be advised that the pixel format conversion is done in-place, which means that you cannot simply re-queue the frame in your asynchronous image acquisition frame_handler function, as the allocated memory size changes.
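
As a rough, hedged illustration of what that manual section covers (assuming Frame.convert_pixel_format is available in your Vimba version), a synchronous capture sidesteps the re-queuing concern entirely:

import vimba

if __name__ == "__main__":
    with vimba.Vimba.get_instance() as vmb:
        with vmb.get_all_cameras()[0] as cam:
            frame = cam.get_frame()                             # synchronous single-frame capture
            frame.convert_pixel_format(vimba.PixelFormat.Rgb8)  # software conversion, done in-place
            rgb = frame.as_numpy_ndarray()                      # HxWx3 uint8 array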

alexandruradovici commented 3 years ago

Thank you for the suggestions, I managed to save the files using OpenCV. I am seeing another interesting effect: from time to time I get an incomplete frame. I was also wondering whether the following code is sound from a memory-safety point of view:

class FileWriter(threading.Thread):
    def __init__ (self):
        threading.Thread.__init__ (self)
        self.loop = asyncio.new_event_loop()
        self.write_files = asyncio.Queue(loop=self.loop)

    async def write_image(self):
        while True:
            file = await self.write_files.get ()
            if "image" in file:
                async with aiofiles.open(f"./images/{file['filename']}", "wb") as out:
                    await out.write(file["image"])
                    await out.flush()
                print ("write_image:  save image {}".format (file["filename"]))
            self.write_files.task_done()

    def run(self):
        self.loop.run_until_complete (self.write_image())

    def write_file (self, file):
        self.loop.call_soon_threadsafe (self.write_files.put_nowait, (file))

class ImageReader(threading.Thread):
    def __init__ (self, cam: Camera, index: int, file_writer: FileWriter, shot: Barrier):
        threading.Thread.__init__(self)
        self.cam = cam
        self.index = index
        self.file_writer = file_writer
        self.shot = shot

    def frame_handler(self, cam: Camera, frame: Frame):
        print (frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received ({}x{}x{}) .'.format(frame.get_id(), frame.get_width(), frame.get_height(), frame.get_pixel_format()), flush=True)
            try:
                buffer = frame.as_numpy_ndarray()
                f2 = cv2.cvtColor(buffer, cv2.COLOR_BAYER_RG2RGB)
                shot.wait ()
                self.file_writer.write_file({
                    "filename": f"{self.index}_{frame.get_id()}.bmp",
                    "image": f2
                })
            except Exception as e:
                print (e)
        cam.queue_frame(frame)

    def run(self):
        with self.cam as cam:
            print (cam)
            # Try to adjust GeV packet size. This Feature is only available for GigE - Cameras.
            try:
                cam.GVSPAdjustPacketSize.run()

                while not cam.GVSPAdjustPacketSize.is_done():
                    pass

            except (AttributeError, VimbaFeatureError):
                pass

            cam.StreamBytesPerSecond.set (30_000_000)

            cam.ExposureTimeAbs.set (1700)
            cam.BalanceWhiteAuto.set ("Off")

            cam.BalanceRatioSelector.set ("Red")
            cam.BalanceRatioAbs.set (2.78)

            cam.BalanceRatioSelector.set ("Blue")
            cam.BalanceRatioAbs.set (2)

            cam.set_pixel_format(PixelFormat.BayerRG8)

            cam.TriggerSelector.set('FrameStart')
            cam.TriggerSource.set('Action0')
            cam.TriggerMode.set('On')
            cam.ActionDeviceKey.set(device_key)
            cam.ActionGroupKey.set(group_key)
            cam.ActionGroupMask.set(group_mask)

            cam.start_streaming(self.frame_handler)

            while True:
                time.sleep (1)

NiklasKroeger-AlliedVision commented 3 years ago

First off: I have never used asyncio myself, so I am no expert on that. But from my understanding your code looks good. I see no place where you might run into memory issues with the buffers you are queuing back to the transfer queue.

One thing I cannot quite tell is what the shot.wait() call in your frame_handler is doing. Generally you want your frame_handler to finish as quickly as possible, since it is called from the Vimba context and might block some internal processes if it takes too long. With that in mind, one thing you might try (though I am not entirely sure it will really make a difference) is to skip the image transformation in the frame_handler and instead do it in the FileWriter before saving the image (the code below only shows the places where I changed something in your example):

import copy

# some unchanged code left out

    async def write_image(self):
        while True:
            file = await self.write_files.get()
            if "image" in file:
                image = cv2.cvtColor(file["image"], cv2.COLOR_BAYER_RG2RGB)
                async with aiofiles.open(f"./images/{file['filename']}", "wb") as out:
                    await out.write(image)
                    await out.flush()
                print("write_image:  save image {}".format(file["filename"]))
            self.write_files.task_done()

# some unchanged code left out

    def frame_handler(self, cam: Camera, frame: Frame):
        print(frame)
        if frame.get_status() == FrameStatus.Complete:
            print('Frame(ID: {}) has been received ({}x{}x{}) .'.format(frame.get_id(),
                  frame.get_width(), frame.get_height(), frame.get_pixel_format()), flush=True)
            try:
                buffer = frame.as_numpy_ndarray()
                shot.wait()  # as mentioned, unsure what this does and if it is really needed here
                self.file_writer.write_file({
                    "filename": f"{self.index}_{frame.get_id()}.bmp",
                    "image": copy.deepcopy(buffer)
                })
            except Exception as e:
                print(e)
        cam.queue_frame(frame)

I am not sure how much of an impact this will actually have on your occasional incomplete frames. You mentioned quite a few cameras in your setup, so perhaps the overall bandwidth of your network interfaces is not quite enough to keep up with the amount of image data that needs to be transferred? This is more a general troubleshooting topic than an actual problem with VimbaPython, and our application support can probably help you with it better than I can. You could contact them via the form on our website.
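
As a back-of-the-envelope check, based only on the numbers mentioned in this thread (so treat the assumptions as mine, not as measured values):

# Four cameras share one GigE interface, each capped via StreamBytesPerSecond.
link_capacity_bytes_per_s = 125_000_000   # roughly 1 Gbit/s, ignoring protocol overhead
cameras_per_interface = 4
per_camera_limit = 30_000_000             # StreamBytesPerSecond value used in the example above

total = cameras_per_interface * per_camera_limit   # 120 MB/s
print(total / link_capacity_bytes_per_s)           # ~0.96, i.e. the link is nearly saturated

If that estimate is roughly right, occasional incomplete frames would not be surprising.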

alexandruradovici commented 3 years ago

It works fine, thank you.