alliedvision / VimbaPython

Old Allied Vision Vimba Python API. The successor to this API is VmbPy
BSD 2-Clause "Simplified" License

Error Code: VmbErrorIncomplete Int Value: -19 #60

Closed ColbearChan closed 3 years ago

ColbearChan commented 3 years ago

Hi,

My current setup requires me to call get_frame() in a loop every 2 seconds.

However, occasionally, I will get the following error: VmbErrorIncomplete, -19

I found this error in the Vimba C manual, which says: VmbErrorIncomplete, -19, A multiple registers read or write was partially incomplete.

If I wrap the call in a try/except that passes on this error, it causes a memory leak.

I have tried many ways to use get_frame(), and the error still pops up. Here are two code samples:

Sample Code 1 (small memory leak; memory is released after exiting the loop)

result = False
while not result:
    time.sleep(2)  # I have also tried using timestamp
    with Vimba.get_instance() as vimba:
        cams = vimba.get_all_cameras()
        with cams[0] as cam:
            frame = cam.get_frame()
            raw = frame.as_opencv_image()
            result = do_something(raw)

Sample Code 2 (heavy memory leak; memory is released after exiting the loop)

result = False
with Vimba.get_instance() as vimba:
    cams = vimba.get_all_cameras()
    while not result:
        time.sleep(2)
        with cams[0] as cam:
            frame = cam.get_frame()
            raw = frame.as_opencv_image()
            result = do_something(raw)

I have noticed the timeout parameter in get_frame_generator; however, increasing it beyond the default value does not prevent the problem.
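
For reference, increasing the timeout looks roughly like this (a minimal sketch, assuming the timeout_ms keyword of get_frame() and get_frame_generator(); check the API docs for your VimbaPython version):

from vimba import *

with Vimba.get_instance() as vimba:
    with vimba.get_all_cameras()[0] as cam:
        # Single synchronous frame with a 5 second timeout instead of the default
        frame = cam.get_frame(timeout_ms=5000)
        raw = frame.as_opencv_image()

        # Or a generator yielding 10 frames, each waiting up to 5 seconds
        for frame in cam.get_frame_generator(limit=10, timeout_ms=5000):
            raw = frame.as_opencv_image()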

I have also tried the streaming approach, where streaming runs in its own thread and the main program takes the latest frame from a buffer each time (by constantly emptying the built-in queue __capture_fsm so I always get the newest frame). This method works; however, due to the limited computational power, it consumes too much CPU, averaging 98%.
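
Roughly, that streaming workaround looks like this (a simplified sketch using only the public API instead of the internal queue; a lock-protected slot always holds the newest frame, and do_something() is the analysis function from the samples above):

import threading
import time

from vimba import *

latest_frame = None
frame_lock = threading.Lock()

def frame_callback(cam, frame):
    # Keep only the newest image; copy it because Vimba reuses the buffer after queue_frame()
    global latest_frame
    with frame_lock:
        latest_frame = frame.as_opencv_image().copy()
    cam.queue_frame(frame)

with Vimba.get_instance() as vimba:
    with vimba.get_all_cameras()[0] as cam:
        cam.start_streaming(handler=frame_callback)
        try:
            result = False
            while not result:
                time.sleep(2)
                with frame_lock:
                    raw = None if latest_frame is None else latest_frame.copy()
                if raw is not None:
                    result = do_something(raw)
        finally:
            cam.stop_streaming()

Because the camera is free-running here, the callback fires at the full frame rate, which is presumably what eats the CPU on the Jetson Nano.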

Please let me know if there is a solution or workaround for this problem. Thank you!

PC: Jetson Nano

NiklasKroeger-AlliedVision commented 3 years ago

Unfortunately I cannot reproduce the error you are getting at the moment. I tried to create a small test from the code you provided and ended up with the following, which uses your code to record 10 frames with sample code 1 and 10 frames with sample code 2:

import time
from vimba import *

count = 0

def do_something(frame):
    global count
    count += 1
    print(count)
    return count >= 10

# Sample code 1
result = False
while not result:
    time.sleep(2)  # I have also tried using timestamp
    with Vimba.get_instance() as vimba:
        cams = vimba.get_all_cameras()
        with cams[0] as cam:
            frame = cam.get_frame()
            raw = frame.as_opencv_image()
            result = do_something(raw)

count = 0
# Sample code 2
result = False
with Vimba.get_instance() as vimba:
    cams = vimba.get_all_cameras()
    while not result:
        time.sleep(2)
        with cams[0] as cam:
            frame = cam.get_frame()
            raw = frame.as_opencv_image()
            result = do_something(raw)

Does running this produce the error on your machine?

Generally speaking, synchronous image acquisition (cam.get_frame()) is not the recommended way to use our cameras. Especially in VimbaPython there is quite some overhead involved that can quickly become problematic if get_frame() is called multiple times in quick succession. The main problem is that synchronous acquisition is implemented in such a way that every time get_frame() is called, the required frame buffers are allocated, announced to the Transport Layer, queued, and then acquisition is started. For asynchronous acquisition this is not done for every frame but only once. This might also be the reason for the memory leak you are seeing:

If I use try and except pass for this error, it will cause memory leak.

What I believe you are noticing is this fresh acquisition for every call to get_frame(). And since Python has a garbage collector that frees up memory only from time to time, it looks like the memory use is growing constantly. In fact, Python will free that memory eventually once it sees that the frame is no longer needed (assuming your do_something() function is not saving all the data in some way). So far we have not been able to detect any memory leaks in VimbaPython, and with the example code above I see the typical garbage-collector behaviour of "memory increases constantly but after some time some blocks are freed". See the screenshot below, where I ran the example above for a couple of hundred frames: (memory usage screenshot)

After letting the process run for about 30 minutes, I would say the memory never jumps above ~480MB because the garbage collector cleans up before that.
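
If you want to verify this on your own system, here is a minimal sketch (standard-library gc and tracemalloc, nothing Vimba-specific) for distinguishing a real leak from memory that is merely waiting for the garbage collector:

import gc
import tracemalloc

from vimba import *

def take_one_frame():
    # One synchronous acquisition, mirroring sample code 1 above
    with Vimba.get_instance() as vimba:
        with vimba.get_all_cameras()[0] as cam:
            return cam.get_frame().as_opencv_image()

tracemalloc.start()
for i in range(1, 101):
    take_one_frame()
    if i % 10 == 0:
        gc.collect()  # force a collection so any remaining growth reflects real retention
        current, peak = tracemalloc.get_traced_memory()
        # Note: tracemalloc only sees Python-level allocations; compare against a process
        # monitor if you also want to track memory held by the underlying C library
        print('iteration {}: current={:.1f} MB, peak={:.1f} MB'.format(i, current / 1e6, peak / 1e6))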

Back to your problem:

If you are open to changing your image acquisition procedure (you mention that you already tried "the streaming method"), I would suggest using asynchronous image acquisition and triggering images via a software trigger. This way you can still control at which point in your code an image is recorded, but the overhead that comes with synchronous acquisition is eliminated. And since you are only receiving frames very slowly (depending on how often you run the software trigger), the CPU load should also be far lower than what you were experiencing. Here is a small code snippet that shows how to use asynchronous image acquisition together with software triggering to get you started. If you have further questions or need help integrating this into your existing code, feel free to let me know.

import vimba
import time

def frame_callback(cam: vimba.Camera, frame: vimba.Frame):
    # Called every time a new frame is received. Perform image analysis here
    print('{} acquired {}'.format(cam, frame), flush=True)
    # Hand used frame back to Vimba so it can store the next image in this memory
    cam.queue_frame(frame)

def setup_software_triggering(cam: vimba.Camera):
    # Always set the selector first so that following features are applied correctly!
    cam.TriggerSelector.set('FrameStart')

    # Optional in this example but good practice as it might be needed for hardware triggering
    cam.TriggerActivation.set('RisingEdge')

    # Make camera listen to Software trigger
    cam.TriggerSource.set('Software')
    cam.TriggerMode.set('On')

with vimba.Vimba.get_instance() as vmb:
    cams = vmb.get_all_cameras()
    with cams[0] as cam:
        setup_software_triggering(cam)
        try:
            # Tell the camera to start streaming. The first frame will be recorded when the software
            # trigger is executed
            cam.start_streaming(handler=frame_callback)
            # example loop to record 10 images and then stop.
            for _ in range(10):
                cam.TriggerSoftware.run()
                time.sleep(2)
        finally:
            cam.stop_streaming()
        print("done")
ColbearChan commented 3 years ago

I am really grateful for your detailed reply and I have learned a lot from it.

I have changed my code as suggested, and so far there is no error popping up.

I will update the result after observing for a couple more days.

Thank you :).

NiklasKroeger-AlliedVision commented 3 years ago

That is great to hear!

I will leave it up to you to close this ticket once you have let it run for some time and are sure the problem is solved.

frischwood commented 3 years ago

I have a question related to the last example posted above: is there a nice and easy way (preferably built into Vimba) to stack the 10 frames (and optionally many more) from the software triggering loop and save them as a batch afterwards, for example with cv2.imwritemulti()? And if not, how can I make sure each frame gets written before the next software trigger happens?

I ask this because I'm having problems saving the images with cv2.imwrite() inside frame_callback(). The triggering goes faster than the writing, so in the end several images are missing on disk.

Just to give you an idea it looks as follows:

def take_pic(num_pic=60):

    with Vimba.get_instance() as vimba:
        with vimba.get_all_cameras()[0] as cam:            
            try:
                cam.start_streaming(handler=frame_callback)

                for _ in range(num_pic):
                    cam.TriggerSoftware.run()
                    time.sleep(1)

            finally:
                cam.stop_streaming()

def frame_callback(cam: vimba.Camera, frame: vimba.Frame):

    # Called every time a new frame is received. Perform image analysis here
    frame = do_the_necessary_transformations(frame)
    cv2.imwrite(some_filename, frame.as_opencv_image())

    # Hand used frame back to Vimba so it can store the next image in this memory
    cam.queue_frame(frame)

Also I'd prefer to remove the 1 sec. delay after each triggering. Many thanks for the material already shared. It helped a lot!!
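
For future readers, one common pattern for this is to hand the image off to a background writer thread so the frame callback returns quickly (a sketch, not an official Vimba facility; the filename scheme is made up):

import queue
import threading

import cv2
import vimba

write_queue = queue.Queue()

def writer_worker():
    # Runs in the background: take (filename, image) pairs off the queue and write them to disk
    while True:
        item = write_queue.get()
        if item is None:  # sentinel to shut the worker down
            break
        filename, image = item
        cv2.imwrite(filename, image)
        write_queue.task_done()

threading.Thread(target=writer_worker, daemon=True).start()

def frame_callback(cam: vimba.Camera, frame: vimba.Frame):
    # Copy the pixel data so the buffer can go straight back to Vimba, then enqueue the write
    write_queue.put(('frame_{}.png'.format(frame.get_id()), frame.as_opencv_image().copy()))
    cam.queue_frame(frame)

After the triggering loop, write_queue.join() can be used to make sure everything has been flushed to disk before the program exits.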

NiklasKroeger-AlliedVision commented 3 years ago

@frischwood this seems like a topic for a new issue. To keep things on topic here I will copy your question and create one where I will respond to it.

ColbearChan commented 3 years ago

Hi @NiklasKroeger-AlliedVision ,

Here is an update to my previous problem.

After running the program over the past couple of days, error code -19 still appears and forces my program to terminate.

My program needs to run from 9am to 5pm, and sometimes it was terminated by the following error (at random times during those hours, 1 or 2 times per day):

07-22-2021 16:23:45.977 [E] cRuntime - failed to parse document (Failed to read Element name)
07-22-2021 16:23:45.993 [T] cCameraActor - failed to setup features
Traceback (most recent call last):

  File "/home/blurred.py", line 348, in "blurred"
    with cams[0] as cam:
  File "/home/blurred/Vision/vimba/util/tracer.py", line 134, in wrapper
    return func(*args, **kwargs)
  File "/home/blurred/Vision/vimba/camera.py", line 359, in __enter__
    self._open()
  File "/home/blurred/Vision/vimba/util/tracer.py", line 134, in wrapper
    return func(*args, **kwargs)
  File "/home/blurred/Vision/vimba/util/context_decorator.py", line 44, in wrapper
    return func(*args, **kwargs)
  File "/home/blurred/Vision/vimba/camera.py", line 909, in _open
    self.__feats = discover_features(self.__handle)
  File "/home/blurred/Vision/vimba/util/tracer.py", line 134, in wrapper
    return func(*args, **kwargs)
  File "/home/blurred/Vision/vimba/feature.py", line 1242, in discover_features
    call_vimba_c('VmbFeaturesList', handle, None, 0, byref(feats_count), sizeof(VmbFeatureInfo))
  File "/home/blurred/Vision/vimba/util/tracer.py", line 134, in wrapper
    return func(*args, **kwargs)
  File "/home/blurred/Vision/vimba/c_binding/vimba_c.py", line 753, in call_vimba_c
    getattr(_lib_instance, func_name)(*args)
  File "/home/blurred/Vision/vimba/c_binding/vimba_c.py", line 671, in _eval_vmberror
    raise VimbaCError(result)
vimba.c_binding.vimba_common.VimbaCError: VimbaCError(<VmbError.Incomplete: -19>)

I think this issue is caused by opening the camera too often: the program tries to open the camera again while the camera is not yet ready to be opened. My program has a big loop with three main stages. Each stage needs to perform image analysis, and for that the program opens the camera.

Here is a simple illustration of my program flow.

(program flow diagram attached)

I tried a workaround, which is to open the camera and keep streaming in an independent thread; the camera thread updates frames into a global buffer from which the main thread can grab a frame. However, this method consumed too much CPU.

Are the frequent calls to open the camera the cause of this error?

Thanks in advance.

NiklasKroeger-AlliedVision commented 3 years ago

It is generally a good idea to not open and close the camera connection too often. In VimbaPython every time the camera connection is opened (if no other connections from the same process already exist), the list of all available features is enumerated to allow access to them via Python properties (e.g. cam.ExposureTime).

You may be able to avoid this by simply keeping the camera connection open in the main thread of your program. It is safe to enter the with context of the Vimba object and Camera objects multiple times, and as long as one connection remains open, the mentioned overhead should not occur. I do not know the structure of your code, but as a simple workaround the following might work if you can find an appropriate entry point. If you could provide a rough outline of how your code is structured, I might be able to help you better. One problem that remains with this workaround is that you still start and stop streaming very often, and that incurs a different kind of overhead (frame allocation, announcing, queueing etc.):

# Main function that starts your processing and kicks of the separate stages
def main():
    with vimba.Vimba.get_instance() as vmb:
        with vmb.get_all_cameras()[0] as cam:
            # Perform your processing here:
            # Each of these may enter the camera context again, that should not be a problem.
            run_stage_1()
            run_stage_2()
            run_stage_3()
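
For illustration, a stage function could then re-enter those contexts cheaply, because the connection opened in main() is still active (a sketch; do_stage_1_work() is a placeholder for your image analysis):

def run_stage_1():
    # Re-entering the with contexts is fine here: the outer connection from main() is still
    # open, so the feature enumeration overhead does not occur again
    with vimba.Vimba.get_instance() as vmb:
        with vmb.get_all_cameras()[0] as cam:
            frame = cam.get_frame()
            do_stage_1_work(frame.as_opencv_image())  # placeholder for your analysis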

That being said, I personally would prefer something slightly more complex. Since you need to perform different processing tasks depending on the stage, a state-machine-like frame handler might be an approach worth considering. This has the benefit that you do not need to start and stop streaming as often, because the camera can simply remain in streaming mode; pictures are only transferred when the software trigger is run. I believe this should be the most efficient way to approach your problem. Here is a runnable example of how I would attempt this:

import enum
import time

import vimba

class Stage(enum.Enum):
    STAGE1 = 1
    STAGE2 = 2
    STAGE3 = 3

class FrameHandler:
    def __init__(self):
        self._stage = Stage.STAGE1
        # Register which function should be called depending on the current stage
        self._processing_functions = {
            Stage.STAGE1: self._stage_1_processing,
            Stage.STAGE2: self._stage_2_processing,
            Stage.STAGE3: self._stage_3_processing
        }

    def set_stage(self, stage: Stage):
        self._stage = stage

    def _stage_1_processing(self, cam: vimba.Camera, frame: vimba.Frame):
        print('STAGE1: {} acquired {}'.format(cam, frame), flush=True)

    def _stage_2_processing(self, cam: vimba.Camera, frame: vimba.Frame):
        print('STAGE2: {} acquired {}'.format(cam, frame), flush=True)

    def _stage_3_processing(self, cam: vimba.Camera, frame: vimba.Frame):
        print('STAGE3: {} acquired {}'.format(cam, frame), flush=True)

    def frame_callback(self, cam: vimba.Camera, frame: vimba.Frame):
        # Called every time a new frame is received. Perform image analysis here
        self._processing_functions[self._stage](cam, frame)
        # Hand used frame back to Vimba so it can store the next image in this memory
        cam.queue_frame(frame)

def setup_software_triggering(cam: vimba.Camera):
    # Always set the selector first so that following features are applied correctly!
    cam.TriggerSelector.set('FrameStart')

    # Optional in this example but good practice as it might be needed for hardware triggering
    cam.TriggerActivation.set('RisingEdge')

    # Make camera listen to Software trigger
    cam.TriggerSource.set('Software')
    cam.TriggerMode.set('On')

def main():
    with vimba.Vimba.get_instance() as vmb:
        with vmb.get_all_cameras()[0] as cam:
            # By default the handler will assume you want to work with Stage1
            handler = FrameHandler()
            # optionally set the stage explicitly
            handler.set_stage(Stage.STAGE1)

            setup_software_triggering(cam)
            try:
                # Tell the camera to start streaming. The first frame will be recorded when the
                # software trigger is executed
                cam.start_streaming(handler=handler.frame_callback)
                # example loop to record 10 images and then stop.
                for _ in range(10):
                    cam.TriggerSoftware.run()
                    time.sleep(0.5)
                # Now we can start working on stage 2
                print("Stage 1 is done.... Moving on to Stage 2")
                handler.set_stage(Stage.STAGE2)
                for _ in range(10):
                    cam.TriggerSoftware.run()
                    time.sleep(0.5)
                # And finally stage 3
                print("Stage 2 is done.... Moving on to Stage 3")
                handler.set_stage(Stage.STAGE3)
                for _ in range(10):
                    cam.TriggerSoftware.run()
                    time.sleep(0.5)
            finally:
                cam.stop_streaming()
            print("All stages done")

if __name__ == "__main__":
    main()

As you can see, the camera's start_streaming and stop_streaming methods are only called once, but the image processing that is run can be altered by setting the "state" of the FrameHandler object. Also, there is only a single place in the code where the camera connection is opened, saving us the overhead of feature enumeration I mentioned above. The trick here is that in the actual registered frame_callback function, which is called for every transferred frame, I perform another function call. Which function is called is determined by looking it up in a dictionary of functions, each associated with a Stage entry. So by checking which stage the FrameHandler is currently in (self._stage) and taking the relevant processing function for that stage from the dictionary, the performed processing can be changed. Registration of the processing functions to their corresponding Stage is done in the FrameHandler.__init__ constructor. If you want more details on this, feel free to ask!

The code should output something along the lines of:

STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=0, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=1, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=2, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=3, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=4, status=FrameStatus.Complete, buffer=0x1cbfb846040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=5, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=6, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=7, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=8, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE1: Camera(id=DEV_1AB22C00041C) acquired Frame(id=9, status=FrameStatus.Complete, buffer=0x1cbfb846040)
Stage 1 is done....
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=10, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=11, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=12, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=13, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=14, status=FrameStatus.Complete, buffer=0x1cbfb846040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=15, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=16, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=17, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=18, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE2: Camera(id=DEV_1AB22C00041C) acquired Frame(id=19, status=FrameStatus.Complete, buffer=0x1cbfb846040)
Stage 2 is done....
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=20, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=21, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=22, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=23, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=24, status=FrameStatus.Complete, buffer=0x1cbfb846040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=25, status=FrameStatus.Complete, buffer=0x1cbfa4ce040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=26, status=FrameStatus.Complete, buffer=0x1cbfa9a4040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=27, status=FrameStatus.Complete, buffer=0x1cbfae8b040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=28, status=FrameStatus.Complete, buffer=0x1cbfb361040)
STAGE3: Camera(id=DEV_1AB22C00041C) acquired Frame(id=29, status=FrameStatus.Complete, buffer=0x1cbfb846040)
All stages done

This turned into a wall of text again but I hope it is helpful. Let me know if either approach works for you.

ColbearChan commented 3 years ago

@NiklasKroeger-AlliedVision Thanks, I have implemented my own version inspired by your suggestions, and it works. The key takeaway for me is to open the camera in trigger mode only once at the beginning of the program, trigger the camera for frames as needed, and close it at the end. My program has now been tested for weeks, and error -19 no longer shows up. I am closing the issue with this comment.

Again, thanks for your patience and help.