**chemicalimage** opened this issue 2 years ago
Quick question: what do you want to achieve?
A sequence of images with different exposure times?
This piece of code is coupled to a command processor which says "short" or "long". The usage may be:
Case 3 is the problematic use case. As an example, consider a security camera with an AI that decides when to use an illuminator while creating video. Ideally, the frame rate stays constant no matter which imaging mode is used.
> Quick question: what do you want to achieve?

4 fps.

> A sequence of images with different exposure times?

Yes.
Some first comments: your take_snap routine reinitializes the grab engine on every call. This can take a long, and variable, amount of time.
You never really activate the software trigger.
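For reference, arming the software trigger on a Basler camera means selecting the FrameStart trigger, switching it on, and setting its source to software; each frame is then fired with `cam.ExecuteSoftwareTrigger()`. A minimal sketch of those feature writes follows; the `_FakeCam` stand-in is hypothetical, only so the snippet runs without a camera attached (with real hardware you would pass the opened `InstantCamera` instead):

```python
# Sketch of the feature writes that arm software triggering in pypylon.
# Node names follow the GenICam SFNC used by Basler cameras; with real
# hardware you would pass an opened pylon.InstantCamera and fire each
# frame with cam.ExecuteSoftwareTrigger().
def enable_software_trigger(cam):
    cam.TriggerSelector.SetValue('FrameStart')  # trigger every frame start
    cam.TriggerMode.SetValue('On')              # switch out of free-run mode
    cam.TriggerSource.SetValue('Software')      # fire via ExecuteSoftwareTrigger


# Hypothetical stand-in mimicking pypylon's node interface, purely so
# this sketch runs without a camera attached:
class _Node:
    def __init__(self):
        self.value = None

    def SetValue(self, v):
        self.value = v


class _FakeCam:
    def __init__(self):
        self.TriggerSelector = _Node()
        self.TriggerMode = _Node()
        self.TriggerSource = _Node()


cam = _FakeCam()
enable_software_trigger(cam)
print(cam.TriggerSource.value)  # Software
```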
Do I understand correctly that you want to average all images in one run, as fast as possible, but with a given exposure time?
Still the question: is your original task to average the images that you capture at a given exposure time?
because `img = (img / stacking_count) * 16` will create an integer mean of all captured images, shifted left by 4 bits and represented as float, so the lower 4 bits will always be 0.
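To illustrate that bit-depth point with made-up pixel values: an integer mean followed by the `* 16` shift always lands on a multiple of 16 (lower 4 bits zero), while a float mean preserves the fractional part.

```python
import numpy as np

# Hypothetical single pixel observed over 4 stacked frames:
frames = np.array([100, 101, 102, 103], dtype=np.uint16)
acc = frames.sum(dtype=np.uint32)  # 406, accumulated without overflow

int_mean_scaled = (acc // len(frames)) * 16  # integer mean, shifted by 4 bits
float_mean = acc / len(frames)               # true mean as float

print(int_mean_scaled)  # 1616 -> a multiple of 16, lower 4 bits are 0
print(float_mean)       # 101.5 -> fractional part preserved
```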
Your original code captured into pylon-internal buffers and read them out after all capturing was done. Did you do this because of a performance issue?
The rewrite below is an image-stacking implementation using direct retrieval of image buffers and the highest-performance pypylon constructs.
If you want a given frame rate, use the AcquisitionFrameRate feature.

One comment on your specific camera model: in triggered mode it has a very long ExposureStartDelay of 21.5 ms (as documented in https://docs.baslerweb.com/acquisition-timing-information?filter=Camera:a2A3840-45umBAS). This means that in triggered mode (including software trigger) the exposure starts 21.5 ms after you call TriggerSoftware, which should explain the timing you observed. An image will take ExposureStartDelay + max(ExposureTime, ReadoutTime, TransmissionTime) to be received.
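That timing rule can be sketched as a small helper. The 21.5 ms ExposureStartDelay is taken from the linked Basler documentation; the readout and transmission times below are hypothetical placeholders, not measured values for this camera:

```python
# ExposureStartDelay for the a2A3840-45um in triggered mode, per the
# Basler acquisition-timing documentation linked above:
EXPOSURE_START_DELAY_US = 21_500


def expected_frame_time_us(exposure_us, readout_us, transmission_us):
    """Approximate time from TriggerSoftware until the image is received."""
    return EXPOSURE_START_DELAY_US + max(exposure_us, readout_us, transmission_us)


# e.g. a 4 ms exposure with a hypothetical 22 ms readout and 10 ms transmission:
print(expected_frame_time_us(4_000, 22_000, 10_000))    # 43500
# a 100 ms exposure dominates readout and transmission:
print(expected_frame_time_us(100_000, 22_000, 10_000))  # 121500
```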
```python
import time

import numpy as np
from pypylon import pylon as py


class StackCapture(object):
    def __init__(self):
        # open and configure camera
        tlf = py.TlFactory.GetInstance()
        self.cam = py.InstantCamera(tlf.CreateFirstDevice())
        self.cam.Open()  # camera settings can be modified outside of acquisition
        self.cam.StreamGrabber.MaxTransferSize = 4194304
        self.cam.PixelFormat.SetValue('Mono12')  # 12 bit, lsb aligned

    def __del__(self):
        self.cam.Close()

    def take_snap(self, snap_state):
        t0 = time.perf_counter()
        cam_settings = snap_state[snap_state['mode']]
        self.cam.ExposureTime.SetValue(cam_settings['exposure'])
        # ... and all other settings that come from the state dict
        print(f"set {snap_state['mode']} - exposure {cam_settings['exposure']} us")

        # pre-allocate target array outside of the time-critical grab loop
        img = np.zeros((self.cam.Height(), self.cam.Width()), dtype=np.uint16)

        # automatically stop grabbing after cam_settings['stacking'] frames
        self.cam.StartGrabbingMax(cam_settings['stacking'], py.GrabStrategy_OneByOne)

        # the context manager of RetrieveResult releases the result at the end of the block
        img_count = 0
        while self.cam.IsGrabbing():
            with self.cam.RetrieveResult(5000, py.TimeoutHandling_ThrowException) as res:
                if res.GrabSucceeded():
                    # don't create a python object copy of the pixel data;
                    # zero copy lets us accumulate it directly into numpy
                    with res.GetArrayZeroCopy() as img_zc:
                        img += img_zc
                        img_count += 1
                else:
                    print("GRAB of a frame FAILED!!")

        # create the average of all captured frames
        img_avg = (img / img_count).astype(float)
        print(f"duration - {snap_state['mode']} - {(time.perf_counter() - t0) * 1e3}ms")
        return snap_state['path'], img_avg


if __name__ == '__main__':
    state = dict()
    state['path'] = '/media/image/'
    state['short'] = {'exposure': 4000, 'stacking': 16}
    state['long'] = {'exposure': 100000, 'stacking': 16}

    # set up the image grabber
    capture = StackCapture()

    state['mode'] = 'long'
    for ndx in range(100):
        capture.take_snap(state)

    state['mode'] = 'short'
    for ndx in range(100):
        capture.take_snap(state)

    for ndx in range(100):
        state['mode'] = 'long'
        capture.take_snap(state)
        state['mode'] = 'short'
        capture.take_snap(state)
```
I am noticing significant delays when changing integration times on the a2A3840 camera I am using, and I am not sure of the cause. Is it my inefficient code or something deeper? I would like to reduce the delay if possible: 300,000 microseconds is a long time relative to a 4,000 or even a 100,000 microsecond image.
These are the results of the code. Notes: I am using software triggering in case I decide to use image stacking; the software triggering requires that I sleep until the end of the integration time plus a small delay. The camera produces 12-bit monochrome images, which I place in a 16-bit array. The "state" dictionary is complex; I have exposed only the relevant parts.
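On the 12-bit-in-16-bit point: with Mono12 lsb-aligned (as set in the code above), the samples occupy only the low 12 bits of each uint16, so images look very dark in viewers that expect full 16-bit scale. A common fix is a 4-bit left shift; the helper name below is my own, not a pypylon API:

```python
import numpy as np


# Hypothetical helper: expand 12-bit lsb-aligned pixels (0..4095) to the
# full 16-bit range by shifting left 4 bits (4095 -> 65520).
def mono12_to_full16(img12):
    img12 = np.asarray(img12, dtype=np.uint16)
    return img12 << 4


px = np.array([0, 1, 4095], dtype=np.uint16)
print(mono12_to_full16(px).tolist())  # [0, 16, 65520]
```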