Closed: mahwhat closed this issue 4 years ago.
Hi and sorry for the long wait.
Unfortunately, I do not have the hardware handy to try out triggering via the physical I/O lines, so I can't test whether I experience the same problem you are having with your code. But perhaps a different example I have that uses software triggers can help here. In this gist I modified the `Asynchronous_grab.py` example to record an image only every two seconds by using a software trigger that is called from a separate thread. Perhaps it can help you as well.
Looking through your code I also noticed that in your setupTrigger function you do not change the `AcquisitionMode` feature. By default I believe this is set to `Continuous`. This means that once your sensor sends a rising edge, your camera will start recording images as fast as it can until you tell it to stop. If you only want a single frame every time your sensor sends a rising edge, you have to set this to `SingleFrame` and "rearm" the camera after every frame. This is done in the linked gist above in the `SoftwareTriggerThread.run` method (line 138), but I believe you should be able to do this in your `frame_handler` function after you queue the frame again.
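To illustrate the rearm pattern without camera hardware, here is a minimal sketch using a hypothetical `FakeCamera` stand-in (not a VimbaPython class). With a real camera you would instead run the `AcquisitionStart` command feature inside the handler; the control flow is the same:

```python
import threading

# Hypothetical stand-in for a camera, so the SingleFrame rearm pattern
# can run without hardware. With VimbaPython you would call the
# AcquisitionStart command feature inside the handler instead.
class FakeCamera:
    def __init__(self):
        self.armed = threading.Event()
        self.frames_taken = 0

    def run_acquisition_start(self):
        # In SingleFrame mode, each AcquisitionStart arms the camera
        # for exactly one triggered frame.
        self.armed.set()

    def trigger(self):
        # Simulates a rising edge on the trigger line.
        if not self.armed.is_set():
            return None          # not armed: trigger is ignored
        self.armed.clear()       # SingleFrame: disarms after one frame
        self.frames_taken += 1
        return b'frame-%d' % self.frames_taken

def frame_handler(cam, frame):
    # ... hand the image data to the processing stage here ...
    # then requeue the buffer and rearm for the next trigger:
    cam.run_acquisition_start()

cam = FakeCamera()
cam.run_acquisition_start()      # arm for the first trigger
for _ in range(3):
    frame = cam.trigger()
    if frame is not None:
        frame_handler(cam, frame)

print(cam.frames_taken)          # -> 3: each trigger produced one frame
```

The key point is simply that in `SingleFrame` mode the camera disarms itself after every frame, so the handler must rearm it before the next rising edge.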
I would also suggest that you try using a `Queue` object from the `queue` module to transfer your image data. These are thread safe, which might be important, as your `frame_handler` function is a callback that is called from our C layer.
Lastly, I saw that you are only passing a single buffer to `cam.start_streaming`. Is there a particular reason for this? Generally it is advisable to have more buffers available in the frame queue so that new data can be stored and you do not lose images.
> Another doubt is about clear functions: how can I clear the buffer of the frame?
You should not have to manually clear the buffer of a handled frame. Once you queue it again to be filled with new image data, the buffer is considered available to be overwritten; the old image data is simply replaced by data from the new image. Manually clearing the buffer by writing zeros to it would only add unnecessary overhead and slow down image acquisition.
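The reuse idea can be shown with a plain `bytearray` standing in for a frame buffer (a simulation, not camera code): the next image simply overwrites the previous contents, so zeroing in between is wasted work.

```python
# Simulates a reused frame buffer: new image data overwrites the old
# data in place, so zeroing the buffer between frames is unnecessary.
buf = bytearray(8)                 # one reusable frame buffer

def fill(buffer, value):
    # Stand-in for the driver writing new image data into a queued buffer.
    for i in range(len(buffer)):
        buffer[i] = value

fill(buf, 1)                       # first frame arrives
first = bytes(buf)
fill(buf, 2)                       # buffer requeued; second frame overwrites it
second = bytes(buf)

print(first)    # b'\x01\x01\x01\x01\x01\x01\x01\x01'
print(second)   # b'\x02\x02\x02\x02\x02\x02\x02\x02'
```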
I hope these comments help. If you have further questions feel free to ask them here. If this solves your problem you can just close this issue.
Best regards, -Niklas.
Hi Niklas,
First, thanks for helping me! I don't know why, but using the OpenCV window I had this problem with delayed images, so to try a different solution I created a GUI with PyQt and it worked perfectly. I didn't know how to rearm the camera, so I was using `AcquisitionMode` set to `Continuous` with a single buffer; following your advice, I set `AcquisitionMode` to `SingleFrame`.
I am studying the best way (maybe a queue or a callback) to send the images acquired in `frame_handler` to the processing stage.
Thanks again for helping me! Matheus
I'm using a sensor to trigger the camera and acquire a new frame, but when I activate the sensor the first time, the image doesn't appear in the OpenCV window. The second time, the image that appears in the window is the one corresponding to the first trigger, so the displayed image is always delayed by one frame.
This is the code that I'm using:
Another doubt is about clear functions: how can I clear the buffer of the frame?