Closed KJ-Chiu closed 1 year ago
I am not sure what you are doing, so you may send me your Python script. However, you do not need an extra thread per grabber, because each grabber creates its own thread.
Stefan
I want to build a multi-angle slow-motion replay system for sports. Because balls and athletes move very fast, every millisecond of difference can shift an object 2 to 5 centimeters in the picture. That makes the slow-motion replay look weird and incorrect.
Therefore, I started researching precise timing between cameras, and PTP is a possible solution I found. I read the manual of the DFK 33GX287 and it seems to implement PTP, but I have no idea how to use that technology.
The following script is what I use for camera capture; the commercial part is large and irrelevant to this issue, so I removed it.
P.S. the config path and XML file need to be changed before executing this script.
import ctypes
from datetime import datetime
import os
import queue
import sys
import threading
from time import sleep

import cv2
import numpy as np

sys.path.append(os.path.join(os.path.dirname(__file__), "..", "config"))
from const import TIS_DIR, TIS_DLL

sys.path.append(os.path.join(os.path.dirname(__file__), "..", TIS_DIR))
import tisgrabber as tis

ic = ctypes.cdll.LoadLibrary(
    os.path.join(os.path.dirname(__file__), "..", TIS_DIR, TIS_DLL)
)
tis.declareFunctions(ic)
class CameraThread(threading.Thread):
    def __init__(
        self,
        ThreadStopEvent,
        deviceName,
        grabber,
        shotQueue,
    ):
        threading.Thread.__init__(self)
        self.ThreadStopEvent = ThreadStopEvent
        self.deviceName = deviceName
        self.grabber = grabber
        self.shotQueue = shotQueue
        self.frameCallback = ic.FRAMEREADYCALLBACK(FrameCallback(deviceName, shotQueue))
        self.deviceLostCallback = ic.DEVICELOSTCALLBACK(DeviceLostCallback(deviceName))
        self.userData = CallbackUserdata(grabber, self.frameCallback)
        ic.IC_SetCallbacks(
            self.grabber,
            self.frameCallback,
            None,
            self.deviceLostCallback,
            self.userData,
        )
        ic.IC_SetContinuousMode(self.grabber, 0)

    def run(self):
        ic.IC_StartLive(self.grabber, 0)
        print(
            "Starting {} at {}".format(
                self.deviceName, datetime.now().strftime("%H:%M:%S:%f")
            )
        )
        while True:
            if self.ThreadStopEvent.is_set():
                ic.IC_StopLive(self.grabber)
                ic.IC_ReleaseGrabber(self.grabber)
                print("CameraThread Stop {}".format(self.deviceName))
                break
            sleep(1)
class CallbackUserdata(ctypes.Structure):
    def __init__(self, deviceName, camera):
        self.deviceName = deviceName
        self.camera = camera  # reference to a camera/grabber object
def FrameCallback(deviceName, shotQueue):
    def callback(hGrabber, pBuffer, framenumber, pData):
        """
        :param hGrabber: the real pointer to the grabber object; do not use
        :param pBuffer: pointer to the first byte of the first pixel
        :param framenumber: number of the frame since the stream started
        :param pData: pointer to the additional user data structure
        """
        now = datetime.now()
        Width = ctypes.c_long()
        Height = ctypes.c_long()
        BitsPerPixel = ctypes.c_int()
        colorformat = ctypes.c_int()
        ic.IC_GetImageDescription(hGrabber, Width, Height, BitsPerPixel, colorformat)
        bpp = int(BitsPerPixel.value / 8.0)
        buffer_size = Width.value * Height.value * bpp
        if buffer_size <= 0:
            return
        imagedata = ctypes.cast(pBuffer, ctypes.POINTER(ctypes.c_ubyte * buffer_size))
        image = np.ndarray(
            buffer=imagedata.contents,  # type: ignore
            dtype=np.uint8,
            shape=(Height.value, Width.value, bpp),
        )
        image = cv2.flip(image, 0)
        # pass the image to the queue for later processing
        shotQueue.put((now, image, deviceName, framenumber))

    return callback
def DeviceLostCallback(deviceName):
    def callback(hGrabber, pData):
        pass

    return callback
def initCameras(
    ThreadStopEvent: threading.Event,
    leftQueue: queue.Queue,
    rightQueue: queue.Queue,
) -> tuple[bool, CameraThread, CameraThread]:
    ic.IC_InitLibrary(0)
    configPath = os.path.join(os.path.dirname(__file__), "..", "config")

    def initGrabber(deviceName):
        grabber = ic.IC_CreateGrabber()
        ic.IC_LoadDeviceStateFromFile(
            grabber,
            tis.T(os.path.join(configPath, "{}.xml".format(deviceName))),
        )
        return grabber

    leftGrabber = initGrabber("3rd-left")
    rightGrabber = initGrabber("3rd-right")
    if (
        ic.IC_IsDevValid(leftGrabber) is False
        or ic.IC_IsDevValid(rightGrabber) is False
    ):
        ic.IC_ReleaseGrabber(leftGrabber)
        ic.IC_ReleaseGrabber(rightGrabber)
        return False, None, None  # type: ignore

    threadLeft = CameraThread(
        ThreadStopEvent,
        "3rd-left",
        leftGrabber,
        leftQueue,
    )
    threadRight = CameraThread(
        ThreadStopEvent,
        "3rd-right",
        rightGrabber,
        rightQueue,
    )
    return True, threadLeft, threadRight
class SyncController(threading.Thread):
    def __init__(self, ThreadStopEvent, leftQueue, rightQueue):
        threading.Thread.__init__(self)
        self.ThreadStopEvent = ThreadStopEvent
        self.leftQueue = leftQueue
        self.rightQueue = rightQueue

    def run(self):
        while self.ThreadStopEvent.is_set() is False:
            for shotQueue in [self.leftQueue, self.rightQueue]:
                if shotQueue.qsize() > 0:
                    shot = shotQueue.get()
                    cv2.imwrite(
                        "./{}_{}_{}.jpg".format(
                            shot[0].strftime("%H%M%S"),
                            shot[0].microsecond,
                            shot[2],
                        ),
                        shot[1],
                    )
            sleep(1 / 3000)
def main():
    threadStopEvent = threading.Event()
    leftQueue = queue.Queue()
    rightQueue = queue.Queue()
    success, threadLeft, threadRight = initCameras(
        threadStopEvent, leftQueue, rightQueue
    )
    if success is False:
        print("Failed to init cameras")
        return
    threadLeft.start()
    threadRight.start()
    # I removed my commercial logic here.
    # Basically it handles the queue, skipping most frames but saving frame
    # sequences on purpose, so the user can replay multiple cameras' frames
    # in slow motion at the same time.
    syncController = SyncController(threadStopEvent, leftQueue, rightQueue)
    syncController.start()
    sleep(3)
    threadStopEvent.set()
    threadLeft.join()
    threadRight.join()
    syncController.join()


main()
Using this code, I sometimes get multi-angle frames with almost identical timing, but at other times the frames simply do not match, as in the sample below.
![Sample Image](https://i.imgur.com/1q166uL.jpeg)
I am not sure whether pursuing PTP is useful or not, but making multiple cameras' shutters fire at the same time is what I hope to achieve.
Hello
The usage of PTP has no effect on image capture if the cameras are free-running. PTP simply synchronizes the clocks of the cameras, nothing more. At least it does nothing more on Windows.
If you want the cameras synchronised, then they must be triggered. The most exact synchronisation is achieved with an external trigger source or frequency generator connected to all cameras in parallel. It is also necessary to set the property "IMX_LowLatencyMode" to true, because then the cameras react within the specified trigger delay time. If this is turned off, the cameras start to expose when they are ready, because exposure and frame delivery run overlapped. This has the following consequences:
IMX_LowLatencyMode = false:
IMX_LowLatencyMode = true:
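In the tisgrabber-based script above, this trigger configuration can presumably be set through the property-switch API. A minimal sketch; the element name "IMX Low-Latency Mode" is an assumption taken from the device-state XML quoted later in this thread and may differ between firmware versions:

```python
def enable_synchronised_trigger(ic, grabber, T):
    # Enable trigger mode so the sensor exposes only on a trigger, and
    # switch on IMX Low-Latency Mode so exposure starts within the
    # specified trigger delay instead of overlapping with readout.
    # `ic` is the loaded tisgrabber DLL, `T` is tisgrabber's tis.T helper.
    # The element name "IMX Low-Latency Mode" is an assumption.
    ic.IC_SetPropertySwitch(grabber, T("Trigger"), T("Enable"), 1)
    ic.IC_SetPropertySwitch(grabber, T("Trigger"), T("IMX Low-Latency Mode"), 1)
```

Loading a device-state XML with these properties already enabled (as the script does) should have the same effect.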
Instead of using a frequency generator, a software trigger can be broadcast via a so-called Action Command. I tested that with IC Imaging Control .NET in C#. The spread of the DeviceFrameTimes provided with the FrameMetaData of the received frames is very small: 70 ns to 128 ns (no typo, that is nanoseconds). The exposure time was set to 1/100000 second.
As you can see, trigger mode gives far better synchronisation.
Now we should discuss how to proceed.
Stefan
How do I get FrameMetaData and DeviceFrameTimes in the Python FRAMEREADYCALLBACK? I tried using the framenumber in FRAMEREADYCALLBACK, but even though I trigger both cameras in the same function, I cannot pair the frames by their frame numbers. It looks like not every trigger makes the device return a frame.
class frequencyGenerator(threading.Thread):
    def __init__(self, ThreadStopEvent):
        threading.Thread.__init__(self)
        self.ThreadStopEvent = ThreadStopEvent

    def run(self):
        global activeCameras
        while self.ThreadStopEvent.is_set() is False:
            for activeCamera in activeCameras:
                activeCamera.trigger()
            sleep(1 / FRAME_RATE)
I am very sorry, but tisgrabber.dll does not support the frame meta data. You can get it if you use Python.NET and IC Imaging Control in Python: https://github.com/TheImagingSource/IC-Imaging-Control-Samples/tree/master/Python/Python%20NET. Is that an option for you?
Stefan
Since I am really not familiar with C# lol, it might take me some time to understand the C# samples and translate them into a Python version. But it is definitely worth a try! My environment meets the requirements, and it is a good chance to reach my goal. I will work on it and report back with my code if there is any progress. Thanks for the help~
Please let me know when you a) have results or b) need help with the implementation.
After you have results, please come back to me, because I suppose I will need to write a DLL that implements the Action Command code for triggering by broadcast.
Stefan
Hi Stefan,
I found that there seems to be no broadcast function in Python.NET. Is there any suggestion for how I can send the action command?
Currently I am also trying to use Software Trigger with Burst Count and IMX_LowLatencyMode, then trigger both cameras immediately.
But with the settings below, the callback timing is still not very accurate (maybe still because it is not a scheduled broadcast action command):
<fps>300</fps>
<vcdpropertyitems>
<item guid="{90D57031-E43B-4366-AAEB-7A7A10B448B4}" name="Trigger">
<element guid="{B57D3000-0AC6-4819-A609-272A33140ACA}" name="Enable">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E04}" value="1" />
</element>
<element guid="{B89A9D2C-51FD-4C2D-80DF-89B642781B7E}" name="Noise Suppression Time">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E08}" value="0." />
</element>
<element guid="{6A9D1F4E-B0AB-4472-9BB3-C6448A1D49DA}" name="Debounce Time">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E08}" value="0." />
</element>
<element guid="{B4109964-77E4-4AF3-ACA8-45BBAA861B5C}" name="Burst Count">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E03}" value="300" />
</element>
<element guid="{6519038D-1AD8-4E91-9021-66D64090CC85}" name="Polarity">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E04}" value="1" />
</element>
<element guid="{9CF42696-7C51-4BFE-8D83-296D729C42A2}" name="Mask Time">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E08}" value="0." />
</element>
<element guid="{A9715AB3-69AE-454D-8DF5-7E06D87C109C}" name="Burst Interval">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E03}" value="100" />
</element>
<element guid="{C337CFB8-EA08-4E69-A655-586937B6AFEC}" name="Delay">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E08}" value="3.10000000000000008881784197001" />
</element>
<element guid="{B6E013CA-76C7-4DDD-9AC8-A17E07C5E3F1}" name="Exposure Mode">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E06}" value="0" />
</element>
<element guid="{BE0B5BDC-0B1D-4F75-B512-4F17A047671E}" name="IMX Low-Latency Mode">
<itf guid="{99B44940-BFE1-4083-ADA1-BE703F4B8E04}" value="1" />
</element>
</item>
</vcdpropertyitems>
The above settings gave me 150 frames in two seconds from one trigger; shouldn't it be 300 frames in two seconds, or 150 frames in one second?
Hello
I found that there seems to be no broadcast function in Python.NET. Is there any suggestion for how I can send the action command?
That is correct. The suggestion is to ask me to write a suitable DLL for you to use for this task.
The software trigger push() command sends the command immediately; the camera starts to expose and sends an answer. push() waits for this answer, and that wait can last up to 4 ms. If both push() calls are made in separate threads, then the wait time does not matter.
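A minimal sketch of that threading pattern; camera objects with a blocking trigger()/push() method are assumed:

```python
import threading


def trigger_in_parallel(cameras):
    # Fire each camera's software trigger from its own thread so the
    # up-to-4 ms wait for each camera's answer overlaps instead of
    # adding up camera by camera.
    threads = [threading.Thread(target=cam.trigger) for cam in cameras]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The triggers still do not start at exactly the same instant (thread start jitter remains), which is why the broadcast Action Command is the cleaner solution.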
You should receive 300 frames. Is it possible you got frame drops? You may try the same with IC Capture 2.5; it shows the number of dropped frames.
Also make sure that the bandwidth of your USB bus is not exhausted.
Stefan
Hi Stefan, I would be grateful if you could help me with that!
The following is a screenshot from IC Capture 2.5 with the same software trigger settings.
It delivered 150 frames with none dropped. I also tried with IMX Low-Latency Mode off, and then I received 300 frames in about a second.
https://i.imgur.com/N1xw7XQ.png
I have a fix for this 150-frames error. Please make contact at https://www.theimagingsource.com/support/contact/.
The ActionCommand can be sent from Python directly. I have to make a sample for that.
Stefan
Thanks Stefan! IMX Low-Latency Mode works well after the firmware update.
I can now receive frames with precise timing by setting Burst Count and Burst Interval with IMX Low-Latency Mode on.
For example, when I set Burst Interval to 7692 (130 fps), 99% of the DeviceTimeStamp intervals fall between 7691 and 7693. That means at most 1 microsecond of difference; not perfect, but good enough for me for now.
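That spread can be checked with a few lines over the collected DeviceTimeStamp values, for example:

```python
def timestamp_interval_stats(timestamps):
    # Differences between consecutive device timestamps (here in µs);
    # for a programmed Burst Interval of 7692 µs the intervals should
    # cluster tightly around 7692.
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return min(intervals), max(intervals), sum(intervals) / len(intervals)
```

Running this over timestamps from both cameras makes it easy to see whether the jitter stays within the 7691 to 7693 band.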
Is this the expected performance when I use Software Trigger?
For the second issue, the software replacement for the Scheduled Trigger ActionCommand: trigger() takes a few milliseconds to return, so it is not a good idea to trigger the cameras one after another. However, multiple threads in Python still run in the same process, and multiple processes take a lot of resources to communicate and stay coordinated with each other. Therefore, I finally use a kind of cyclic trigger that sacrifices four frames but keeps the program efficient; code below:
benchmarkTime = datetime.now()
nextTriggerTime = perf_counter()
for activeCamera in activeCameras:
    # ic is wrapped in my CameraControl class, which implements trigger()
    activeCamera.trigger(benchmarkTime)
benchmarkTime += timedelta(seconds=DETECTION_FRAME_GAP * 2)
nextTriggerTime += DETECTION_FRAME_GAP * 2
while perf_counter() < nextTriggerTime:
    continue
This solution gives me a nice result: the interval between triggers really is about twice DETECTION_FRAME_GAP. But then another issue comes up: the timing control now depends entirely on the computer. Once my computer is under heavy load, the timing is no longer that precise. I hope to move the timing control from the computer to the cameras by using the Scheduled Trigger ActionCommand, for example scheduling both the master and the slave camera to start triggering after 10 milliseconds with a single command. Is that possible to achieve?
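One way to soften the CPU cost of the pure busy-wait above, without losing much precision, is a hybrid wait: sleep most of the way, then spin for the final stretch. A sketch; the 2 ms margin is an assumption that depends on the OS scheduler's wake-up granularity:

```python
from time import perf_counter, sleep


def wait_until(deadline):
    # Sleep coarsely until ~2 ms before `deadline` (a perf_counter() value),
    # then busy-wait the rest. This keeps CPU usage low between triggers
    # while preserving sub-millisecond precision at the deadline.
    while True:
        remaining = deadline - perf_counter()
        if remaining <= 0:
            return
        if remaining > 0.002:
            sleep(remaining - 0.002)  # leave ~2 ms for the busy-wait
        # otherwise spin until the deadline passes
```

This does not fix the fundamental problem that the host clock drives the timing, which is what the Scheduled Trigger ActionCommand addresses.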
Merry Christmas!
Hello
You can download a little command-line program from trigger.zip.
Unzip the file and start a command line (cmd.exe). Change into the directory where you unzipped the files, then enter: trigger 30
This configures all connected TIS GigE cameras for Action Command triggering and sends the trigger command at 30 fps.
Now start your software and see what happens. Set the "Trigger Burst Count" to 1. You should now receive a stable 30 fps, synchronised across both cameras. You can also try higher frame rates.
Stefan
Hi Stefan,
Thanks for your zip file, it works well! I also wrote another solution in Python, using multiprocessing and an infinite loop.
I assign scheduleAt to 2 seconds after the process starts. With the camera's "Trigger Burst Count" set to 1, all the processes trigger their cameras at the same time and then re-trigger every FRAME_GAP:
import multiprocessing as mp
import os
from datetime import datetime
from time import perf_counter_ns, sleep


def _initCamera(
    deviceName: str, frameQueue: mp.Queue, scheduleAt: int, scheduleAtDatetime: datetime
):
    print("Sub Process _initCamera {}: {}".format(deviceName, os.getpid()))
    _, cameraController = getCam(deviceName, frameQueue)
    cameraController.resume()
    while True:
        sleep(FRAME_GAP / 2)
        while scheduleAt > perf_counter_ns():
            continue
        cameraController.trigger()
        scheduleAt += FRAME_GAP_NANO
    cameraController.stop()
Since the issue has been fixed, I will close it. Thank you, Stefan, for helping so much! As a newcomer to cameras, I really learned a lot here.
Best Regards, KJ Chiu
Is there an example of implementing IEEE 1588 PTP for my multi-camera setup in Python?
Hardware: I have two DFK 33GX287 cameras connected to an ADLINK PCIe-GIE74, which also supports PTP, via CAT6 Ethernet cables.
Software: I use FRAMEREADYCALLBACK at 300 fps and start the two cameras at the same time (two threads, started immediately). With one camera set as the PTP master and the other as the slave, the frame callbacks still differ by about 2 to 3 ms. Did I miss any hardware/software setting? I hope to reduce the difference to less than 100 µs.
Thanks for any help!