Open agosztolai opened 4 years ago
Running the image processing on multiple cores makes sense. You should split your program into one thread that handles all image acquisition, then use multiprocessing to hand the acquired frames to workers on other cores. With this setup you don't have to share the complex InstantCamera object between processes, only ndarrays (images), which are simple to serialize.
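The pattern above can be sketched as follows. This is a minimal, hypothetical example: frame acquisition is simulated with NumPy arrays (in a real program the acquisition thread would pull frames from the pypylon InstantCamera instead), and the worker just computes a mean as a stand-in for saving or processing. Only plain ndarrays cross the process boundary.

```python
import multiprocessing as mp
import threading
import numpy as np

def acquire_frames(frame_queue, n_frames, shape=(480, 640)):
    """Acquisition thread: owns the camera. In a real setup this loop
    would retrieve frames from the pypylon InstantCamera; here each
    frame is simulated as a uint8 array filled with its own index."""
    for i in range(n_frames):
        frame = np.full(shape, i % 256, dtype=np.uint8)  # simulated frame
        frame_queue.put((i, frame))
    frame_queue.put(None)  # sentinel: acquisition finished

def worker(frame_queue, result_queue):
    """Worker process: receives plain ndarrays, so nothing camera-specific
    ever needs to be pickled. Computes the frame mean as a stand-in for
    saving to disk or heavier processing."""
    while True:
        item = frame_queue.get()
        if item is None:
            frame_queue.put(None)  # re-post sentinel for sibling workers
            break
        idx, frame = item
        result_queue.put((idx, float(frame.mean())))

def run_pipeline(n_frames=8, n_workers=2):
    frame_queue = mp.Queue()
    result_queue = mp.Queue()
    workers = [mp.Process(target=worker, args=(frame_queue, result_queue))
               for _ in range(n_workers)]
    for w in workers:
        w.start()
    # One thread owns acquisition; only ndarrays cross process boundaries.
    t = threading.Thread(target=acquire_frames, args=(frame_queue, n_frames))
    t.start()
    t.join()
    results = dict(result_queue.get() for _ in range(n_frames))
    for w in workers:
        w.join()
    return results

if __name__ == "__main__":
    print(run_pipeline())
```

A `multiprocessing.Queue` is the simplest transport; for high frame rates you would likely move to shared memory or ZMQ (as in the next comment) to avoid pickling overhead per frame.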
I currently use PyZMQ to share images from the acquisition process with the worker processes (including savers); you can see the saver here, and the camera streaming code here. It is a great way of leveraging multi-core machines, but you must make sure the workers can keep up; handling overruns with ZMQ is tricky.
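A rough sketch of that approach, under stated assumptions: frames are sent as two-part ZMQ messages (a small JSON header with dtype/shape, plus the raw pixel buffer), and the send high-water mark bounds how many frames can queue up when a worker falls behind. The `zmq` part is guarded so the packing helpers work even without pyzmq installed; the endpoint name and HWM value are illustrative, not from the linked code.

```python
import json
import numpy as np
try:
    import zmq  # pip install pyzmq
    HAVE_ZMQ = True
except ImportError:
    HAVE_ZMQ = False

def pack_frame(frame):
    """Serialize an ndarray into (header_bytes, payload_bytes) for a
    two-part ZMQ message. The header carries dtype and shape as JSON."""
    header = json.dumps({"dtype": str(frame.dtype),
                         "shape": list(frame.shape)}).encode()
    return header, np.ascontiguousarray(frame).tobytes()

def unpack_frame(header, payload):
    """Inverse of pack_frame: rebuild the ndarray from the two parts."""
    meta = json.loads(header.decode())
    return np.frombuffer(payload, dtype=meta["dtype"]).reshape(meta["shape"])

def stream_one_frame(frame, endpoint="inproc://frames"):
    """PUSH/PULL round trip. ZMQ_SNDHWM caps the number of queued frames,
    so a slow worker causes back-pressure instead of unbounded memory use;
    tuning this (or dropping frames) is where overrun handling gets tricky."""
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.setsockopt(zmq.SNDHWM, 10)  # illustrative cap on outstanding frames
    push.bind(endpoint)
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    push.send_multipart(pack_frame(frame))
    out = unpack_frame(*pull.recv_multipart())
    push.close()
    pull.close()
    return out
```

In a real pipeline the PUSH socket lives in the acquisition process and each worker holds a PULL socket over TCP or IPC; PUSH/PULL then load-balances frames across workers automatically.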
Hello,
I would greatly appreciate any advice on the following.
I have implemented your code with a hardware trigger for multiple cameras.
I would like to push the framerate high for potentially extended periods. Currently I append the grabbed frames from all cameras into an array before saving, which is quite CPU-intensive, so I considered parallelizing.
Do you have experience with this, and do you know whether it could result in a speedup? What is the fastest way to save images from multiple cameras?
I have tried using multiprocessing and passing each camera to a different worker. However, this does not work, because the camera is a SwigPyObject, which cannot be pickled and sent to the workers.
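For context, a common workaround for this pickling limit is to pass each worker a plain identifier (e.g. the camera's serial number or device index) and open the camera inside the worker process, so the unpicklable object never crosses a process boundary. The sketch below is hypothetical: it illustrates the pickle failure with a thread lock (which fails the same way a SWIG wrapper around a C++ pointer does) and simulates the per-worker grab loop, since the actual pypylon device-enumeration calls are not shown in this thread.

```python
import pickle
import threading
import multiprocessing as mp

def is_picklable(obj):
    """SwigPyObject wraps a raw C++ pointer, so pickle rejects it; a
    thread lock fails with the same TypeError and serves as a stand-in."""
    try:
        pickle.dumps(obj)
        return True
    except TypeError:
        return False

def grab_loop(serial, result_queue, n_frames=3):
    """Worker process: in a real program this would enumerate devices and
    open the camera matching `serial` *here*, inside the worker, rather
    than receiving a camera object. The grab is simulated."""
    for i in range(n_frames):
        result_queue.put((serial, i))  # stand-in for a grabbed frame

def run(serials, n_frames=3):
    """Spawn one worker per camera, passing only picklable identifiers."""
    q = mp.Queue()
    procs = [mp.Process(target=grab_loop, args=(s, q, n_frames))
             for s in serials]
    for p in procs:
        p.start()
    results = [q.get() for _ in range(len(serials) * n_frames)]
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(is_picklable(threading.Lock()))  # the camera fails the same way
    print(run(["cam-serial-0", "cam-serial-1"]))
```

Whether one-camera-per-process actually beats the single-acquisition-thread design above depends on bus bandwidth and how the cameras are triggered; the identifier-passing pattern just removes the pickling obstacle.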