python-microscope / microscope

Python library for control of microscope devices, supporting hardware triggers and distribution of devices over the network for performance and flexibility.
https://www.python-microscope.org
GNU General Public License v3.0

Continuous acquisition slow on Ximea in free running #247

Closed jacopoabramo closed 1 year ago

jacopoabramo commented 2 years ago

Greetings,

After rewriting the _fetch_loop implementation for the Ximea (in this commit), I tested the performance in free running and compared it with my napari-live-recording plugin. Unfortunately, the python-microscope implementation of the Ximea handle is considerably slower: at the same exposure time and at full frame, the maximum framerate I achieved was 545 FPS with microscope, while with my plugin I reached up to 1500 FPS. I understand that this is a known limitation of microscope, as was already mentioned before (microscope was not originally intended to run cameras in free-running mode), but I was hoping there might be a way to gain some speed. It may very well be that my implementation of the fetch loop is not efficient.

iandobbie commented 2 years ago

How did you interface your code to microscope? Did you use the device server and a Pyro connection? If so, you are going to pay a high price for serialization/deserialization; it might be better to access the microscope device directly, although you then have to deal with GIL issues etc., as you lose the process spawning of the device server.
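
For reference, the device-server route would look roughly like the sketch below; the configuration format follows the pattern in the microscope documentation but should be treated as an assumption here, and host/port are placeholders. The snippet in the next comment is the direct-access route.

# ximea_server_config.py -- run through the microscope device server so the
# camera lives in its own process and clients connect to it over Pyro.
# Configuration format assumed from the microscope docs; host/port are placeholders.
from microscope.cameras.ximea import XimeaCamera
from microscope.device_server import device

DEVICES = [
    device(XimeaCamera, host="127.0.0.1", port=8000),
]

# Typically started with something like:
#   python -m microscope.device_server ximea_server_config.py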

jacopoabramo commented 2 years ago

Hi @iandobbie , I'm not sure I understand your questions. I'll post the code snippet I used for testing; I apologize, I should have done this from the start:

from time import sleep
from queue import Queue
from microscope.cameras.ximea import XimeaCamera
from microscope import TriggerType, TriggerMode

# Plain Queue used as the camera client; the _fetch_loop puts frames into it.
buffer = Queue()

camera = XimeaCamera()
# Trigger configuration used for the free-running test in this fork.
camera.set_trigger(TriggerType.SOFTWARE, TriggerMode.STROBE)
camera.set_exposure_time(100.0*1e-6)  # 100 microseconds
camera.set_client(buffer)

# Acquire for roughly 100 ms, then read back the framerate setting.
camera.enable()
sleep(0.10)
framerate = camera.get_setting("framerate")
camera.disable()

print(f"Final framerate: {1.0 / framerate}")

camera.shutdown()

print(f"Current queue size: {buffer.qsize()}")
print(f"First image of buffer: {buffer.queue[0]}")

This is the produced output:

Final framerate: 545.0
Current queue size: 169
First image of buffer: [[1 4 1 ... 1 1 1]
 [1 1 4 ... 4 1 4]
 [5 3 5 ... 6 6 6]
 ...
 [5 5 5 ... 5 5 5]
 [4 4 5 ... 5 4 5]
 [5 5 5 ... 5 5 5]]

As you can see, it's a pretty straightforward example which uses my fork of python-microscope. My plugin, for comparison, is a straightforward implementation on top of the python bindings of XiAPI. As I said, I expected as much, since microscope is a more complex wrapper around the python APIs, so some loss in performance is expected. Still, I was hoping to achieve at least a 1000 FPS rate with these settings.

iandobbie commented 2 years ago

So you aren't using the device server, so you are not suffering from the serialization slowdown. However, you have created a plain queue object to which you are adding objects of unknown size. I think a large chunk of your issue could be the need to create and manage the memory for this. Do you get better results if you pre-create a buffer and use that for the data?
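
To make that suggestion concrete, here is a minimal sketch of a preallocated client that could stand in for the Queue in the snippet above, assuming the fetch loop only calls put() on the client, as the Queue usage suggests; the class name, capacity and frame shape are illustrative assumptions.

import numpy as np

class PreallocatedBuffer:
    """Fixed-size frame store exposing put(), so it can replace the Queue client."""

    def __init__(self, capacity, shape, dtype=np.uint8):
        # All memory is allocated up front; put() only copies into it.
        self._frames = np.empty((capacity,) + shape, dtype=dtype)
        self._count = 0

    def put(self, frame):
        # Drop frames once full rather than growing; no allocation in the hot path.
        if self._count < self._frames.shape[0]:
            self._frames[self._count] = frame
            self._count += 1

    def qsize(self):
        return self._count

# In the benchmark above (frame shape is a placeholder for the sensor size):
#   buffer = PreallocatedBuffer(capacity=2000, shape=(1024, 1280))
#   camera.set_client(buffer)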

LeonZ77 commented 1 year ago

Hello @jacopoabramo, I tried your commit but it doesn't work for me as I would expect. For some reason, I only get 2 images in the queue. Did you change other code sections in order to make it work? The camera I'm using is the xiC MC124MG-SY, which uses the USB3 Vision protocol. It seems that the trigger configuration XI_TRG_OFF is not working correctly for me.

jacopoabramo commented 1 year ago

Hi @LeonZ77 , if you only changed the code using the commit I mentioned in the opening message of the issue, I believe it won't be enough. You also need to implement the changes in this commit, which maps the trigger type and mode to the operations supported by the Ximea APIs. I'm not familiar with this camera model anyway; what exposure times are you using? I did a full frame acquisition with a 100 microsecond exposure time.
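
The general shape of such a mapping, purely as an illustration (this is not the code from the commit; the XI_TRG_* strings are standard Ximea API trigger sources, while the exact set of supported combinations here is an assumption):

from microscope import TriggerType, TriggerMode

# Illustrative lookup from (TriggerType, TriggerMode) to Ximea trigger sources;
# the actual commit may differ in both the keys and the values.
TRIGGER_MAP = {
    (TriggerType.SOFTWARE, TriggerMode.STROBE): "XI_TRG_OFF",           # free running
    (TriggerType.SOFTWARE, TriggerMode.ONCE): "XI_TRG_SOFTWARE",        # software-triggered frames
    (TriggerType.RISING_EDGE, TriggerMode.ONCE): "XI_TRG_EDGE_RISING",  # hardware trigger
}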

LeonZ77 commented 1 year ago

For simplicity, I adopted your code completely and also worked with the same exposure time, to see if I could achieve similar results. Does your camera also use the USB3 Vision communication protocol? You can check this with xiCOP by Ximea. (I don't know if different communication protocols can lead to different results.)

jacopoabramo commented 1 year ago

My camera uses a PCIe interface, so I don't think it implements that protocol. It may be that some of the trigger types are not supported by your camera, but I checked the manual and the xiC family should support free-running mode. Have you tried making a test acquisition using only the Ximea python APIs, without the microscope layer? There should be some examples on their website to test your camera's performance.

LeonZ77 commented 1 year ago

I did an implementation in a GUI using the Ximea python APIs both without and with the microscope layer. Here the trigger type seems to work properly and I get a more or less fluid camera video (I am not well versed in GUI programming, so performance is probably not optimal). In my case, I get an FPS value of ~25 for both implementations (calculated from the timestamps), even though the actual video seems smoother for the raw Ximea python API implementation (but this is probably due to my GUI code).
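
For reference, a simple way to derive such an FPS figure from per-frame timestamps (a sketch; the timestamps list is assumed to hold acquisition times in seconds, oldest first):

# timestamps: per-frame acquisition times in seconds, oldest first (assumed available)
fps = (len(timestamps) - 1) / (timestamps[-1] - timestamps[0])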

jacopoabramo commented 1 year ago

25 FPS at full frame seems reasonable, as it's what's reported in the Ximea manual (30 FPS is the expected maximum). The timestamps may oscillate a bit since the python APIs are somewhat slower than the pure C++ APIs, so I guess that's expected. It could be that you're facing the same problem I have with my camera (the queue is too slow a client to handle data storage in the _fetch_loop, so there's a performance degradation, especially because you're using a USB connection). Also, your camera has double the pixel resolution, so it would make sense that performance is worse. I was working on a ring buffer in python that implements the put method, so that I could store my data in a numpy array and recover some speed, but I haven't had much time to look into it more deeply. There are, though, some example packages you could take inspiration from, like numpy-ringbuffer.

jacopoabramo commented 1 year ago

Hi @iandobbie , an update on this. I managed to get better performance by creating a numpy image circular buffer (basically a 2D version of numpy-ringbuffer) to store the images, using the same snippet. With this I managed to get up to 818 FPS. Do you think there is any other way to improve performance?
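
For readers following along, a minimal sketch of this kind of image ring buffer (not the actual implementation; class name, capacity and frame shape are illustrative), which differs from a simple preallocated store by wrapping around and overwriting the oldest frame once full:

import numpy as np

class ImageRingBuffer:
    """Circular store of fixed-size frames; put() overwrites the oldest frame when full."""

    def __init__(self, capacity, shape, dtype=np.uint8):
        self._frames = np.empty((capacity,) + shape, dtype=dtype)
        self._capacity = capacity
        self._index = 0  # next slot to write
        self._count = 0  # number of valid frames stored so far

    def put(self, frame):
        # Copy into a preallocated slot; nothing is allocated per frame.
        self._frames[self._index] = frame
        self._index = (self._index + 1) % self._capacity
        self._count = min(self._count + 1, self._capacity)

    def qsize(self):
        return self._count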

iandobbie commented 1 year ago

Hi @jacopoabramo, on the face of it there isn't much more you could do, as all you are doing is setting the frame rate and then grabbing images into a preallocated buffer. How does this compare to the results from directly accessing the Ximea API in python? To be honest we never seriously optimized for speed, as this often leads to more complicated code.

The very newest python versions are meant to have significant speedups, but it depends critically on the python code paths you are using.

jacopoabramo commented 1 year ago

Hi @iandobbie , apologies for the late reply. As I said, I made a comparison using the Ximea APIs with my napari plugin for live recording, and I'm able to reach up to 1500 FPS. I understand that the newest python versions should give a speedup in code execution, but I don't feel particularly comfortable with that: most of the packages I use tend not to be supported from version 3.10 onwards. I usually try to stick with version 3.9 (I actually had to downgrade to 3.8 since a new camera doesn't seem to support version 3.9...). At any rate, I understand that microscope wasn't meant for this type of high-speed acquisition, so I will just close the issue for now.