juliomateoslangerak opened 3 years ago
After some time trying to figure this out with some memory-tracking tools, like @iandobbie, I could not find any memory leaks. Then I realized that the memory was released when I deactivated the cameras. See how memory usage builds up in 4 steps (one per position) within 4 larger steps (4 time-points) and stays there after the experiment is finished. The two drops at the end are when I deactivate the two cameras, one drop per camera deactivated. My bet is that the camera view is keeping those images in memory.
For that matter, a single-site time-lapse experiment or two consecutive single time-point experiments suffer from the same issue.
Plot of a single-site experiment with 16 repetitions, deactivating the two cameras at the end.
Plot of two consecutive experiments with 1 repetition each, again with both cameras deactivated before quitting.
For info, the issue is reproduced on the production system, with real cameras and lasers!
The memory 'leak' is in the imageQueue for the GUI. The memory is released when the queue is flushed upon a call to ViewCanvas.clear:
```python
while True:
    try:
        self.imageQueue.get_nowait()
    except queue.Empty:
        break
```
During acquisition, the queue should be progressively emptied by a thread running processImages:
```python
# Grab all images out of the queue; we'll use the most recent one.
newImage = self.imageQueue.get()
while not self.imageQueue.empty():
    newImage = self.imageQueue.get_nowait()
```
I believe the loop within processImages is blocked by self.drawEvent.wait().
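The mechanism can be sketched with standard-library primitives. This is an illustration, not cockpit's actual processImages: the names imageQueue and drawEvent come from the code above, but the loop structure here is an assumption. It shows how a consumer that waits on an Event with a timeout still drains the queue even when no draw event ever arrives, whereas an indefinite wait() would leave the frames queued.

```python
import queue
import threading
import time

# Hypothetical stand-ins for the GUI's queue and redraw event.
imageQueue = queue.Queue()
drawEvent = threading.Event()
stop = threading.Event()

def processImages(wait_timeout=None):
    """Consumer loop: wait for a redraw signal, then drain the queue."""
    while not stop.is_set():
        # With wait_timeout=None this blocks forever if nobody calls
        # drawEvent.set(); meanwhile the producer keeps filling the queue.
        drawEvent.wait(wait_timeout)
        drawEvent.clear()
        while not imageQueue.empty():
            imageQueue.get_nowait()  # keep only the most recent frame

# Producer: frames arrive, but the GUI never signals a redraw.
for frame in range(100):
    imageQueue.put(frame)

t = threading.Thread(target=processImages, kwargs={"wait_timeout": 0.1})
t.start()
time.sleep(0.5)   # give the consumer a couple of timeout cycles
stop.set()
drawEvent.set()   # unblock the final wait so the thread can exit
t.join()
print(imageQueue.qsize())  # prints 0: drained despite no draw events
```

With wait_timeout=None the thread would still be parked inside wait() here and all 100 frames would remain in the queue, which matches the build-up seen in the plots.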
I'm not really sure what the implications of not waiting forever are, but just adding a timeout of 1 sec does the trick: 2ce0ab2f1386de33811e5ba0e37d366734d7b687. In the GUI I see no difference so far, and memory is not building up:
Seems to fix the memory issue. The 1 s wait seems sensible, maybe even too long. The camera window is meant to drop frames if they arrive too quickly, and I thought it did, so I am surprised by this, but it definitely seems to solve the problem. A 100x run of 28 frames in 2 channels runs fine, and the python process at the end is about 3 GB, but that includes the mosaic etc.
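For reference, the timeout semantics the fix relies on are plain standard-library behaviour (this snippet is standalone, not cockpit code): Event.wait(timeout) returns False when the timeout expires instead of blocking forever, so the surrounding loop regains control and can drain the queue.

```python
import threading

drawEvent = threading.Event()

# wait(timeout) returns True if the event was set before the timeout,
# False otherwise; either way the caller is unblocked after ~1 s.
signalled = drawEvent.wait(1)
print(signalled)  # False: nobody called drawEvent.set()
```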
There is an exception after some time acquiring multi-site experiments. Nothing crazy: 4 positions, 180 cycles, one every 5 min, wide-field z-stack (1024 x 1024 x 21).
After cycle 74 the files are 1 kB, and that is where I imagine the problem started.
The PC is Windows, a bit old: 2 sockets with a total of 8 cores, and only 16 GB of RAM, but 5 min should be enough time to handle the data. Anyway, the memory is indeed filled up and, to a good extent, released, but only a minute or so after aborting the experiment.
Here is the exception trace: