**Closed**: ShihongWu closed this issue 11 months ago
Hi @ShihongWu,
The tool, as it is right now, requires roughly twice as much memory as the image size, and I apologize for this. As datasets get bigger, such issues become more apparent. I'm working on a more RAM-efficient version of the tool, which should hopefully be released by December.
If you don't require autofluorescence correction to improve segmentation, I would strongly advise subtracting the respective measured autofluorescence mean intensities in the quantification table (produced as output in the context of MCMICRO).
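The subtraction in the quantification table can be sketched with pandas. This is only an illustration, not the tool's code: the toy table and the marker-to-background mapping follow the markers.csv discussed in this thread, and real MCMICRO quantification column names may differ.

```python
import pandas as pd

# Toy quantification table with per-cell mean intensities (made-up values).
quant = pd.DataFrame({
    "CellID": [1, 2, 3],
    "IgG4": [120.0, 95.0, 40.0],
    "LPS": [300.0, 510.0, 620.0],
    "AF555_background": [50.0, 50.0, 60.0],
    "AF647_background": [200.0, 480.0, 210.0],
})

# Pair each marker with the autofluorescence channel imaged through the same
# filter set (assumption based on the markers.csv shown later in this thread).
background_for = {"IgG4": "AF555_background", "LPS": "AF647_background"}

for marker, bg in background_for.items():
    # Clip at zero so cells dimmer than the background don't go negative.
    quant[marker] = (quant[marker] - quant[bg]).clip(lower=0)

print(quant[["IgG4", "LPS"]])
```

In a real run you would load the quantification CSV with `pd.read_csv` and write the corrected table back out with `to_csv`.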
I hope this helps and I hope to improve the tool soon! Kresimir
> I would strongly advise subtracting the measured respective autofluorescence mean intensities in the quantification table
Hi Kresimir @kbestak, thanks for your kind reply and clear clarification! I will follow your suggestion and do the subtraction in the quantification step. I hope the new version of the background subtraction package comes soon; looking forward to it! Many thanks and best wishes,
Shihong
Hi Kresimir, @kbestak

Thank you for your wonderful work! This pipeline is wonderful, and I don't want to give up using it to remove the autofluorescence channels just because of the memory problem, so I'm cropping my giant whole-slide images into smaller tiles that our cluster can manage.

However, I've observed one thing: after the subtraction and removal of the extra channels (here is my markers.csv; for example, after removing 8 channels), the output image file is even larger than the input image. I'm wondering what the reason could be. I've checked the data type, pyramid structure, and metadata before and after the subtraction, and none of them seems to be the cause. Could you kindly help me out here? Many thanks! We really appreciate your time and assistance!
| channel_number | cycle_number | marker_name | Filter | excitation_wavelength | emission_wavelength | background | exposure | remove |
|---|---|---|---|---|---|---|---|---|
| 0 | 0 | DNA_1 | DAPI | 395 | 431 | | 50 | |
| 1 | 0 | AF488_background | FITC | 485 | 525 | | 100 | TRUE |
| 2 | 0 | AF555_background | Cy3 | 555 | 590 | | 200 | TRUE |
| 3 | 0 | AF647_background | Cy5 | 640 | 690 | | 500 | TRUE |
| 0 | 1 | DNA_2 | DAPI | 395 | 431 | | 50 | TRUE |
| 1 | 1 | IgG4 | Cy3 | 555 | 590 | AF555_background | 100 | |
| 2 | 1 | LPS | Cy5 | 640 | 690 | AF647_background | 600 | |
| 0 | 2 | DNA_3 | DAPI | 395 | 431 | | 50 | TRUE |
| 1 | 2 | AF488_bleach_1 | FITC | 485 | 525 | | 100 | TRUE |
| 2 | 2 | AF555_bleach_1 | Cy3 | 555 | 590 | | 200 | TRUE |
| 3 | 2 | AF647_bleach_1 | Cy5 | 640 | 690 | | 500 | TRUE |
| 0 | 3 | DNA_4 | DAPI | 395 | 431 | | 50 | TRUE |
| 1 | 3 | COL3 | FITC | 485 | 525 | AF488_bleach_1 | 1000 | |
| 2 | 3 | GATA3 | Cy3 | 555 | 590 | AF555_bleach_1 | 400 | |
| 3 | 3 | Cleaved_Caspase_3 | Cy5 | 640 | 690 | AF647_bleach_1 | 800 | |
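As a quick sanity check for a markers.csv like the one above, the channels flagged `remove == TRUE` can be counted and compared against the channel count of the output image. This is just an illustrative sketch, not the tool's own parsing; the inline CSV is a shortened, made-up excerpt.

```python
import csv
import io

# Shortened, synthetic markers.csv in the same layout as the table above.
markers_csv = """\
channel_number,cycle_number,marker_name,background,exposure,remove
0,0,DNA_1,,50,
1,0,AF488_background,,100,TRUE
2,0,AF555_background,,200,TRUE
1,1,IgG4,AF555_background,100,
"""

rows = list(csv.DictReader(io.StringIO(markers_csv)))
removed = [r["marker_name"] for r in rows
           if r["remove"].strip().upper() == "TRUE"]
kept = [r["marker_name"] for r in rows if r["marker_name"] not in removed]
print(f"removing {len(removed)} channels, keeping {len(kept)}: {kept}")
```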
Hi Shihong, @ShihongWu

The tile size used for generating the pyramids may result in larger-than-expected output images. I am happy to say the RAM-efficient version is up and running as of v0.4.1, added in https://github.com/SchapiroLabor/Background_subtraction/pull/14. For comparison, on an 88 GB image, instead of the approximately 170 GB of RAM the previous version required, I was able to get it down to 13 GB, so I think it will be able to handle your 800 GB images. I've also adapted a new pyramid writer from Palom that performs much better than the one I used previously; this might help with the output image size issue as well.
I hope this helps and I'm looking forward to feedback on the new version! Kresimir
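The point about the pyramid writer affecting file size can be illustrated with tifffile: tiling and compression choices alone can change the on-disk size dramatically, even for identical pixel data. This is a synthetic demonstration, not the tool's actual writer.

```python
import os

import numpy as np
import tifffile

# Same 4-channel stack written twice: once untiled/uncompressed, once tiled
# with zlib compression. Highly compressible synthetic data for demonstration.
img = np.zeros((4, 1024, 1024), dtype=np.uint16)

tifffile.imwrite("plain.tif", img)  # untiled, uncompressed
tifffile.imwrite("tiled.tif", img, tile=(256, 256), compression="zlib")

plain = os.path.getsize("plain.tif")
tiled = os.path.getsize("tiled.tif")
print(f"untiled/uncompressed: {plain} bytes, tiled+zlib: {tiled} bytes")
```

With real microscopy data the gap is smaller than with zeros, but the mechanism is the same: a writer that emits uncompressed or coarsely tiled pyramid levels can easily produce an output larger than the input, even after dropping channels.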
When I submitted the background subtraction job, I encountered an out-of-memory error. My image is about 800 GB, and I requested 600 GB of memory on our HPC, but I still got the out-of-memory error. I was wondering if there's any way to avoid loading the whole image at once, e.g. loading only the cycles that are required for the subtraction, in a sequential way. Many thanks!
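The sequential idea described above can be sketched with a memory-mapped stack: keep the full image on disk and materialize only the two planes needed for each subtraction. This is a minimal sketch with made-up values, not the tool's implementation; the exposure-ratio scaling of the background plane is an assumption about how such a correction is typically done.

```python
import numpy as np

channels, h, w = 4, 512, 512

# Create a synthetic on-disk stack (stand-in for a large multi-channel image).
stack = np.memmap("stack.dat", dtype=np.uint16, mode="w+",
                  shape=(channels, h, w))
stack[:] = 100   # marker channels
stack[1] = 30    # channel 1 acts as the background channel
stack.flush()

# Reopen read-only; indexing a memmap loads only the planes actually touched.
stack = np.memmap("stack.dat", dtype=np.uint16, mode="r",
                  shape=(channels, h, w))

def subtract(marker_idx, bg_idx, exposure_marker, exposure_bg):
    # Scale the background by the exposure-time ratio, subtract, clip at zero.
    marker = stack[marker_idx].astype(np.int32)
    bg = stack[bg_idx].astype(np.int32)
    scaled = (bg * exposure_marker) // exposure_bg
    return np.clip(marker - scaled, 0, None).astype(np.uint16)

corrected = subtract(marker_idx=2, bg_idx=1,
                     exposure_marker=200, exposure_bg=100)
```

Processing one marker/background pair at a time like this caps peak RAM at roughly two planes rather than the whole stack, which is essentially what the question asks for.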