flatironinstitute / CaImAn

Computational toolbox for large scale Calcium Imaging Analysis, including movie handling, motion correction, source extraction, spike deconvolution and result visualization.
https://caiman.readthedocs.io
GNU General Public License v2.0

memory mapping #12

Closed epnev closed 8 years ago

epnev commented 8 years ago

Make saving and reading faster if possible. Make sure we don't need to save both Y and Yr.

epnev commented 8 years ago

@agiovann

I think we are converging to the following:

  1. Read the tiff file either serially (to limit memory use) or all at once
  2. Reshape 3d Y into 2d Yr
  3. Save and map Yr
  4. Keep Y to pass it into initialization, then discard it.

Is that right?
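
A minimal sketch of steps 1-4 in plain numpy (assuming tifffile is used for reading; the file name and the Fortran-order reshape are illustrative choices, not settled package behavior):

    import numpy as np
    import tifffile

    # 1. read the tiff all at once (the serial, chunked variant is for the low-memory case)
    Y = tifffile.imread('movie.tif')    # placeholder file name; tifffile returns (T, d1, d2)
    Y = np.transpose(Y, (1, 2, 0))      # reorder to (d1, d2, T)
    d1, d2, T = Y.shape

    # 2. reshape the 3d movie into a 2d pixels-by-time matrix
    Yr = np.reshape(Y, (d1 * d2, T), order='F')

    # 3. save Yr and memory-map it back, so later steps work off disk
    np.save('Yr.npy', Yr)
    Yr = np.load('Yr.npy', mmap_mode='r')

    # 4. keep Y around for initialization, then discard it
    del Y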

akhambhati commented 8 years ago

How are you reading in large tiff sequences (say, greater than 50 GB) that exceed available memory? Unfortunately, reading in tiff files sequentially and appending to a numpy mmap file is not possible. Does your package support HDF5 (through Python's h5py)?

agiovann commented 8 years ago

Hello Ankit,

good point. In principle it should not be too difficult to adapt the code to use hdf5 files via memory mapping. I have not coded that yet, though, and at the moment we are following another approach to reduce memory consumption, but it is still at an early stage. If you are willing to take on the challenge I can explain to you how it should be done; otherwise I will look at it as soon as I have some bandwidth.

Best,

Andrea

agiovann commented 8 years ago

Dear Ankit,

I had a look at h5py; there is one issue that makes the whole operation a bit problematic, although not impossible. The problem is that np.memmap is tightly integrated with numpy and scipy, so many operations are actually performed without loading the whole thing into memory. However, hdf5 is not similarly integrated. For instance, when you reshape an hdf5 dataset it gets transformed into a numpy array, whereas the same operation can be performed with numpy.memmap without loading it into memory. Please let me know if you have any suggestions/insights about how to tackle this issue. Thank you for your feedback.
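
To make the difference concrete, a small self-contained illustration (file names and shapes are placeholders):

    import numpy as np
    import h5py

    shape = (512 * 512, 100)

    # np.memmap: reshaping gives a file-backed view, nothing is pulled into RAM
    a = np.memmap('bla.bin', mode='w+', dtype=np.float32, shape=shape)
    a3d = a.reshape((512, 512, 100))    # still backed by the file on disk

    # h5py: the dataset itself is lazy, but passing it through a numpy operation
    # such as np.reshape materializes the whole array in memory first
    with h5py.File('data.h5', 'w') as f:
        d = f.create_dataset('Yr', shape=shape, dtype=np.float32)
        d3d = np.reshape(d, (512, 512, 100))    # loads the full dataset into memory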

akhambhati commented 8 years ago

Dear Andrea,

I ran into a similar problem when I tried to use h5py yesterday. Do you have any suggestions on how to convert a sequence of large TIFF stacks into a single numpy array?


agiovann commented 8 years ago

I was looking into it just now. It seems that np.memmap can do it, but it is very slow. Anyway, if you want to give it a try and see if you can make it better, I will gladly incorporate it. A simple example:

    import numpy as np
    import pylab as pl

    # create a writable memory-mapped array on disk (no data held in RAM)
    a = np.memmap('bla.bin', mode='w+', dtype=np.float32, shape=(512 * 512, 5000))
    b = np.random.random((512 * 512, 1000))    # float64 chunk; assignment casts to float32
    a[:, :1000] = b          # write one chunk
    a[:, 2000:3000] = b      # write another chunk at a different offset
    del a                    # forces the buffer to be flushed to disk
    # reopen the existing file and work on it without loading it fully
    a = np.memmap('bla.bin', mode='r+', shape=(512 * 512, 5000), dtype=np.float32)
    pl.plot(np.sum(a, axis=0))

The idea is to look around on the web and see whether performance can be improved by tuning the function parameters.

Let me know

Andrea

epnev commented 8 years ago

In MATLAB we are able to do this by reading the tiff file in chunks and then appending to a memory-mapped .mat file.

For Python, I think we can read chunks of large tiff files using tifffile.imread and then append them to a memory-mapped numpy file the way Andrea suggested, if we cannot find anything faster.

Regarding hdf5: MATLAB files are hdf5 files, and the reshaping operation cannot be performed without loading the file into memory. But everything else works fine. So I memory-map the data both in the 3d format and as a 2d matrix, which is not ideal but works. Perhaps we can do something similar for Python as well.
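
A rough sketch of that chunked approach (file names and frame size are placeholders; it assumes each individual stack fits in memory, otherwise pages could be read in smaller batches):

    import numpy as np
    import tifffile

    tif_files = ['stack_00.tif', 'stack_01.tif']    # placeholder file names
    d1, d2 = 512, 512                               # frame size, assumed known in advance
    T = sum(len(tifffile.TiffFile(f).pages) for f in tif_files)    # total frame count

    # one big pixels-by-time memmap on disk, filled chunk by chunk
    Yr = np.memmap('Yr.bin', mode='w+', dtype=np.float32, shape=(d1 * d2, T))
    t = 0
    for f in tif_files:
        chunk = tifffile.imread(f).astype(np.float32)    # (n_frames, d1, d2)
        n = chunk.shape[0]
        Yr[:, t:t + n] = np.transpose(chunk, (1, 2, 0)).reshape((d1 * d2, n), order='F')
        t += n
    del Yr    # flush the buffer to disk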

akhambhati commented 8 years ago

Thank you both! I will test out Andrea’s memmap approach and report back! 

I'd like to take this opportunity to thank you for pursuing this great study and developing such a useful toolbox!

Ankit



akhambhati commented 8 years ago

I was able to construct a large numpy memmap array (d1=330, d2=714, T=30117) after downsampling our TIFF by 2x. The original movie size was too large to create a numpy array on our cluster.

However, the cse package hung and eventually crashed on the pre-process function, likely because the system ran out of available memory.

As an alternative, would I be able to split the data into chunks, run the CSE on each file segment, and then merge the cell locations and time-courses back? Is this something you have tried?

agiovann commented 8 years ago

Hello Ankit,

Try regulating the number of pixels processed at the same time by modifying the code this way:

    options = cse.utilities.CNMFSetParms(Y, p=p, gSig=[XXX, XXX], K=YYY)    # this would be your line with your own parameters
    options['preprocess_params']['n_pixels_per_process'] = 1000             # the number of pixels processed concurrently

This should reduce memory usage considerably, at the cost of more I/O on the hard drive.

Let me know if this works out.

Andrea


epnev commented 8 years ago

@akhambhati We have also implemented a pipeline along the lines you suggest, which lets you split the data, process the pieces in parallel, and then combine the results. We are now testing this and should be able to release it within the next 1-2 weeks.

akhambhati commented 8 years ago

I updated the n_pixels_per_process parameter and it helped complete preprocessing. However, the pipeline again crashed when trying to initialize components. I suspect that each of these middle steps will require caching to a temporary memmapped file.

I will likely wait for the implementation that allows each file to be processed independently in parallel and then combined.

Thank you! Ankit



agiovann commented 8 years ago

Hey Ankit,

Have a crack at the new version! It should work well. For a first trial, set the option fraction_downsample to 0.1 in demo_patches.py to see how it behaves!
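
For reference, the change amounts to something like this in demo_patches.py (assuming the option is exposed there as a plain variable):

    fraction_downsample = 0.1    # start with a heavily downsampled run to check behavior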

Good luck