tlambert03 / nd2

Full-featured nd2 (Nikon NIS Elements) file reader for python. Outputs to numpy, dask, and xarray. Exhaustive metadata extraction
https://tlambert03.github.io/nd2
BSD 3-Clause "New" or "Revised" License

Implement sub-frame read/chunking #85

Open tlambert03 opened 2 years ago

tlambert03 commented 2 years ago

In the reader, at around

https://github.com/tlambert03/nd2/blob/56cde515fdf121c8489f219b4bd7af8d7c2c4df2/src/nd2/_sdk/latest.pyx#L280

we should implement sub-frame cropping.

gatoniel commented 10 months ago

Hi, I would really like this enhancement to be available. I have several huge nd2 files (> 100 GB) consisting of large stitched overview images (7117 x 21044 pixels). Often I only need to extract small subvolumes from them. Looking at the files with NIS Elements is fairly fast for their size. But when I use xarray with

crop = xa_nd2[:, slicex, slicey].compute()

it definitely takes longer to load the data into RAM than NIS requires. The chunk size of the xarray is (1, 21044, 7117). Do you think implementing the sub-frame read/chunking would help with that, e.g. by reducing the chunk size?
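(For reference, this is roughly how I open the file and end up with that chunking; the file name below is just a placeholder for my overview file:)

import nd2

xa_nd2 = nd2.imread("overview.nd2", xarray=True, dask=True).isel(P=0, C=0)
print(xa_nd2.sizes)           # e.g. {'T': 30, 'Y': 21044, 'X': 7117}
print(xa_nd2.data.chunksize)  # (1, 21044, 7117), i.e. one full frame per chunk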

What skills are required to help on this issue? Unfortunately, I have zero experience with C/C++ ... But I could try my best with specific info on the task...

tlambert03 commented 10 months ago

Hey @gatoniel,

Actually, it looks like this is already possible in the current nd2 library, but you need to do it in a specific way:

There is a method in ND2File called .read_frame(). You call it with a single integer representing the frame number in the file. Here, frame number means the index of the 2D plane (possibly multichannel) acquired over the course of the experiment (let me know if you need help determining the frame numbers in your files). But let's take the simple case of a single 2D tiled image:

import nd2

f = nd2.ND2File(...)      # path to your .nd2 file
array = f.read_frame(0)   # frame 0: the first 2D plane in the file

The returned np.ndarray object is just a view onto a memmap object that knows the location on disk where the data resides; nothing has been read from disk and loaded into memory yet. So you can slice that object before reading, which should help you save memory.

crop = array[:, slicex, slicey]
# do what you want with the crop.

f.close()  # reminder
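One small caveat: that crop is still a view onto the memmap, so if you want to keep the data around after closing the file, copy it first, e.g.:

crop = array[:, slicex, slicey].copy()  # materializes just the crop in RAM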

Currently, that won't work in the multi-dimensional xarray and dask wrappers (since the data underlying those is non-contiguous on disk, it's harder to use a simple numpy view like this) ... so I guess that is the current "state" of this feature request. It can already be done at the single-plane level using read_frame, but if you want to do more complex slicing that includes X/Y and other dimensions, more work is needed. Let me know if that's your situation, or whether the read_frame approach would work for you for now.


Unfortunately, I have zero experience with C/C++

btw, there's no more C code in this library! (since #135 🎉 )

gatoniel commented 9 months ago

I would like to try out .read_frame(). Could you help me determine the correct frame numbers? nd2_file.sizes gives {'T': 30, 'P': 5, 'C': 2, 'Y': 21044, 'X': 7117}.

tlambert03 commented 9 months ago

Sure, you can use something like np.ravel_multi_index. There is a private method that does this here:

https://github.com/tlambert03/nd2/blob/eb4bf8f17bbf6e97458ffc55254b353c393e813c/src/nd2/nd2file.py#L935-L938

So, for your example sizes {'T': 30, 'P': 5, 'C': 2, 'Y': 21044, 'X': 7117}, your coordinate shape would be (30, 5) (i.e. all coordinates that are not C, Y, or X). If you wanted to get the sequence index for timepoint=5, position=3 (both zero-indexed), you would use:

coord_shape = (30, 5)
frame_coordinate = (5, 3)  # t=5, p=3 
sequence_index = np.ravel_multi_index(frame_coordinate,  coord_shape)  # would be 28 in this case
frame_array = f.read_frame(sequence_index)

You can also use the private f._seq_index_from_coords if you like, but it may break in the future.
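If you end up doing this a lot, a small helper along those lines could look like this (just a sketch, not part of the nd2 API, assuming the ordering of .sizes reflects the experiment loop order as in the example above):

import numpy as np

def seq_index(sizes: dict, **coords) -> int:
    # Flatten the named loop coordinates (everything except C/Y/X)
    # into the linear frame index that read_frame() expects.
    loop_axes = [k for k in sizes if k not in ("C", "Y", "X")]
    shape = tuple(sizes[k] for k in loop_axes)
    index = tuple(coords.get(k, 0) for k in loop_axes)
    return int(np.ravel_multi_index(index, shape))

# seq_index({'T': 30, 'P': 5, 'C': 2, 'Y': 21044, 'X': 7117}, T=5, P=3)  ->  28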

gatoniel commented 9 months ago

@tlambert03 thanks a lot. A preliminary test in my code shows that it is way faster that way!!!

tlambert03 commented 9 months ago

great!

tlambert03 commented 9 months ago

I'll update this issue when the dask interface also has chunking.

gatoniel commented 9 months ago

What exactly would be needed to make the dask interface able to use the chunking? Wouldn't it just need an intermediate function that relays to the .read_frame() function?

tlambert03 commented 9 months ago

That intermediate function already exists; it's called _dask_block:

https://github.com/tlambert03/nd2/blob/eb4bf8f17bbf6e97458ffc55254b353c393e813c/src/nd2/nd2file.py#L940-L962

so, what additionally needs to happen is:

https://github.com/tlambert03/nd2/blob/eb4bf8f17bbf6e97458ffc55254b353c393e813c/src/nd2/nd2file.py#L957-L959
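In the meantime, a user-side workaround is to build a lazily-cropped dask array directly on top of read_frame with dask.delayed. A rough sketch (the names nd2_file, coord_shape, pos, yslice and xslice are assumed to be an open ND2File, its non-XY coordinate shape, a position index, and plain slices with start/stop set; channel 1 is picked arbitrarily):

import dask
import dask.array as da
import numpy as np

@dask.delayed
def _crop(i):
    seq = np.ravel_multi_index((i, pos), coord_shape)
    frame = nd2_file.read_frame(seq)         # memmap-backed view of one frame
    return frame[1, yslice, xslice].copy()   # copy only the crop into RAM

ylen = yslice.stop - yslice.start
xlen = xslice.stop - xslice.start
lazy_crops = [
    da.from_delayed(_crop(i), shape=(ylen, xlen), dtype=nd2_file.dtype)
    for i in range(coord_shape[0])
]
timestack = da.stack(lazy_crops, axis=0)     # .compute() reads only the cropped regions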

gatoniel commented 9 months ago

Hey @tlambert03 I have one more question when using .read_frame() for timeseries. I want to extract the same xy crop for all timepoints, so I use the following:

timestack_ = []
for i in range(t_len):
    frame_coordinate = (i, pos)
    sequence_index = np.ravel_multi_index(frame_coordinate, coord_shape)
    buffer = nd2_file.read_frame(sequence_index)
    timestack_.append(buffer[1, yslice, xslice])
timestack = np.stack(timestack_, axis=0)

Do you have ideas on how to speed that up further? The first thing would be to allocate an empty array instead of stacking everything afterwards. But is it better to copy directly from the buffer with np.copy, or just to assign the view to the empty array, like so

timestack = np.empty((t_len, ylen, xlen), dtype=nd2_file.dtype)  # preallocate with the file's dtype
for i in range(t_len):
    frame_coordinate = (i, pos)
    sequence_index = np.ravel_multi_index(frame_coordinate, coord_shape)
    buffer = nd2_file.read_frame(sequence_index)
    timestack[i, ...] = buffer[1, yslice, xslice]

? Or is there another trick that could speed this up? best, Niklas

tlambert03 commented 9 months ago

I do think using np.empty will be a slight improvement… but beyond that, I'm not really sure much can be done. Does it feel excessively slow? Or just curious? You could play around with a multithreaded read (but to be honest, I'm not sure how the single file handle to the nd2 file will work there). If you hit on anything, do let me know!
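If you do try that, one way to sidestep the single-file-handle question is to open a separate ND2File per worker. A rough, untested sketch (path is assumed to be the file path; pos, coord_shape, yslice, xslice and t_len follow your snippet):

from concurrent.futures import ThreadPoolExecutor
import numpy as np
import nd2

def read_crop(i):
    # Open an independent handle per call so threads don't share one file
    # handle (at the cost of re-reading metadata on every open).
    with nd2.ND2File(path) as f:
        seq = np.ravel_multi_index((i, pos), coord_shape)
        return f.read_frame(seq)[1, yslice, xslice].copy()

with ThreadPoolExecutor(max_workers=4) as pool:
    timestack = np.stack(list(pool.map(read_crop, range(t_len))), axis=0)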

gatoniel commented 9 months ago

Not excessively slow. I was curious, mainly.