Closed — GinYoshida closed this issue 2 years ago
Thanks for the detailed issue @GinYoshida :) very helpful.
Without having access to the file itself, I'm not immediately sure 🤔
That number, 50059620352, is 17.000092544233993 times the number of elements in shape (26420, 37152, 1, 3), which is super close to sequenceCount. So my main question is whether this is a somehow-corrupt file/frame that we need to handle more gracefully, or something else.
Can you try something for me? Use the read_using_sdk flag and let me know if it works for that file:

```python
dask_array = nd2.imread(file_path, dask=True, read_using_sdk=True)
dask_array = dask_array[0, 0:100, 0:100, :]
result_ndarray = dask_array.compute()
```
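The point of slicing before calling compute() is that a lazy array only reads the frames the slice actually touches. Here is a minimal toy sketch of that idea in plain numpy (the class and its names are hypothetical stand-ins, not nd2's or dask's actual internals):

```python
import numpy as np

class LazyFrameStack:
    """Toy stand-in for a lazily loaded image stack: frames are only
    materialized when a request actually touches them (the idea behind
    slicing a dask array before calling .compute())."""

    def __init__(self, n_frames, height, width):
        self.shape = (n_frames, height, width)
        self.frames_loaded = 0  # count how many full frames were "read"

    def _load_frame(self, i):
        # Pretend to read frame i from disk.
        self.frames_loaded += 1
        return np.full(self.shape[1:], i, dtype=np.uint16)

    def compute(self, frame_idx, rows, cols):
        # Load only the requested frames, then crop each one.
        return np.stack([self._load_frame(i)[rows, cols] for i in frame_idx])

stack = LazyFrameStack(n_frames=1000, height=512, width=512)
# Equivalent in spirit to dask_array[0:2, 0:100, 0:100].compute()
crop = stack.compute(range(2), slice(0, 100), slice(0, 100))
print(crop.shape)           # (2, 100, 100)
print(stack.frames_loaded)  # 2 -- the other 998 frames were never read
```

Note that each frame still has to be loaded in full before cropping; that per-frame cost is exactly the sub-frame chunking limitation discussed below.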
> _get_frame in nd2file.py seems to require a lot of memory if the data is large in width and height, because it converts the frame to an ndarray?
Yeah, unfortunately I haven't yet implemented sub-frame chunking. The SDK doesn't provide it directly (i.e. you must read a full 2D + channels chunk of data before cropping), but it's on the list of things to do. It shouldn't be too hard to do at the level of the mmap around here. I will add a new issue to track progress on that.
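The mmap-level approach mentioned above can be illustrated with plain numpy (this is a hedged sketch, not nd2's actual implementation; the file layout here is fabricated): when a frame is memory-mapped rather than read, slicing it only pages in the bytes the crop touches, so a small crop never pulls the full frame into RAM.

```python
import os
import tempfile

import numpy as np

# Fabricate a raw frame on disk, standing in for one image chunk of the file.
height, width, channels = 1024, 1024, 3
path = os.path.join(tempfile.mkdtemp(), "frame.raw")
data = (np.arange(height * width * channels) % 65536).astype(np.uint16)
data.tofile(path)

# Map the file instead of reading it: nothing is loaded into memory yet.
frame = np.memmap(path, dtype=np.uint16, mode="r",
                  shape=(height, width, channels))

# Slicing the memmap touches only the pages covering the first 100 rows,
# so this crop costs far less I/O than reading the full ~6 MB frame.
crop = np.asarray(frame[0:100, 0:100, :])
print(crop.shape)  # (100, 100, 3)
```

The same trick is what would allow per-chunk reads smaller than a full frame, which is the sub-frame chunking tracked separately.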
@tlambert03 Thank you for your very quick reply.
Using read_using_sdk is working very well. I really appreciate it!
nd2.imread(file_path, dask=True, read_using_sdk=True)
I see your concern. Unfortunately, we cannot share the file; it must be very hard to work on issues like this without being able to reproduce the problem on your side. If we ever find a way to generate this kind of unusual data without anything confidential in it, we will share it.
I also appreciate your plan to follow up. I hope demand from other users is not too low to justify the work.
Hi @GinYoshida, you might give this another try after version 0.4.4 ... I'm not certain whether it will fix your issue (when not using read_using_sdk=True) without seeing the file itself, but it might?
Since this issue is hard to tackle without the actual file, and since you have a workaround using the SDK reader, I'm going to close this issue; see #85 for the sub-frame chunking. Feel free to re-open or comment with additional questions.
Description
I would like to slice the big image data.
Code
Error
What I Did
Tried to compute the following nd2 file.
The size information
Note
Another trial
I tried another file and it worked well.
Question
_get_frame in nd2file.py seems to require a lot of memory if the data is large in width and height, because it converts the frame to an ndarray?
My status
Sorry to say, I'm a beginner at Python. Using the debugger and running a straightforward script is about the most I can do. Please let me know what you would like me to do for any further investigation.