Open · b-grimaud opened 3 months ago
Hi there,
Really sorry for the lack of an answer; it was a busy summer, and we are working hard to provide a new plugin version backed by a new library.
Obviously something is going wrong: since you already have patched data (64, 64), it should just take the 512 patches as-is and apply the augmentations (x8 factor). The plugin simply calls the functions from n2v, so I'd assume the error is there.
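For context, here is roughly the n2v path the plugin delegates to, as a minimal sketch; the import and the generate_patches_from_list signature follow the n2v examples and may differ slightly between versions:

```python
import numpy as np
from n2v.internals.N2V_DataGenerator import N2V_DataGenerator

# Stand-in for pre-patched data: 512 patches of 64x64 with a channel axis (SYXC).
patches = np.zeros((512, 64, 64, 1), dtype=np.float32)

datagen = N2V_DataGenerator()
# When the image size already equals `shape`, each image is taken as a single
# patch; augment=True then adds the 8 rotations/flips (the x8 factor).
X = datagen.generate_patches_from_list([patches], shape=(64, 64), augment=True)
print(X.shape)  # expected: (4096, 64, 64, 1), i.e. 512 patches x 8 augmentations
```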
I will try to look into it.
On an unrelated note, I found that loading proprietary file formats through third-party plugins sometimes returns frames as xarrays instead of numpy arrays, which is apparently what this plugin expects. I'm not familiar with the napari guidelines: is it up to IO plugins to align with a specific array type, or up to processing plugins to be compatible with different array types?
As far as I know, there is no guideline; it is up to the plugins (including IO) to do what they want.
Do you get an error if the xarray is passed to the plugin now? I'd expect xarrays to be processed seamlessly by functions expecting a numpy array, but I don't have much experience with xarrays.
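If it helps to narrow it down, a quick sketch of the distinction (xarray is not a plugin dependency, and the dimension names here are made up):

```python
import numpy as np
import xarray as xr

frames = xr.DataArray(np.zeros((100, 512, 512), dtype=np.float32),
                      dims=("t", "y", "x"))

# Most numpy operations accept a DataArray via the __array__ protocol, but code
# that checks isinstance(data, np.ndarray) or uses numpy-only attributes breaks.
# An explicit conversion sidesteps the ambiguity:
data = np.asarray(frames)   # or frames.values / frames.to_numpy()
assert isinstance(data, np.ndarray)
```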
As I said, I am sorry things are super slow on this front; I am the only one maintaining the plugin, and there is not much time for me to spend on it. On the bright side, I am currently switching the back end of the plugin to CAREamics (https://careamics.github.io/0.1/), which has an entire team to support it. So once we have successfully released the new napari plugin, there will be several people able to help (and with a mandate to do so)!
Hi,
I'm trying out this model through napari on calcium imaging videos. The videos are loaded as TIFFs, and training is done on the CPU, as I'm still figuring out the TF version issue.
I tried with both the default (SYX) and the expected (TYX) axes; either way, it seems that patches are generated for each frame of the video.
Output is:
Repeated a bunch of times, presumably once per frame, then:
This results in 90+ GB of memory usage, which increases even further after the first epoch finishes and crashes with an OOM error on my machine (128 GB of RAM).
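A rough back-of-the-envelope (with made-up numbers: a 10,000-frame 512x512 video, float32 patches, the x8 augmentation) suggests the patch stack alone reaches that scale:

```python
# Hypothetical numbers for illustration only.
frames = 10_000
patches_per_frame = (512 // 64) * (512 // 64)   # 64 patches of 64x64 per frame
augment = 8                                     # rotations/flips
bytes_per_patch = 64 * 64 * 4                   # float32

total_gb = frames * patches_per_frame * augment * bytes_per_patch / 1024**3
print(f"{total_gb:.0f} GB")  # ~78 GB for the patches alone, before TF's copies
```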
Is this the expected behavior on videos, or is there a better way of handling things?
I ended up extracting the first frame of a video and training on it, with decent results for a first attempt.
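For anyone hitting the same issue, the workaround is essentially this (a sketch with placeholder filenames, using tifffile):

```python
import tifffile

# Load the full TYX stack and keep only the first frame for training.
stack = tifffile.imread("video.tif")       # hypothetical path; shape (T, Y, X)
tifffile.imwrite("frame0.tif", stack[0])   # writes a single 2D frame
```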
On an unrelated note, I found that loading proprietary file formats through third-party plugins sometimes returns frames as xarrays instead of numpy arrays, which is apparently what this plugin expects. I'm not familiar with the napari guidelines: is it up to IO plugins to align with a specific array type, or up to processing plugins to be compatible with different array types?
Thanks!