fmi-faim / faim-ipa

A collection of Image Processing and Analysis (IPA) functions used at the Facility for Advanced Imaging and Microscopy (FAIM)
BSD 3-Clause "New" or "Revised" License

Store multiscales metadata in .zattrs `datasets` list to allow napari to load lower-res versions of the images #30

Closed: jluethi closed this issue 1 year ago

jluethi commented 1 year ago

I've finally gotten to test faim-hcs a bit. Super helpful Jupyter notebooks, @tibuch @imagejan!

One thing I ran into, at least with the demo notebooks, was that the OME-Zarr files they create don't list all of the resolution datasets in their .zattrs file. The `datasets` list in the `multiscales` metadata (see e.g. here) should contain a `path` entry for every resolution level. In the examples I looked at so far, it only contained the full-resolution level.

When loading the images in napari via the napari-ome-zarr plugin, napari then doesn't appear to recognize that lower-resolution pyramid levels are available (I think it checks that .zattrs to get the pyramid levels), so the data is loaded as a plain full-resolution dask array instead of a MultiScaleData object (2 is the data for your 3D example in napari, 3 is the data for another OME-Zarr file we made with Fractal): Screenshot 2023-05-16 at 16 52 45

If you had issues with opening those OME-Zarr files in napari, that would be a very likely culprit.
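One way to spot this mismatch is to compare the levels declared in .zattrs against the resolution arrays actually written to disk. This is a minimal sketch, not faim-hcs code; it assumes the usual OME-Zarr layout where pyramid levels live in numerically named sub-groups ("0", "1", ...) next to a JSON .zattrs file:

```python
import json
from pathlib import Path


def listed_vs_present_levels(zarr_image_dir):
    """Return (levels listed in .zattrs, levels present on disk)
    for an OME-Zarr image group at `zarr_image_dir` (hypothetical path)."""
    zarr_image_dir = Path(zarr_image_dir)
    attrs = json.loads((zarr_image_dir / ".zattrs").read_text())
    # Paths declared in the first multiscales entry of the metadata
    listed = {d["path"] for d in attrs["multiscales"][0]["datasets"]}
    # Resolution levels are stored as numerically named sub-directories
    present = {p.name for p in zarr_image_dir.iterdir()
               if p.is_dir() and p.name.isdigit()}
    return listed, present
```

If `present` contains entries that `listed` lacks, readers like napari-ome-zarr will ignore those pyramid levels.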


I'm anyway going through the repository a bit to try and create a Fractal task that wraps those parsing functions and makes them available from within Fractal. I can have a look at whether I can put together a good PR with a suggested fix for this if you'd like. Also, let me know whether you want further issues filed on this repo or whether we should have these discussions in another channel.

jluethi commented 1 year ago

For context, this is the datasets list I see in the current example datasets:

"datasets": [
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                1.3668,
                                1.3668
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "0"
                }
            ],

but I think it should be something like this:

"datasets": [
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                1.3668,
                                1.3668
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "0"
                },
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                2.7336,
                                2.7336
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "1"
                },
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                5.4672,
                                5.4672
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "2"
                },
                {
                    "coordinateTransformations": [
                        {
                            "scale": [
                                1.0,
                                10.9344,
                                10.9344
                            ],
                            "type": "scale"
                        }
                    ],
                    "path": "3"
                }
            ],
jluethi commented 1 year ago

My bad, I misread things. There's only a single pyramid level for those examples, right? I got the resolution levels mixed up with the channel list.

Nice inference of how many pyramid levels you'd want to create. It wasn't initially obvious that you infer them and write only a single resolution level if the image size is < 2048.

That's why those datasets only contain a single entry, right? My bad 🙈

tibuch commented 1 year ago

Yes, this was the reasoning. I just checked a dataset with larger images, and there the additional coordinate transformations are present.
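The inference described here (a single level for small images, more levels as the image grows) could be sketched roughly as follows. This is an illustration of the idea, not the actual faim-hcs implementation, and the 2048-pixel threshold and 2x factor are assumptions taken from the discussion above:

```python
import math


def infer_n_levels(shape_yx, max_size=2048, factor=2):
    """Infer how many pyramid levels to write so that the
    lowest-resolution level fits within `max_size` pixels along y/x.

    Images already at or below `max_size` get a single level.
    """
    largest = max(shape_yx)
    n_levels = 1
    while largest > max_size:
        largest = math.ceil(largest / factor)
        n_levels += 1
    return n_levels
```

For example, a 2048x2048 image would get one level under these assumptions, while a 16384x16384 image would get four.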