andy-sweet closed this pull request 1 year ago
Merging #56 (808463f) into main (4bceba4) will decrease coverage by 0.23%. The diff coverage is 50.00%.
@@ Coverage Diff @@
## main #56 +/- ##
==========================================
- Coverage 93.22% 92.99% -0.23%
==========================================
Files 16 16
Lines 1506 1514 +8
==========================================
+ Hits 1404 1408 +4
- Misses 102 106 +4
Impacted Files | Coverage Δ |
---|---|
src/napari_metadata/_sample_data.py | 38.23% <50.00%> (+3.61%) :arrow_up: |
Super cool @andy-sweet ! I tried loading via the sample data and loading directly from disk and observed some minor differences:
The on-disk version gets correct channel names, while the one from sample data names everything like the first channel. But the sample data gets the space units correctly set to micrometers, while the on-disk version shows "none" for space units.
See here:
Is the plan to include `multiscales` metadata, or is there also an option to expose additional metadata from the OME-NGFF file? For example, it could be interesting to allow users to edit some of the OMERO metadata, like the channel names (already possible in theory, I think), the default colors of the layers, their default rescaling (start & end values), and such things.
And, in the future, we may pack additional metadata into those .zattrs files (probably with their own unique keyword). Would be interesting if they can be exposed as well.
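To make the OMERO-editing idea concrete, here is a minimal sketch of what editing that metadata could look like. The `omero.channels` layout (with `label`, `color`, and `window.start`/`window.end`) follows the OME-NGFF spec, but the sample values and helper function names below are illustrative, not from this plugin.

```python
# A minimal .zattrs-style "omero" block, following the OME-NGFF spec layout.
# The channel values here are made up for illustration.
zattrs = {
    "omero": {
        "channels": [
            {"label": "DAPI", "color": "0000FF",
             "window": {"start": 0, "end": 1500}},
            {"label": "GFP", "color": "00FF00",
             "window": {"start": 100, "end": 3000}},
        ]
    }
}

def rename_channel(attrs, index, new_label):
    """Edit a channel label in the omero metadata block."""
    attrs["omero"]["channels"][index]["label"] = new_label

def set_rescaling(attrs, index, start, end):
    """Edit the default contrast window (start/end) of a channel."""
    attrs["omero"]["channels"][index]["window"] = {"start": start, "end": end}

rename_channel(zattrs, 0, "nuclei")
set_rescaling(zattrs, 1, 50, 2000)
print(zattrs["omero"]["channels"])
```

A real implementation would also write the edited block back to the `.zattrs` file on disk, which is the part this PR does not do yet.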
Thanks for trying it out!
The on-disk version gets correct channel names, while the one from sample data names everything like the first channel. But the sample data gets the space units correctly set to micrometers, while the on-disk version shows "none" for space units.
Is it possible you have a different plugin registered to read `.zarr` directories (e.g. napari-ome-zarr) by default? I think that would explain the differences.
Will take a look at the channel name bug for the sample data.
Is the plan to include `multiscales` metadata or is there also an option to expose additional metadata from the OME-NGFF file?

What else would you want to see from the multiscales metadata? Transform per level? The contents of the "metadata" key in `multiscales`?
One thing I've thought of is to make the read only view more like a raw dump of the JSON data into a tree.
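The raw-dump idea above could be prototyped without any GUI code. This is a minimal, GUI-free sketch of flattening nested JSON into indented tree rows; a real read-only view would feed the same recursion into something like a Qt tree widget. All names here are hypothetical.

```python
def json_tree_lines(node, indent=0):
    """Recursively flatten nested JSON into indented 'key: value' lines."""
    lines = []
    if isinstance(node, dict):
        for key, value in node.items():
            if isinstance(value, (dict, list)):
                lines.append("  " * indent + f"{key}:")
                lines.extend(json_tree_lines(value, indent + 1))
            else:
                lines.append("  " * indent + f"{key}: {value}")
    elif isinstance(node, list):
        for i, value in enumerate(node):
            lines.append("  " * indent + f"[{i}]")
            lines.extend(json_tree_lines(value, indent + 1))
    else:
        lines.append("  " * indent + str(node))
    return lines

# A tiny .zattrs-like example, just to show the tree shape.
attrs = {"multiscales": [{"axes": [{"name": "y"}, {"name": "x"}]}]}
print("\n".join(json_tree_lines(attrs)))
```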
For example, it could be interesting to allow users to edit some of the OMERO metadata like the channel names (already possible in theory I think), the default colors of the layers, their default rescaling (start & end values) and such things.
Yeah, I wanted to get to this, but didn't quite make it yet. The napari-ome-zarr reader already brings in some of these things (e.g. the colormap) that the napari model supports, so the mapping is already defined, which is the hard part.
Is it possible you have a different plugin registered to read .zarr directories (e.g. napari-ome-zarr) by default? I think that would explain the differences.
Ah yes, I'm reading it via napari-ome-zarr. Is there support for it through the napari core reader then that you're using for the sample data?
What else would you want to see from the multiscales metadata? Transform per level? The contents of the "metadata" key in multiscales?
From the multiscales, I think it covers everything we use at the moment. We only use the `coordinateTransformations` for the pixel sizes, and the ones at full resolution are the relevant ones :)
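For reference, pulling pixel sizes out of the full-resolution transform might look like the sketch below. The key names (`datasets`, `coordinateTransformations`, `scale`) follow the OME-NGFF 0.4 spec, which orders datasets from highest to lowest resolution; the sample values are made up.

```python
def pixel_sizes(multiscales_entry):
    """Return the scale of the first (full-resolution) dataset,
    i.e. the pixel/voxel sizes relevant for display."""
    full_res = multiscales_entry["datasets"][0]  # highest resolution first
    for transform in full_res["coordinateTransformations"]:
        if transform["type"] == "scale":
            return transform["scale"]
    return None

# Illustrative multiscales entry with two pyramid levels.
multiscales = {
    "axes": [{"name": "y", "unit": "micrometer"},
             {"name": "x", "unit": "micrometer"}],
    "datasets": [
        {"path": "0",
         "coordinateTransformations": [{"type": "scale", "scale": [0.1625, 0.1625]}]},
        {"path": "1",
         "coordinateTransformations": [{"type": "scale", "scale": [0.325, 0.325]}]},
    ],
}
print(pixel_sizes(multiscales))  # → [0.1625, 0.1625]
```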
One thing I've thought of is to make the read only view more like a raw dump of the JSON data into a tree.
That would be neat!
Yeah, I wanted to get to this, but didn't quite make it yet. The napari-ome-zarr reader already brings in some of these things (e.g. the colormap) that the napari model supports, so the mapping is already defined, which is the hard part.
Good point. I agree, it doesn't make much sense that it would show up in 2 places. Also the start & end values are used by napari to already rescale the image. It's just currently not possible to save a change back to the OME-Zarr file, which would actually be a useful thing to do sometimes :)
Ah yes, I'm reading it via napari-ome-zarr. Is there support for it through the napari core reader then that you're using for the sample data?
Actually, I'm being slightly sneaky here and vendoring the napari-ome-zarr reader, which then adds extra stuff (that napari doesn't currently have a place for) to `Layer.metadata`. A long-term solution would need upstream changes to both napari-ome-zarr and napari - what I have here is a plugin-specific implementation to test out functionality.
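The vendoring trick described above can be sketched roughly as follows: a reader returns napari-style `(data, kwargs, layer_type)` tuples and tucks any OME-NGFF metadata that has no dedicated napari model field into the layer's `metadata` dict. The helper and the attribute contents below are stand-ins, not the plugin's actual code.

```python
def load_pixels_and_attrs(path):
    """Stand-in for real zarr reading; returns fake pixels and .zattrs."""
    attrs = {
        "multiscales": [{"axes": [{"name": "y"}, {"name": "x"}]}],
        "omero": {"channels": [{"label": "nuclei", "color": "00FF00"}]},
    }
    return [[0, 1], [2, 3]], attrs

def read_ome_zarr(path):
    """Hypothetical reader sketch returning napari LayerData tuples."""
    data, attrs = load_pixels_and_attrs(path)
    kwargs = {
        "name": attrs["omero"]["channels"][0]["label"],
        # Extra OME-NGFF metadata napari has no dedicated field for yet:
        "metadata": {
            "multiscales": attrs["multiscales"],
            "omero": attrs["omero"],
        },
    }
    return [(data, kwargs, "image")]
```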
The channel/layer name issues should be fixed now. @jluethi : up to you if you want to retest, I'll likely merge this after 24 hours.
And I also added the zenodo reference to the sample data function docstring.
Awesome, looks great to me now! Thanks @andy-sweet
This adds some real-world sample data from Zenodo that contains a MIP of a 3D volume of hiPSCs. It uses `pooch.retrieve` to manage downloading and caching this data, so it just uses the general pooch cache. I used the MIP instead of the full 3D volume to reduce the download size (~100 MB instead of ~1 GB).
Closes #45
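For readers unfamiliar with pooch, the sample-data function described above might look roughly like this sketch. The `pooch.retrieve` call (with `url` and `known_hash`) is real pooch API; the function name, URL, and record id are placeholders, not the actual Zenodo record from this PR.

```python
def make_hipsc_mip_sample():
    """Sketch of a sample-data function: download and cache a Zenodo
    file with pooch, using pooch's general cache directory.

    The URL below is a placeholder, not the record used in this PR.
    """
    import pooch  # imported lazily so nothing downloads on module import

    path = pooch.retrieve(
        url="https://zenodo.org/record/<record-id>/files/hipsc_mip.zarr.zip",
        known_hash=None,  # placeholder; pin a sha256 here in real code
        fname="hipsc_mip.zarr.zip",
    )
    # A real napari sample-data hook would now read `path` and return
    # LayerData tuples; that part is omitted in this sketch.
    return path
```

Passing `known_hash=None` skips integrity checking; pinning the file's sha256 is the safer choice for real sample data.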