If I understand correctly, the OME-NGFF specification does not require the levels of a multiscale image to be downscaled versions of the same image. I think we should clarify, and maybe enforce, this for a series of reasons:
1. Different semantics
Looking at the new specs from @bogovicj, general coordinate transformations are stored in `multiscales->coordinateTransformations`, while the transformations for the scale levels are stored in `multiscales->datasets->coordinateTransformations`. So it makes sense to me to keep the semantics of the two transformations separate, and restrict the latter to `scale` + `translation`.
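For reference, a minimal sketch of where the two kinds of transformations would live (field names follow the 0.4-style multiscales layout; the axis names and numeric values are invented for illustration), with the per-dataset transformations restricted to `scale` + `translation` and the top-level `coordinateTransformations` reserved for transformations applied to every level:

```json
{
  "multiscales": [
    {
      "axes": [
        {"name": "y", "type": "space", "unit": "micrometer"},
        {"name": "x", "type": "space", "unit": "micrometer"}
      ],
      "datasets": [
        {
          "path": "0",
          "coordinateTransformations": [
            {"type": "scale", "scale": [0.5, 0.5]}
          ]
        },
        {
          "path": "1",
          "coordinateTransformations": [
            {"type": "scale", "scale": [1.0, 1.0]}
          ]
        }
      ],
      "coordinateTransformations": [
        {"type": "translation", "translation": [10.0, 20.0]}
      ]
    }
  ]
}
```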
2. Use in software
I would find it unexpected if, while zooming in napari, the image, contrast limits, or colors suddenly changed. This could happen if a multiscale image were allowed to contain images that are not downscaled versions of one another. Also, having very different images at the various scales could lead to algorithms giving very different results depending on which scale is used.
3. Limitations
If using separately acquired images for the different scales is allowed, then the transformations for the multiscales are either `scale` + `translation` or general transformations. In the first case the user would not be able to use an image as a level in a multiscale image if that image has the slightest rotation, which would feel strange to me. In the second case, implementations (e.g. napari) would become much more complex, since we can't just pass a list of images anymore.
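To make the implementation point concrete, here is a minimal numpy sketch (all names and values are invented) of the `scale` + `translation`-only case: a multiscale image is just a list of arrays plus a per-level scale factor, and mapping coordinates between levels is simple arithmetic, with no general transformation machinery needed.

```python
import numpy as np

# Sketch (hypothetical values): with datasets restricted to scale + translation,
# a multiscale image is just a list of arrays plus a per-level scale factor,
# which is exactly the plain image pyramid that viewers can consume directly.
full = np.arange(16, dtype=float).reshape(4, 4)      # level 0
down = full.reshape(2, 2, 2, 2).mean(axis=(1, 3))    # level 1: 2x2 block means
pyramid = [full, down]
scales = [1.0, 2.0]  # physical size of one pixel along each axis, per level

def to_physical(level, index):
    """Map an array index at a given level to a physical coordinate."""
    return index * scales[level]

# Pixel 2 at level 0 and pixel 1 at level 1 sit at the same physical position,
# and block-mean downscaling preserves global statistics such as the mean:
print(to_physical(0, 2), to_physical(1, 1))  # 2.0 2.0
print(full.mean(), down.mean())              # 7.5 7.5
```

With arbitrary per-level transformations instead, the viewer would have to resample or warp each level independently, which is why restricting dataset transformations keeps implementations simple.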
Final comment
Common use cases would, of course, still be supported: a user with an additional image would still be able to align it to a multiscale image via the new coordinate transformations in `multiscales->coordinateTransformations`.