VolkerH opened this issue 2 years ago
Just my personal opinion here: I think having an .affine property on layers was a wrong technical decision. Theoretically, we have to make all plugins and image processing algorithms compatible with the .affine parameter. It would be much easier to actually apply an affine transform to an image, then show it as a layer without any .affine parameter. Then, images could just be processed. So I'd say this feature request is out-of-scope.
> Theoretically, we have to make all plugins and image processing algorithms compatible with the .affine parameter.

I see that that is painful. I'd just be happy if a plugin tells me whether it supports this or not.
> So I'd say this feature request is out-of-scope.

Totally fine.
> It would be much easier to actually apply an affine transform to an image, then show it as a layer without any .affine parameter.
I disagree here. The affine is a layer property that can be changed, and the rendering will be applied by the GPU without additional data transfers from RAM. It also works with pyramids. Applying the affine to the underlying data (e.g. for a whole-slide image, including all pyramid levels) would be very expensive and time-consuming.
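To illustrate the difference, here is a minimal sketch (array size and affine values are made up): setting the layer's affine leaves the pixel data untouched and lets napari apply the transform at render time, whereas baking the transform into the data means resampling every pixel, and for a pyramid every level:

```python
import numpy as np
import napari
from scipy.ndimage import affine_transform

data = np.random.random((4096, 4096))  # stand-in for one pyramid level

# Cheap: the data stays untouched; napari applies the affine when rendering the layer.
viewer = napari.Viewer()
viewer.add_image(
    data,
    affine=np.array([[1, 0, 100],   # 3x3 homogeneous matrix, here a pure translation
                     [0, 1, 250],
                     [0, 0, 1]]),
)

# Expensive: resample the pixels themselves into a new array (and this would have to
# be repeated for every pyramid level of a whole-slide image).
baked = affine_transform(data, np.eye(2), offset=(-100, -250))
```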
The use case I have for a crop plugin (I thought about writing one myself last week, then did a search and found this one) is the following. We have whole-slide images (pyramid, chunked, backed by lazy-loaded zarr) that get registered to a world coordinate system (multi-well layout) using an .affine property.

Now, we want to run cellpose on many regions of that slide (which is very time-consuming). Therefore it makes sense to run cellpose on a few small representative regions and optimize the parameters (flow_threshold etc.) on those small regions before running all the large areas in batch.

The cellpose-napari plugin works on individual layers, so a quick way to achieve what I want is a crop plugin that allows me to create a small region-of-interest layer on which I can run cellpose.
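A rough sketch of that cropping step (the layer name, ROI bounds and the assumption of a single-scale 2D array are made up for illustration): slice a small block out of the layer's data and add it as a new layer, composing the original affine with a translation by the crop origin so the ROI stays registered in world coordinates:

```python
import numpy as np
import napari

viewer = napari.current_viewer()
slide = viewer.layers['slide']  # hypothetical whole-slide image layer

# data-space bounds of a small representative region (made-up values)
rmin, rmax, cmin, cmax = 10_000, 11_024, 20_000, 21_024

# materialises only this small block, even for a lazily loaded zarr/dask array
roi = np.asarray(slide.data[rmin:rmax, cmin:cmax])

# crop coords -> original data coords is a translation by the crop origin;
# composing it with the layer's affine keeps the ROI in the same world position
crop_affine = slide.affine.affine_matrix @ np.array([[1, 0, rmin],
                                                     [0, 1, cmin],
                                                     [0, 0, 1]])
viewer.add_image(roi, name='roi_for_cellpose', affine=crop_affine)
```

The new layer can then be handed to the cellpose-napari plugin for parameter tuning on the small region.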
I will have a play sometime in the coming weeks with supporting affine and see how far I get.
> the rendering will be applied by the GPU without additional data transfers from RAM
If we could process images on the GPU without back-and-forth data transfer to RAM, that would be amazing, I agree. See also: https://github.com/napari/napari/issues/2243
> We have whole-slide images (pyramid, chunked, backed by lazy-loaded zarr) that get registered to a world coordinate system (multi-well layout) using an .affine property.
So you actually just use the translation part of the affine transform, right? Maybe it's much easier to implement it with this limitation.
My code creates an affine matrix that could be decomposed into scaling and translation. However, the Nikon nd2 files also have a rotation in the metadata for the stage/camera calibration, and I was hoping to add that as well, so I would prefer to use the .affine. I actually don't see a big problem accounting for the .affines: you have the transforms, and you simply calculate which pixels in the data the corners of the transformed region correspond to. (You can then transform the data by actually applying an affine if the region is non-rectangular.)
EDITED to add: Our use case is 2D only; I just realised that you are supporting 3D volumes as well, which may complicate matters.
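For the 2D case, a minimal sketch of that corner calculation (the function name is mine): map world-space corner points back through the inverse of the layer's data-to-world affine and take the bounding box in pixel coordinates:

```python
import numpy as np

def world_corners_to_data_bounds(corners_world, affine_matrix):
    """Map world-space corner points to a pixel-aligned bounding box in data coordinates.

    corners_world : (N, 2) array of (row, col) world coordinates.
    affine_matrix : (3, 3) homogeneous matrix mapping data -> world, as stored on the layer.
    """
    inv = np.linalg.inv(affine_matrix)
    homog = np.column_stack([corners_world, np.ones(len(corners_world))])
    corners_data = (inv @ homog.T).T[:, :2]
    lo = np.floor(corners_data.min(axis=0)).astype(int)
    hi = np.ceil(corners_data.max(axis=0)).astype(int)
    return lo, hi  # crop with data[lo[0]:hi[0], lo[1]:hi[1]]
```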
Both the image layer to crop from and the shape layer defining the crop regions could have an .affine property. I don't think these are currently taken into account.
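A small sketch of how both affines could be taken into account (the function name and the 2D assumption are hypothetical): push the shape's vertices through the shapes layer's data-to-world affine, then pull them back into the image layer's data coordinates with the inverse of the image affine:

```python
import numpy as np

def shape_vertices_in_image_data(vertices, shapes_affine, image_affine):
    """Map shape vertices (shapes-layer data coords) into image-layer data coords.

    vertices      : (N, 2) array of (row, col) coordinates.
    shapes_affine : (3, 3) data -> world matrix of the shapes layer.
    image_affine  : (3, 3) data -> world matrix of the image layer.
    """
    homog = np.column_stack([vertices, np.ones(len(vertices))])
    world = shapes_affine @ homog.T                               # shapes data -> world
    image_data = (np.linalg.inv(image_affine) @ world).T[:, :2]   # world -> image data
    return image_data
```

The resulting coordinates can then be used to index (or mask) the image layer's data directly.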