Heh, I was wondering about this recently myself. Nice investigative work with the Affine class. I think you are probably right that the correct interpretation is to compose the transformations, but the question is, in what order? And by the way, in general, I don't think we should override the scale (with `nap_aff.scale = ...`), but rather compose it with a new transform containing the scale.
Based on your suggestion (that elastix takes the scale as an input and then does the registration), it sounds to me like we should apply scale/translate first, then the given affine.
@sofroniewn, what do you think?
> Curious if this is intentional. I am happy to take a stab if someone can tell me where things may need to be changed.
Hi @NHPatterson thanks for raising this! Just to make sure I understand what this issue is about - right now when you pass both `affine` and `scale` we just silently ignore the scale and only use the affine; you can see that here: https://github.com/napari/napari/blob/679fd1d7c6d6bd41b9ae0daca642d35af2691c86/napari/layers/base/base.py#L210-L236

When you then look at `layer.scale` you actually get the scale contribution of the decomposition of the affine! It seemed to make sense at the time, but maybe wasn't the best choice.
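To make that behavior concrete, here is a minimal sketch (assuming a napari 0.4.x-era `Image` layer; the exact reported values depend on the version):

```python
import numpy as np
from napari.layers import Image

data = np.random.random((64, 64))

# A pure 90-degree rotation as the affine, plus a separate scale of 2.
affine = np.array([[0, -1, 0],
                   [1,  0, 0],
                   [0,  0, 1]])

layer = Image(data, affine=affine, scale=(2, 2))

# With the behavior described above, the scale keyword is effectively ignored:
# layer.scale reports the scale decomposed from the affine (roughly (1, 1) here),
# not the (2, 2) that was passed in.
print(layer.scale)
print(layer.affine.affine_matrix)
```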
An alternative API would be to have `affine`, `translate`, `scale`, `rotate`, and `shear` all be independent of each other, store them separately, and then compose them in some default order rather than returning the decomposition. If we do that, should we still allow `rotate` and `shear` to be input separately?
Ah - I see that mid-writing @jni has just proposed something like that and asked my thoughts too :-)

Yes, I think we should probably change something, but I want to think a little more about the final API, for example would we ever provide things like `layer.composed_scale` to the user, and what if people now have a more complex series of transforms that they want to support?

There was some suggestion in #1663 of just improving the API for our transforms in general, which might be a more robust way to solve this longer term. I'm not quite sure yet what is best and very open to more input.
Awesome work, @NHPatterson !
As suggested by @jni, keeping `scale`, `translate`, `rotate`, and `affine` separate and composing them in a specific order could be a good improvement.

For compatibility with elastix and a number of other imaging tools, e.g. ITK, ImageJ, the ideal composition order is:

affine * (rotate * scale + translate)
If skew is needed, it could be:
affine * skew * (rotate * scale + translate)
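As a rough sketch of what that ordering means in homogeneous coordinates (the names and values here are illustrative, not napari's internals):

```python
import numpy as np

def homogeneous(linear, translation=(0.0, 0.0)):
    """Embed a 2x2 linear part and an optional translation into a 3x3 matrix."""
    out = np.eye(3)
    out[:2, :2] = linear
    out[:2, 2] = translation
    return out

theta = np.deg2rad(30)
rotate = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
scale = np.diag([2.0, 0.5])
skew = np.array([[1.0, 0.3],
                 [0.0, 1.0]])
translate = (10.0, -5.0)
affine = np.eye(3)  # an extra user-supplied affine, e.g. from elastix

# "rotate * scale + translate" as one homogeneous matrix, then skew, then affine.
composed = affine @ homogeneous(skew) @ homogeneous(rotate @ scale, translate)
print(composed)
```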
For using elastix directly in napari, there is now an elastix_napari plugin, cheers to @ViktorvdValk ! -- feedback and contributions are welcome!
Ok awesome @thewtex - let's adjust this to do
affine * skew * (rotate * scale + translate)
I think we should also then avoid all the decomposition stuff that we're doing all over the place as well, which should make everything much faster. We can work on this for the 0.4.9 release.
> For using elastix directly in napari, there is now an elastix_napari plugin, cheers to @ViktorvdValk ! -- feedback and contributions are welcome!
That's great, thanks for sharing!! Excited to check that out
> If skew is needed, it could be:
> affine * skew * (rotate * scale + translate)
@thewtex Did you define that order (with parentheses in those particular places) because you think of `skew` as being absorbed into the affine component of that composition, rather than as something that describes how the pixels in an image are sampled? Or is there another reason?

Alternatively, could we consider the following order?

affine * (rotate * skew * scale + translate)

In this case, you could think of `rotate * skew` as defining ITK's direction matrix, which seems to support (or at least not forbid) non-orthogonal directions according to ImageBase::SetDirection, but not really expect them. My guess is that a lot of tools based on ITK will either break or not handle non-orthogonal directions properly.
The nice thing about the second order of `rotate * skew * scale` is that we can use the QR-decomposition to resolve each component from any affine matrix. We rely on extracting some of these components from general affine matrices in napari, so this would mean fewer code changes, and maybe fewer breakages.
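For reference, a small sketch of how `rotate`, `skew`, and `scale` could be recovered from the linear part of an affine with a QR decomposition (illustrative only, not napari's exact implementation):

```python
import numpy as np

# Linear (non-translation) part of an affine built as rotate @ skew @ scale;
# any 2x2 matrix would do.
theta = np.deg2rad(30)
rotate_in = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
skew_in = np.array([[1.0, 0.3],
                    [0.0, 1.0]])
scale_in = np.diag([2.0, 0.5])
linear = rotate_in @ skew_in @ scale_in

# QR decomposition: Q is orthogonal (the rotation), R is upper triangular.
q, r = np.linalg.qr(linear)

scale_out = np.diag(r).copy()   # diagonal of R -> scale (entries may come out negative)
skew_out = r / scale_out        # divide each column by its scale -> unit-diagonal upper triangular
rotate_out = q

# The three factors recompose to the original linear part.
assert np.allclose(rotate_out @ skew_out @ np.diag(scale_out), linear)
```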
It's the first time I've had to think about skew/shear as an acquisition/sampling property, so any thoughts would be welcome.
affine * (rotate * skew * scale + translate) seems very reasonable to me, and I can see how it will be nicer with the QR-decomposition, but good to know if there are good reasons to consider the alternative!
> My guess is that a lot of tools based on ITK will either break or not handle non-orthogonal directions properly.
You are right. Most image processing algorithms do not support skew, but this is not limited to ITK.
> It's the first time I've had to think about skew/shear as an acquisition/sampling property,
Yes, an image is defined by sampling that is on a uniformly spaced, rectilinear grid.
In ITK, there are no checks that a `Direction` is orthogonal, but most algorithms are not valid without that assumption; for ITK, the Direction is taken to be orthonormal. Without a uniformly spaced, rectilinear grid, many image processing algorithms become complex or intractable, and the assumptions that dramatically improve performance can no longer be made.
So, the logic of applying `affine` and `skew` afterwards was to separate them from the uniformly spaced, rectilinear grid properties. Taking it further, these could be left out of an `ImageLayer`. But, if they are there, maybe the order does not matter :-).
Thanks for the feedback @thewtex.
In the last napari community meeting, we decided to keep our current implied ordering rotate * shear * scale + translate. As napari's primary concern is visualization, the order of these transforms (and the presence of shear) is not critical, and that order was believed to be the most natural when dealing with visualization of datasets that do contain some kind of shear (e.g. lattice light-sheet microscopy).
After merging PR 2855, this issue should be fixed in that the `affine` property is defined independently of the other transform properties like `scale`, `translate`, and `rotate`. Those latter properties could be used like ITK's spacing, origin, and direction. For some plugins (including those based on ITK), it probably makes sense to at least warn if the napari Image layer has a non-identity shear, and maybe error.
One other thing to watch out for is that napari currently absorbs reflections into the scale component (i.e. the scale component can contain negative values), so the scale, translate, and rotate parameters may not always come out exactly as they went in (i.e. if the rotate matrix passed in has a determinant of -1, the one returned will have a determinant of 1 and the corresponding scale parameter will be negated). I think we can change that without too much work though.
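A concrete illustration of that convention (hypothetical numbers, not napari's actual decomposition code):

```python
import numpy as np

# A reflection (determinant -1) passed as the "rotate" part, with positive scale.
rotate_in = np.array([[1.0, 0.0],
                      [0.0, -1.0]])   # det = -1
scale_in = np.diag([2.0, 3.0])
linear = rotate_in @ scale_in

# A decomposition that requires det(rotation) == +1 has to push the reflection
# into the scale: it can return the identity rotation and a negated scale entry.
rotate_out = np.eye(2)                # det = +1
scale_out = np.array([2.0, -3.0])     # negative along the reflected axis

assert np.allclose(rotate_out @ np.diag(scale_out), linear)
```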
> After merging PR 2855, this issue should be fixed in that the affine property is defined independently of the other transform properties like scale, translate, and rotate. Those latter properties could be used like ITK's spacing, origin, and direction. For some plugins (including those based on ITK), it probably makes sense to at least warn if the napari Image layer has a non-identity shear, and maybe error.
:+1: makes sense, and a good path forward. Thanks for working on this.
> One other thing to watch out for is that napari currently absorbs reflections into the scale component (i.e. the scale component can contain negative values), so the scale, translate, and rotate parameters may not always come out exactly as they went in (i.e. if the rotate matrix passed in has a determinant of -1, the one returned will have a determinant of 1 and the corresponding scale parameter will be negated). I think we can change that without too much work though.
Ah, thanks for the note. Yes, if we can expect scale to always be positive and any negations absorbed into rotate, it will simplify working with `ImageLayer`, since pixel spacing in other contexts is taken to be positive.
🚀 Feature

Motivation
I am working on using transformations derived from elastix with napari's 2D linear transformation rendering. The good news is that it mostly works, but I have a question regarding physical scales. elastix takes these into account during registration and returns a transformation together with the "spacing", i.e. the distance between pixels. When using this transformation matrix in napari, the affine transform and the scale to physical dimensions are not composed together. See the example below:
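(The original snippet is not preserved here; the following is a reconstruction of the kind of call being described, with hypothetical images, spacing, and transform values.)

```python
import numpy as np
import napari

# Stand-ins for the fixed and moving images from the registration.
fixed_image = np.random.random((256, 256))
moving_image = np.random.random((256, 256))

# A hypothetical 2D rigid transform recovered from elastix (rotation + translation
# in physical units), plus the pixel spacing that elastix reports.
elastix_affine = np.array([[0.966, -0.259, 12.0],
                           [0.259,  0.966, -4.0],
                           [0.0,    0.0,    1.0]])
spacing = (0.5, 0.5)  # physical size of a pixel

viewer = napari.Viewer()
viewer.add_image(fixed_image, name='fixed', scale=spacing)
viewer.add_image(moving_image, name='moving', scale=spacing, affine=elastix_affine)

# The affine comes back unchanged, but the scale passed alongside it appears to
# be ignored for the 'moving' layer.
print(viewer.layers['moving'].affine.affine_matrix)
print(viewer.layers['moving'].scale)
```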
The result: the rotation and translation from elastix are applied correctly, but the fact that the image should be half the size is ignored. The affine matrix printed after the layer is initialized is the same as the original, which indicates that the `scale` argument is overridden or ignored when `affine != None`.

Pitch
Adding scaling to the affine matrix fixes this issue:
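(Again the original code is not preserved; the workaround amounts to folding the spacing into the affine before handing it to napari, along these lines, continuing the hypothetical example above.)

```python
# Uses `viewer`, `spacing`, `elastix_affine`, and `moving_image` from the sketch above.
# Scale pixel indices to physical units first, then apply the elastix affine.
scale_matrix = np.diag([*spacing, 1.0])
scaled_affine = elastix_affine @ scale_matrix

viewer.add_image(moving_image, name='moving scaled', affine=scaled_affine)
```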
Curious if this is intentional. I am happy to take a stab if someone can tell me where things may need to be changed.