brainglobe / brainglobe-atlasapi

A lightweight python module to interact with atlases for systems neuroscience
https://brainglobe.info/documentation/brainglobe-atlasapi/index.html
BSD 3-Clause "New" or "Revised" License

Sort out confusion with cartesian/ij indexing #73

Closed vigji closed 9 months ago

vigji commented 4 years ago

As per the slack discussion:

There is some ambiguity when displaying numpy arrays, coming from the fact that image-first applications (e.g. matplotlib, brainrender) follow the convention that, as the numpy array indices increase within a slice, you move downward and rightward in the image. In contrast, cartesian plotting assumes that increasing values move you upward and rightward in the plot. As a result, inconsistencies can arise. Napari avoids these inconsistencies by adopting the image-first convention for points as well.
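
A minimal, self-contained illustration of the mismatch (toy data only, nothing atlas-specific):

import numpy as np
import matplotlib.pyplot as plt

# The same indices, once with the image convention (origin top-left, row index
# grows downward) and once with the cartesian convention (origin bottom-left).
img = np.zeros((10, 10))
img[2, 7] = 1  # row 2, column 7

fig, (ax_img, ax_cart) = plt.subplots(1, 2)
ax_img.imshow(img)         # image convention: bright pixel near the top right
ax_cart.scatter([7], [2])  # cartesian: the same numbers land near the bottom right
ax_cart.set_xlim(0, 9)
ax_cart.set_ylim(0, 9)
plt.show()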

This has a series of consequences:

1.

If we want to describe the raw ARA origin as we get it with the bg-space convention, it is actually “asr” and not “asl”. To be sure about this, we can do the following:

from pathlib import Path

import napari
from allensdk.core.reference_space_cache import ReferenceSpaceCache

download_dir_path = Path.home() / "allen_cache"  # any writable cache directory

spacecache = ReferenceSpaceCache(
    manifest=download_dir_path / "manifest.json",
    # downloaded files are stored relative to here
    resolution=100,
    reference_space_key="annotation/ccf_2017"
    # use the latest version of the CCF
)
ara_ref, h = spacecache.get_template_volume()
ara_ref[:, :, :57] = ara_ref[:, :, :57] / 2  # dim the first half along the last axis
viewer = napari.view_image(ara_ref)

and we can confirm that what gets dimmed is the right hemisphere and not the left one. Conclusion: document how the ARA description is confusing, and move the standard BG orientation to “asr”.

2.

This introduces confusion for applications using cartesian indexing. As a simple example, this

import numpy as np
import matplotlib.pyplot as plt
import napari
from bg_atlasapi import BrainGlobeAtlas  # the package name at the time of this issue

at = BrainGlobeAtlas("example_mouse_100um")
mod_ref = at.reference.copy()
# Create some points:
three_pts = np.array([[50, 40, 80], [60, 40, 80], [60, 40, 20]])  # 2 in left hem, 1 in right hem
# Take points from root mesh for scatterplot:
bounds = at.root_mesh().points[::10, :] / 100
# Display everything in napari:
viewer = napari.view_image(mod_ref)
viewer.add_points(bounds, size=1, n_dimensional=True)
viewer.add_points(three_pts, size=10, n_dimensional=True)
# Display the same data as a cartesian 3D scatterplot:
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(bounds[:, 0], bounds[:, 1], bounds[:, 2], s=1)
ax.scatter(three_pts[:, 0], three_pts[:, 1], three_pts[:, 2], s=20)

produces a correct napari view (with 2 dots in the left hemisphere and 1 in the right one), but a flipped scatterplot, as I guess brainrender would also produce. The easiest solution, ugly as it might sound, would be to just invert one axis (it does not really matter which one) in cartesian-indexed applications such as brainrender. This:

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(-bounds[:, 0], bounds[:, 1], bounds[:, 2], s=1)
ax.scatter(-three_pts[:, 0], three_pts[:, 1], three_pts[:, 2], s=20)

produces a plot with the correct orientation. For 2D applications (we don’t have any), one would have to flip the y when displaying.
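
As a minimal sketch of that 2D fix (reusing three_pts from the example above): scatter the within-slice coordinates and invert the y axis so the cartesian plot matches the image-first view.

import matplotlib.pyplot as plt

# Scatter the within-slice coordinates of the points defined above
# (column index on x, row index on y) and flip y to match the image convention.
fig, ax = plt.subplots()
ax.scatter(three_pts[:, 2], three_pts[:, 1], s=20)
ax.invert_yaxis()  # rows grow downward in images, so flip the cartesian y axis
plt.show()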

3.

Finally, we always need to consider that when looking at the sliced data (both with matplotlib and with the napari viewer in 2D mode), we are looking from the perspective of the slicing coordinate at point 0: for asr orientation, this means looking from the front, which always inverts left and right hemispheres (the left side of the image is the right hemisphere).

This is 100% a display problem, as it will arise with matching underlying stack and probe coordinates just by using different kinds of views. So I would not solve it by messing with the data (e.g. exporting “brainrender compatible” coordinates), but by making one application (I would suggest brainrender, flipping an axis) compatible with the other.

What needs to be done:

FedeClaudi commented 4 years ago

making sure brainrender displays images coherently with the napari viewer, ideally going through the above point.

would you suggest doing this by flipping an axis?

vigji commented 4 years ago

Flipping an axis would work, but it introduces a random hardcoded flip that might be a source of problems and confusion for users.

My suggestion to handle this (and future similar problems) would be: 1) have in the SpaceConvention class an additional specification of the way the origin is defined, either cartesian or in the ij system; 2) integrate the SpaceConvention object a bit more into the BrainGlobeAtlas object, so that it has a self.map_atlas_to_space() method that you call to lazily transform all the data you will use from the atlas.

In this way, you just call that method at the instantiation of your BrainRender atlas so that it matches a cartesian definition of the origin and it flips everything accordingly, and no random axes flips should be required brainrender-wise
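
A rough sketch of what such a mapping could do to point data (the name map_atlas_to_space and the helper below are hypothetical, purely to illustrate the idea; nothing like this exists in bg-atlasapi yet):

import numpy as np

# Hypothetical illustration only: flip one axis of ij-indexed points (in um) so
# they match a cartesian definition of the origin. A real map_atlas_to_space()
# would read the origin convention from the SpaceConvention rather than taking
# the axis as an argument.
def map_points_to_cartesian(points_ij, stack_shape_um, flip_axis=2):
    points = np.asarray(points_ij, dtype=float).copy()
    points[:, flip_axis] = stack_shape_um[flip_axis] - points[:, flip_axis]
    return points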

FedeClaudi commented 4 years ago

that sounds like less work in brainrender so sounds very good to me :P

adamltyson commented 4 years ago

either simply move standard orientation to asr, or better address this concurrent description when instantiating the SpaceConvention from bg-space, in a way that addresses the ambiguity (see brainglobe/bg-space#6).

I think we should address any ambiguity, but that we should "officially" change the standard orientation to asr, as we're pretty much tied to numpy anyway, and I think this makes sense to most scientific python programmers.

In this way, you just call that method at the instantiation of your BrainRender atlas so that it matches a cartesian definition of the origin and it flips everything accordingly, and no random axes flips should be required brainrender-wise

Sounds good, as long as it's in the GUI too :P

FedeClaudi commented 4 years ago

once it's implemented in brainrender it will show up in the GUI too :P

vigji commented 4 years ago

I think we should address any ambiguity, but that we should "officially" change the standard orientation to asr, as we're pretty much tied to numpy anyway, and I think this makes sense to most scientific python programmers.

Agreed, so both change it to ASR and add the method in bg-space.

FedeClaudi commented 4 years ago

@vigji are you going to take care of it in bg-space? Do you mind giving me a shout when it's done?

vigji commented 4 years ago

yes, I am thinking about it right now! Will come back to you

vigji commented 4 years ago

Sorry, I am tinkering with things and discussing with a very good colleague, and I was getting worried that adding that in bg-space might introduce too many degrees of freedom and ambiguity. Is it possible to just apply a transformation matrix to the entire scene in vtk? If that is doable, we can just: 1) specify very clearly that all BG indexing is based on the conventional numpy ij way for images; 2) change a single point in brainrender that makes it compatible with such indexing (the way napari does for its meshes- and points-related functionality).

FedeClaudi commented 4 years ago

Mmm, I don't think so. If there is, I don't know about it... I could ask Marco though... I could apply a transform to each mesh's points just before render though.

vigji commented 4 years ago

I could apply a transform to each mesh's points just before render though.

No, if that is required I would handle it in bg-atlasapi + bg-space. But if it is possible to set a global transformation, that would be the no. 1 solution in terms of cleanness, I believe.

vigji commented 4 years ago

Let me know what Marco says. Looking into vtk https://vtk.org/doc/release/5.0/html/a01919.html it should be possible - don't know if vedo exposes something for it, but maybe it would be accessible through vedo's classes?

FedeClaudi commented 4 years ago

Sounds like there's a way, vedo is so great! https://github.com/marcomusy/vedo/issues/203


PS: it will have to be done on each mesh individually, but it should be easy to do when the mesh is created.

vigji commented 4 years ago

Mmm ok, I am not a big fan of the idea of doing it separately everywhere... but I guess that having a method for doing it in the atlas class on the atlas meshes would not solve it, right? Because there are still all the neurons, etc.

FedeClaudi commented 4 years ago

yeah, it would have to be in scene.add_actor, and we'd make sure that all the various add_region, add_neurons... methods go through that. It's fine really.

The alternative would be to have it happen in render (again for each neuron individually). Once you have a matrix transform ready we can play around with it and see what's best.

vigji commented 4 years ago

well that should be easy! Something like

1  0  0  0
0  1  0  0
0  0 -1  0

Should already flip the correct axis

FedeClaudi commented 4 years ago

Seems to work.

These are MOs neurons from the mouse light project (note somas in the left hemisphere)

image

and in brainrender:

image

prior to the introduction of the transform the mouselight neurons would show up on the right.


I'd be tempted to implement this just in Render.render() so that it happens at once and we are sure to catch all actors. The only downside I can think of is if someone does stuff to/with a mesh's points before the flip. In that case there might be some surprises. The alternative is to implement it whenever/wherever a mesh object is loaded or created, but that's a bit cumbersome.

FedeClaudi commented 4 years ago

Might have spoken too soon... it does look like the actors have been reflected, but look at the coordinates on the ML axis:

image

vigji commented 4 years ago

I'd be tempted to implement this just in Render.render() so that it happens at once and we are sure to catch all actors. The only downside I can think of is if someone does stuff to/with a mesh's points before the flip. In that case there might be some surprises.

Definitely go for that option! In this way, you basically turn brainrender into an ij-based viewer, as napari is.

The negative axis is normal, as with that transformation matrix you are flipping around the origin. To prevent that, use

1  0  0  0
0  1  0  0
0  0 -1  x

where x is the size of your space along the L-R axis, in microns.

FedeClaudi commented 4 years ago

Definitely go for that option! In this way, you basically turn brainrender into an ij-based viewer, as napari is.

Which option?

  1. at render (i.e. after all actors have been added to scene)
  2. at load (happens once per actor) ?

Also, just for reference, the matrix needs to be 4x4, so:

1  0  0  0
0  1  0  0
0  0 -1  x
0  0  0  1
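
For concreteness, a quick numpy check of what that matrix does to a point in homogeneous coordinates (11400 um here is purely an illustrative L-R extent, not taken from a specific atlas):

import numpy as np

x = 11400  # illustrative L-R extent of the volume, in um
T = np.array([
    [1, 0,  0, 0],
    [0, 1,  0, 0],
    [0, 0, -1, x],
    [0, 0,  0, 1],
], dtype=float)

p = np.array([5000, 3000, 2000, 1])  # homogeneous point, asr axis order
print(T @ p)  # [5000. 3000. 9400. 1.] -> mirrored to the other side of the volume
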
vigji commented 4 years ago

At render! So you have a single entry point for handling everything

FedeClaudi commented 4 years ago

gotcha!

Regarding the x business above:

space on the L-R axis

Is this number somewhere in bg-space or bg-atlasapi? I've been using the bounds of the root mesh, but that's not very accurate (e.g. in the allen atlas there's 200um to the side of the brain which are not included in the meshes' bounds).

Other than that the new matrix works:

image

vigji commented 4 years ago

(e.g. in the allen atlas there's 200um to the side of the brain which are not included in the meshes' bounds).

What do you mean? The mesh is cut?

is this number somwhere in bg-space or bg-atlasapi?

It is the shape of the Allen stack * resolution, that's as good as we can get, I guess...

vigji commented 4 years ago

Still, note that if you add an offset, a coordinate [0, 0, 0] will be placed at a different side of the brain compared to the standard stack and bg-atlasapi [0, 0, 0].

FedeClaudi commented 4 years ago

No, the left-most point in the root mesh is at 200um from the origin along the LR direction.
If you load the atlas in volumetric mode, or if you look at a 2D slice, there's a bit of space around the brain's outline; that's where the 200um come from.

So the mesh doesn't "touch the edges of the atlas' volume", if that helps. So when you say:

where x is the size of your space along the L-R axis, in microns.

the size of the L-R axis of the entire volume is larger than that of the root mesh.

Still, note that if you add an offset, a coordinate [0, 0, 0] will be placed at a different side of the brain compared to the standard stack and bg-atlasapi [0, 0, 0]

that's why I'm pointing this out: a cell that is at a specific x, y, z point won't be at the correct coordinate after mirroring as it's currently set up (though of course visually it will be fine since all meshes will be equally shifted).

I'll try

It is the shape of the Allen stack * resolution, that's as good as we can get, I guess...

vigji commented 4 years ago

that's why I'm pointing this out: a cell that is at a specific x, y, z point won't be at the correct coordinate after mirroring as it's currently set up (though of course visually it will be fine since all meshes will be equally shifted).

Yes, so maybe keeping the negative axis would actually be less confusing than offsetting. At least in that way, if I have a cell detected at voxel [10, 20, 30] of my stack at 100um resolution it will be displayed at [1000, 2000, -3000]; otherwise the 3rd value is completely unrelated to the original one.

vigji commented 4 years ago

Does vedo support changing the axis tick labels à la matplotlib?

FedeClaudi commented 4 years ago

I'd have to play around with it.

But I think I'm misunderstanding something: the problem is not with the offset x, the problem is that x needs to be computed using the volume's size, not the root mesh's size.

E.g., say a cell is at 400um from the origin along the LR axis, and let's say that the total volume was 1000um in width. Further assume that the root mesh leaves 200um on either side. So volume goes 0 -> 1000um and root goes 200->800um.

Currently I'm using root to compute x as 800-200=600. So when I mirror the cell which is originally at 400 I get -400 + 600=200.

If instead I computed x with the volume size (x=1000) I'd get -400+1000=600 which is correct: the cell was at 400 from one edge of the volume and now it's at 400 from the other.

Does this make sense to you? If I use the proper volume size from bg-atlasapi then it should be okay to add the offset I think. I do get your point about leaving the negative value, but I think it's better to add the proper offset, mostly for consistency with other parts of the bg ecosystem.
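
To make the arithmetic above concrete (toy numbers from this comment, not real atlas values):

# Toy numbers from the comment above, not real atlas values.
volume_size = 1000            # full L-R extent of the volume, in um
root_lo, root_hi = 200, 800   # root mesh bounds along L-R
cell = 400                    # cell position along L-R, measured from the volume edge

x_from_root = root_hi - root_lo           # 600
mirrored_with_root = -cell + x_from_root  # 200 -> off by the two 200 um margins

mirrored_with_volume = -cell + volume_size  # 600 -> 400 um from the opposite edge, as expected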

marcomusy commented 4 years ago

Does vedo support changing the axis tick labels à la matplotlib?

Yes - check out example vedo -r customaxes

Currently I'm using root to compute x as 800-200=600. So when I mirror the cell which is originally at 400 I get -400 + 600=200.

about offsets, maybe one possibility is to define a different origin point with mesh.origin() or volume.origin()

vigji commented 4 years ago

If instead I computed x with the volume size (x=1000) I'd get -400+1000=600 which is correct: the cell was at 400 from one edge of the volume and now it's at 400 from the other.

Yes, exactly, that's precisely how I would use the volume size! Actually, we left a shape key in the metadata exactly for calculating the volume size in these scenarios.

But I insist that placing an offset will confuse users: if I detect my probe at some pixel, and I calculate its position in microns, I don't want it to be converted to the position w.r.t. the other side of the brain, where I would see a different value. The correct number would be shown if we take the negative axis and show it with positive values; so maybe what @marcomusy pointed out could solve this in the best way.
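
For reference, this is the matplotlib version of that trick (keep negative coordinates internally, show absolute values on the tick labels); whether vedo exposes an equivalent hook is the open question here:

import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter

# Plot against negative coordinates, but label the ticks with their absolute values.
fig, ax = plt.subplots()
ax.scatter([-1000, -2000, -3000], [0, 1, 2])
ax.xaxis.set_major_formatter(FuncFormatter(lambda val, pos: f"{abs(val):g}"))
plt.show()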

FedeClaudi commented 4 years ago

Okay, maybe I'm missing something here.

Let's say I have a point at coordinates 5000, 3000, 2000, as shown in the figure; it's in the left hemisphere.

If I change mesh.origin from 0, 0, 0 to 0, 0, x, nothing happens.

Note that if I print mesh.origin I get the new value, but visually everything's the same and the point is still in the left hemisphere.

If instead I apply the transform first and then change the origin, the transform flips the hemisphere but again the change of origin doesn't seem to change anything (i.e. the point is now at 5000, 3000, 7000, same as if I did the transform only).

what am I missing here?

vigji commented 4 years ago

I don't know what mesh.origin() does and how it interacts with the other transformations. However, it exemplifies why I don't like the idea of applying an offset: one would like a point at 5000, 3000, 2000, coming from the right hemisphere in the numpy stack, to be both visualised in the right hemisphere and have a corresponding coordinate of 5000, 3000, 2000 on the axis. To do that, one would need to set the transformation without the offset, and then change the labels to show their absolute value.

FedeClaudi commented 4 years ago

To do that, one would need to set the transformation without the offset, and then change the labels to show their absolute value.

Both of these things can be done easily, but I'm still not very happy with the solution. It feels hacky. It is true that the transforms would only happen at render time so it shouldn't be messing things up if e.g. people want to do something with a point's coordinates, but somehow it doesn't feel right to leave a coordinate with negative values.

This might become a problem on more interactive visualisations like the GUI.

@adamltyson do you have any preference on this?

vigji commented 4 years ago

I agree that it feels hacky, but it is the only way of making the brainrender system compatible with napari. In napari, scattering a point at 5000, 3000, 2000 would actually place it where it would be in brainrender after the flip. But yes, it is important to hear from @adamltyson here too.

marcomusy commented 4 years ago

Hi, why not then use mesh.mirror(), which flips the underlying polygonal data ignoring the current transformation:

FedeClaudi commented 4 years ago

Hi,

I don't think that mirror is enough, unless we can mirror around a point. If I understood it correctly, mirror moves the mesh's points along a specified direction while keeping the mesh in place. We need some meshes to move.

For instance, imagine in your example that we put a point at 2, 3, 2 so that it's at the top of the teapot from our perspective. After the transform we need the point to be on the other side of the teapot (so in the figure it would be at 2, 1, 2), but we don't want the actual coordinate values to change (i.e. the point should be on the other side of the teapot but its coordinates should still show 2, 3, 2). Using mirror would leave the point in the same location, which is not what we need.
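
The move being described is a reflection about the plane through the teapot's centre; a quick numeric check with the toy values above:

import numpy as np

# Reflect a point about the plane y = c (here c = 2, through the teapot's centre):
# y_reflected = 2 * c - y
p = np.array([2.0, 3.0, 2.0])
c = 2.0
p_reflected = p.copy()
p_reflected[1] = 2 * c - p[1]
print(p_reflected)  # [2. 1. 2.] -> visually on the other side, while the stored value should still read 2, 3, 2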

So it's not a matter of mirroring each actor in place, but rather transforming the whole scene in a way that makes brainrender more consistent with other applications based on napari and doesn't mess up the actual coordinate values.

adamltyson commented 4 years ago

@adamltyson do you have any preference on this?

I've got a bit lost.

I don't have a very strong preference, as brainrender is mostly for visualisation, so if it looks right, it's ok. However, vedo has some really useful tools that users may end up using for further analysis, so getting a negative coordinate could be very confusing.

The whole image -> atlas -> brainrender space set of transformations is already confusing enough for many people (or at least when I try to explain it), so anything we can do to simplify it, the better; but I don't have any good suggestions.

N.B. @FedeClaudi, I find using ML in these plots confusing. The axis is medial-lateral, but the points are left-right. Many people assume a point at 0 ML to be at the midline.

FedeClaudi commented 4 years ago

N.B. @FedeClaudi, I find using ML in these plots confusing. The axis is medial-lateral, but the points are left-right. Many people assume a point at 0 ML to be at the midline.

Good point, I'll change.

I don't have a very strong preference, as brainrender is mostly for visualisation, so if it looks right, it's ok. However, vedo has some really useful tools that users may end up using for further analysis, so getting a negative coordinate could be very confusing.

Keep in mind that this is only at render time, and we can have custom axes that hide the fact that the coords are negative. So users should be able to do all normal operations just fine; things just get transformed when rendering happens.

vigji commented 4 years ago

So it's not a matter of mirroring each actor in place, but rather transforming the whole scene in a way that makes brainrender more consistent with other applications based on napari and doesn't mess up the actual coordinate values.

Ok, entirely with @FedeClaudi here; @marcomusy this would not fix the problem for us.

The whole image -> atlas -> brainrender space set of transformations is already confusing enough for many people (or at least when I try to explain it), so anything we can do to simplify it, the better; but I don't have any good suggestions.

@adamltyson, as vedo seems to be based on a system with different handedness from napari, and there is legitimately no room for customizability, there is no "clean" way out of this. Either we have negative coordinates, or we transform points in a way that results in a translation of coordinates. Among the two, at least negative coordinates keep the right values, and won't be exposed to users if we change the axes.

I truly believe that the only way out of here is to implicitly use negative coordinates for that axis: 1) filter all rendering through one single transformation point; 2) change the axis labelling; and 3) warn users of the risk of using vedo functions directly rather than through the way brainrender exposes them.

adamltyson commented 4 years ago

Among the two, at least negative coordinates keep the right values, and won't be exposed to users if we change the axes.

Is this just for visualisation though? So if a user queries a point, it could be [100, 100, -100]? It would then just be rendered in the correct hemisphere, and appear at the [100, 100, 100] tickmark on the axis.

I truly believe that the only way out of here is to implicitly use negative coordinates for that axis: 1) filter all rendering through one single transformation point; 2) change the axis labelling; and 3) warn users of the risk of using vedo functions directly rather than through the way brainrender exposes them.

That sounds good to me. My original issue was that I was doing something like this in a hacky way on my own. So the workflow for data from 3rd party software would be:

vigji commented 4 years ago

No, my suggested pipeline is: the negative sign never appears anywhere on the data. It is just introduced when rendering things in brainrender. So a user scattering their data would do scene.add_point() or whatever, and the brainrender Scene methods make sure the axis is flipped. There should be nothing coming from the vedo visualised space back to the data, no? If that has to be the case, then those data will have to be absolute-valued the moment they leave the vedo classes.

The warning should be there for users that for some weird aim want to bypass the already very complete brainrender and work directly with the underlying vedo stuff; but I never did that and I guess it is a very edge-case situation.

adamltyson commented 4 years ago

No, my suggested pipeline is: the negative sign never appears anywhere on the data. It is just introduced when rendering things in brainrender.

Sounds good

There should be nothing coming from the vedo visualised space back to the data, no?

I would guess not

The warning should be there for users that for some weird aim want to bypass the already very complete brainrender and work directly with the underlying vedo stuff; but I never did that and I guess it is a very edge-case situation.

I previously would have agreed, but having played with some of the stuff in vedo, it provides some very convenient functionality for analysis.

Sorry to keep laboring this point, but I want to be 100% clear where we're going (and I'm just about to reimplement all the points/cells stuff in cellfinder). So the workflow is:

FedeClaudi commented 4 years ago

yeah, that sounds right... I don't think working with vedo is going to be an issue after the transform, unless a user cares about the actual numerical value one gets from any operation done in brainrender/vedo. To that end, we should make it clear that brainrender is for visualization. It's good to be able to do stuff in it (e.g. thresholding volume data, as per the issue opened yesterday in brainrender), but that's still just for visualization.

marcomusy commented 4 years ago

as vedo seems to be based on a system with different handedness from napari, and there is legitimately no room for customizability, there is no "clean" way out of this.

I tend to agree on this; I just wonder why napari uses the "wrong" one (from a physicist's perspective).

I don't think that mirror is enough, unless we can mirror around a point.

@FedeClaudi you can try playing around with this enhanced version (in pointcloud.py)


    def mirror(self, axis="x", origin=[0,0,0], reset=False):
        """
        Mirror the mesh  along one of the cartesian axes

        :param str axis: axis to use for mirroring, must be set to x, y, z or n.
            Or any combination of those. Adding 'n' reverses mesh faces (hence normals).

        :param list origin: use this point as the origin of the mirroring transformation.

        :param bool reset: if True keep into account the current position of the object,
            and then reset its internal transformation matrix to Identity.

        |mirror| |mirror.py|_
        """
        sx, sy, sz = 1, 1, 1
        if "x" in axis.lower(): sx = -1
        if "y" in axis.lower(): sy = -1
        if "z" in axis.lower(): sz = -1
        origin = np.array(origin)
        tr = vtk.vtkTransform()
        tr.PostMultiply()
        tr.Translate(-origin)
        tr.Scale(sx, sy, sz)
        tr.Translate(origin)
        tf = vtk.vtkTransformPolyDataFilter()
        tf.SetTransform(tr)
        tf.SetInputData(self.polydata(reset))
        tf.Update()
        outpoly = tf.GetOutput()
        if reset:
            self.PokeMatrix(vtk.vtkMatrix4x4())  # reset to identity
        if sx*sy*sz<0 or 'n' in axis:
            rs = vtk.vtkReverseSense()
            rs.SetInputData(outpoly)
            rs.ReverseNormalsOff()
            rs.Update()
            outpoly = rs.GetOutput()
        return self._update(outpoly)

test:

from vedo import *
m = Mesh(datadir+'teapot.vtk').wireframe(True).c('r')
m.pos([2,2,2])
mo = m.clone().wireframe(False).c('gold')
m.mirror('x', origin=[1,0,0], reset=1)
show(Point(), mo, m, axes=1)

image

vigji commented 4 years ago

I tend to agree on this; I just wonder why napari uses the "wrong" one (from a physicist's perspective).

That's because someone f**ked up the image processing world some time ago, so that image origin maps to the upper-left corner...

adamltyson commented 4 years ago

That's because someone f**ked up the image processing world some time ago, so that image origin maps to the upper-left corner...

Isn't it inherited from numpy, which is inherited from linear algebra notation? To my knowledge, this is how matrices have always been indexed (rather than points in cartesian space).

vigji commented 4 years ago

True, it is actually the collision between representing matrices (listed from the top down) and representing cartesian spaces. Probably the original f**k-up is Descartes's, who came up with an axis increasing in the opposite direction to the one we normally read in 😄

FedeClaudi commented 4 years ago

@marcomusy in the end we will simply go for the transform, as discussed. But thanks for the improved version of mirror, that might turn out to be useful at some point :)

FedeClaudi commented 4 years ago

Update: I've implemented the custom axes to show only the absolute value, so it should be fine:

image

However, this means that, for now at least, brainrender only supports this axes style; other styles like RulerAxis are not supported anymore and will give a NotImplementedError. That's because with other styles you get things like:

image

There are a couple of things that need to be sorted out to finalize the look of the axes, but I wanted to give you an idea of where this is going.

By the way, I couldn't move the origin of the axes; that means that the LR axis will be decreasing as you move away from the origin, while the others will be increasing... I think that ultimately we'll have to build our own BuildAxes class, as there's no reason to expect vedo to support such a level of customization.


Also, @vigji I have a question. I need to get the size of the volume along the LR direction to specify the axis ticks and range, so I do:

atlas_shape = np.array(atlas.metadata['shape']) * np.array(atlas.metadata['resolution'])

Then I use atlas_shape[2] as the atlas size. Is the axis index always 2, or does it depend on the atlas? If so, is there a way to infer that from the metadata?

vigji commented 4 years ago

I think that ultimately we'll have to build our own BuildAxes class, as there's no reason to expect vedo to support such a level of customization.

Yes I think that long term that would be the best solution, if doable!

atlas_shape = np.array(atlas.metadata['shape']) * np.array(atlas.metadata['resolution'])

From the next release you should be able to shorten that to the atlas.shape_um property! But anyway, for the indexing: BG atlases are all fixed to have asr orientation, but I would still not rely on that (e.g., our custom region atlas in the lab is not); so the cleanest way you can, in general, infer L-R from the atlas is atlas.space.axes_order.index("frontal").

I should probably shorten this to a space.get_axis_idx() method.
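
A minimal sketch putting those two pieces together (assuming the metadata keys and the atlas object used earlier in this thread):

import numpy as np

# Infer the L-R axis index from the space description instead of hardcoding 2,
# then compute the volume's physical extent along that axis.
lr_idx = atlas.space.axes_order.index("frontal")
atlas_shape_um = np.array(atlas.metadata["shape"]) * np.array(atlas.metadata["resolution"])
lr_size_um = atlas_shape_um[lr_idx]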