Closed ckemere closed 2 years ago
Or perhaps by taking the points from the regions and forming a new mesh from them?
Maybe brainrender has a better way of doing this; if not, you can merge two or more meshes with vedo.merge([m1, m2, m3, ...])
I tried that, but the atlas regions are slightly separated, so the merged mesh has a bunch of gaps in it.
That's right. Brainrender's graphics are all done in vedo, which has many methods for manipulating meshes.
Brainrender also uses BrainGlobe's Atlas API. Most atlases in there are hierarchical, and there are generally meshes for regions at different levels of the hierarchy (you can do print(scene.atlas.hierarchy) to view the brain regions tree for a given atlas).
For instance, in the mouse atlas you have Isocortex > VISp > VISp5 (the whole cortex, primary visual cortex, primary visual cortex layer 5). You can render any of these as you would any brain region:
from brainrender import Scene, settings
from myterial import blue_grey
settings.SHOW_AXES = False
# Create a brainrender scene
scene = Scene(title="brain regions hierarchy")
# Add main brain region
scene.add_brain_region("Isocortex", alpha=.2, silhouette=False, color=blue_grey)
# add second level
scene.add_brain_region("VISp", alpha=.2)
# add another level
scene.add_brain_region("VISp5", "VISp6a", "VISp6b")
# Render!
scene.render()
Hope this helps?
Otherwise you could use vedo to get the external hull of the combined set of points of two regions, but it wouldn't look as smooth as you might like. Some details about what exactly you're trying to achieve would help.
Thanks so much for that! I was trying to combine the fornix and hippocampus to generate something like the classic image from Amaral and Witter, reproduced here:
But one quick question - when I take the return values from scene.add_brain_region() and manipulate them, I find that they're offset from the center of the root. Any thoughts about that?
thanks!
You've come across Brainrender's main unresolved problem.
In order for the medio-lateral axis to be visualized consistently with other visualization tools in BrainGlobe (e.g. the napari-based ones), all meshes are mirrored with respect to the origin in the medio-lateral direction (and a custom axis is created to make things look normal).
This is done in Render: https://github.com/brainglobe/brainrender/blob/19c63b97a34336898871d66fb24484e8a55d4fa7/brainrender/render.py#L123
and it creates an actor._mesh attribute for each brainrender actor (brainrender actors are mostly just thin wrappers around vedo actors; actor.mesh and actor._mesh point to the underlying vedo actors).
So, after you've rendered the scene, each actor has:
- actor.mesh, which points to the vedo actor in the original orientation
- actor._mesh, which points to a vedo actor in the "mirrored" orientation
When you call a vedo method on a brainrender actor, by default it tries to get the corresponding method of actor._mesh, and if that fails it calls the corresponding method on actor.mesh. So if you call, for instance, .scale() before having rendered the scene, it will apply the transform to the original vedo mesh, but if you do it after rendering it will apply the transform to the mirrored mesh.
The solution is to do something like:
scene.render(interactive=False) # this creates the actor._mesh mirrored meshes
region._mesh.rotate() # or any other vedo method
scene.render() # this time for real
I know this is a bit confusing and very ugly... It's this way because 90% of users just want brainrender to make nice pictures, so I've hidden this complexity far away from them. But as you try to do more advanced things, it can start causing some annoying problems until you get the hang of it.
Is this solved?
It's not solved, but brainrender still helped me make some good-looking figures! Thanks!
oh no! Let me know if there's anything I can do to help! Glad it was somewhat useful in the end
I'm wondering if there's a way to do a merge across atlas regions. I'm not exactly sure how it should work - I guess a convex hull over multiple scene objects?
thanks!