effigies opened this issue 4 years ago
We should have a look at Trimesh. FSLeyes uses it for working with meshes, and it depends only on numpy and setuptools for its base functionality.
Here we are, and it's nilearn dev days 2021, and we've had another discussion on this topic. Nilearn devs put together a doc describing the current API and issues, which I will summarize here:
```python
Mesh = namedtuple("mesh", ["coordinates", "faces"])
Surface = namedtuple("surface", ["mesh", "data"])

surf                   # nilearn.surface.Surface
surf.mesh              # nilearn.surface.Mesh
surf.mesh.coordinates  # numpy.ndarray
surf.mesh.faces        # numpy.ndarray
surf.data              # numpy.ndarray
```
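A minimal sketch of how these namedtuples could be constructed and used (the triangle coordinates and data values here are purely illustrative):

```python
from collections import namedtuple

import numpy as np

Mesh = namedtuple("mesh", ["coordinates", "faces"])
Surface = namedtuple("surface", ["mesh", "data"])

# A single triangle: three vertices, one face
coordinates = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])

# One scalar value per vertex
surf = Surface(mesh=Mesh(coordinates, faces), data=np.array([1.0, 2.0, 3.0]))
```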
1) Multiple surfaces cannot currently be associated, e.g., left and right hemispheres.
2) The lack of a proxying mechanism means geometry is expensive to keep associated with surface data, especially when that involves copying.
While discussing the concept of `SurfaceMasker`s (analogous to nilearn's `NiftiMasker`, which takes a mask and can ravel/unravel data between volume and vector formats), it occurred to me that a lot of these problems are exactly what CIFTI-2 has solved with its `BrainModelAxis`: the matrix index maps associate geometric structures with indices in a data array. Everything but the mapping back onto surface geometries is embedded directly in the header, while a `wb.spec` file provides references to those geometries.
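As a toy illustration of the idea (this is not the CIFTI-2 or nibabel API, and the structure names and sizes are invented): the index map can be thought of as an ordered mapping from structure names to slices of one flat data array:

```python
import numpy as np

# Hypothetical index map, in the spirit of a CIFTI-2 BrainModelAxis:
# structure name -> slice into the shared data array
index_map = {
    "CortexLeft": slice(0, 4),
    "CortexRight": slice(4, 8),
}

# One scalar per indexed vertex, for all structures at once
data = np.arange(8, dtype=float)

# Pull out the values belonging to a single structure
left = data[index_map["CortexLeft"]]
```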
It seems like a `SurfaceMasker` could probably be something like this (apologies for the reckless mixing of types and procedural code):

```python
class SurfaceMasker:
    structures = BrainModelAxis
    geometry_map = {
        structure_name: Mesh
        for structure_name in BrainModelAxis
    }
```
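To make that shape concrete, here is a runnable sketch; the class and method names are my own invention, not an agreed API. It stores per-structure geometry and ravels/unravels per-structure data arrays into a single flat vector, which is the essential `NiftiMasker`-like behavior:

```python
import numpy as np

class SurfaceMasker:
    """Toy masker: associates named structures with meshes and
    ravels/unravels per-structure data into one flat vector."""

    def __init__(self, geometry_map):
        # geometry_map: {structure_name: (coordinates, faces)}
        self.geometry_map = geometry_map
        self._slices = {}
        start = 0
        for name, (coordinates, _faces) in geometry_map.items():
            stop = start + len(coordinates)
            self._slices[name] = slice(start, stop)
            start = stop
        self.n_vertices = start

    def transform(self, data_map):
        """Dict of per-structure arrays -> one flat vector."""
        out = np.empty(self.n_vertices)
        for name, sl in self._slices.items():
            out[sl] = data_map[name]
        return out

    def inverse_transform(self, vector):
        """Flat vector -> dict of per-structure arrays."""
        return {name: vector[sl] for name, sl in self._slices.items()}

# Hypothetical usage with two tiny "hemispheres"
left_mesh = (np.zeros((2, 3)), np.array([[0, 1, 1]]))
right_mesh = (np.zeros((3, 3)), np.array([[0, 1, 2]]))
masker = SurfaceMasker({"CortexLeft": left_mesh, "CortexRight": right_mesh})

vec = masker.transform({"CortexLeft": np.array([1.0, 2.0]),
                        "CortexRight": np.array([3.0, 4.0, 5.0])})
parts = masker.inverse_transform(vec)
```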
There's a question of exactly what a `SurfaceImage` would be, then. If an individual file is difficult to interpret without this collection of structures, then it may be that there is no separation between a `SurfaceImage` and a `SurfaceMasker`.
The previous discussion addressed (although stopped short of a clear solution for) limitation 1, but I didn't really get into limitation 2.
Whatever the solution, we'll want something like an `ArrayProxy` or `ImageOpener` to pass around and copy, which will typically hold onto the filename and only load the data on request.
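A minimal sketch of that lazy-loading idea (the class name and `loader` hook are hypothetical; nibabel's real `ArrayProxy` additionally handles slicing, dtypes, and caching):

```python
import os
import tempfile

import numpy as np

class DataProxy:
    """Holds a filename; loads the array only when asked.
    Cheap to copy and pass around alongside surface data."""

    def __init__(self, filename, loader=np.load):
        self.filename = filename
        self._loader = loader

    def get_data(self):
        return self._loader(self.filename)

# Usage: constructing the proxy touches no data; the file is
# only read when get_data() is called
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "curv.npy")
    np.save(path, np.arange(5.0))
    proxy = DataProxy(path)
    loaded = proxy.get_data()  # the read happens here
```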
An additional notion that came up is that the geometry is often a standard geometry, such as FreeSurfer's `fsaverage` or HCP's `fsLR`. My feeling is that providing a set of standard geometries is not going to be a NiBabel concern, but we may want to consider the case where the reference is a URL instead of a local file.
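Distinguishing a URL reference from a local path could be as simple as inspecting the scheme; a sketch with a hypothetical helper name (how fetched geometries would be cached is a separate question):

```python
from urllib.parse import urlparse

def is_url(reference):
    """True for http(s) references, False for local paths."""
    return urlparse(reference).scheme in ("http", "https")

is_url("https://example.org/fsLR.surf.gii")  # True
is_url("/data/sub-01/lh.pial")               # False
```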
When operating on or plotting surface data, you will generally need two things:

1) A data array containing a scalar or vector at loci sampled over a surface
2) A description of that surface, typically a mesh (vertices and faces)
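Many operations combine exactly these two pieces. As an illustrative example (all values invented), a per-vertex data array can be averaged over each face of the mesh:

```python
import numpy as np

# A description of the surface: 4 vertices, 2 triangular faces
coordinates = np.array([[0.0, 0.0, 0.0],
                        [1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0],
                        [1.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2],
                  [1, 3, 2]])

# A scalar sampled at each vertex
data = np.array([1.0, 2.0, 3.0, 4.0])

# Combine them: average the vertex values over each face
face_means = data[faces].mean(axis=1)
```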
This roughly mirrors the data + affine model of `SpatialImage`. Unlike most `SpatialImage`s, we cannot expect the data to be stored in the same file as the geometric description. Additionally, while the data is generally tied to the mesh, the specific coordinates are often considered mutable, at least when it comes to plotting.
From talking with the nilearn team, they are uninterested in proposing any new data structures; their goal can be summarized as `namedtuple`s of `ndarray`s, as in the `Mesh`/`Surface` definitions above.
An alternative approach they considered is using `GiftiImage` as their base object, which has a definite API they can count on; other formats would be converted internally into `GiftiImage`s. In my opinion, turning the FreeSurfer IO (which gets you tuples of `ndarray`s, close to their preferred API) into a FS-GIFTI conversion would be massive overkill; the end result is a pretty unwieldy mirror of the XML structure. It would be ideal if we could come up with a coherent but simple API, which I am provisionally dubbing a `SurfaceImage`, that the various surface formats could implement. We should reach out to various projects that make use of surfaces to see what use cases need to be supported. I will try to come up with a list of contacts tomorrow, but I want to send this tonight before it sits in my open tabs for another week.

Related: nilearn/nilearn#2171
This is a delayed write-up from nilearn dev days, which is now a couple months gone, so my deepest apologies for both the delay and the inevitable failures of memory.