Hi @DottedGlass thanks for your interest in napari, that sounds like an exciting application! I used to work in neuroscience so I'm always partial to that stuff.
napari itself will probably not directly read an swc file at this point - we still need to think about how we will handle reading more specialized file types, but we definitely want to make it easy for you to visualize this sort of data, and maybe further down the line there can be a plugin for reading and viewing swc files.

Right now, to read an swc file into python you can use the read function from the neurom.io.swc module in the neurom package, which you can get via pip install neurom - see their docs here.
Once you can read the swc file you can then pass the results to napari.view_shapes as follows:
from neurom.io import swc
import napari

# read the swc file and pass its x, y, z columns (the first three) as a single path
data = swc.read('path_to_your.swc')
napari.view_shapes([data.data_block[:, :3]], shape_type='path', edge_color='red', ndisplay=3)
and you get the following - note this is a random neuron I downloaded from neuromorpho:
Unfortunately note that it has two things wrong with it! We rendered the data as a single path, while what we actually have is more like a tree with segments. Because we assumed a path we just connected all the vertices next to each other, but that means that you only have two endpoints instead of having endpoints at all the places where you need them. You could break the data apart and add many paths, but really I think it would be much better if napari added support for a new shape type called a tree. We'd have to think about input data / api, but making that conform to the swc spec or something like it makes sense to me.

How does all this sound to you @DottedGlass?
Thinking about this a bit more, maybe napari shouldn't support a tree shape type, and a list of paths is sufficient; any plugin would then be responsible for parsing its tree structure into a list of paths - for example doing the following:
import numpy as np
from neurom.io import swc
import napari

data = swc.read('path_to_your.swc').data_block

def parse_swc(data):
    # a decrease in the parent id (column 6) marks the start of a new branch
    break_points = [0] + list(np.nonzero(np.diff(data[:, 6]) < 0)[0] + 1) + [len(data) - 1]
    paths = []
    for i in range(len(break_points) - 1):
        # skip degenerate chunks with too few points to form a path
        if break_points[i + 1] - break_points[i] > 2:
            paths.append(data[break_points[i]:break_points[i + 1], :3])
    return paths

paths = parse_swc(data)
napari.view_shapes(paths, shape_type='path', edge_color='blue', ndisplay=3)
now yields the following:
Note that it still isn't perfect, it seems like we need to add some connections at the branch points, but this should be some pretty simple python that should happen outside napari.
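As a rough sketch of that "simple python" step (hedged: this assumes the old neurom data_block layout used above, with the parent id in column 6 and, additionally, the point id in column 5 - the id column position is an assumption), you could prepend each path's parent point so branches visually meet at their branch points:

import numpy as np

def connect_branches(data, break_points):
    # break_points: the same list computed in parse_swc above
    # map each point id to its coordinates so a path can reach back to its parent
    id_to_xyz = {int(row[5]): row[:3] for row in data}
    paths = []
    for start, stop in zip(break_points[:-1], break_points[1:]):
        chunk = data[start:stop]
        if len(chunk) < 2:
            continue
        pts = chunk[:, :3]
        parent = int(chunk[0, 6])
        if parent in id_to_xyz:
            # prepend the parent point so this branch touches its branch point
            pts = np.vstack([id_to_xyz[parent], pts])
        paths.append(pts)
    return paths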
Providing a way to scale the radii still seems valuable though
I'd be happy to work on scaling the radii for path shapes. Is there an established framework for plugins if I want to make one for reading swcs via neurom?
We're still developing the plugin framework, but will keep this use case in mind as we go, and we'll let you know when there is something ready.
That's great that you can work on the scaled radii. For 2D we'll need to modify generate_2D_edge_meshes (https://github.com/napari/napari/blob/8ce944a1dff680f4d06b238f7556211550ba1d4b/napari/layers/shapes/shape_util.py#L614) and for 3D we'll need to modify generate_tube_meshes (https://github.com/napari/napari/blob/8ce944a1dff680f4d06b238f7556211550ba1d4b/napari/layers/shapes/shape_util.py#L780), in both cases adding an optional radii parameter that has the same length as the number of points and scales edge radii based on those values. That would end up modifying the offsets values. It is probably best to think about radii here as a relative scaling, as we still have a scalar value edge_width that then multiplies those offsets (i.e. we'll probably want to normalize those radii by edge_width before applying them).
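To make that concrete, here is a minimal, hypothetical sketch (the real generate_2D_edge_meshes / generate_tube_meshes signatures in shape_util.py differ; the names below are invented) of scaling per-vertex offsets by relative radii normalized by edge_width:

import numpy as np

def scale_offsets_by_radii(offsets, vertex_map, radii, edge_width=1.0):
    # offsets: (M, D) offset vectors for the mesh vertices
    # vertex_map: (M,) index of the path point each mesh vertex belongs to
    # radii: (N,) per-point radii, e.g. from an swc file
    # treat radii as a relative scaling, normalized by the scalar edge_width
    relative = np.asarray(radii, dtype=float) / edge_width
    return offsets * relative[np.asarray(vertex_map)][:, None]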
We also need to think about the api for exposing this to Shapes though. I'm not sure variable radii make sense for all our shapes, and I worry about the api becoming very complex.
@sofroniewn Is there a way to see the path shape layer while in 2D rendering? The main use case for this feature is to be able to use the neuron traces to guide manual segmentation of neurons. I assumed that in 2D rendering, the path would show up as an intersection on the slice it is passing through.

If 2D rendering of 3D shapes is not supported, do you know which files I should look at to build this feature?
So right now a path will only be visible in a 2D slice if it is entirely contained within that slice, otherwise it will be ignored. Instead you could estimate the top / bottom intersections of each line segment with the current slice and render just the part between them.
Right now this slicing is done here: https://github.com/napari/napari/blob/6fd963468335d1ea30319b596b20b7d6c9c520d4/napari/layers/shapes/shape_list.py#L159
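As a sketch of the geometry involved (a hypothetical helper, not existing napari code), clipping a single 3D line segment to a slab of a given thickness around the current slice could look like this:

import numpy as np

def clip_segment_to_slice(p0, p1, slice_coord, thickness, axis=0):
    # clip the segment p0 -> p1 to [slice_coord - thickness/2, slice_coord + thickness/2]
    # along the non-displayed axis; return None if there is no overlap
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    lo, hi = slice_coord - thickness / 2, slice_coord + thickness / 2
    a, b = p0[axis], p1[axis]
    if a == b:  # segment lies entirely within a single plane
        return (p0, p1) if lo <= a <= hi else None
    # parametrize the segment as p0 + t * (p1 - p0) with t in [0, 1]
    t_lo, t_hi = (lo - a) / (b - a), (hi - a) / (b - a)
    t0, t1 = max(0.0, min(t_lo, t_hi)), min(1.0, max(t_lo, t_hi))
    if t0 > t1:
        return None
    return p0 + t0 * (p1 - p0), p0 + t1 * (p1 - p0)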
It might be best to first think about this just in the context of our Line shape, as that is the simplest, and in many ways it might be best for you to represent your neuron tree as a list of Line shapes, one for each line segment. These can also be easily given a different radius, and that will probably get you very far towards the functionality you are looking for.
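A rough sketch of that representation (again assuming the old neurom data_block layout, with the parent id in column 6 and the point id in column 5 - the id column is an assumption) would build one two-point line per child/parent pair:

import numpy as np
from neurom.io import swc
import napari

data = swc.read('path_to_your.swc').data_block

def swc_to_lines(data):
    # connect every point to its parent, giving one two-point segment per edge
    id_to_row = {int(row[5]): i for i, row in enumerate(data)}
    lines = []
    for row in data:
        parent = int(row[6])
        if parent < 0:  # root point has no parent
            continue
        lines.append(np.stack([data[id_to_row[parent], :3], row[:3]]))
    return lines

napari.view_shapes(swc_to_lines(data), shape_type='line', edge_color='red', ndisplay=3)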
Nice, glad to see someone else is working on this usage too! @sofroniewn I don't know if you recall me at BioImage Informatics, but my question about the availability of 3D sphere shapes was exactly for the connections between branches :) Cylinders with different radii at each end would be great for the SWC drawings too.
Current rendering looks like this (and if it helps, the code that produces that visualization is on github, with radii being added tomorrow).
I'm also very interested in the 2D version, as that software (pydynamo_brain) currently uses a manual QtImageViewer to handle the 2D view. I'd ideally migrate that to napari, but I would require view synchronization across multiple windows which from memory was on the future feature list? (we analyze multiple time points simultaneously, so want to pan/zoom them all together)
That said, @DottedGlass if you're after a python app to help with neuron reconstruction & segmenting that supports both napari 3D view but also 2D scrollable view (as well as custom mouse interaction with the arbor/segments) feel free to ask anything about pydynamo too, it's designed for others to use.
@padster yes I do remember you and that question. Unfortunately no 3D spheres yet, but they should be very addable. Thanks for sharing that code too. We've also got a new discussion going around adding a "tracks" layer in #539 that you might be interested in too
"So right now a path will only be visible in a 2D slice if it is entirely contained within that slice"
Hi @sofroniewn, I was wondering if there are any plans to update this?
Currently I'm getting round this by interpolating the path, and then rendering it as a sequence of points. If these are 3D, they are visible in adjacent slices, but it would be nice if this was supported by the shapes layer. It would be great to use the edge_width parameter passed to viewer.add_shapes, and optionally show the parts of the path that are in the range [(slice - edge_width / 2) : (slice + edge_width / 2)].
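For reference, a minimal sketch of that points-based workaround (the helper below is hypothetical, not part of napari):

import numpy as np
import napari

def interpolate_path(path, spacing=1.0):
    # resample an (N, 3) path so consecutive points are roughly `spacing` apart
    path = np.asarray(path, dtype=float)
    points = [path[0]]
    for start, end in zip(path[:-1], path[1:]):
        n = max(int(np.ceil(np.linalg.norm(end - start) / spacing)), 1)
        for t in np.linspace(0, 1, n + 1)[1:]:
            points.append(start + t * (end - start))
    return np.array(points)

# the interpolated points stay visible in nearby 2D slices, unlike a 3D path shape
# viewer = napari.Viewer()
# viewer.add_points(interpolate_path(my_path), size=2)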
More generally, I think it could be confusing to users that any object in napari is visible in 3D but not in 2D.
@adamltyson I personally would be super excited to see this, but I think it requires significant engineering effort, and we have quite a few higher-priority items in the queue (you can see our current roadmap here). If you already have code for doing the interpolation, you're probably best placed to add this to our shapes layer! Any chance you would be interested in contributing this? We of course would be pretty excited to help you get started!
Hi @jni, I'm definitely happy to submit a PR for this. I feel like I know how to do this, but not necessarily how to integrate it with napari.
@adamltyson great that you'd be willing to take this on. I can help you through this. @kevinyamauchi also might be able to help review as he's been looking at Shapes lately too (though not this exact part of the code).
Hopefully all your changes can be restricted to this _shape_list.py file.
In particular https://github.com/napari/napari/blob/master/napari/layers/shapes/_shape_list.py#L190 will need to change. The layer has a concept of slice_keys (https://github.com/napari/napari/blob/master/napari/layers/shapes/_shape_list.py#L26-L31), which are:

    slice_keys : (N, 2, P) array
        Array of slice keys for each shape. Each slice key has the min and max
        values of the P non-displayed dimensions, useful for slicing
        multidimensional shapes. If the both min and max values of shape are
        equal then the shape is entirely contained within the slice specified
        by those values.
This will tell you what shape you want to look at. But you'd still have a problem of determining which part of the shape you'd want to look at and get vertices for it. If you just wanted the whole shape it would now be easy, but I think you don't.
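As a toy illustration (a hypothetical helper, just to show how the documented slice_keys could be used, assuming index 0 holds the min values and index 1 the max):

import numpy as np

def shapes_in_slice(slice_keys, slice_coord):
    # slice_keys: (N, 2, P) min/max of the non-displayed dims for each shape
    # slice_coord: (P,) coordinates of the current slice
    mins, maxs = slice_keys[:, 0, :], slice_keys[:, 1, :]
    hits = np.all((mins <= slice_coord) & (slice_coord <= maxs), axis=1)
    return np.nonzero(hits)[0]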
Part of what makes this complex is that the shapes layer can support many types of shapes with complex meshes, not just lines (and even our lines are meshes).
Maybe a better approach would be to put more of the slicing on the shape model that each shape has: https://github.com/napari/napari/blob/master/napari/layers/shapes/_shapes_models/shape.py

Here we could think of a sheet with thickness slicing through the whole shape, but that might be really complex, including how we precalculate vs calculate meshes on the fly.
As an aside, I do wonder @jni if we'd gone more with the 0D, 1D, 2D approach to shapes that I know you like, then this might be easier, as we'd only need to solve the 1D case now.
At this point I recommend @adamltyson you poke around a bit and let us know what you see.
Thanks for the info @sofroniewn. I'm using napari for nearly every project now, so I should probably spend some time to dig into how it works.
I've started a new issue (#1334) because I'll probably need some hand-holding.
Now that #1361 is merged, which added Tracks, I wonder if it might be worth looking at some of this again, and thinking about either using that layer or adding a new Traces layer. I know @jni is also interested in a Skeleton layer.
Definitely worth looking at! Yes, my (totally half-baked) idea would be to have a graph or network layer that both skeleton and tracks inherit from. And yes, graph itself could inherit from a 1D-shapes layer. It would be great to get to "grammar of graphics"-like composition of all our layers... Again, these are all half-baked ideas but my main point is that we should think about how to maximise reuse and consistency between our components.
As just one example of how complex it might get, @AbigailMcGovern is going to start looking at tracking mitochondrial networks over time — so now you have dynamically-reshaping skeletons to contend with. 😬
Thanks for the comments. I have a problem: neurom.io doesn't have an swc module to load, so I can't read the SWC file.
Yep, that's exactly what happens. I think the package versions are no longer compatible.
For the newer version of neurom (3.2.2):

import napari
from neurom import load_morphology

m = load_morphology('/path/to/your.swc')

# collect the x, y, z coordinates of each section as a separate path
p = []
for n in m.neurites:
    for section in n.iter_sections():
        p.append(section.points[:, :3])

viewer = napari.Viewer(ndisplay=3)
layer = viewer.add_shapes(p, shape_type='path', edge_width=2, edge_color=['green'])
Please feel free to reuse the .swc reader from the napari-bil-data-viewer plugin: https://github.com/brain-image-library/napari-bil-data-viewer/blob/main/napari_bil_data_viewer/reader.py
Thanks @noisysky! I think a plugin reader is exactly the right solution here — this issue predates plugins! So thank you for the tip and I will close this as resolved! 🙏
(Note: see #5861 for ongoing work on a Graph layer, which is probably the better model in the future.)
🚀 Feature
Neuron traces are used to determine the pathway of axons/dendrites in 3D images of the brain. One common file format for traces is SWC, which stores the traces as structurally connected components (e.g. cylinders).
We would like to have an easy way to load data from an SWC file into napari.
example of neuron traces
Motivation
It is of interest to neuroscience researchers to develop and evaluate automatic neuron tracing algorithms. Visualizing SWC neuron traces as a layer on top of the brain image can help researchers test these tracing algorithms.
I am on a project working with the neuron traces from the MouseLight dataset, and we are approaching the automatic neuron tracing problem as a semantic segmentation problem. As a first step we are converting the ground truth SWCs into binary image labels, so that we can use supervised learning methods to generate neuron segmentations/labels. Conversions are done semi-automatically: labels generated from methods such as Otsu thresholding will need to be manually corrected.
Because napari is built with python, it is easy to visualize the effects of image processing algorithms or segmentation algorithms from common python packages. Furthermore, napari has great tools for manually making and editing labels.
Pitch
A simple way to implement this feature is to read an SWC file and render it as a path shape layer. Adjustments will need to be made to path.py to handle an array of edge_widths, as each cylinder component in an SWC can have a different width.