JohnTigue opened 4 years ago
E.g. a 2016 thesis, Object Representation and Matching Based on Skeletons and Curves
> The actual recognition process has been implemented in analogy to the Path Similarity Skeleton Graph Matching (PSSGM), an object categorisation approach for two-dimensional (2D) objects. The technique taken by itself starts with the skeletonisation of the query and the target to represent these instances by sampling all shortest paths which can be derived from both skeletons, respectively. The notion behind this scanning is the inclusion of geometrical properties of the object’s boundary. Finally, the Hungarian method is employed to perform the matching with the aim of calculating the overall similarity between these objects. The contribution of the current work is now to map the previously described concept into three-dimensional space in order to apply it to 3D objects. Hence, the thesis begins with a deeper investigation of the PSSGM to identify its strengths and weaknesses.
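The Hungarian-method step described there is straightforward to prototype. Here is a minimal sketch (not the thesis's actual code) assuming each skeleton path has already been reduced to a fixed-length feature vector; the function name and the Euclidean cost are my own choices, and `scipy.optimize.linear_sum_assignment` does the optimal matching:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_skeleton_paths(desc_a, desc_b):
    """Match two sets of skeleton-path descriptors with the Hungarian method.

    desc_a: (n, k) array, one k-dim descriptor per path in the query skeleton.
    desc_b: (m, k) array, same for the target skeleton.
    Returns (total_cost, [(row, col), ...]) for the optimal assignment.
    """
    # Pairwise Euclidean cost between every query path and every target path.
    cost = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), list(zip(rows.tolist(), cols.tolist()))
```

The total assignment cost (lower = more similar) could then serve as the overall dissimilarity score the thesis describes.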
Janelia's 2019 1000 neurons paper seems to have some metric.
Since the Allen Brightfield Challenge data has SWCs with a radius for each node, each SWC can be used to create a mesh, which encloses some number of voxels. Could we then just compare the raw volumetric overlap of two SWCs?
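A crude version of that volumetric-overlap idea can be sketched without any meshing at all: rasterize each SWC's samples as filled spheres into a boolean grid and compute a Dice score. This is an assumption-laden toy (it treats each node as an isolated sphere; a real version would densely resample along parent–child edges first), and all names here are mine, not from any existing tool:

```python
import numpy as np


def voxelize(nodes, shape):
    """Rasterize SWC sample points (x, y, z, r) as filled spheres
    into a boolean occupancy grid. Axis 0 = z, axis 1 = y, axis 2 = x.
    NOTE: ignores the cylinders between parent/child nodes; a faithful
    version would resample along each edge first."""
    grid = np.zeros(shape, dtype=bool)
    zz, yy, xx = np.indices(shape)
    for x, y, z, r in nodes:
        grid |= (xx - x) ** 2 + (yy - y) ** 2 + (zz - z) ** 2 <= r ** 2
    return grid


def volumetric_overlap(nodes_a, nodes_b, shape):
    """Dice coefficient of the voxelized volumes of two SWC point sets."""
    a, b = voxelize(nodes_a, shape), voxelize(nodes_b, shape)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())
```

Identical SWCs score 1.0, disjoint ones 0.0, which is at least a sanity-checkable baseline.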
What do the EM folks do?
The FFN folks have their ERL ("expected run length") metric.
It would be really nice to have an evaluation metric of SWC similarity in order to auto-judge results. Such code cannot be radically new; surely there is already code that can judge the similarity of 3D stick figures.
Using NeuroMorphoVis's terminology, this would be a "morphometric analysis" tool.
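Even before finding an off-the-shelf morphometric tool, a dead-simple stick-figure baseline is the symmetric mean nearest-neighbour distance between the two skeletons' sampled points. This is just an illustrative sketch of one possible metric (not any tool's official one), and the function name is made up:

```python
import numpy as np


def skeleton_distance(pts_a, pts_b):
    """Symmetric mean nearest-neighbour distance between two skeletons,
    each given as an (N, 3) array of sampled xyz points.
    0.0 for identical point sets; grows as the skeletons diverge."""
    # Full pairwise distance matrix (fine for small skeletons;
    # use a KD-tree for large ones).
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

Averaging both directions keeps the metric symmetric, so `skeleton_distance(a, b) == skeleton_distance(b, a)`, which an auto-judge would presumably want.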