The classify_nodes function is called at TreeNeuron initialisation. In master, this function uses the igraph (or networkx) representation of the neuron. Generating the graph for a skeleton comes at a sizeable overhead but it will be cached for future use.
This PR re-implements the classify_nodes function using pure pandas. It is about as fast as the current graph-based function while avoiding the overhead of constructing the graph. In my hands this speeds up import of skeletons by 5-6X.
Many workflows (e.g. making dotprops from skeletons) do not need the graph representation, so being able to skip it is advantageous. For workflows that do need the graph, we simply delay its generation but shouldn't lose performance.
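For illustration, the core idea can be sketched in pure pandas. This is a minimal sketch, not the actual implementation: the column names and the type labels (`root`, `branch`, `end`, `slab`) are assumptions based on a typical SWC-style node table.

```python
import pandas as pd

# Hypothetical SWC-style skeleton table: each node points to its parent,
# with -1 marking the root (column names are assumptions for this sketch).
nodes = pd.DataFrame({
    "node_id":   [1, 2, 3, 4, 5, 6],
    "parent_id": [-1, 1, 2, 2, 4, 4],
})

# Count children per node by tallying how often each node appears as a parent.
child_counts = nodes["parent_id"].value_counts()
n_children = nodes["node_id"].map(child_counts).fillna(0).astype(int)

# Classify without ever building a graph: leafs have no children,
# branch points have more than one, roots have no parent.
nodes["type"] = "slab"
nodes.loc[n_children == 0, "type"] = "end"
nodes.loc[n_children > 1, "type"] = "branch"
nodes.loc[nodes["parent_id"] < 0, "type"] = "root"
```

Because this only uses vectorised `value_counts`/`map` operations on the node table, no igraph or networkx object needs to exist at initialisation time.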
Thoughts @clbarnes?