google / neuroglancer

WebGL-based viewer for volumetric data
Apache License 2.0

feat(segmentation_user_layer) implemented getObjectPosition for MeshLayer #531

Closed chrisj closed 6 months ago

chrisj commented 7 months ago

finds the closest loaded associated mesh vertex to the global position

I also did an implementation that just returns the first vertex (in the order that I iterate through them here) and one where it finds the vertex closest to the mean vertex position. I still need to get some feedback to confirm whether this is preferred, but it seems ideal since it minimizes movement.

I might be making an assumption in using vertexPositions. The type is EncodedVertexPositions, but it seems to work without any kind of decoding.

I'm not using localPosition when comparing the globalPosition with the vertexPositions, only globalToRenderLayerDimensions and transform.modelToRenderLayerTransform. I tried to do the inverse of how globalPosition is modified in moveToSegment, but I am a little confused about why localPosition doesn't come into play.
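For reference, a minimal sketch of the closest-vertex scan described above, assuming the global position has already been mapped into the mesh's coordinate space via globalToRenderLayerDimensions and the inverse of modelToRenderLayerTransform. The function and parameter names here are illustrative only, not the actual layer code:

```ts
// Illustrative sketch only. `vertexPositions` is a flat xyz Float32Array as
// used by the mesh layer; `layerPosition` is assumed to already be expressed
// in the same coordinate space as the vertices.
function findClosestVertex(
  vertexPositions: Float32Array,
  layerPosition: Float32Array,
): Float32Array | undefined {
  let best: Float32Array | undefined;
  let bestDistanceSq = Infinity;
  for (let i = 0; i + 2 < vertexPositions.length; i += 3) {
    const dx = vertexPositions[i] - layerPosition[0];
    const dy = vertexPositions[i + 1] - layerPosition[1];
    const dz = vertexPositions[i + 2] - layerPosition[2];
    const distanceSq = dx * dx + dy * dy + dz * dz;
    if (distanceSq < bestDistanceSq) {
      bestDistanceSq = distanceSq;
      best = vertexPositions.subarray(i, i + 3);
    }
  }
  return best;
}
```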

jbms commented 7 months ago

Thanks for this change!

> finds the closest loaded associated mesh vertex to the global position

> I also did an implementation that just returns the first vertex (in the order that I iterate through them here) and one where it finds the vertex closest to the mean vertex position. I still need to get some feedback to confirm whether this is preferred, but it seems ideal since it minimizes movement.

Finding the closest vertex in order to minimize movement does seem preferable. I am a bit worried about UI hangs in the case of a very large mesh, though since this only happens in response to a user action, it is less concerning than if it were happening without one. Perhaps, if the mesh is large, the closest-point calculation could stop after looking at a certain amount of mesh data, or randomly sample some number of points and pick the closest.

> I might be making an assumption in using vertexPositions. The type is EncodedVertexPositions, but it seems to work without any kind of decoding.

See VertexPositionFormat --- for the non-multiscale mesh the format is always float32, so no decoding is needed. For multiscale meshes, other formats are supported.
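For illustration only (the helper name is hypothetical): in the single-resolution case, "no decoding" amounts to reinterpreting the encoded buffer as float32 xyz triples; multiscale meshes using other VertexPositionFormat values would need real decoding first.

```ts
// Hypothetical helper: view single-resolution mesh vertex data as float32
// triples. Each vertex occupies 12 bytes (three float32 components).
function viewAsFloat32Vertices(encoded: ArrayBuffer): Float32Array {
  if (encoded.byteLength % 12 !== 0) {
    throw new Error('Vertex buffer length is not a multiple of 12 bytes');
  }
  return new Float32Array(encoded);
}
```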

> I'm not using localPosition when comparing the globalPosition with the vertexPositions, only globalToRenderLayerDimensions and transform.modelToRenderLayerTransform. I tried to do the inverse of how globalPosition is modified in moveToSegment, but I am a little confused about why localPosition doesn't come into play.

localPosition would be needed if some of the dimensions of the mesh were marked as local dimensions rather than global dimensions. However, that isn't currently supported for mesh sources, so you don't have to worry about it.

chrisj commented 7 months ago

Some performance numbers.

Using this neuron (10M vertices): https://spelunker.cave-explorer.org/#!middleauth+https://global.daf-apis.com/nglstate/api/v1/6557565452288000

It takes 150 ms on average to calculate the closest vertex. It spikes up to 500-600 ms near the start, probably while the JIT compiler is working.

I can calculate the vertex count in under 4 ms. We could use that to sample the vertices so that we only check something like 1M or 100K of them to keep things snappy.
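A rough sketch of that idea (hypothetical names, not the actual layer code): count the vertices across the loaded fragments first, then derive a stride so that only about the target number of vertices gets examined.

```ts
// Hypothetical sketch: `fragments` stands in for whatever collection of
// loaded mesh chunks the layer exposes, each holding flat xyz vertex data.
interface LoadedFragment {
  vertexPositions: Float32Array;
}

function computeSampleStride(
  fragments: LoadedFragment[],
  maxSamples: number,
): number {
  let totalVertices = 0;
  for (const fragment of fragments) {
    totalVertices += fragment.vertexPositions.length / 3;
  }
  // Check every vertex for small meshes; otherwise skip ahead so that only
  // roughly `maxSamples` vertices are compared in total.
  return Math.max(1, Math.floor(totalVertices / maxSamples));
}
```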

jbms commented 7 months ago

Some more thoughts on this:

chrisj commented 7 months ago

I updated it so that it samples up to 100,000 vertices using a naive but fast approach. This brings the execution time down to 8-15 ms.
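A minimal sketch of what such strided sampling could look like, in the same illustrative style as the earlier snippets; the actual change may differ.

```ts
// Illustrative only: compare at most `maxSamples` vertices by stepping through
// the position array with a fixed stride instead of visiting every vertex.
function findClosestVertexSampled(
  vertexPositions: Float32Array,
  layerPosition: Float32Array,
  maxSamples = 100_000,
): Float32Array | undefined {
  const vertexCount = vertexPositions.length / 3;
  const stride = Math.max(1, Math.floor(vertexCount / maxSamples));
  let best: Float32Array | undefined;
  let bestDistanceSq = Infinity;
  for (let v = 0; v < vertexCount; v += stride) {
    const i = v * 3;
    const dx = vertexPositions[i] - layerPosition[0];
    const dy = vertexPositions[i + 1] - layerPosition[1];
    const dz = vertexPositions[i + 2] - layerPosition[2];
    const distanceSq = dx * dx + dy * dy + dz * dz;
    if (distanceSq < bestDistanceSq) {
      bestDistanceSq = distanceSq;
      best = vertexPositions.subarray(i, i + 3);
    }
  }
  return best;
}
```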

We could do the graphene-specific optimization, but I don't think it is necessary; hopefully there are other data sources that can benefit from this.