google / neuroglancer

WebGL-based viewer for volumetric data

Method of modifying point annotations #353

Open Martiantian opened 2 years ago

Martiantian commented 2 years ago

As we understand it, point annotations are rendered with a fixed uniform pixel size regardless of spatial position. We would like to modify point annotations so that the rendered size of a point is bound to its actual physical size, similar to the rendering of ellipsoid annotations.

Comparing with ellipsoid annotations, we found that the ellipsoid first computes the coordinates of the sphere's vertices from the input data, then transforms those vertices into clip space via the projection matrix. A point, by contrast, seems to be transformed into clip space only as the circle's center coordinate, after which a fixed-size circle is rendered. The circle is obtained by drawing two triangles via drawQuads and testing distance in the fragment shader, but some details are unclear to us. First, how are the coordinates of the quad's 6 vertices obtained from the gl_Position of the center coordinate? Second, how does the diameter actually control the size of the circle?

We tried two approaches: directly multiplying ng_markerDiameter by uModelViewProjection, and multiplying circleCornerOffset by the inverse of uModelViewProjection, adding the result back to the original model position, and then multiplying by uModelViewProjection again. Neither method achieved the desired effect.

Failing that, we would like to compute a proper ng_markerDiameter directly. Is there a suitable way to calculate the corresponding coefficient?

In addition, I would like to ask how to conveniently debug the neuroglancer and WebGL code. At present we are using VS Code plus Chrome DevTools, which still feels rather inconvenient.

jbms commented 2 years ago

It sounds like you wish to have point annotations that display the same as ellipsoid annotations --- I suppose you want the physical size of the point annotation to be specified by a custom annotation property?

Is there a specific reason why you don't want to use ellipsoid annotations?

As you may have noticed, the annotation rendering code is rather complicated, due to the need to deal with varying numbers of data dimensions and display dimensions, and to support both cross-section views and 3-d projection views. The rendering of point annotations in the common case of 3 data dimensions corresponding to 3 display dimensions is here:

https://github.com/google/neuroglancer/blob/73cb949f0b106a0af1e797d1bbc367a97875b3d5/src/neuroglancer/annotation/point.ts#L76
https://github.com/google/neuroglancer/blob/master/src/neuroglancer/webgl/circles.ts

For debugging javascript code the browser devtools work quite well in my experience. Debugging GLSL code is quite a bit more challenging since you can't set breakpoints and can't directly print anything.
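For example, one generic workaround (not specific to neuroglancer) is to route the intermediate value you want to inspect into the output color. A minimal sketch:

```glsl
#version 300 es
precision highp float;
out vec4 outColor;
void main() {
  // Suppose debugValue is the intermediate quantity of interest.
  float debugValue = 0.25;
  // Map it into [0, 1] and write it to the color output so it can be read
  // off the screen (or via a readPixels call).
  outColor = vec4(vec3(clamp(debugValue * 0.5 + 0.5, 0.0, 1.0)), 1.0);
}
```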

It is often helpful to first write some unit tests, since the existing fragmentShaderTest function lets you transfer data into and out of a GLSL program in a convenient way.

For debugging GLSL code in Neuroglancer itself there are a few things you can do:

Martiantian commented 2 years ago

Thank you for your patient reply. Our needs are basically the same as what you described.

We want to display individual cells in space, but we do not have the actual shape of each cell, only approximate size information, so we previously tried rendering with the ellipsoid type. However, ellipsoids are much more expensive to render than points, and there is no good specific layering strategy for spatial id, so we are considering using points instead of ellipsoids. Our specific data is described in #348.

While reading the source code, we found that the code for the ellipsoid and point annotation types is very complete and takes many aspects into account; as you said, it feels very complicated. We are not very familiar with WebGL, and our attempts to modify the point and circle code directly have not produced a suitable result. So we would like to ask about the details of circle rendering above, to deepen our understanding of the code.

https://github.com/google/neuroglancer/blob/73cb949f0b106a0af1e797d1bbc367a97875b3d5/src/neuroglancer/webgl/circles.ts#L41

    gl_Position.xy += circleCornerOffset * uCircleParams.xy * gl_Position.w * totalDiameter;

I guess this line computes the vertex positions of the quad in clip space, but I could not find how a single circle-center coordinate yields the six vertex coordinates of the two triangles. Or is there a problem with my understanding here?

We will also try the debugging methods you mentioned, thank you very much.

jbms commented 2 years ago

Can you explain a bit more what you mean by "There is no good specific layering strategy for spatial id"?

I expect that in the cross section views, ellipsoids should not be significantly more expensive than points.

However, it is true that in the 3-d view, ellipsoids are more expensive than points because they are rendered in a "conventional" fashion, i.e. the geometry of the ellipsoid is triangulated in the normal way and those triangles are rendered directly. Furthermore, the same number of triangles is used for every ellipsoid in all cases; there is currently no reduction in the number of triangles even when the ellipsoid covers only a few pixels.

In contrast, as you observed, point annotations are rendered as "quads" represented by two triangles that are always aligned to the viewport plane, and the fragment shader is responsible for excluding the exterior portion so that it appears round.
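As a minimal sketch of that fragment-shader test (illustrative only; the actual circles.ts shader is more involved, handling e.g. antialiasing and an optional border):

```glsl
#version 300 es
precision highp float;
// vCircleCoord is assumed to be the interpolated corner offset, in [-1, 1]^2,
// with the circle center at the origin.
in vec2 vCircleCoord;
out vec4 outColor;
void main() {
  // Discard fragments outside the unit circle so that only a round disk of
  // the viewport-aligned quad remains visible.
  if (dot(vCircleCoord, vCircleCoord) > 1.0) discard;
  outColor = vec4(1.0);
}
```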

For spheres there is a rendering technique called "imposter" rendering: instead of actually triangulating the sphere, you render it in basically the same way as neuroglancer renders point annotations, except that you use the fragment shader to shade it so that it appears 3-d, and set gl_FragDepth so that it interoperates correctly with the depth test. For perfect spheres this is relatively simple to implement.
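A minimal sketch of that technique, assuming the quad already covers the sphere's silhouette and using the simple approximation in which the surface is reconstructed as if viewed orthographically (all names here are illustrative, not neuroglancer's):

```glsl
#version 300 es
precision highp float;
in vec2 vQuadCoord;        // interpolated quad coordinate in [-1, 1]^2
uniform vec3 uEyeCenter;   // sphere center in eye space
uniform float uRadius;     // sphere radius
uniform mat4 uProjection;  // projection matrix
uniform vec3 uLightDir;    // light direction in eye space
out vec4 outColor;
void main() {
  float r2 = dot(vQuadCoord, vQuadCoord);
  if (r2 > 1.0) discard;  // outside the sphere's silhouette
  // Reconstruct the surface point nearest the viewer and its normal.
  vec3 normal = vec3(vQuadCoord, sqrt(1.0 - r2));
  vec3 eyePos = uEyeCenter + uRadius * normal;
  // Reproject to clip space so the depth test sees the true sphere surface
  // rather than the flat quad (assumes the default [0, 1] depth range).
  vec4 clipPos = uProjection * vec4(eyePos, 1.0);
  gl_FragDepth = 0.5 * (clipPos.z / clipPos.w) + 0.5;
  // Simple Lambertian shading so the disk reads as a 3-d sphere.
  float diffuse = max(dot(normal, normalize(uLightDir)), 0.0);
  outColor = vec4(vec3(diffuse), 1.0);
}
```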

In principle "imposter" rendering could also be used for arbitrary ellipsoids. I originally attempted to implement ellipsoid rendering in this way, but the formula to compute the depth to the nearest surface at each point seemed to be quite complex and it seemed that it might end up being more expensive than just using conventional rendering, so I abandoned that effort. However, there might be a more elegant way to do the calculations, or my assumption of the cost of the "imposter" rendering relative to the current "conventional" rendering may have been incorrect. But if you are interested you could try to implement that.

As far as the gl_Position computation you referenced, what happens is that the vertex shader is executed (logically) independently for every vertex of every instance. In the case of point annotations each instance corresponds to a single point, and we use two triangles per instance. We determine which of the 6 vertices we are supposed to compute based on gl_VertexID, which is a built-in variable and will be 0 ... 5 in this case:

https://github.com/google/neuroglancer/blob/73cb949f0b106a0af1e797d1bbc367a97875b3d5/src/neuroglancer/webgl/circles.ts#L92
https://github.com/google/neuroglancer/blob/73cb949f0b106a0af1e797d1bbc367a97875b3d5/src/neuroglancer/webgl/quad.ts#L41

You will notice that in the emitCircle function circleCornerOffset is obtained by calling getQuadVertexPosition.
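A sketch of how those pieces fit together (illustrative names only, not the exact neuroglancer code; uViewportSize is assumed to hold the viewport size in pixels):

```glsl
// Each of the 6 vertices of the two triangles selects one of the 4 quad
// corners via gl_VertexID; the index pattern 0,1,2 / 2,1,3 covers the quad.
const vec2 kCorners[4] = vec2[](
    vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
const int kCornerIndex[6] = int[](0, 1, 2, 2, 1, 3);

uniform vec2 uViewportSize;  // assumed: viewport size in pixels

void emitCircleSketch(vec4 clipCenter, float diameterInPixels) {
  vec2 circleCornerOffset = kCorners[kCornerIndex[gl_VertexID]];
  gl_Position = clipCenter;
  // Multiplying by clipCenter.w cancels the later perspective divide, so the
  // offset is effectively applied in normalized device coordinates. Dividing
  // the pixel diameter by the viewport size converts pixels to NDC (the
  // 2-NDC-units-per-viewport factor and the radius = diameter / 2 cancel).
  gl_Position.xy +=
      circleCornerOffset * (diameterInPixels / uViewportSize) * clipCenter.w;
}
```

This is presumably the role uCircleParams.xy plays in the line you quoted. It also shows why the diameter is specified in screen pixels rather than model units: the offset is applied after projection, in clip space. To give a point a physical on-screen size, the physical diameter would have to be converted to pixels using the current pixels-per-physical-unit scale (which, in a perspective projection view, depends on the annotation's depth) before it enters this computation.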

jbms commented 2 years ago

I noticed, by the way, that the drawQuads function used to draw the circles does not in fact call the drawArraysInstanced function defined in webgl/shader.ts, but instead calls the underlying WebGL drawArraysInstanced directly. If you want to use the vertex shader debug output mechanism, you will have to change it to call the drawArraysInstanced function from webgl/shader.ts.