google / neuroglancer

WebGL-based viewer for volumetric data
Apache License 2.0

Volume rendering compositing changes and wireframe rendering to vis chunks in volume rendering mode #503

Closed: seankmartin closed this pull request 8 months ago

seankmartin commented 10 months ago

Performs front-to-back compositing in three steps:

  1. The opacity of the current data point is corrected based on the ratio between the optimal number of depth samples for the current data resolution and the actual number of depth samples chosen. This prevents the output image from changing opacity when the volume is over- or undersampled.
  2. The corrected opacity is composited with the already accumulated opacity along the ray.
  3. The composited opacity is weighted against the accumulated opacity along the ray, the sample color is scaled by that weight, and the accumulated color and opacity are then updated with the weighted color and the composited opacity.
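The three steps above can be sketched in Python (a minimal single-channel illustration of the compositing math; the actual implementation lives in GLSL shaders, and all names here are illustrative, not taken from the PR):

```python
def correct_opacity(alpha, optimal_samples, actual_samples):
    """Standard opacity correction for a chosen sampling rate.

    The per-sample alpha is rescaled by the ratio of the optimal to the
    actual number of depth samples, so the final image keeps roughly the
    same overall opacity when the volume is over- or undersampled.
    """
    return 1.0 - (1.0 - alpha) ** (optimal_samples / actual_samples)


def composite_front_to_back(samples, optimal_samples, actual_samples):
    """Front-to-back compositing over (color, alpha) samples along a ray."""
    accum_color = 0.0  # single channel for brevity; RGB works the same way
    accum_alpha = 0.0
    for color, alpha in samples:
        a = correct_opacity(alpha, optimal_samples, actual_samples)
        # Weight by the remaining transmittance, then accumulate.
        weight = (1.0 - accum_alpha) * a
        accum_color += weight * color
        accum_alpha += weight
    return accum_color, accum_alpha
```

For example, undersampling by a factor of two (`optimal_samples=2`, `actual_samples=1`) boosts a per-sample alpha of 0.5 to 0.75, compensating for the missing sample.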

Also adds a chunk visualiser to volume rendering (VR) when wireframe mode is enabled in settings.

seankmartin commented 8 months ago

This pull request implements four main features:

  1. Order-independent transparency (OIT) is performed along rays during volume rendering ray marching, in order to reduce chunk border artifacts.
  2. During ray marching, the value stored in the offscreen Z buffer is compared against the depth computed at the current ray position. If the ray is deemed to pass behind an opaque object, the compositing loop for that ray terminates.
  3. Wireframe rendering mode shows volume rendering chunks in a different color.
  4. A gain parameter has been added to volume rendering mode, to increase or decrease the opacity by scaling it with the gain.
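Features 2 and 4 can be sketched together in Python (an illustrative model of the shader loop, assuming a depth convention where larger values are farther from the camera; names and the exact gain application are assumptions, not the PR's code):

```python
def march_ray(samples, depth_buffer_value, gain=1.0):
    """Front-to-back compositing with depth-based early ray termination.

    `samples` yields (color, alpha, depth) along the ray, front to back.
    When a sample's depth reaches the value stored in the offscreen
    Z buffer by the opaque pass, the ray is behind an opaque object and
    compositing stops. Opacity is scaled by `gain` before compositing.
    """
    accum_color = 0.0
    accum_alpha = 0.0
    for color, alpha, depth in samples:
        if depth >= depth_buffer_value:
            break  # ray has passed behind an opaque object
        a = min(1.0, alpha * gain)  # gain heightens or lessens opacity
        weight = (1.0 - accum_alpha) * a
        accum_color += weight * color
        accum_alpha += weight
        if accum_alpha >= 0.99:
            break  # ray is nearly opaque; further samples add little
    return accum_color, accum_alpha
```

The second break is the classic early-ray-termination optimization for nearly opaque rays; the PR's depth comparison adds a second, scene-driven exit condition on top of it.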

Additionally, the volume rendering flag has been bound to a Python controllable tool.

I think it should be good for another round of review if you get the time @jbms. Thanks again for all the thoughtful input on this one so far, really helpful and much appreciated!

jbms commented 8 months ago

Thanks! This looks good; I left just a few minor comments.