MRtrix3 / mrtrix3

MRtrix3 provides a set of tools to perform various advanced diffusion MRI analyses, including constrained spherical deconvolution (CSD), probabilistic tractography, track-density imaging, and apparent fibre density
http://www.mrtrix.org
Mozilla Public License 2.0

mrview: volume rendering, fixel and clip plane bug #2409

Open maxpietsch opened 2 years ago

maxpietsch commented 2 years ago

Fixel opacity close to, but not at, 100% hides fixels that overlap with the main image (screenshot attached).

Fixel opacity at 100% shows them (screenshot attached).

When rotating the main image, the fixels rotate in the opposite direction if opacity < 100%; at 100% they rotate with the image. It looks like the centre of the fixels' coordinate system is shifted when opacity < 100%. The mismatched rotations seem to be persistent:

https://user-images.githubusercontent.com/10046944/144111930-c54197a1-6e64-457b-9301-c8632568cf98.mp4

Qt 5.15.2, GL 4.1 ATI-4.7.29, Intel MacBook Pro, macOS Monterey; latest master (3.0.3-49-g99449855), compiled from source.

jdtournier commented 2 years ago

Yes, I can confirm this on my Arch Linux AMD Radeon RX 590 system. This is because the renderer isn't using true transparency, but an approximation of it: it just sums up all the contributions. For proper transparency, we'd need to render in back-to-front (or front-to-back) depth order, and use a different mixing function (glBlendFunc (GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA)). But this can only be done if we can sort all the fragments in depth order, which comes with a very hefty performance penalty. This is what gives rise to the illusion that the rotation is going in the opposite direction (because what's in front looks like it could be at the back, basically).

The reason for the volume render obscuring the fixels is that the depth of each fragment for the fixels is not being written to the depth buffer. Indeed, even if it was, when transparency is enabled, the presence of a fixel fragment in front of a volume render fragment doesn't mean you can discard the volume render fragment behind it, like you would without transparency, so it becomes essentially impossible to use standard depth testing techniques. You'll find more discussions on the topic here, for example.
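For illustration, here is a minimal sketch (not the actual mrview code) of the two blend set-ups being contrasted above, assuming a plain OpenGL context; `draw_fixels()`, `draw_fixels_sorted_back_to_front()` and `opacity` are placeholders for whatever issues the fixel geometry:

```cpp
#include <GL/gl.h>   // or whichever GL loader/header the build uses

void draw_fixels ();                        // placeholder: issues the fixel geometry
void draw_fixels_sorted_back_to_front ();   // placeholder: same, but depth-sorted

void render_fixels_approximate ()
{
  // current behaviour: contributions are simply summed, so the result is
  // independent of draw order and no depth sorting is needed:
  glEnable (GL_BLEND);
  glBlendFunc (GL_ONE, GL_ONE);   // additive blending
  glDepthMask (GL_FALSE);         // fixel depths not written to the depth buffer
  draw_fixels ();
}

void render_fixels_true_transparency (float opacity)
{
  // proper transparency: back-to-front "over" compositing with a constant
  // alpha - only correct if fragments arrive sorted by depth, which is the
  // expensive part:
  glEnable (GL_BLEND);
  glBlendColor (0.0f, 0.0f, 0.0f, opacity);
  glBlendFunc (GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
  draw_fixels_sorted_back_to_front ();
}
```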

As to what to do about this, unfortunately I can't see any simple way to deal with this right now... Open to suggestions!

neurolabusc commented 2 years ago

It seems like you could use the same approach as MRIcroGL uses for overlays; the example below is generated by Scripting/Templates/Mosaic2. The volume ray caster samples from two GPU textures: the background image (e.g. a T1 scan) and the overlay (statistical map, fixels, etc.), with the opacity of the latter adjusted by its depth. Each MRIcroGL GLSL shader is a separate text file stored in /Resources/shader, so you can look at those for the algorithm. I would be happy to help.

(MRIcroGL example screenshot attached)

jdtournier commented 2 years ago

Thanks @neurolabusc - but this is a slightly different problem. The approach we use for image-based overlays is already pretty much exactly what you describe. We render all overlays and the main volume in a single ray trace within the fragment shader. But this problem relates to vector plots: basically, we'll render a line for each voxel. I don't think we can do this within the volume render pass in the fragment shader - at least I can't see how we could do that efficiently.

What we do is to first render the lines using GL_LINES geometry with depth buffering enabled, then perform the volume render pass, stopping at the depth stored in the depth buffer for each fragment. That works fine as long as we don't need transparency for the lines. As soon as we do want transparency, however, all bets are off: there is no single well-defined depth at which we should stop the volume render, and even if there were, the blending operations are order-dependent, so we'd need to store the full RGBA values of all line fragments along the ray so that we can properly blend them into the volume render. This is not an easy issue - though I do have ideas on how we could approach the problem. We also have the exact same problem for the tractography render, for the very same reasons.
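To make the two-pass idea concrete, here's a rough sketch of what the volume render pass looks like in that scheme, written as GLSL in a C++ raw string in the style in which mrview composes its shaders. This is not the actual mrview shader; `depth_texture`, `screen_to_tex`, `ray_entry`, `ray_dir`, `step_size` and `opacity` are all made-up names:

```cpp
// Sketch only: volume render fragment shader that stops at the depth recorded
// by an earlier, opaque lines pass.
static const char* volume_render_frag = R"(
#version 330 core
uniform sampler2D depth_texture;   // depth buffer produced by the lines pass
uniform sampler3D volume;          // main image
uniform mat4 screen_to_tex;        // clip space -> volume texture coordinates
uniform float step_size, opacity;
in vec3 ray_entry;                 // ray entry point, texture coordinates
in vec3 ray_dir;                   // unit step along the ray, texture coordinates
out vec4 colour;

void main () {
  vec2 uv = gl_FragCoord.xy / vec2 (textureSize (depth_texture, 0));
  float line_depth = texture (depth_texture, uv).r;

  // reconstruct where along this ray the nearest opaque line fragment sits:
  vec4 stop = screen_to_tex * vec4 (2.0*uv - 1.0, 2.0*line_depth - 1.0, 1.0);
  stop /= stop.w;
  float max_t = length (stop.xyz - ray_entry);

  // standard front-to-back compositing, terminated at that depth:
  vec4 accum = vec4 (0.0);
  for (float t = 0.0; t < max_t; t += step_size) {
    float val = texture (volume, ray_entry + t * ray_dir).r;
    float a = val * opacity;
    accum.rgb += (1.0 - accum.a) * a * vec3 (val);
    accum.a   += (1.0 - accum.a) * a;
    if (accum.a > 0.95) break;     // early ray termination
  }
  colour = accum;
}
)";
```

Once the lines themselves are no longer opaque, there is no single max_t at which to stop the march, which is exactly the problem described above.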

The approach that was implemented to somehow enable this is a total hack that I was never particularly keen on, but produces vaguely acceptable results in most situations. But it's clearly very limited, and isn't going to produce the results we'd like every time.

I'm open to suggestions as to how we could efficiently render transparent lines / surfaces (often very dense) within a ray-traced volume render though, so if you have ideas, I'm all ears!

neurolabusc commented 2 years ago

My idea was to convert the lines to a volume texture with the same unit dimensions as the background image. You could then sample the volume in your ray casting path. It does mean that all lines must be in the background volume and you would have to consider the resolution of the texture.

Somewhat related, I wrote a shader to display DTI lines on 2D slices using an RGBA texture and the fragment shader, rather than storing a huge number of GL_LINES. With the GL_LINES approach you need to store two vertices for each line, and we often have one DTI line for each voxel of our DWI scan, so that is a minimum of 2 vertices × 3 (x/y/z) coordinates × 4 bytes (float32) = 24 bytes per line. Moreover, with modern OpenGL limiting line width to 1 pixel, which is very small on a high-DPI display, one must increase the storage further (e.g. create a billboard for each line). An alternative is to use an RGBA texture, where each voxel is a single RGBA32 value (4 bytes per voxel). The RGBA components store the X, Y, Z direction and the length. The fragment shader can detect its position in the pixel grid and select the colour based on the nearest-neighbour sample of the texture.
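To illustrate the idea, a per-fragment slice shader along those lines could look like the sketch below (this is not the actual MRIcroGL shader; `dir_tex`, `grid_size`, `line_width` and `tex_pos` are made-up names, and the texture is assumed to store directions mapped to 0..1 and the length in voxel units):

```cpp
// Sketch only: draw one line per voxel on a 2D slice from a single RGBA texture.
static const char* slice_lines_frag = R"(
#version 330 core
uniform sampler2D dir_tex;     // RGBA8: rgb = direction (0..1), a = length (voxel units)
uniform vec2 grid_size;        // number of voxels across the slice
uniform float line_width;      // in voxel units
in vec2 tex_pos;               // slice position, 0..1
out vec4 colour;

void main () {
  // nearest-neighbour fetch at the centre of the voxel containing this fragment:
  vec2 centre = (floor (tex_pos * grid_size) + 0.5) / grid_size;
  vec4 v = texture (dir_tex, centre);
  vec2 dir = normalize (2.0*v.xy - 1.0);   // in-plane component of the direction
  float len = v.a;

  // distance from this fragment to the line segment through the voxel centre:
  vec2 offset = (tex_pos - centre) * grid_size;
  float along = clamp (dot (offset, dir), -0.5*len, 0.5*len);
  if (length (offset - along*dir) > 0.5*line_width)
    discard;
  colour = vec4 (abs (2.0*v.xyz - 1.0), 1.0);   // directionally-encoded colour
}
)";
```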

jdtournier commented 2 years ago

Yes, that can also work. I've often produced high-resolution track density maps and rendered them in the volume render; that can work nicely. It's not a very efficient way of doing it, though...

As to the lines render, yes, that is a problem. We don't use an RGBA texture, but straight VBAs containing only the data we need. Because the backend caters for more than just DTI, and we can have multiple directions per voxel, we store both the position and direction vector, and optionally the length and colour, all as separate VBAs. We then rely on the geometry shader to convert these to quads on the fly - optionally with pseudo-tube lighting (basically also generating surface normals that vary across the surface as they would for a cylinder). That allows us to control the line thickness with no increase in RAM consumption. The maths was a bit hairy to work out - we need to ensure the quads face the viewer, deal with corners for a line strip, etc. - but it seems to work pretty well (apart from on the Mac M1, as per #2281 - though that now seems to have magically fixed itself).
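For reference, a stripped-down sketch of that kind of geometry shader (not the actual mrview code; it assumes the vertex shader passes the fixel position and direction straight through in world coordinates, and `MVP`, `eye_dir`, `line_length` and `line_width` are placeholders):

```cpp
// Sketch only: expand each fixel (a point plus a direction) into a
// camera-facing quad of controllable width.
static const char* fixel_geom = R"(
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

uniform mat4 MVP;
uniform vec3 eye_dir;          // viewing direction, world coordinates
uniform float line_length;     // half-length of the line, world units
uniform float line_width;      // half-width of the quad, world units

in vec3 v_dir[];               // fixel direction, passed through from the vertex shader
out float g_side;              // -1..1 across the quad, usable for pseudo-tube shading

void main () {
  vec3 centre = gl_in[0].gl_Position.xyz;       // fixel position, world coordinates
  vec3 along = normalize (v_dir[0]) * line_length;
  // widen perpendicular to both the line and the view direction,
  // so the quad always faces the viewer:
  vec3 across = normalize (cross (v_dir[0], eye_dir)) * line_width;

  for (int i = 0; i < 4; ++i) {
    vec3 corner = centre + ((i < 2) ? -along : along)
                         + (((i & 1) == 0) ? -across : across);
    g_side = ((i & 1) == 0) ? -1.0 : 1.0;
    gl_Position = MVP * vec4 (corner, 1.0);
    EmitVertex();
  }
  EndPrimitive();
}
)";
```

The g_side varying is what a pseudo-tube lighting term could be driven from; corner handling for line strips is omitted here.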

But I think I can see how that could work within the volume render, if you're doing all the work in the fragment shader. We'd need to figure out how to support multiple directions per voxel, but there's probably a way around that. It's a no-go for streamlines though, they're far too unstructured for that. The other limitation, presumably, is that the line length can't exceed its corresponding voxel boundary, right? Worth thinking about, at any rate...

neurolabusc commented 2 years ago

Yes, for my 2D method the line is constrained to the voxel boundary. You could always generate a full 3D texture for your volume renderer, where the lines texture has a higher resolution than the background image and the lines are pre-generated in the texture, so they can cross voxel boundaries.
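A rough CPU-side sketch of what that pre-generation step might look like (not MRtrix3 or MRIcroGL code; the `Fixel` struct and `rasterise()` are invented for illustration, with positions and unit directions assumed to be in voxel coordinates):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

struct Fixel { std::array<float,3> pos, dir; float length; };   // voxel coords, unit direction

std::vector<uint8_t> rasterise (const std::vector<Fixel>& fixels,
                                const std::array<int,3>& dim, int upsample)
{
  const std::array<int,3> tdim { dim[0]*upsample, dim[1]*upsample, dim[2]*upsample };
  std::vector<uint8_t> tex (size_t (tdim[0]) * tdim[1] * tdim[2] * 4, 0);

  for (const auto& f : fixels) {
    // walk along the line in sub-voxel steps, centred on the fixel position:
    const int nsteps = std::max (2, int (std::ceil (f.length * upsample)) + 1);
    for (int n = 0; n < nsteps; ++n) {
      const float t = f.length * (float (n) / (nsteps-1) - 0.5f);
      std::array<int,3> p;
      bool inside = true;
      for (int a = 0; a < 3; ++a) {
        p[a] = int (std::lround ((f.pos[a] + t * f.dir[a]) * upsample));
        inside = inside && p[a] >= 0 && p[a] < tdim[a];
      }
      if (!inside)
        continue;
      const size_t idx = 4 * ((size_t (p[2]) * tdim[1] + p[1]) * tdim[0] + p[0]);
      for (int a = 0; a < 3; ++a)                  // directionally-encoded colour
        tex[idx+a] = uint8_t (255.0f * std::abs (f.dir[a]));
      tex[idx+3] = 255;                            // opaque wherever a line exists
    }
  }
  return tex;   // upload with glTexImage3D (GL_TEXTURE_3D, 0, GL_RGBA8, ...)
}
```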

However, there are tradeoffs with each approach.