tischi opened this issue 2 weeks ago (status: Open)
Hello @tischi,
Before we start digging: are the meshes loaded from disk, where they are stored in some format (.stl)? Or do you generate them in MoBIE from some thresholded source, programmatically? Or both?
They are created on the fly; they are not stored on disk.
I slightly widened the title of the issue. For me it would also be interesting to check how a label mask image volume rendered with a Glasbey LUT would look; maximum projection would likely not be useful for label masks :-)
Max projection, no, but 'volumetric' should be ok. You just need to narrow down the alpha range so that not everything is transparent, but keep the LUT range wide.
Is the alpha value adjustable?
I would add the "normal" volume rendering as an option for the label masks to our branch such that we can test this, ok?
Is there any way that I could test the Glasbey LUT from within my IntelliJ IDE? I think there was some trick to "link in" the Fiji folder, but I am not sure...
It is adjustable, check the readme of bvv-playground.
There is a method to load an IndexColorModel in the ConverterSetup, so you just need to read Glasbey values into it from disk (an ImageJ LUT) or somewhere else.
I can check it for you tomorrow.
Thanks! That would be very helpful for the testing.
I added code to display label mask images (aka segmentations) with BVV.
However, the "classic example", the cells
from the platynereis dataset, are of Long datatype, which throws an error:
Cannot display cells in BVV, incompatible data type:
net.imglib2.type.numeric.integer.UnsignedLongType
Is that expected? Could it be that there is no support in SpimData... Would BDV be able to display this at all, or would we need to convert to something else? I think you mentioned that BVV can only do uint16, is that right?
For cached multires it is only UnsignedShort (uint16), indeed.
I can add loading of Long, either truncated to the max of 65535 or 'cyclic', i.e. the remainder of division by this max.
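The two conversion options could be sketched like this (an illustration only; the class and method names are made up, not the actual bvv-playground code):

```java
// Sketch of the two possible long -> uint16 conversions discussed here
// (illustration only; not the actual bvv-playground implementation).
public class LongToUint16
{
    static final int MAX_UINT16 = 65535;

    // Option 1: clamp everything above 65535 to the maximum.
    public static int truncate( final long label )
    {
        return ( int ) Math.min( label, MAX_UINT16 );
    }

    // Option 2: 'cyclic', i.e. the remainder of division by the maximum,
    // so that labels > 65535 still map to distinct (reused) values.
    public static int cyclic( final long label )
    {
        return ( int ) ( label % MAX_UINT16 );
    }

    public static void main( final String[] args )
    {
        System.out.println( truncate( 70000L ) ); // 65535
        System.out.println( cyclic( 70000L ) );   // 70000 % 65535 = 4465
    }
}
```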
That would be great! Maybe cyclic would be best such that segments >65535 would still be rendered with different colours.
Hello @tischi,
I've made a Glasbey LUT version and UnsignedLong data loading. In the end it is enough to cycle over 256, since that is the range of the LUT.
You can check results and play with it a bit here.
I've made two options, one with "dark render" (version 1) and one with clipped volume (version 2).
This is version 1 view
But the main conclusion for me is that the multires image does not seem to be the best way to show segmentation results in 3D. The reason is that scaling by a factor of 2 scrambles all the labels in 3D (the averaged intensity becomes weird). That leads to the "scrambled" segmentation. See the version 2 initial view, and after it loaded a bit better resolution.
The areas of fine, thin labels (with thickness below the current optimal/displayed resolution) get scrambled. It happens in BDV as well, but it is less noticeable there, because a higher resolution can be loaded at any time, and fast, in one plane. For BVV to show fine details correctly one would need to load it all to the GPU.
Well, you can check the result and play with it yourself. So I guess meshes would be a solution. How many objects do you have in this segmentation? Some time later we could try to implement mesh generation and loading.
Not sure if that information helps, maybe you know all of this already: the resolution pyramid for the labels was created using a nearest neighbour sampling strategy, thus there should be no averaging of label values. In BDV, for the display, one also has to use nearest neighbour interpolation (this can be configured by pressing the I key). In fact, I think my code in MoBIE prevents label masks from ever being interpolated, even if the user presses the I key.
The reason behind it is that scaling by factor of 2 scrambles all the labels in 3D
Where does that factor-2 scaling happen in BVV? Could one configure it to do something other than averaging? For instance, taking a random sample?
How many objects do you have in this segmentation?
Around 16000 I think
So I guess meshes would be a solution.
Yes, that's why so far in MoBIE I am using meshes for the display of segmentations. However, only the (few) segments that are actively selected by the user are rendered, because creating 16000 meshes on the fly would be too slow, and I am not sure whether the 3D Image Viewer could handle it. But I don't think I ever really tried to benchmark/push this... Anyway, I think a good start would be to just reproduce my current 3D Viewer mesh implementation with BVV.
Well, you can check the result and play with it yourself.
Will do.
I can reproduce the scrambling of values at the borders of the labels, but don't really understand why that happens. Do you have a reference for how the volume rendering algorithm works? Does it just take the first non-zero value that it finds along a ray?
Version 1 is better I think, one can get to Version 2 manually by "moving into the sample".
If the pyramid was built using nearest neighbor, then I guess my hypothesis was wrong and it is not the culprit.
Do you have a reference for how the volume rendering algorithm works?
Not in a written form, no, it is unpublished.
I know from reading/tinkering with the code and conversations with Tobias :)
In principle, volume rendering in BVV is optimized for speed, and it makes some assumptions about the data.
We can remove those assumptions and see if the quality of the picture improves.
So there are a few possible suspects that we can test:
1) The first one is dithering (explained here). I will try to remove it and see if it is the problem.
2) The second one is the variable step along the ray. Each screen pixel shoots a ray through the volume and samples/accumulates max (for max intensity) or "alpha blended" intensity values. In the current form the step size varies: BVV takes smaller steps on the part of the ray closer to the camera and larger steps further away. In many other 3D renderers (sciview) the step size is constant to avoid artifacts. That we can also change and see if the scrambling goes away.
3) There is some bug in my conversion of UnsignedLong. I need to think about it; so far it looks ok.
4) Something else.
Does it just take the first non-zero value that it finds along a ray?
Kind of, but not really. It does alpha blending of the accumulated voxels along the ray in the shaders. It should stop when the alpha value is more than 1.
But! If we set the alpha range in the ConverterSetup of BVV to 0-1, it should stop at the first sampled voxel, so yes (unless I am missing something). But then we have point 2) from above.
I am going to investigate a bit.
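The accumulation described above can be sketched in plain Java (the real version runs in the GLSL shaders; the step size and per-sample opacity are simplified here):

```java
// Plain-Java sketch of front-to-back alpha compositing along one ray.
// The real BVV implementation runs in GLSL shaders; this only illustrates
// the accumulation and the early exit once opacity saturates.
public class RayCompositing
{
    // samples: intensities (0..1) along the ray, front to back;
    // alphas: per-sample opacity derived from the converter's alpha range.
    public static double composite( final double[] samples, final double[] alphas )
    {
        double color = 0.0;
        double alphaAcc = 0.0;
        for ( int i = 0; i < samples.length; i++ )
        {
            final double w = ( 1.0 - alphaAcc ) * alphas[ i ];
            color += w * samples[ i ];
            alphaAcc += w;
            if ( alphaAcc >= 1.0 )
                break; // ray terminates early: fully opaque
        }
        return color;
    }

    public static void main( final String[] args )
    {
        // With opacity 1 at the first sample, the ray stops immediately,
        // i.e. the "stop at the first sampled voxel" behavior discussed above:
        System.out.println( composite( new double[] { 0.5, 0.9 },
                new double[] { 1.0, 1.0 } ) ); // 0.5
    }
}
```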
manually by "moving into the sample".
Yeah, this is why I think that the future bvv-minimal BVVBrowser should have clipping controls.
Ok, I think I figured it out.
So if I load a low resolution level of your labels converted to UnsignedShort and display it with the current version, I get this beautiful rainbow render
And now if I clip the view to just one voxel and zoom in on it, then I get this picture
Now I understand where the "rainbow" comes from. Basically, the data is uploaded to the GPU cache (texture). When the renderer engine samples the "view ray", it gets float coordinate values inside one voxel. The interpolation mode is set to nearest neighbor (on the uploaded GPU texture), but it does not "round down" the value; it indeed looks for the nearest voxel in 3D, which differs depending on the float coordinates inside the voxel of interest. So with a Glasbey LUT, that is a pretty drastic change in color. What we see is basically a "nearest neighbor" subpixel distance map.
So what I did: I tweaked the renderer to round (actually floor) the accessed voxel values. Then I get something more "expected":
And one voxel becomes
Does it make sense?
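If I understand the fix correctly, the difference between the two sampling modes boils down to this (an illustration with made-up helper names, not the actual shader code):

```java
// Sketch of the two ways to turn a continuous ray coordinate into a voxel
// index (illustration only; the actual logic lives in the BVV shaders).
public class VoxelSampling
{
    // "Nearest": picks the closest voxel center, so the chosen label flips
    // halfway through a voxel. With a Glasbey LUT this produces the
    // "rainbow" subpixel pattern described above.
    public static int nearest( final double coord )
    {
        return ( int ) Math.round( coord );
    }

    // "Floor": every continuous coordinate inside a voxel maps to that
    // same voxel, so each label fills its voxel with one constant color.
    public static int floor( final double coord )
    {
        return ( int ) Math.floor( coord );
    }

    public static void main( final String[] args )
    {
        System.out.println( nearest( 3.7 ) ); // 4 -- already the next voxel
        System.out.println( floor( 3.7 ) );   // 3 -- still the same voxel
    }
}
```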
Of course, this voxel "floor" method is not acceptable for rendering normal volumetric microscopy data, since it becomes super "voxelized", see below.
I guess I can add an option to bvv-playground's converter setup to render a specific source in this "labels" mode. Would this be a solution?
I think it looks much more realistic (left = previous version, right = "floor" voxel method), especially the "outer" surface level
Here is the "proper" mipmap loading upon the start on my laptop
https://github.com/user-attachments/assets/aa49e298-c2df-48a6-ba0e-b102bf2e2ca7
Wow, super interesting! Thanks for digging!
In the "floor-rendering-mode": what happens if you zoom in so much that the viewer canvas is within the specimen? In other words: how does this view look now?
I would hope that all the scrambled stuff between the labels is gone...?!
It depends on the pyramid level, since at the lowest level the data is "scrambled". Here is the same slice before the "full res" loading
and after
The high-res looks perfect!
I guess I can add an option to bvv-playground's converter setup to render a specific source in this "labels" mode. Would this be a solution?
Yes, that's what I am also sort of doing:
@Override
public RealRandomAccessible< AnnotationType< A > > getInterpolatedSource( final int t, final int level, final Interpolation method )
{
    final RealRandomAccessible< T > rra = source.getInterpolatedSource( t, level, Interpolation.NEARESTNEIGHBOR );
    return Converters.convert( rra,
            ( T input, AnnotationType< A > output ) -> setOutput( input, t, output ),
            new AnnotationType<>() );
}
☝️ I am ignoring here the Interpolation method input argument and always use Interpolation.NEARESTNEIGHBOR for Sources of AnnotationType.
What is your algorithm for assigning colors, so that the BVV render is the same?
But this is tricky, because this converts AnnotationType to a colour. I guess you are currently working with the underlying label mask image, which is some unsigned integer type...
The mapping from the label-id in the label mask to the AnnotationType is done here: https://github.com/mobie/mobie-viewer-fiji/blob/5d95facf26350d278b787049fe2480f3cc7f3090/src/main/java/org/embl/mobie/lib/annotation/DefaultAnnotationAdapter.java#L78
This is all quite involved. I am not sure you will be able to reverse-engineer all of this....
I think the easiest would be if we could just do BvvFunctions.add( SourceAndConverter< ? > sac ), because then we could simply add the sac that already outputs the correct colours.
Assuming that the volume rendering operates directly on the ARGBType?! Or does it need to access the integer-valued data at any point?
I could also dig a bit into BvvFunctions.show myself to better understand what is going on... do you think that could help?
I guess you are currently working with the underlying label mask image, which is some unsigned integer type.
Yes, exactly.
Assuming that the volume rendering operates directly on the ARGBType?! Or does it need to access the integer valued data at any point?
Cached multires sources of ARGBType are not supported in BVV, only 16-bit data (UnsignedShort). Therefore everything "multires cached" is wrapped into spimdata.
The coloring I show right now is made by applying a LUT.
If you have somewhere a table of "voxel value in the segmentation volume corresponding to the label" <-> color, we can make a very specific LUT like this and load it into BVV to display the source. It can be done at runtime.
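Such a label-specific LUT could be assembled roughly like this (a sketch: `IndexColorModel` is the standard `java.awt.image` class, while the per-label color function below is only a placeholder for the real label <-> color table):

```java
import java.awt.image.IndexColorModel;

// Sketch: build an IndexColorModel where entry i holds the color assigned
// to label i. The color function is a placeholder standing in for a real
// label <-> color table.
public class LabelLut
{
    public static IndexColorModel fromTable( final int nLabels )
    {
        final byte[] r = new byte[ nLabels ];
        final byte[] g = new byte[ nLabels ];
        final byte[] b = new byte[ nLabels ];
        for ( int i = 0; i < nLabels; i++ )
        {
            final int argb = colorOfLabel( i ); // would come from the table
            r[ i ] = ( byte ) ( ( argb >> 16 ) & 0xff );
            g[ i ] = ( byte ) ( ( argb >> 8 ) & 0xff );
            b[ i ] = ( byte ) ( argb & 0xff );
        }
        // 16 index bits, so up to 65536 entries fit.
        return new IndexColorModel( 16, nLabels, r, g, b );
    }

    // Placeholder: deterministic pseudo-random color per label.
    static int colorOfLabel( final int label )
    {
        final int h = label * 0x9E3779B1;
        return 0xff000000 | ( h & 0x00ffffff );
    }

    public static void main( final String[] args )
    {
        System.out.println( fromTable( 256 ).getMapSize() ); // 256
    }
}
```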
I guess I can use sac.getConverter().convert( UnsignedLong in, ARGBType out ) to build the LUT?
The only thing I need to know is the maximum number (index) of annotations. Is it possible to get it somehow from this Annotation source? Maybe from something like this?
int nTest = ( ( AnnotationLabelImage<?> ) image ).getAnnData().getTable().numAnnotations();
?
Ok, I got the colors for all annotations ~32000.
Turns out that my implementation of LUTs for BVV does not support a LUT with 32000 colors. It is a shame. I am gonna try to fix this.
I guess I can use sac.getConverter().convert( UnsignedLong in, ARGBType out) to build the LUT?
Unfortunately, I don't think so, because the pixel type for which I have a converter is AnnotationType.
The logic is: Integer(Label Mask) ----AnnotationAdapter----> AnnotationType ----Converter----> ARGBType
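Building the LUT would then mean composing the two stages per label id, roughly like this (a generic sketch with stand-in functional types; the real classes are MoBIE's annotation adapter and converter, which have richer interfaces):

```java
import java.util.function.Function;
import java.util.function.IntFunction;

// Sketch of the two-stage mapping described above:
//   label id --adapter--> annotation --converter--> ARGB color.
// Both stages are stand-ins for MoBIE's AnnotationAdapter and Converter.
public class AnnotationChain
{
    public static < A > IntFunction< Integer > lutFunction(
            final IntFunction< A > adapter,          // label id -> annotation
            final Function< A, Integer > converter ) // annotation -> ARGB
    {
        return label -> converter.apply( adapter.apply( label ) );
    }

    // Toy instantiation: the "annotation" is just a String and the "color"
    // is its hashCode, purely to show the composition.
    public static int toyColorOf( final int label )
    {
        final IntFunction< String > adapter = l -> "annotation-" + l;
        final Function< String, Integer > converter = String::hashCode;
        return lutFunction( adapter, converter ).apply( label );
    }

    public static void main( final String[] args )
    {
        System.out.println( toyColorOf( 5 ) == "annotation-5".hashCode() ); // true
    }
}
```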
I think I will have to look at the code myself in a bit more detail to see what could be done.
I would suggest you push your latest additions into the bvvpg branch and wait until I get back to you. I will try this week. OK?
Hi @ekatrukha,
I added some code for converting the integer to an ARGBType that seems to work: https://github.com/mobie/mobie-viewer-fiji/blob/7966f36a5b93b9c7d9804bbf45ba837df41b0d03/src/main/java/org/embl/mobie/lib/bvv/ImageBVViewer.java#L131
I think this is what you could use instead of (inside of) your current getGlasbeyICM().
Thank youuuu. I've wrapped it into a separate function for all labels.
Now I am going to modify bvv-playground so it can do
1) the "floor rendering" we discussed above and
2) load large LUTs (>2000 colors),
and cut a new release of it.
Once it is done, I will ping you.
Just a detail behind the LUT story: so far I have been uploading sources' LUTs as a linear 1D texture to the GPU, but OpenGL has a limitation on the maximum size of this array. I would need to wrap it as a 2D or 3D image; in that case the limit should be that size squared, or even cubed.
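The index arithmetic for wrapping a flat LUT into a 2D texture is straightforward (a sketch; the actual texture width limit would be queried from OpenGL at runtime):

```java
// Sketch: wrap a flat LUT index into (x, y) coordinates of a 2D texture
// of the given width, so a LUT of up to width^2 entries fits where a
// 1D texture is limited to width entries.
public class Lut2DWrap
{
    public static int[] toXY( final int index, final int width )
    {
        return new int[] { index % width, index / width };
    }

    public static int toIndex( final int x, final int y, final int width )
    {
        return y * width + x;
    }

    public static void main( final String[] args )
    {
        // 70000 entries would not fit a 1D texture of width 256,
        // but they do fit a 256 x 274 2D texture:
        final int[] xy = toXY( 70000, 256 );
        System.out.println( xy[ 0 ] + ", " + xy[ 1 ] ); // 112, 273
        System.out.println( toIndex( xy[ 0 ], xy[ 1 ], 256 ) ); // 70000
    }
}
```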
By the way, there is annotationType.setAnnotation( A annotation ); thus the annotationType variable can be reused and does not have to be instantiated every time.
...I am saying that because I assume that the conversion function will be called a lot during rendering?
Ah, I see, I will add that. No, it will be called only once to generate the LUT, a color array that will be subsequently uploaded to the GPU (once) and stored there.
OK, then we would need, at some point, to make ImageBVViewer implement ColoringListener to update this LUT if needed and request a repaint (if possible); the corresponding AnnotationSliceView.class is doing that.
Hello @tischi,
I've updated the max LUT size in BVV; now it should support up to 65536 values. That means it can show up to 65535 annotations.
I pushed the changes to the bvvpg fork; now it should display labels. From my tests it looks identical
(left: MoBIE's BDV, right: BVV)
By default I put the alpha (opacity) range to 0-1, but one can change it and even observe the labels over the data
(left: MoBIE's BDV, right: BVV)
I guess this part is working now.
What is left (in my opinion):
1) The annotation LUT update with ColoringListener. Do you want me to look into that?
2) A BVV settings change dialog somewhere.
Let me know what you think and what the results of your tests are.
Hello @tischi,
one more thing is possible with this LUT mapping. In principle, if we put all LUT alpha values to zero and some selected ones to 0.5 or 1.0, we can show only specific (for example, user-selected) labels. It is a bit of an overshoot, since the whole segmentation volume is going to be loaded, but it works quite ok. See the example below, where I "selected" only two labels. I think the LUT update/GPU upload should be relatively quick.
https://github.com/user-attachments/assets/78184bcc-eb25-4398-b994-85938abe2a36
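The "show only selected labels" trick boils down to zeroing the alpha channel for everything except the selection. A sketch over a plain ARGB LUT array (names are made up; the real LUT lives in bvv-playground's converter setup):

```java
import java.util.Set;

// Sketch: given an ARGB LUT (one entry per label id), keep only the
// selected labels visible by zeroing the alpha of all other entries.
public class SelectiveAlpha
{
    public static int[] selectLabels( final int[] argbLut, final Set< Integer > selected )
    {
        final int[] out = argbLut.clone();
        for ( int i = 0; i < out.length; i++ )
            if ( !selected.contains( i ) )
                out[ i ] &= 0x00ffffff; // alpha -> 0, RGB kept
        return out;
    }

    public static void main( final String[] args )
    {
        final int[] lut = { 0xffff0000, 0xff00ff00, 0xff0000ff };
        final int[] shown = selectLabels( lut, Set.of( 1 ) );
        System.out.println( Integer.toHexString( shown[ 1 ] ) ); // ff00ff00 (kept)
        System.out.println( Integer.toHexString( shown[ 0 ] ) ); // ff0000 (alpha zeroed)
    }
}
```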
Hi @ekatrukha,
Starting an issue about showing meshes in BVV from MoBIE.
The current mesh code is all here:
https://github.com/mobie/mobie-viewer-fiji/tree/main/src/main/java/org/embl/mobie/lib/volume
Here is where a specific mesh is added to the current volume viewer:
https://github.com/mobie/mobie-viewer-fiji/blob/5d95facf26350d278b787049fe2480f3cc7f3090/src/main/java/org/embl/mobie/lib/volume/SegmentVolumeViewer.java#L283
I guess converting the current mesh into an imagej-mesh should not be a big deal. I can also help looking into this. Let me know if I should have a look!