google / neuroglancer

WebGL-based viewer for volumetric data
Apache License 2.0

position-dependent image chunk rendering #255

Open d-v-b opened 3 years ago

d-v-b commented 3 years ago

When panning this image layer on Firefox, chunks near the center of the field of view are rendered, while chunks away from the center appear to "pop out" of existence. What controls this behavior? Is there anything we can do to prevent it?

http://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B1e-9%2C%22m%22%5D%2C%22y%22:%5B1e-9%2C%22m%22%5D%2C%22z%22:%5B1e-9%2C%22m%22%5D%7D%2C%22position%22:%5B30196.6328125%2C4127.5%2C15657.4833984375%5D%2C%22crossSectionScale%22:58.96965593556915%2C%22projectionScale%22:65536%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22n5://https://janelia-cosem-datasets.s3.amazonaws.com/jrc_hela-2/jrc_hela-2.n5/predictions/plasma_membrane_seg%22%2C%22shader%22:%22#uicontrol%20vec3%20color%20color%28default=%5C%22orange%5C%22%29%5Cn%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20void%20main%28%29%20%7BemitRGB%28color%20%2A%20ceil%28float%28getDataValue%28%29.value%5B0%5D%29%20/%204294967295.0%29%29%3B%7D%5Cn%22%2C%22crossSectionRenderScale%22:0.08005541811026703%2C%22name%22:%22Plasma%20membrane%22%7D%5D%2C%22selectedLayer%22:%7B%22layer%22:%22Plasma%20membrane%22%2C%22visible%22:true%7D%2C%22crossSectionBackgroundColor%22:%22#000000%22%2C%22layout%22:%22xz%22%2C%22partialViewport%22:%5B0%2C0%2C1%2C1%5D%7D

jbms commented 3 years ago

Neuroglancer prioritizes chunks to determine the order in which they are downloaded, and which ones to keep in the cache when memory is limited. Lower-resolution chunks are prioritized first; beyond that, priority is based on distance from the center of the screen.
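A toy sketch of the prioritization just described — coarser levels first, then distance from the screen center. All names here are hypothetical; Neuroglancer's actual scheduler is considerably more involved.

```python
import math

def chunk_priority(chunk_center, screen_center, lod_level):
    """Sort key: lower tuples sort first, so coarser LODs (higher
    lod_level) win, and ties break by distance to the screen center."""
    dist = math.dist(chunk_center, screen_center)
    return (-lod_level, dist)

# Toy chunks as (center, lod_level); higher lod_level = lower resolution.
chunks = [((0, 0), 0), ((500, 0), 2), ((50, 50), 0), ((900, 900), 2)]
ordered = sorted(chunks, key=lambda c: chunk_priority(c[0], (0, 0), c[1]))
# Coarse chunks come first regardless of distance; within a level,
# chunks near the center precede chunks near the edge.
```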

I see that this is a uint64 volume with a chunk size of 96^3. Uncompressed, that requires a very large amount of memory. For example, with a screen resolution of 1920x1080, showing a fullscreen cross section would require 1920 × 1080 × 96 × 8 bytes ≈ 1.5 GB of data, which already exceeds the default limit of 1 GB of GPU memory. Additionally, you have set the "Resolution (slice)" control to smaller than one pixel, which would increase the memory requirement further, except that you are already hitting the limit. Note that you can see the memory usage by opening the chunk statistics panel: press the backslash key, or right click on the top bar next to the position and enable "Show chunk statistics".
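The back-of-envelope arithmetic above, spelled out (the 1920x1080 screen and 1 GB default limit are the figures from the comment; everything else follows from the uint64 data type and 96^3 chunk size):

```python
# GPU memory needed for a fullscreen cross section of uint64 data.
width, height = 1920, 1080  # screen resolution in pixels
chunk_depth = 96            # chunks are 96^3, so a slice pulls in 96 voxels of depth
bytes_per_voxel = 8         # uint64

required = width * height * chunk_depth * bytes_per_voxel
gpu_limit = 2**30           # default 1 GiB GPU memory limit
print(required, required > gpu_limit)  # 1592524800 (~1.5 GB), True
```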

Neuroglancer supports a "compressed_segmentation" format for the uint32 and uint64 data types that can greatly reduce system and GPU memory usage for discrete label data, and it can automatically encode data into that format. However, that compression format is not advantageous for continuous data. Previously, the n5 datasource never re-encoded into compressed_segmentation format, which was a bug. I just pushed out a fix so that the n5 datasource will use compressed_segmentation format if you are using a segmentation layer (it treats that as a hint that the data is discrete). Currently there is still no way to force compressed_segmentation encoding when using an image layer.

Here is a link with the layer changed to a segmentation layer (with the newly deployed update, memory usage is reduced about 100x): https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B1e-9%2C%22m%22%5D%2C%22y%22:%5B1e-9%2C%22m%22%5D%2C%22z%22:%5B1e-9%2C%22m%22%5D%7D%2C%22position%22:%5B33793.78515625%2C4123.5%2C16070.2880859375%5D%2C%22crossSectionScale%22:58.96965593556915%2C%22projectionScale%22:65536%2C%22layers%22:%5B%7B%22type%22:%22segmentation%22%2C%22source%22:%22n5://https://janelia-cosem-datasets.s3.amazonaws.com/jrc_hela-2/jrc_hela-2.n5/predictions/plasma_membrane_seg%22%2C%22crossSectionRenderScale%22:0.5%2C%22name%22:%22Plasma%20membrane%22%7D%5D%2C%22selectedLayer%22:%7B%22layer%22:%22Plasma%20membrane%22%2C%22visible%22:true%7D%2C%22crossSectionBackgroundColor%22:%22#000000%22%2C%22layout%22:%22xz%22%2C%22statistics%22:%7B%22visible%22:true%2C%22size%22:240%7D%2C%22partialViewport%22:%5B0%2C0%2C1%2C1%5D%7D
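A toy model of why discrete label data compresses so well: within a block, only a handful of distinct segment IDs occur, so storing a small value table plus per-voxel indices packed at a few bits each beats 8 bytes per voxel by a wide margin. This is a rough illustration only, not Neuroglancer's actual compressed_segmentation encoding (which additionally subdivides blocks, among other details).

```python
import numpy as np

def compressed_size_estimate(block):
    """Rough byte count for one block under a table-plus-indices
    scheme: unique uint64 values, plus per-voxel indices packed at
    ceil(log2(n_unique)) bits. Toy model only."""
    values = np.unique(block)
    index_bits = int(np.ceil(np.log2(len(values)))) if len(values) > 1 else 0
    table_bytes = values.size * 8
    index_bytes = (block.size * index_bits + 7) // 8
    return table_bytes + index_bytes

rng = np.random.default_rng(0)
# Discrete labels: a 96^3 block containing only a handful of segment IDs.
labels = rng.integers(0, 4, size=(96, 96, 96)).astype(np.uint64)
raw = labels.size * 8
ratio = raw / compressed_size_estimate(labels)
print(ratio)  # large compression ratio for label data
```

Continuous image data has nearly as many unique values as voxels per block, so the same scheme buys almost nothing there, which is why the format is only worthwhile for segmentations.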

d-v-b commented 3 years ago

Thanks for the clarification, that's very helpful.

I was initially viewing the data as an image layer because, in other volumes, we have many different labels but want to view all of them with a single color. Is there a way to do this with a segmentation layer?

jbms commented 3 years ago

It is certainly possible to add a way to hint to the n5 datasource that the data would benefit from compressed_segmentation encoding --- e.g.

n5://whatever?type=segmentation
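A sketch of how such a hint might be parsed; only the `type=segmentation` query parameter comes from the example above, and the parsing approach is purely illustrative of the proposal, not Neuroglancer's actual URL handling.

```python
from urllib.parse import parse_qs, urlsplit

# Hypothetical handling of the proposed datasource hint.
url = "n5://whatever?type=segmentation"
params = parse_qs(urlsplit(url).query)
force_compressed = params.get("type", [None])[0] == "segmentation"
print(force_compressed)  # True
```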

Unfortunately that is not so easily discoverable, so I imagine most users would be unaware of that option.

As far as viewing all labels with a single color in a segmentation layer, that is not directly supported, but there are some workarounds: