Open tathey1 opened 3 years ago
Neuroglancer does not currently support maximum intensity projection. However, it does have experimental and limited support for volume rendering. See here for an example: https://github.com/google/neuroglancer/issues/204#issuecomment-611244824
The performance is also not great.
Improvements to the volume rendering support in Neuroglancer would be great; however, volume rendering involves a lot of implementation challenges. If you don't have prior experience with WebGL or computer graphics, you may not be able to make much progress unless you are very motivated to learn about it and get up to speed.
Neuroglancer has been incredibly useful for my colleagues and me in viewing whole-brain images with sparsely labeled neurons, and I am wondering if it could be made more suitable for neuron reconstruction/tracing. After observing several manual annotators trace neurons, I believe that the biggest change needed would be improving the 3D view of the image.
Currently, the image is rendered as 3 perpendicular "sections." What I am envisioning instead is a maximum intensity projection (MIP) of a subset of the image data, as is done in viewers like napari or Vaa3D. Given that there is already beautiful coordination between the Neuroglancer panels in terms of translation and zoom, it seems like the main thing remaining is to compute a MIP from the camera state in the 3D panel and display it.
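For reference, the core of a MIP is simple: for each pixel of the output image, take the maximum voxel value along the viewing ray. A minimal sketch below, assuming an axis-aligned projection along z over a nested-list volume; a real implementation in Neuroglancer would run on the GPU in a WebGL fragment shader and handle arbitrary camera orientations, so this only illustrates the math.

```python
# Hypothetical sketch of an axis-aligned maximum intensity projection (MIP).
# The volume is stored as nested lists indexed as volume[z][y][x]; for each
# (y, x) ray along z we keep the brightest voxel. This is NOT how Neuroglancer
# renders; it only shows the projection being discussed.

def mip_along_z(volume):
    """Project volume[z][y][x] to image[y][x] by taking the max over z."""
    depth = len(volume)
    height = len(volume[0])
    width = len(volume[0][0])
    image = [[0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            image[y][x] = max(volume[z][y][x] for z in range(depth))
    return image

# Tiny example: a 2x2x2 volume whose bright voxels sit at different depths.
vol = [
    [[1, 0], [0, 5]],
    [[3, 2], [4, 0]],
]
print(mip_along_z(vol))  # → [[3, 2], [4, 5]]
```

Restricting the projection to "a subset of the image data" would just mean limiting the range of z (a slab) before taking the max.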
I am wondering if this feature would be in line with the mission of the Neuroglancer team and, if so, whether there would be support from the team if I made a PR. Additionally, I am not very familiar with WebGL or computer graphics (yet :)), so are there any obstacles in this task that I am missing?
The most related existing issue I could find is this one, but it doesn't seem like any changes were contributed to the package.