google / neuroglancer

WebGL-based viewer for volumetric data
Apache License 2.0

Tutorial on using segmentation and annotation features? #242

Closed. manoaman closed this issue 3 years ago.

manoaman commented 3 years ago

Hi jbms,

I wanted to learn more about how to use the segmentation and annotation features of Neuroglancer. For example, drawing boundaries on an image and saving them for later use, or what double-clicking pixels on the segmentation tab does and represents. (Is it picking up the same RGB colors from the image?) Currently, it seems to load precomputed data, but I can't figure out how the other features work. Do you have a tutorial or a YouTube video that describes the general use of these features?

[Screenshot: Screen Shot 2020-09-11 at 4 20 50 PM]

Thank you, m

jbms commented 3 years ago

Unfortunately there is no tutorial or video available as far as I am aware.

You might find this example more instructive to play with: https://neuroglancer-demo.appspot.com/#!%7B%22dimensions%22:%7B%22x%22:%5B8e-9%2C%22m%22%5D%2C%22y%22:%5B8e-9%2C%22m%22%5D%2C%22z%22:%5B8e-9%2C%22m%22%5D%7D%2C%22position%22:%5B2980.186767578125%2C3153.929443359375%2C4045%5D%2C%22crossSectionScale%22:2.8863709892679617%2C%22projectionOrientation%22:%5B0.2747986614704132%2C0.7059817314147949%2C0.6514520049095154%2C-0.041058510541915894%5D%2C%22projectionScale%22:4593.980956070107%2C%22layers%22:%5B%7B%22type%22:%22image%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/flyem_fib-25/image%22%2C%22name%22:%22image%22%7D%2C%7B%22type%22:%22segmentation%22%2C%22source%22:%22precomputed://gs://neuroglancer-public-data/flyem_fib-25/ground_truth%22%2C%22tab%22:%22segments%22%2C%22segments%22:%5B%22158571%22%2C%2221894%22%2C%2222060%22%2C%2224436%22%2C%222515%22%5D%2C%22name%22:%22ground-truth%22%7D%5D%2C%22showSlices%22:false%2C%22selectedLayer%22:%7B%22layer%22:%22ground-truth%22%2C%22visible%22:true%7D%2C%22layout%22:%224panel%22%7D

When you double-click on a segmentation layer, the id the mouse is hovering over is "selected", and if the data source provides 3D meshes, those will be displayed for the object.

As for annotations, you can place points, lines, axis-aligned bounding boxes, and axis-aligned ellipsoids. To create an in-memory annotation layer, control+click the + button on the layer bar. Then select one of the 4 icons (which choose the annotation type) next to the color picker in the side bar to begin placing annotations with control+click. None of these annotation types are particularly well suited to drawing boundaries, though. Additionally, these annotations are stored only in memory, in the URL itself. If you want to save them, you will have to use the Python API (see python/neuroglancer/tool and python/examples) to create your own tool.
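
As a rough illustration (the coordinate space, layer name, and point coordinates below are made-up values, not from any real dataset), creating an annotation layer from the Python API looks something like this:

import neuroglancer

viewer = neuroglancer.Viewer()
with viewer.txn() as s:
    # Made-up 8 nm isotropic coordinate space
    s.dimensions = neuroglancer.CoordinateSpace(
        names=['x', 'y', 'z'], units='nm', scales=[8, 8, 8])
    s.layers.append(
        name='my-points',
        layer=neuroglancer.LocalAnnotationLayer(
            dimensions=s.dimensions,
            annotations=[
                neuroglancer.PointAnnotation(id='pt1', point=[3000, 3000, 4045]),
            ],
        ))
# Prints the URL of the locally served viewer; the annotations live in the viewer
# state, so your own tool can read them back and write them wherever you like.
print(viewer)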

manoaman commented 3 years ago

Thank you for the information and resources to look at, jbms.

I'd like to know in more detail what you mean by "data source provides 3D meshes". In the given example, the colored 3D mesh in the "ground-truth" layer is served from a precomputed dataset. What was the original file format used in this example before it was converted into a precomputed dataset? Both layers seem to be precomputed, but do they differ in format and in how they are generated? I suppose CloudVolume is used to convert?

Yes, I'd like to explore the annotation feature more, and thank you for pointing me to where (and how) to start.

m

jbms commented 3 years ago

For these datasets, I generated the precomputed format using Google-internal infrastructure. The image (electron microscopy) layer uses jpeg encoding, while the ground truth layer uses "compressed_segmentation" encoding. The meshes are generated automatically from the segmentation.

You can write precomputed or other formats supported by Neuroglancer using tensorstore (github.com/google/tensorstore) or cloudvolume. For generating meshes, you can use igneous (https://github.com/seung-lab/igneous), which uses cloudvolume.
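
As a rough sketch of the tensorstore route (the path, array size, resolution, and label values below are placeholders), writing a small precomputed segmentation looks something like this:

import numpy as np
import tensorstore as ts

# Placeholder labels standing in for a real segmentation
labels = np.random.randint(0, 10, size=(64, 64, 64), dtype=np.uint64)

dataset = ts.open({
    'driver': 'neuroglancer_precomputed',
    'kvstore': {'driver': 'file', 'path': '/tmp/example_segmentation'},
    'multiscale_metadata': {
        'type': 'segmentation',
        'data_type': 'uint64',
        'num_channels': 1,
    },
    'scale_metadata': {
        'size': [64, 64, 64],
        'resolution': [8, 8, 8],  # nm per voxel
        'chunk_size': [64, 64, 64],
        'encoding': 'compressed_segmentation',
        'compressed_segmentation_block_size': [8, 8, 8],
    },
    'create': True,
}).result()

# The precomputed driver exposes an (x, y, z, channel) domain; select channel 0 and write.
dataset[ts.d['channel'][0]].write(labels).result()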

austinhoag commented 3 years ago

Hi @manoaman, I have started putting together some video tutorials on how to use these features for the collaboration I work with at Princeton: https://brainmaps.princeton.edu/. These are public, so feel free to browse them. Note that some features (like exporting annotations to a CSV file) are not supported by the Google client. We have our own public fork of Neuroglancer (source code: https://github.com/braincogs/neuroglancer/), but it is privately hosted at the moment. We plan to host it publicly in the near future. In the meantime, if you are interested in using some of those features, you could clone our repo and then build and run it on your own machine/server.

jbms commented 3 years ago

@austinhoag Your video tutorials are really awesome!

manoaman commented 3 years ago

Hi @austinhoag, this is really nice. Thank you very much for sharing the awesome tutorials and repo. It is very helpful!

manoaman commented 3 years ago

Hi @austinhoag, I'm not sure if I should be asking here, but is the custom annotation volume used in this example, WHS_SD_rat_atlas_v3_annotation.tif, open to the public? I didn't know a TIFF file could contain segmentation ids, and I was wondering how you created and customized it in a single TIFF file. (Sorry, probably not related to this post, but I'm very curious to know how it was generated and whether it came from third-party software or an in-house program.) Thank you again for the very nice tutorial and instructions. -m

austinhoag commented 3 years ago

That TIFF file just contains a 3D array with integer values; it's not a segmentation volume yet. I converted it to a precomputed segmentation volume using cloudvolume in steps 2 and 3 of the notebook. That exact file is not publicly available, but it is derived from the public Waxholm Space rat brain atlas: https://www.nitrc.org/projects/whs-sd-atlas.
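
The conversion itself boils down to something like the following sketch (the output path and resolution here are placeholders, not the exact values from the notebook):

import numpy as np
import tifffile
from cloudvolume import CloudVolume

# Integer-valued atlas labels from the TIFF; reorder to x, y, z for CloudVolume
labels = np.transpose(tifffile.imread('WHS_SD_rat_atlas_v3_annotation.tif'), (2, 1, 0)).astype(np.uint32)

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type='segmentation',
    data_type='uint32',
    encoding='raw',
    resolution=[39000, 39000, 39000],  # nm per voxel -- placeholder, use the atlas's real voxel size
    voxel_offset=[0, 0, 0],
    volume_size=labels.shape,
    chunk_size=[64, 64, 64],
)
vol = CloudVolume('file:///tmp/whs_rat_atlas_segmentation', info=info)
vol.commit_info()
vol[:, :, :] = labels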

We did not create the original file; the organization did. If you are just looking to explore one of these segmentation volumes, a quick way is to use cloudvolume to download one of the many publicly available datasets. Many of the datasets linked in this repo's readme are very large to download, but a few smaller ones are here: https://neurodata.io/ocp/

For example, to download the z-brain atlas segmentation volume from this dataset: https://neurodata.io/data/zbrain_atlas/ (click the eye icon to open the dataset in Neuroglancer) with cloudvolume:

import numpy as np
import tifffile
from cloudvolume import CloudVolume

vol = CloudVolume('precomputed://https://zbrain-s3.neurodata.io/atlas_owen', parallel=True)
# Download to a numpy array (drop the channel axis and reorder to z, y, x)
data = np.transpose(vol[:][..., 0], (2, 1, 0))
# Save as a TIFF file (takes up ~1 GB, just FYI)
tifffile.imsave('./zbrain_annotation_volume.tif', data)

manoaman commented 3 years ago

@jbms I noticed that the anatomical region name associated with a segment appears at the lower left of the screen when the mouse hovers over it. Are the region names provided as part of an info file? I wasn't sure if this is a custom feature of the Neuroglancer fork which @austinhoag mentioned.

https://neurodata.io/data/zbrain_atlas/

[Screenshot: Screen Shot 2020-09-25 at 10 02 49 PM]

jbms commented 3 years ago

Showing the region names on the bottom left like that is a custom feature of that Neuroglancer fork. However, there is similar functionality in the upstream Neuroglancer.

See this FlyEM hemibrain example: https://hemibrain-dot-neuroglancer-demo.appspot.com/#!gs://neuroglancer-janelia-flyem-hemibrain/v1.0/neuroglancer_demo_states/base.json

The cell types and abbreviated region names are shown in the layer "tab" bar, and the full region names are shown in the "Selection" panel on the right.
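
For reference, one way the precomputed format can attach names to segment ids is the segment properties mechanism: the segmentation's top-level info references a "segment_properties" directory containing its own info file. A rough sketch (the ids, labels, and path below are made up) of writing one from Python:

import json
import os

segment_properties = {
    '@type': 'neuroglancer_segment_properties',
    'inline': {
        'ids': ['1', '2'],  # segment ids as strings -- made-up values
        'properties': [
            {'id': 'label', 'type': 'label', 'values': ['region A', 'region B']},
        ],
    },
}
os.makedirs('/tmp/my_segmentation/segment_properties', exist_ok=True)
with open('/tmp/my_segmentation/segment_properties/info', 'w') as f:
    json.dump(segment_properties, f)
# The segmentation's top-level info then also needs:  "segment_properties": "segment_properties"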