BICCN / cell-locator

manually align specimens to annotated 3D spaces
https://cell-locator.readthedocs.io

Export annotation as "segmentation mask" #129

Closed — jcfr closed this issue 1 year ago

jcfr commented 4 years ago

What's the problem this feature will solve?

As described in https://github.com/BICCN/cell-locator/issues/107, @danielsf worked on a script that re-creates the annotation from the JSON annotation file.

Since Slicer (and by extension CellLocator) is able to export such masks as nrrd or obj files (either directly or with a minimal amount of scripting), I anticipate this would streamline the overall process.
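
For context, here is a minimal sketch of the kind of Slicer scripting this could involve, assuming the annotation is already available in the scene as a segmentation node and that a reference volume defines the output geometry (the node names are placeholders; CellLocator's actual scene setup may differ):

```python
# Minimal sketch (Slicer Python console). Node names are placeholders, and the
# assumption is that the annotation exists as a segmentation node and that a
# reference volume defines the output labelmap geometry.
import slicer

segmentationNode = slicer.util.getNode("Annotation")        # hypothetical node name
referenceVolumeNode = slicer.util.getNode("CCF_Reference")  # hypothetical node name

# Rasterize the segmentation into a labelmap volume on the reference geometry.
labelmapNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLabelMapVolumeNode")
slicer.modules.segmentations.logic().ExportVisibleSegmentsToLabelmapNode(
    segmentationNode, labelmapNode, referenceVolumeNode)

# Write the mask to disk as NRRD.
slicer.util.saveNode(labelmapNode, "/path/to/annotation_mask.nrrd")
```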

Describe the solution you'd like

Along with the JSON file, save all relevant annotations.

Use cases will have to be documented to understand which format (nrrd, obj, ...) would be most suitable.

Alternative Solutions: N/A

Additional context: N/A

danielsf commented 4 years ago

We have not done a comprehensive survey of use cases, but my understanding is that most of them will involve comparing the annotation regions to data products that are stored as voxel grids over the whole brain. Thus, we need either to store the annotations as .nrrd files, or have readily available functionality to rapidly get .nrrd files from the stored annotations.

Storing the annotations directly as .nrrd implies some storage burden. Based on my prototype, storing a single annotation as an .nrrd grid the size of a 10 micron-resolution CCF takes up 1.2 MB. I don't know how many annotations we hope to ultimately be storing, but this could be prohibitive.
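
To make the storage point concrete, here is a sketch of writing a single mask as a compressed .nrrd with the pynrrd package (the 10 micron CCF grid dimensions, the uint8 data type, and gzip encoding are all assumptions here); the size on disk depends heavily on how sparse the annotation is:

```python
# Sketch of writing one annotation mask as a gzip-compressed NRRD.
# Assumes the 10 micron CCF grid is 1320 x 800 x 1140 voxels; the
# uncompressed uint8 array is ~1.2 GB, so compression does the heavy lifting.
import numpy as np
import nrrd  # pynrrd

ccf_shape = (1320, 800, 1140)        # assumed 10 micron CCF dimensions
mask = np.zeros(ccf_shape, dtype=np.uint8)
# ... fill in the annotated voxels, e.g. mask[600:650, 300:340, 500:560] = 1

header = {
    "encoding": "gzip",              # compressed payload
    "spacings": [10.0, 10.0, 10.0],  # assumed 10 micron voxel spacing
}
nrrd.write("annotation_mask.nrrd", mask, header)
```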

Whatever solution we arrive at, we are going to have to reconcile the coordinate systems used by CellLocator and the CCF. Consulting brain-map.org

http://help.brain-map.org/display/mousebrain/API#API-DownloadAtlas3-DReferenceModels

the CCF maps between x, y, z and anatomical coordinates as follows:

- +x = posterior (-x = anterior)
- +y = inferior (-y = superior)
- +z = right (-z = left)

Inspecting the current .json annotation files produced by CellLocator, it appears that CellLocator regards

- +x = right (-x = left)
- +y = anterior (-y = posterior)
- +z = superior (-z = inferior)

Whatever tool we use to generate the voxel grids will need to apply the transformation between these two coordinate systems.
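
As a concrete sketch, treating the transformation as a pure axis permutation and sign flip (any translation of the origin is ignored here and would still need to be established), it could look like this:

```python
# Sketch of the axis mapping between CellLocator coordinates
# (+x = right, +y = anterior, +z = superior) and CCF coordinates
# (+x = posterior, +y = inferior, +z = right). Any origin offset between
# the two frames is ignored in this sketch.
import numpy as np

# Rows express CCF axes in terms of CellLocator axes:
#   CCF x (posterior) = -CellLocator y (anterior)
#   CCF y (inferior)  = -CellLocator z (superior)
#   CCF z (right)     =  CellLocator x (right)
CELL_LOCATOR_TO_CCF = np.array([
    [0, -1,  0],
    [0,  0, -1],
    [1,  0,  0],
])

def cell_locator_to_ccf(xyz):
    """Map a CellLocator point onto CCF axes, up to an origin offset."""
    return CELL_LOCATOR_TO_CCF @ np.asarray(xyz)

def ccf_to_cell_locator(xyz):
    """Inverse mapping (the matrix is orthogonal, so its transpose inverts it)."""
    return CELL_LOCATOR_TO_CCF.T @ np.asarray(xyz)
```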

Incidentally: the voxel grids used to store the CCF reference grids (an example of which is downloadable from the brain-map.org link above) are indexed so that the 0th index is z, the 1st index is y, and the 2nd index is x, i.e.

`voxel_grid[i][j][k]`

will give the voxel at `(xmin + k*dx, ymin + j*dy, zmin + i*dz)` in physical coordinates.
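
A small sketch of that index convention, with the spacing and origin values as placeholders (for the 10 micron CCF they would presumably be 10 microns and zero, but that is an assumption):

```python
# Sketch of converting between (i, j, k) voxel indices in the CCF reference
# grids (indexed z, y, x) and physical coordinates. Spacing and origin
# values below are placeholders.
dx = dy = dz = 10.0          # assumed 10 micron voxel spacing
xmin = ymin = zmin = 0.0     # assumed grid origin

def index_to_physical(i, j, k):
    """voxel_grid[i][j][k] -> (x, y, z) in physical coordinates."""
    return (xmin + k * dx, ymin + j * dy, zmin + i * dz)

def physical_to_index(x, y, z):
    """(x, y, z) in physical coordinates -> (i, j, k) voxel indices."""
    return (int(round((z - zmin) / dz)),
            int(round((y - ymin) / dy)),
            int(round((x - xmin) / dx)))
```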