brainhack-boston / brainhack-boston.github.io

Brainhack Boston
https://brainhack-boston.github.io
Apache License 2.0

[PROJECT] Visualize tractography streamlines and perform annotation with Neuroglancer + TrackVis #43

Closed kabilar closed 4 months ago

kabilar commented 8 months ago

Goal

Developers will be provided with the following datasets:

  1. Diffusion MRI (NIfTI file format)
  2. Tractography streamlines (trk file format)
neurolabusc commented 8 months ago

@kabilar can I suggest you use the streamline loaders from NiiVue? This will let you leverage proven JavaScript loaders, and using a common library means that development on either project will help the other. NiiVue uses the permissive BSD 2-Clause license and has been adopted by the current and upcoming AFNI, brainLife (ezBIDS), FSL, FreeSurfer, OpenNeuro, and NRDG tractoscope projects.

The one challenge is that NiiVue is just now migrating to TypeScript, so you will probably want to use nvmesh-loaders.ts.

NiiVue already supports several tractography formats: TCK, TRK, TRX, VTK, AFNI .niml.tract

While TRK can specify per-vertex and per-streamline values, the new (and faster) community TRX format also adds support for per-bundle values, which you can see in this live demo as well as tractoscope.
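For anyone evaluating these formats, TRK is simple enough to read and write directly. Below is a minimal sketch of a TRK v2 round trip using only the Python standard library, assuming no per-vertex scalars or per-streamline properties; the field offsets follow the published TrackVis header layout (1000-byte header, then `int32` point count plus `float32` xyz triples per streamline).

```python
import struct

HDR_SIZE = 1000  # TrackVis .trk header is a fixed 1000 bytes

def write_trk(path, streamlines, voxel_size=(1.0, 1.0, 1.0)):
    """Write streamlines (lists of (x, y, z) tuples) to a minimal TRK v2 file."""
    hdr = bytearray(HDR_SIZE)
    struct.pack_into('6s', hdr, 0, b'TRACK')            # id_string
    struct.pack_into('<3h', hdr, 6, 1, 1, 1)            # dim (placeholder)
    struct.pack_into('<3f', hdr, 12, *voxel_size)       # voxel_size
    struct.pack_into('<i', hdr, 988, len(streamlines))  # n_count
    struct.pack_into('<i', hdr, 992, 2)                 # version
    struct.pack_into('<i', hdr, 996, HDR_SIZE)          # hdr_size
    with open(path, 'wb') as f:
        f.write(hdr)
        for sl in streamlines:
            f.write(struct.pack('<i', len(sl)))
            for x, y, z in sl:
                f.write(struct.pack('<3f', x, y, z))

def read_trk(path):
    """Read back the streamlines written by write_trk (no scalars/properties)."""
    with open(path, 'rb') as f:
        hdr = f.read(HDR_SIZE)
        assert hdr[:5] == b'TRACK', 'not a TRK file'
        n_count, = struct.unpack_from('<i', hdr, 988)
        streamlines = []
        for _ in range(n_count):
            n_pts, = struct.unpack('<i', f.read(4))
            pts = struct.unpack('<%df' % (3 * n_pts), f.read(12 * n_pts))
            streamlines.append([pts[i:i + 3] for i in range(0, len(pts), 3)])
    return streamlines
```

A production loader should also honor `n_scalars`, `n_properties`, and the `vox_to_ras` affine; this sketch ignores them to keep the byte layout visible.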

While I would suggest using the same library as NiiVue, you can also try this minimal JavaScript library.

Daniel Haehn is also going to talk about his TRAKO format at the brainhack, which achieves excellent compression and might be an alternative to TRX and TRK, in particular for situations with limited internet bandwidth.

ayendiki commented 8 months ago

It may be helpful to get everyone on the same page by adding the target resolution and FOV of these projects, e.g., 15 μm iso whole human hemi for HiPCT, 500 μm iso whole human hemi for dMRI.

neurolabusc commented 8 months ago

@ayendiki streamline formats like TRK report vertex positions as floats, so they are not inherently limited to the resolution of the voxelwise data they are displayed with.

A tool that creates streamlines may use a voxel grid to generate them, and might also decide to simplify streamlines by merging straight line segments. Beyond the core resolution, the number of fibers generated will also be an issue. Some tools might want to cull redundant streamlines, while others will retain them.
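That merging step is easy to sketch: drop interior vertices where the incoming and outgoing segments are nearly collinear. A minimal pure-Python version (the angle tolerance is an illustrative parameter, not taken from any particular tool):

```python
import math

def simplify(points, angle_tol_deg=1.0):
    """Drop interior vertices where consecutive segments are nearly collinear."""
    if len(points) < 3:
        return list(points)
    cos_tol = math.cos(math.radians(angle_tol_deg))
    out = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        u = tuple(c - p for p, c in zip(prev, cur))   # incoming segment
        v = tuple(n - c for c, n in zip(cur, nxt))    # outgoing segment
        nu = math.sqrt(sum(x * x for x in u)) or 1e-12
        nv = math.sqrt(sum(x * x for x in v)) or 1e-12
        cos_ang = sum(a * b for a, b in zip(u, v)) / (nu * nv)
        if cos_ang < cos_tol:  # direction changes: keep the vertex
            out.append(cur)
    out.append(points[-1])
    return out
```

A straight run of vertices collapses to its two endpoints, while corners survive, so vertex positions stay exact rather than being snapped to any grid.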

Do you have existing TRK files you want to provide as sample data so that the team can evaluate the performance of different formats? NiiVue provides sample streamlines in TCK, TRK, and VTK format for regression testing and TRX for the live demos, but these are all intentionally minimal to keep the distribution small.

ayendiki commented 8 months ago

The target image data are on DANDI; we'll add the streamlines by Monday. I should also add that our main use case is editing/annotation of the streamlines, and we want to reimplement as little of this functionality from scratch as possible.

balbasty commented 8 months ago

Hi @neurolabusc

We would like to visualize the tracts and micro-scale imaging volumes in the same space, so that tract annotations can be driven by the microscopy information. This project is highly related to #42. We're kind of stuck with neuroglancer, as it is becoming a de facto standard for navigating these very large tera-scale volumes.

We have no experience with JavaScript/TypeScript in our group, so our early "hacks" aimed to leverage the skeleton and mesh formats that neuroglancer already knows how to display. We've started a playground in this repo: https://github.com/balbasty/dandi-io/tree/registration (excuse the mis-named branch). Specifically:

Since we do not directly manipulate the OpenGL code, any culling/filtering must be performed at the file-creation stage. In this project, we're hoping to extract annotation data from neuroglancer, push it back to the webserver to filter tracts, and build a new filtered neuroglancer file.
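The server-side filtering step could be as simple as keeping streamlines that intersect an annotated region. A sketch, assuming the annotation arrives as an axis-aligned bounding box in the tracts' coordinate space (the box coordinates and `keep_inside` flag are hypothetical parameters, not anything neuroglancer exports directly):

```python
def passes_through(streamline, lo, hi):
    """True if any vertex of the streamline falls inside the box [lo, hi]."""
    return any(all(l <= c <= h for c, l, h in zip(p, lo, hi))
               for p in streamline)

def filter_tracts(streamlines, lo, hi, keep_inside=True):
    """Keep streamlines that do (or, if keep_inside=False, do not) hit the box."""
    return [sl for sl in streamlines
            if passes_through(sl, lo, hi) == keep_inside]
```

The surviving subset would then be re-encoded into a new neuroglancer skeleton source; per-vertex tests like this are crude (a streamline can cross a small box between samples) but cheap enough for interactive iteration.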

I am sure that something less hacky could be done if we delved directly into the neuroglancer code, but our lack of experience with TypeScript led us to avoid that for now. How difficult do you think it would be to implement a NiiVue-based loader directly in neuroglancer?

Looking forward to the hackathon!

neurolabusc commented 8 months ago

@balbasty I think neuroglancer is the obvious choice for ultra-high resolution images. You could use either JavaScript or Python (e.g. dipy and the trx converter) to handle existing formats. The simplest approach seems to be a small converter that translates existing diffusion streamline formats (TCK, TRK, TRX) to neuroglancer's existing encoded skeleton file format.
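The target encoding is compact: the precomputed skeleton blob is little-endian vertex and edge counts (`uint32`), then `float32` xyz positions, then `uint32` vertex-index pairs. For a streamline, the edges simply chain consecutive vertices. A standard-library sketch of that last conversion step (single-skeleton blob, no vertex attributes):

```python
import struct

def streamline_to_skeleton(vertices):
    """Encode one polyline as a neuroglancer precomputed skeleton blob."""
    n = len(vertices)
    edges = [(i, i + 1) for i in range(n - 1)]  # chain consecutive vertices
    blob = struct.pack('<II', n, len(edges))    # vertex count, edge count
    for v in vertices:
        blob += struct.pack('<3f', *v)          # float32 xyz positions
    for a, b in edges:
        blob += struct.pack('<2I', a, b)        # uint32 vertex-index pairs
    return blob
```

A full converter would additionally write the `info` JSON and a segment-id index alongside the blobs, and could encode one skeleton per bundle rather than per streamline; positions must also be expressed in the coordinate space the `info` transform expects.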

balbasty commented 8 months ago

I’ve set up a repo with the “tracts in neuroglancer” code here: https://github.com/balbasty/ngtracts Feel free to try the notebook; it reads the data from DANDI, so it should work for everyone.

There’s a conda.yaml to make a compatible environment.

kabilar commented 6 months ago

cc @MikeSchutzman

kabilar commented 4 months ago

Thanks team. Closing this issue as we have some prototypes of this solution and are preparing for the upcoming hackathon.