SlicerDMRI / SupWMA

[MedIA 2023] "Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions", Medical Image Analysis
https://supwma.github.io

Are the output clustering results in the atlas space? #1

Open fullcream9 opened 2 years ago

fullcream9 commented 2 years ago

I understand that the subject-specific tractography data are transformed into the atlas space and that clustering is then performed there. So I wonder whether the output clustering results are in the atlas space?

If so, how can we transform them back into subject-specific space for further analysis?

We used SupWMA on HCP 7T data and merged all 198 clusters into one file. The result looks good in 3D Slicer 👍👍, but after converting the vtp file to a tck file and loading it in MRView, the tractogram is not aligned with the subject-specific b0_mean.

(screenshot: merged clusters overlaid on the subject-specific b0_mean in MRView, showing the misalignment)

Does the 'Tractography Display' module in 3D Slicer coregister the tractogram with the image automatically, even though they are not actually in the same space?

Thank you!

zhangfanmark commented 2 years ago

Hi!

Glad to see the code works for you and the results look good in Slicer! For your questions:

  1. For transferring the clusters back to the diffusion space, you can use "wm_harden_transform.py" provided in the WMA package. You should have a transformation file saved from registering the subject-specific data to the atlas space, so you can apply the inverse transform to the output clusters using that script.

  2. From the figure shown, this should be a data orientation issue. It is likely that Slicer uses RAS but MRtrix uses LPS. I think an easy fix is to read the vtk file in Python, flip the sign of the first and second dimensions of each point's coordinates, and save a new file. Here is example code to do so (a rough sketch of the idea is also included below). The same code should be used with a change at L520, using refpoint = (-point[0], -point[1], point[2])
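As a minimal illustration of that sign flip (not the linked script; the filenames are placeholders and the merged clusters are assumed to be a .vtp file readable with VTK's Python bindings):

```python
import vtk

# Read the merged cluster file (placeholder name; use vtkPolyDataReader for legacy .vtk files).
reader = vtk.vtkXMLPolyDataReader()
reader.SetFileName("merged_clusters.vtp")
reader.Update()
polydata = reader.GetOutput()

# Flip the sign of the first and second coordinates of every point (RAS <-> LPS).
points = polydata.GetPoints()
for i in range(points.GetNumberOfPoints()):
    x, y, z = points.GetPoint(i)
    points.SetPoint(i, -x, -y, z)

# Write the converted polydata to a new file (placeholder name).
writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName("merged_clusters_lps.vtp")
writer.SetInputData(polydata)
writer.Write()
```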

Please let me know how it works.

Regards, Fan

fullcream9 commented 2 years ago

Thank you for your patient answer!

Based on your guidance, I have successfully transformed the clusters back to the diffusion space using "wm_harden_transform.py". It helped a lot.

For point 2, your reminder made me realize the orientation issue. I probably don't need to change the orientation in my case. If any questions come up, I will give you feedback.

Best, Panshi.

houxiaolinzhengdan commented 2 years ago

I have solved the problem, thank you very much! But I have another question: when I imported the segmented nerve fibers into 3D Slicer, the spatial position was incorrect. Is there a solution for this problem? (Screenshot from 2022-05-04 11-22-06)