daducci / COMMIT

Linear framework to combine tractography and tissue micro-structure estimation with diffusion MRI

COMMIT2 #92

Closed · sarwart closed this issue 3 years ago

sarwart commented 3 years ago

Hi!

Recent paper "A new method for accurate in vivo mapping of human brain connections using microstructural and anatomical information" (https://advances.sciencemag.org/content/6/31/eaba8245) reports under "Code and data availability" section that the code of COMMIT2 is also available in this repository. Can you please guide how to use COMMIT2?

Tabinda

daducci commented 3 years ago

Hi Tabinda, yes, the code has been on the repository for a while, but we wrote a tutorial only recently. You can find it on the project's Wiki at this link: https://github.com/daducci/COMMIT/wiki/COMMIT2. The reason for this delay is that we haven't figured out yet how to share the HCP data we actually used in the paper (we are exploring some alternatives); thus, for the time being, we just uploaded a small demo dataset and did not publicize this demo very much.

You can give it a try, and let us know if you need help.

Best, Ale

sarwart commented 3 years ago


Thanks Alessandro. I have tried this demo and it works fine. I will contact you again when I run this on my dataset.

Thanks again

Tabinda

sarwart commented 3 years ago

Hi Alessandro,

I was trying to use the provided demo on an HCP dataset. I have a tractogram of 2 million streamlines and I am interested in extracting the weight of each streamline using COMMIT and COMMIT2. With the provided demo, the generated streamline-weights file contains weights for only around 5000 streamlines. Can you please point out what I am doing wrong? I just replaced the demo data files with the HCP data; the tractogram is in .tck format. How can I generate the weights for all 2 million streamlines?

For COMMIT2, I received a size-mismatch error at the line `group_w[k] = np.sqrt(group_size[k]) / ( np.linalg.norm(x_nnls[group_idx[k]]) + 1e-12 )`; specifically: `IndexError: index 389538 is out of bounds for axis 0 with size 389538`

It might be linked to the COMMIT configuration. Can you please help me run COMMIT and COMMIT2 successfully?
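For context on the IndexError: the quoted line builds one adaptive weight per bundle from the NNLS solution, and it breaks when `group_idx` still refers to streamlines of the original tractogram while `x_nnls` only covers the (fewer) streamlines kept by the dictionary. A minimal NumPy sketch with toy, hypothetical sizes:

```python
import numpy as np

# x_nnls holds one weight per streamline that SURVIVED the dictionary's
# filtering, not one per streamline in the original tractogram.
x_nnls = np.array([0.3, 0.0, 0.1, 0.2, 0.0, 0.4])   # 6 kept streamlines

# group_idx[k] lists the streamline indices of bundle k; these indices
# must refer to the filtered tractogram, otherwise they run past x_nnls.
group_idx  = [np.array([0, 1, 2]), np.array([3, 4, 5])]
group_size = np.array([idx.size for idx in group_idx])

# This check fails exactly when the reported IndexError would occur:
assert max(idx.max() for idx in group_idx) < x_nnls.size

# Adaptive group weights, as in the line quoted above
group_w = np.empty(len(group_idx))
for k in range(len(group_idx)):
    group_w[k] = np.sqrt(group_size[k]) / (np.linalg.norm(x_nnls[group_idx[k]]) + 1e-12)
```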

Tabinda

MarioOcampo commented 3 years ago

Hello Tabinda,

Can I ask you to share with us the terminal output you get from the commands `trk2dictionary.run`, `mit.load_data`, `mit.load_dictionary`, and `mit.save_results`?

Best, Mario

sarwart commented 3 years ago
import commit
from commit import trk2dictionary

commit.core.setup()

trk2dictionary.run(
    filename_tractogram = '/home/ubuntu/storage/dmri/streamlines_21_connecting.tck',
    filename_mask       = '/home/ubuntu/storage/dmri/mask.nii.gz',
    fiber_shift         = 0.5
)

-> Creating the dictionary from tractogram:

  • Configuration:

    • Segment position = COMPUTE INTERSECTIONS
    • Fiber shift X = 0.500 (voxel-size units)
    • Fiber shift Y = 0.500 (voxel-size units)
    • Fiber shift Z = 0.500 (voxel-size units)
    • Points to skip = 0
    • Min segment len = 0.001 mm
    • Min fiber len = 0.00 mm
    • Max fiber len = 250.00 mm
    • Do not blur fibers
    • Output written to "/home/ubuntu/storage/dmri/COMMIT"
  • Loading data:

    • Tractogram
      • geometry taken from "/home/ubuntu/storage/dmri/mask.nii.gz"
      • 145 x 174 x 145
      • 1.2500 x 1.2500 x 1.2500
      • 2000000 fibers
    • Filtering mask
      • 145 x 174 x 145
      • 1.2500 x 1.2500 x 1.2500
    • No dataset specified for EC compartments

    [ 773.9 seconds ]

# convert the bvals/bvecs pair to a single scheme file
import amico
amico.util.fsl2scheme( '/home/ubuntu/storage/dmri/bvals', '/home/ubuntu/storage/dmri/bvecs', '/home/ubuntu/storage/dmri/DWI.scheme' )

# load the data
mit = commit.Evaluation( '.', '.' )
mit.load_data( '/home/ubuntu/storage/dmri/data.nii.gz', '/home/ubuntu/storage/dmri/DWI.scheme' )

#use a forward-model with 1 Stick for the streamlines and 2 Balls for all the rest
mit.set_model( 'StickZeppelinBall' )
d_par   = 1.7E-3             # Parallel diffusivity [mm^2/s]
d_perps = [ ]                # Perpendicular diffusivity(s) [mm^2/s]
d_isos  = [ 1.7E-3, 3.0E-3 ] # Isotropic diffusivity(s) [mm^2/s]
mit.model.set( d_par, d_perps, d_isos )

mit.generate_kernels( regenerate=True )
mit.load_kernels()

#create the sparse data structures to handle the matrix A
mit.load_dictionary( 'COMMIT' )
mit.set_threads(1)
mit.build_operator()

#perform the fit
mit.fit( tol_fun=1e-3, max_iter=1000, verbose=False )
mit.save_results( path_suffix="_COMMIT1" )

-> Writing scheme file to [ /home/ubuntu/storage/dmri/DWI.scheme ]

-> Loading data:

  • DWI signal:
    • dim : 145 x 174 x 145 x 288
    • pixdim : 1.250 x 1.250 x 1.250
    • values : min=0.00, max=34633.90, mean=353.21
  • Acquisition scheme:
    • 288 samples, 19 shells
    • 0 @ b=0, 18 @ b=5.0, 32 @ b=1000.0, 19 @ b=2000.0, 15 @ b=3005.0, 33 @ b=995.0, 17 @ b=2995.0, 26 @ b=2005.0, 9 @ b=990.0, 22 @ b=1990.0, 15 @ b=1005.0, 15 @ b=1995.0, 19 @ b=2990.0, 8 @ b=3010.0, 5 @ b=2010.0, 14 @ b=2985.0, 3 @ b=1985.0, 1 @ b=1010.0, 16 @ b=3000.0, 1 @ b=2980.0 [ 44.0 seconds ]

-> Preprocessing: [ WARNING ] There are no b0 volumes for normalization [ 0.0 seconds ]

-> Simulating with "Stick-Zeppelin-Ball" model:

[ 22.3 seconds ]

-> Resampling LUT for subject ".":

  • Keeping all b0 volume(s)...

    [ OK ]

  • Normalizing... [ OK ] [ 4.8 seconds ]

-> Loading the dictionary: [ WARNING ] Dictionary does not have the same geometry as the dataset

  • Segments from the tracts... [ 283522 fibers and 16664340 segments ]
  • Segments from the peaks... [ 0 segments ]
  • Isotropic contributions... [ 53008 voxels ]
  • Post-processing... [ OK ] [ 6.5 seconds ]

-> Distributing workload to different threads:

  • Number of threads : 1
  • A operator... [ OK ]
  • A' operator... [ OK ] [ 0.2 seconds ]

-> Building linear operator A: [ 9.5 seconds ]

-> Fit model:

[ 00h 10m 42s ]

-> Saving results to "Results_StickZeppelinBall_COMMIT1/*":

  • Fitting errors:
    • RMSE... [ 419.879 +/- 325.409 ]
    • NRMSE... [ 0.489 +/- 0.237 ]
  • Voxelwise contributions:
    • Intra-axonal... [ OK ]
    • Extra-axonal... [ OK ]
    • Isotropic... [ OK ]
  • Configuration and results:
    • streamline_weights.txt... [ OK ]
    • results.pickle... [ OK ] [ 5.0 seconds ]
daducci commented 3 years ago

Hi Tabinda,

that is really strange, as we have successfully used COMMIT and COMMIT2 on HCP data before. Are you using the standard preprocessed data, or have you applied some other processing to it? I see a few warnings here and there, e.g. the geometry mismatch between dictionary and dataset, and the missing b=0 volumes, which may be the cause of your problems (your RMSE and NRMSE values are very high).

Let us know if this solves your problem. Ale
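One warning in the log above ("There are no b0 volumes for normalization") comes from unrounded b-values: the scheme lists 19 near-identical shells, and b=5 is not recognized as b=0. AMICO's `fsl2scheme` accepts a `bStep` argument that rounds the b-values so such shells merge; a plain-NumPy sketch of that rounding, using the b-values from the scheme above:

```python
import numpy as np

# The shell b-values reported in the acquisition scheme above
bvals = np.array([5, 990, 995, 1000, 1005, 1010, 1985, 1990, 1995,
                  2000, 2005, 2010, 2980, 2985, 2990, 2995, 3000, 3005, 3010])

# Rounding to the nearest multiple of bStep (here 100 s/mm^2) merges the
# 19 shells into three, and the b=5 volumes are treated as b=0
bStep = 100
rounded = np.round(bvals / bStep) * bStep
shells = np.unique(rounded)
```

With the shells merged this way, COMMIT's preprocessing can find the b=0 volumes it needs for normalization, which should also bring the NRMSE down.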

sarwart commented 3 years ago

Hi Ale,

Adjusting the directory path and using bStep has solved the problem. I have successfully estimated streamline weights using both COMMIT and COMMIT2. Thanks

To use COMMIT2 on other datasets, I just need to adjust the value of reg_lambda, am I right? Is there an optimal value, or a range of values, for this parameter? I believe there is no need to change the diffusivity parameters or the model; please correct me if I am wrong.

Thanks, Tabinda

daducci commented 3 years ago

Good to hear! And yes, at the moment there is no closed-form solution for setting the lambda value; it needs to be estimated empirically. Indeed, I would not change the model parameters, e.g. the diffusivities; I would only adjust the lambda.
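To see why lambda must be tuned empirically, here is a toy illustration (NOT COMMIT's API) of the group-wise shrinkage that underlies bundle regularisation of the kind COMMIT2 uses: as reg_lambda grows, more bundles are suppressed entirely, so the value trades off data fit against sparsity.

```python
import numpy as np

def group_prox(x, groups, w, lam):
    """Group soft-thresholding: shrink each group of coefficients toward
    zero, and zero the group out if its norm falls below lam * w[k]."""
    out = x.copy()
    for k, idx in enumerate(groups):
        nrm = np.linalg.norm(x[idx])
        out[idx] = max(0.0, 1.0 - lam * w[k] / (nrm + 1e-12)) * x[idx]
    return out

x      = np.array([0.05, 0.02, 0.8, 0.9, 0.3, 0.25])           # toy streamline weights
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])] # three toy "bundles"
w      = np.ones(3)

# Count how many bundles survive as lambda grows
surviving = [int(sum(np.linalg.norm(group_prox(x, groups, w, lam)[g]) > 0
                     for g in groups))
             for lam in (0.01, 0.1, 1.0)]
print(surviving)   # -> [3, 2, 1]
```

The practical upshot matches the advice above: sweep a few lambda values, inspect which bundles are kept and how the fitting error evolves, and pick the value that best balances the two for your dataset.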