daducci / COMMIT

Linear framework to combine tractography and tissue micro-structure estimation with diffusion MRI

Building operator A error: no EC compartments #124

Closed: SebastienDam01 closed this issue 1 year ago

SebastienDam01 commented 1 year ago

Hello,

I am trying to run COMMIT2 on my data, preprocessed with QSIPrep, using a tractogram obtained with MRtrix3. I followed the tutorial but got an error when building the linear operator.

The tractogram was obtained using the iFOD2 algorithm, based on anatomically constrained tractography.

Code input

import os
import commit
from commit import trk2dictionary
import amico 
import numpy as np

# Directories
derivatives_dir="path/to/my/derivatives"
dwi_dir=os.path.join(derivatives_dir,"qsiprep")
fod_dir=os.path.join(derivatives_dir,"fod")
tracto_dir=os.path.join(derivatives_dir,"tractography")
connectomes_dir=os.path.join(derivatives_dir,"connectome")

sub="sub-016"

# Input files
dwi_path=os.path.join(dwi_dir,sub,f"dwi/{sub}_space-T1w_desc-preproc_dwi.nii.gz")
bval_path=os.path.join(dwi_dir,sub,f"dwi/{sub}_space-T1w_desc-preproc_dwi.bval")
bvec_path=os.path.join(dwi_dir,sub,f"dwi/{sub}_space-T1w_desc-preproc_dwi.bvec")
tracto_path=os.path.join(tracto_dir,sub,"tracks_10M.tck")

os.chdir(os.path.join(connectomes_dir,sub))

# Get only connecting fibers
if not os.path.isfile(os.path.join(connectomes_dir,sub,f"{sub}_parcels.csv")) :
    os.system( f'tck2connectome -force -nthreads 0 -assignment_radial_search 2 \
            -out_assignments fibers_assignment.txt \
            {tracto_path} \
            functional/hcp_mmp1_in_T1w_dwi.nii.gz {sub}_parcels.csv' )

if not os.path.isdir(os.path.join(connectomes_dir,sub,"bundles")) :
    os.mkdir(os.path.join(connectomes_dir,sub,"bundles"))
    os.system( f'connectome2tck -force -nthreads 0 -exclusive -files per_edge -keep_self {tracto_path} assignments_{sub}_parcels.csv bundles/bundle_' )

if not os.path.isfile( f'{tracto_dir}/{sub}/fibers_connecting.tck' ) :
    C = np.loadtxt( f'{sub}_parcels.csv', delimiter=',' ) # NB: change 'delimiter' to suit your needs
    CMD = 'tckedit -force -nthreads 0'
    for i in range(C.shape[0]):
        CMD_i = 'tckedit -force -nthreads 0'
        for j in range(i,C.shape[0]):
            if C[i,j] > 0 :
                CMD_i += ' bundles/bundle_%d-%d.tck' %(i+1,j+1)
        os.system( CMD_i + ' bundles/fibers_connecting_%d.tck' % (i+1) )
        CMD += ' bundles/fibers_connecting_%d.tck' %(i+1)
    os.system( CMD + f' {tracto_dir}/{sub}/fibers_connecting.tck' )

# Importing the tractogram
trk2dictionary.run(
    filename_tractogram = os.path.join(tracto_dir,sub,"fibers_connecting.tck"),
    filename_mask       = os.path.join(dwi_dir,sub,f"anat/{sub}_label-WM_probseg_flirt_bin.nii.gz"),
    verbose=3
)

# Load the diffusion data
amico.util.fsl2scheme(bval_path, bvec_path, 'DWI.scheme')
mit = commit.Evaluation()
mit.load_data(dwi_path, 'DWI.scheme', b0_thr=5)

# Set the forward model
mit.set_model( 'StickZeppelinBall' )
d_par       = 1.7E-3             # Parallel diffusivity [mm^2/s]
d_perps_zep = [ 0.51E-3 ]        # Perpendicular diffusivity(s) [mm^2/s]
d_isos      = [ 1.7E-3, 3.0E-3 ] # Isotropic diffusivity(s) [mm^2/s]
mit.model.set( d_par, d_perps_zep, d_isos )
mit.generate_kernels( regenerate=True )
mit.load_kernels()

# Load the sparse data-structure and build A
mit.load_dictionary(os.path.join(tracto_dir,sub,"COMMIT"))
mit.set_threads(1)
mit.build_operator()

# perform the fit
mit.fit( tol_fun=1e-3, max_iter=1000, verbose=True )
mit.save_results( path_suffix="_COMMIT1" )

# Retrieve the streamline contributions estimated by COMMIT for later use in COMMIT2:
x_nnls, _, _ = mit.get_coeffs( get_normalized=False )

Terminal output

-> Creating the dictionary from tractogram:

   * Configuration:
    - Segment position = COMPUTE INTERSECTIONS
    - Coordinates shift in X = 0.000 (voxel-size units)
    - Coordinates shift in Y = 0.000 (voxel-size units)
    - Coordinates shift in Z = 0.000 (voxel-size units)
    - Min segment len  = 0.001 mm
    - Min streamline len    = 0.00 mm
    - Max streamline len    = 250.00 mm
    - Do not blur streamlines
    - Output written to "/derivatives/tractography/sub-015/COMMIT"
    - Temporary files written to "/derivatives/tractography/sub-015/COMMIT/temp"
    - Using parallel computation with 20 threads

   * Loading data:
    - Tractogram
        - geometry taken from "/derivatives/qsiprep/sub-015/anat/sub-015_label-WM_probseg_flirt_bin.nii.gz"
        - 130 x 155 x 131
        - 1.5000 x 1.5000 x 1.5000
        - 5863925 streamlines
    - Filtering mask
        - 130 x 155 x 131
        - 1.5000 x 1.5000 x 1.5000
    - No dataset specified for EC compartments

   * Exporting IC compartments:
     [ 5673556 streamlines kept, 514133882 segments in total ]                    

   * Exporting EC compartments:
     [ 0 voxels, 0 segments ]

   [ 281.7 seconds ]

-> Writing scheme file to [ DWI.scheme ]

-> Loading data:
    * Acquisition scheme:
        - diffusion-weighted signal
        - 199 samples, 13 shells
        - 14 @ b=0, 23 @ b=1490.0, 19 @ b=2990.0, 20 @ b=1495.0, 11 @ b=3005.0, 31 @ b=1500.0, 16 @ b=2985.0, 16 @ b=2995.0, 17 @ b=1505.0, 24 @ b=3000.0, 4 @ b=2980.0, 1 @ b=1485.0, 2 @ b=3010.0, 1 @ b=1510.0
    * Signal dataset:
        - dim    : 130 x 155 x 131 x 199
        - pixdim : 1.500 x 1.500 x 1.500
        - values : min=-2287.08, max=39701.55, mean=362.34
   [ 19.6 seconds ]

-> Preprocessing:
    * Normalizing to b0... [ min=-2071.32, max=4038.11, mean=0.42 ]
    * Keeping all b0 volume(s)... [ 130 x 155 x 131 x 199 ]
   [ 1.2 seconds ]

-> Simulating with "Stick-Zeppelin-Ball" model:
   [ 0.5 seconds ]

-> Resampling LUT for subject ".":
    * Keeping all b0 volume(s)...
          [ OK ]
    * Normalizing... [ OK ]
   [ 0.1 seconds ]

-> Loading the dictionary:
    * Segments from the tracts... [ 5673556 fibers and 514133882 segments ]
    * Segments from the peaks...  [ 0 segments ]
    * Isotropic contributions...  [ 157794 voxels ]
    * Post-processing...          [ OK ]
   [ 161.9 seconds ]

-> Distributing workload to different threads:
    * Number of threads : 1
    * A operator...  [ OK ]
    * A' operator... [ OK ]
   [ 0.1 seconds ]
[ ERROR ] The selected model has EC compartments, but no peaks have been provided; check your data

This error originates from the file core.pyx, line 691, where the following condition is met: if self.DICTIONARY['EC']['nE'] <= 0 and self.KERNELS['wmh'].shape[0] > 0 :

Since I have no EC compartments, self.DICTIONARY['EC']['nE'] is set to 0, so I do not understand why this check is triggered, but I may be missing something here.
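For context, that condition can be paraphrased as follows (an illustrative Python sketch, not the library's actual code; the helper name is made up). It shows that the check fires not because of the dictionary alone, but because EC (Zeppelin) kernels were generated while the dictionary contains no EC segments:

```python
# Illustrative paraphrase of the core.pyx check (hypothetical helper, not
# COMMIT's API): error out when the dictionary has no EC segments but the
# generated kernels do contain EC (Zeppelin) responses.
def check_ec_consistency(n_ec_segments, n_ec_kernels):
    if n_ec_segments <= 0 and n_ec_kernels > 0:
        raise ValueError("The selected model has EC compartments, "
                         "but no peaks have been provided")

# With d_perps_zep = [0.51E-3], one EC kernel exists, so the check fails:
try:
    check_ec_consistency(0, len([0.51e-3]))
    raised = False
except ValueError:
    raised = True
print(raised)  # True

# With d_perps_zep = [], no EC kernel is generated and the check passes:
check_ec_consistency(0, len([]))
```

In other words, both operands of the "and" must be true for the error to appear, which is why the kernel configuration matters even when the dictionary reports zero EC voxels.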

On a side note, the b-values of my data should be 1500 s/mm2 and 3000 s/mm2, but they oscillate around these values in the b-values file, resulting in more than 2 shells when the data are loaded. I am not sure whether this could affect my error, though.
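If the oscillating b-values turn out to matter, one common workaround (not suggested in this thread; shown with made-up values) is to snap each b-value to its nearest nominal shell before writing the scheme file. AMICO's fsl2scheme may also accept a bStep argument for this purpose; check its signature in your installed version.

```python
import numpy as np

# Hypothetical b-values oscillating around the nominal shells 0/1500/3000 s/mm^2
bvals = np.array([0, 5, 1490, 1505, 1500, 2985, 3010, 2995], dtype=float)

# Snap each b-value to its nearest nominal shell
shells = np.array([0.0, 1500.0, 3000.0])
rounded = shells[np.abs(bvals[:, None] - shells[None, :]).argmin(axis=1)]

print(np.unique(rounded))  # only the three nominal shells remain
```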

Please let me know if I can provide any further information.

Thank you very much,

Sébastien

nightwnvol commented 1 year ago

Hi @SebastienDam01,

I think the error is caused by a wrong generation of the kernels: since your data has no EC compartment, you must specify no values for that compartment when generating the kernels, i.e. d_perps_zep = [].

Let us know if this solves your issue.

SebastienDam01 commented 1 year ago

Hi @nightwnvol,

Thanks for your help, this indeed solved the error! However, I ran into another issue while following the rest of the tutorial: I managed to fit the model, but while preparing the anatomical prior on bundles, I could not compute the individual weight for each bundle.

Code input (extract)

# Model fitting
[...]

# Retrieve the streamline contributions
x_nnls, _, _ = mit.get_coeffs(get_normalized=False) 

# Prepare the anatomical prior on bundles
C = np.loadtxt(f'{sub}_parcels.csv', delimiter=',')
C = np.triu( C ) # be sure to get only the upper-triangular part of the matrix
group_size = C[C>0].astype(np.int32)

# Indices of the streamlines composing each bundle
tmp = np.insert(np.cumsum(group_size), 0, 0)
group_idx = np.array( [np.arange(tmp[i],tmp[i+1]) for i in range(len(tmp)-1)], dtype=np.object_ )

# Individual weight for each bundle to penalize all groups of streamlines in the same manner regardless of their cardinality
group_w = np.empty_like( group_size, dtype=np.float64 )
for k in range(group_size.size) :
    group_w[k] = np.sqrt(group_size[k]) / ( np.linalg.norm(x_nnls[group_idx[k]]) + 1e-12 )

Output

IndexError: index 147765 is out of bounds for axis 0 with size 147765

My tractogram (passed to trk2dictionary.run()), which contains only the connecting streamlines (fibers_connecting.tck), has 147765 streamlines, while the cumulative sum of streamline counts in my connectome {sub}_parcels.csv is 7383185, far exceeding the tractogram. Since the estimated IC coefficients x_nnls also have the same size as my tractogram, x_nnls[group_idx[k]] raises this index error.

I may be missing something again while constructing my tractogram or elsewhere. Please tell me if I can change something in my code.
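A quick sanity check (synthetic numbers, not the subject's data) makes this kind of mismatch easy to catch before building group_idx:

```python
import numpy as np

# Synthetic stand-ins: 10 IC coefficients and a connectome whose
# upper-triangular counts sum to 10, so the check passes; with mismatched
# sizes the assertion would fail, flagging the problem early.
x_nnls = np.random.rand(10)              # stands in for mit.get_coeffs() output

C = np.array([[4., 3.],
              [0., 3.]])                 # streamline counts per bundle
C = np.triu(C)                           # upper-triangular part only
group_size = C[C > 0].astype(np.int32)

assert group_size.sum() == x_nnls.size, (
    "connectome implies %d streamlines but COMMIT returned %d coefficients"
    % (group_size.sum(), x_nnls.size))
print(group_size.sum())  # 10
```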

nightwnvol commented 1 year ago

Maybe an error with your fibers_assignment.txt file?

I just noticed that in the connectome2tck command, you've specified the assignments_{sub}_parcels.csv file as input assignments, but your file should be fibers_assignment.txt (output from the tck2connectome command).
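Concretely, the corrected call would look like this (paths are the hypothetical ones from the original script; the command is only printed here, not executed):

```python
# Build the corrected connectome2tck call: the assignments input must be the
# fibers_assignment.txt produced by tck2connectome, not the parcels CSV.
sub = "sub-016"                                     # hypothetical subject ID
tracto_path = f"tractography/{sub}/tracks_10M.tck"  # hypothetical path

cmd = (
    f"connectome2tck -force -nthreads 0 -exclusive -files per_edge -keep_self "
    f"{tracto_path} fibers_assignment.txt bundles/bundle_"
)
print(cmd)
# os.system(cmd)  # uncomment to actually run MRtrix3's connectome2tck
```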

SebastienDam01 commented 1 year ago

Yes, that solved my problem. Sorry for the confusing errors!