fedemoro closed this issue 4 years ago
Hi @fedemoro,
Before I address your issues, can you run the following and tell me which branch you are working on?

```
cd dMRIharmonization
git status
```
Branch dipy-dti; the branch is up to date with 'origin/dipy-dti'.
Thanks. Did you use `--resample M x N x O` as part of the results you mentioned above? In other words, do you know whether your subjects across the reference and target sites have different spatial resolutions that necessitate resampling to a common resolution?
You can check the spatial resolution using a command like the one below:

```
fslhd /my/nifti/image | grep pixdim
```
The following are the spatial resolutions found with the above command:

```
pixdim1 1.500000
pixdim2 1.500000
pixdim3 1.500000
```
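As a side note, the voxel size can also be read programmatically: it equals the column norms of the 3x3 part of the image affine. A minimal numpy sketch with a hypothetical 2 mm isotropic affine (in practice the affine would come from, e.g., nibabel's `img.affine`):

```python
import numpy as np

# Hypothetical affine of a 2x2x2 mm image (translation values are made up)
affine = np.array([[2., 0., 0.,  -90.],
                   [0., 2., 0., -126.],
                   [0., 0., 2.,  -72.],
                   [0., 0., 0.,    1.]])

# Voxel dimensions are the norms of the columns of the 3x3 rotation/scaling part
voxel_size = np.linalg.norm(affine[:3, :3], axis=0)
print(voxel_size)  # [2. 2. 2.]
```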
No, I haven't used the `--resample` flag since all the subjects in the study have a pixdim of 2x2x2.
> I'm wondering if there is a quick way to extract individual mean FA for each subject
I shall implement your suggestion. In the meantime, you can edit `harmonization.py` with the following two lines after this:

```python
print('std of all FA: ', np.std(ref_mean))
print('all FA: ', ref_mean)
```

Do the same for `target_mean_before` and `target_mean_after`.
Finally, you can extract individual FA at a later time using:

```
lib/tests/fa_skeleton_test.py -i /path/to/single_case.txt -s BSNIP -t /path/to/BSNIP/template/
```

where `single_case.txt` contains only one line like the following:

```
/path/to/FA/img/whose/meanFA/needs/to/be/computed
```
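For orientation, the mean FA that the script reports amounts to averaging the FA image over the skeleton voxels. A minimal numpy sketch with hypothetical stand-in arrays (the real script loads NIfTI images and works in MNI space):

```python
import numpy as np

# Hypothetical FA map and binary skeleton mask, already co-registered
fa = np.array([[0.2, 0.25],
               [0.5, 0.75]])
skeleton = np.array([[0, 1],
                     [0, 1]])

# Mean FA restricted to the skeleton voxels
mean_fa = fa[skeleton > 0].mean()
print(mean_fa)  # 0.5
```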
> … since only the borders of the brain have been scaled. Could you please comment on this?
Referring to your screenshot: if you adjust the contrast in fslview, you should be able to see values all over the brain, not just in the skull region as is apparent in your screenshot. The scales are within 0-10, ideally less than 2.5. So, with the default contrast, they may not be visible. On the other hand, I agree that the skull scale values can be larger than the inner-brain scale values. This is due to the division performed here:
```python
scale = ref/(target+eps)
scale.clip(min=0., out=scale)
# scale.clip(max=10., min=0., out=scale)
```
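To see what these lines do, here is a toy numpy sketch (hypothetical 2x2 arrays rather than real RISH feature maps; `eps` guards the division against zeros in the target):

```python
import numpy as np

eps = 1e-8  # small constant guarding against division by zero

# Hypothetical reference- and target-site feature maps
ref = np.array([[0.8, 0.6],
                [0.0, 1.2]])
target = np.array([[0.4, 0.6],
                   [0.5, 0.0]])

scale = ref/(target + eps)
scale.clip(min=0., out=scale)            # no negative scales
scale.clip(max=10., min=0., out=scale)   # cap blow-ups where target ~ 0

print(scale)  # approximately [[2. 1.] [0. 10.]]
```

Where the target value is near zero but the reference is not, the ratio explodes; the commented-out clip caps those values at 10.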
The clipping is in effect in our later commits. However, the high scaling in the skull region is corrected by the algorithm here, which uses a local median filtering approach to remove high values:
```python
mask = findLargestConnectMask(img, mask)
se = custom_spherical_structure(n_zero)
paddedMask = np.pad(mask, n_zero, 'constant', constant_values=0.)

dilM = binary_dilation(paddedMask, se)*1
eroM = binary_erosion(paddedMask, se)*1
skullRingMask = dilM - eroM

paddedImg = np.pad(img, n_zero, 'constant', constant_values=0.)
skullRing = paddedImg*skullRingMask
thresh = np.percentile(skullRing[skullRing > 0], 95)
outLier = (skullRing > thresh)*1
tmp = local_med_filter(paddedImg, outLier)
denoisedImg = tmp[n_zero:-n_zero, n_zero:-n_zero, n_zero:-n_zero]
```
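A 1-D toy sketch of the percentile-threshold-then-local-median idea (purely illustrative; the real code works on 3-D images, restricts attention to a dilated-minus-eroded skull ring, and uses `local_med_filter`):

```python
import numpy as np

# A mostly-flat "scale profile" with a few spuriously high values,
# standing in for the high skull-region scales
scale = np.ones(100)
scale[[10, 50, 90]] = [8.0, 9.5, 7.2]

# Flag values above the 95th percentile of the nonzero entries,
# mirroring: thresh = np.percentile(skullRing[skullRing > 0], 95)
thresh = np.percentile(scale[scale > 0], 95)
outlier = scale > thresh

# Replace each flagged value with the median of a small neighborhood,
# a 1-D analogue of the local median filtering
denoised = scale.copy()
for i in np.flatnonzero(outlier):
    lo, hi = max(i - 2, 0), min(i + 3, len(scale))
    denoised[i] = np.median(scale[lo:hi])

print(denoised.max())  # 1.0
```

Since the outliers are isolated, the neighborhood medians are all 1.0 and the spikes are removed.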
Thanks a lot for clarifying this.
Coming back to the poor harmonization outcome that I got.
These are the commands that I used to create the template:

```
harmonization.py --template path_to/template/ --reflist ref.txt --tarlist tar.txt --ref_name REF --tar_name TAR --nshm 6 --nproc 4 --debug --create
```
I'm not sure whether omitting the `--nzero` and `--denoise` flags, or the fact that I used raw DWI images before eddy correction, may have decreased the quality of the template creation. Do you have any suggestions on how to improve it?
1.

> These are the commands that I used to create the template
The command looks fine. You are running basic dMRIharmonization. If you had used `--resample MxNxO`, then I would tell you to run the `spm-bspline` branch, which leverages a better B-spline interpolation method for resampling.
However, we should definitely do axis alignment and eddy correction before running harmonization. Doing that should yield much better results. I shall make a note in our README about it anyway. If you don't have access to tools for them, you can use our pipeline, specifically the following two scripts:
- https://github.com/pnlbwh/pnlNipype/blob/master/scripts/align.py
- https://github.com/pnlbwh/pnlNipype/blob/master/scripts/pnl_eddy.py
The other suggestion would be to uncomment this line so that scale maps are clipped at 10:

```python
# scale.clip(max=10., min=0., out=scale)
```
Finally, I think you mean the numbers only:

> After harmonization the tar site differs more from the ref site than before harmonization
Although you may find a small discrepancy in the numbers, visual inspection should confirm the goodness of dMRIharmonization. In any case, let us know your progress.
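For concreteness, the discrepancy can be quantified directly from the mean FA values quoted in this thread (plain arithmetic, no assumptions beyond the posted numbers):

```python
ref_mean = 0.4764179896357986     # REF mean FA
tar_before = 0.45825602725158743  # TAR mean FA before harmonization
tar_after = 0.5094364786748963    # TAR mean FA after harmonization

gap_before = abs(ref_mean - tar_before)
gap_after = abs(ref_mean - tar_after)

print(f'gap before: {gap_before:.4f}')  # gap before: 0.0182
print(f'gap after:  {gap_after:.4f}')   # gap after:  0.0330
```

So, numerically, the target mean FA does sit farther from the reference after harmonization, consistent with the quoted observation.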
Just a clarification about the mean FA value that is generated: does it refer to the white matter or to the whole brain?
Hi @fedemoro,
It's on the skeleton only. Refer to the Debugging section in README.md:

> once the data are in MNI space, we calculate mean FA over the IITmean_FA_skeleton.nii.gz
All FA, mean FA, and std FA are printed.
I have run harmonization.py on 26 (ref) and 16 (tar) age-matched healthy controls. After harmonization the tar site differs more from the ref site than before harmonization. Printed statistics:

```
REF mean FA: 0.4764179896357986
TAR mean FA before harmonization: 0.45825602725158743
TAR mean FA after harmonization: 0.5094364786748963
```
1) Checking the scale maps, I notice that they look quite different from the ones you show in https://github.com/pnlbwh/dMRIharmonization/blob/dipy-dti/doc/flowchart.png since only the borders of the brain have been scaled. Could you please comment on this?

![image](https://user-images.githubusercontent.com/29942574/64602365-f6d65780-d3be-11e9-838f-050d24bf49db.png)
2) In the printed statistics only the mean value is reported, not the standard deviation. I'm wondering if there is a quick way to extract the individual mean FA for each subject? This could help to see whether there is any issue at the single-subject level.
Thanks
Fede