nipy / mindboggle

Automated anatomical brain label/shape analysis software (+ website)
http://mindboggle.info

antsCorticalThickness.sh as input? #33

Closed binarybottle closed 10 years ago

binarybottle commented 10 years ago

My colleague Mohammad and I found that we get a better segmentation (and thickness measure) by combining FreeSurfer (FS) and ANTs (Atropos) outputs (see below and the new functions combine_whites_over_grays() and thickinthehead()). Given that segmentation will affect mindboggle labeling, I am considering scrapping the nipype antsRegistration call in mindboggle and simply requiring recon-all and antsCorticalThickness.sh outputs as inputs to mindboggle (and perhaps giving the option to call both from within mindboggle). What do you think about this?

SUMMARY OF FINDINGS:

Segmentation: ANTs does a better job than FS at capturing gray matter, including regions we care about for the EMBARC study (medial orbitofrontal), but sometimes extends gray matter into extra-brain tissue. FS does a better job than ANTs at capturing white matter, when its surfaces don't stop short. We obtain the best segmentation results by taking the union of FS and ANTs white matter and the union of FS and ANTs gray matter, and replacing gray with white where they intersect. The prospect of painting some white voxels and erasing some gray voxels with ITK-SNAP is far more attractive than trying to correct surface meshes with the repositioning tool in Freeview.
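The combination rule above can be sketched with NumPy boolean masks. This is just an illustration of the logic; the function name and array handling here are my own, not the actual combine_whites_over_grays() implementation in mindboggle:

```python
import numpy as np

def combine_segmentations(fs_white, fs_gray, ants_white, ants_gray):
    """Combine FS and ANTs tissue masks: union the white matter masks,
    union the gray matter masks, and let white win where they overlap.

    All inputs are boolean arrays of the same shape.
    """
    white = fs_white | ants_white           # union of white matter
    gray = (fs_gray | ants_gray) & ~white   # union of gray, minus white overlap
    return white, gray

# Toy 1-D example (in practice these would be 3-D volumes):
fs_white = np.array([True, False, False, False])
ants_white = np.array([False, True, False, False])
fs_gray = np.array([False, True, True, False])
ants_gray = np.array([False, False, True, True])
white, gray = combine_segmentations(fs_white, fs_gray, ants_white, ants_gray)
# white -> [True, True, False, False]; gray -> [False, False, True, True]
```

Note that the gray voxel at index 1 is reassigned to white because the ANTs white matter mask claims it.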

FS vs. ANTs: I tested scan-rescan reliability of thickness measures for all 62 labels in all 40 EMBARC healthy controls. FS has very high reliability across scans for a given subject, as well as across subjects per label. ANTs has lower reliability. Replacing the segmentation that ANTs uses with the FS segmentation does not completely remedy this. We are currently analyzing how much our best hybrid segmentation approach (above) improves ANTs thickness reliability.

Simple check: Since we are interested in average thickness values per labeled region, as a sanity check I wrote a very simple program that computes thickness intuitively as volume / area: label volume divided by the average area of the gray/white and gray/CSF borders (counted as border voxels after rescaling). Surprisingly, this simple measure has reliability comparable to FS, and may have a more accurate range of values (see below). If we do the same within FS, dividing FS label volume by FS label area, we get a similar distribution, but some regions produce outliers (poles, entorhinal). FS values are also prone to surface failures, of course.
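For concreteness, here is a minimal NumPy sketch of that volume/area idea on a toy volume. The function names, the 6-connectivity border count, and the isotropic-voxel assumption are all mine for illustration; the real implementation is thickinthehead() in mindboggle:

```python
import numpy as np

def border_count(mask_a, mask_b):
    """Count voxels of mask_a that have a face neighbor in mask_b
    (6-connectivity, via one-voxel shifts of mask_b along each axis).
    np.roll wraps at the edges, so real volumes should be zero-padded."""
    touches = np.zeros_like(mask_a, dtype=bool)
    for axis in range(mask_a.ndim):
        for shift in (-1, 1):
            touches |= np.roll(mask_b, shift, axis=axis)
    return int(np.count_nonzero(mask_a & touches))

def simple_thickness(gray, white, csf, voxel_mm=1.0):
    """Thickness ~ label volume / mean of the two border areas,
    assuming isotropic voxels of edge length voxel_mm."""
    volume = np.count_nonzero(gray) * voxel_mm ** 3
    inner = border_count(gray, white) * voxel_mm ** 2  # gray/white border
    outer = border_count(gray, csf) * voxel_mm ** 2    # gray/CSF border
    return volume / ((inner + outer) / 2.0)

# Toy slab: white at x=0, two gray layers at x=1..2, CSF at x=3.
x = np.arange(5)[:, None, None] * np.ones((1, 3, 3), dtype=int)
white, gray, csf = (x == 0), (x == 1) | (x == 2), (x == 3)
print(simple_thickness(gray, white, csf))  # -> 2.0 (slab is two voxels thick)
```

The appeal of this measure is that it never builds a surface mesh, so it cannot suffer the mesh failures that skew FS values in some regions.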

Accuracy: I read quite a few articles about cortical thickness measures, and the closest set of regions to ours accompanied by manual measurements of MRI thickness that I came across was in Kabani, 2001. If we consider the 16 regions that map to ours (i.e., disregard the cingulate), then for the 640 labels (16 x 40 subjects), FS-generated average thickness values are within Kabani's ranges for about 40% of the 640 labels, whereas my simple program is within Kabani's ranges for almost 90% of the 640 labels.

binarybottle commented 10 years ago

Brian Avants has added thickinthehead() to ANTs as LabelThickness in ImageMath, which can also call Direct (kellykapowski).