@dnkennedy - if you get details of the ABIDE protocol do post it here.
this is the current nipype workflow, run on top of antsCorticalThickness.sh output, to label the brain and extract per-label measures:
from glob import glob
import os

from nipype import Workflow, MapNode, Node
from nipype.interfaces.ants import ApplyTransforms, AntsJointFusion, LabelGeometry
from nipype.utils.misc import human_order_sorted

# template-to-subject transforms from antsCorticalThickness.sh; sort before
# reversing so the higher-numbered transform is listed first, as
# antsApplyTransforms expects (glob alone does not guarantee order)
T = sorted(glob('/data/out/ants_subjects/arno/antsTemplateToSubject*'))[::-1]
ref = '/data/T1.nii.gz'
mask = '/data/out/ants_subjects/arno/antsBrainExtractionMask.nii.gz'

# the 20 OASIS-TRT atlas T1s and their manual label volumes, already in
# OASIS Atropos template space
T1s = human_order_sorted(glob('/opt/data/OASIS-TRT_brains_to_OASIS_Atropos_template/*.nii.gz'))
labels = human_order_sorted(glob('/opt/data/OASIS-TRT_labels_to_OASIS_Atropos_template/*.nii.gz'))
thickness = '/data/out/ants_subjects/arno/antsCorticalThickness.nii.gz'
N = 20  # number of atlases to fuse

wf = Workflow('labelflow')

# warp each atlas T1 into subject space
transformer = MapNode(ApplyTransforms(), iterfield=['input_image'], name='transformer')
transformer.inputs.reference_image = ref
transformer.inputs.transforms = T
transformer.inputs.input_image = T1s[:N]
transformer.inputs.dimension = 3
transformer.inputs.invert_transform_flags = [False, False]

# warp each atlas label volume into subject space; NearestNeighbor keeps
# the labels integer-valued
transformer_nn = MapNode(ApplyTransforms(), iterfield=['input_image'], name='transformer_nn')
transformer_nn.inputs.reference_image = ref
transformer_nn.inputs.transforms = T
transformer_nn.inputs.dimension = 3
transformer_nn.inputs.invert_transform_flags = [False, False]
transformer_nn.inputs.input_image = labels[:N]
transformer_nn.inputs.interpolation = 'NearestNeighbor'

# fuse the warped atlas labels into a single subject-space labeling
labeler = Node(AntsJointFusion(), name='labeler')
labeler.inputs.dimension = 3
labeler.inputs.target_image = [ref]
labeler.inputs.out_label_fusion = 'label.nii.gz'
labeler.inputs.mask_image = mask
labeler.inputs.num_threads = 8
wf.connect(transformer, 'output_image', labeler, 'atlas_image')
wf.connect(transformer_nn, 'output_image', labeler, 'atlas_segmentation_image')

# extract per-label geometry measures, with cortical thickness as the
# intensity image so each label gets a mean-thickness value
tocsv = Node(LabelGeometry(), name='get_measures')
tocsv.inputs.intensity_image = thickness
wf.connect(labeler, 'out_label_fusion', tocsv, 'label_image')

wf.base_dir = os.getcwd()
wf.config['monitoring'] = {'enabled': True}
wf.run(plugin='MultiProc')
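as a quick sanity check after the run, here is a minimal sketch (not part of the workflow) for loading the measures; it assumes LabelGeometry writes a CSV into the node's working directory under nipype's base_dir/workflow/node layout, and the exact filename is an assumption:

# minimal sketch, assuming the measures CSV lands in the 'get_measures'
# node directory; adjust the pattern to the actual output filename
import os
from glob import glob
import pandas as pd

node_dir = os.path.join(os.getcwd(), 'labelflow', 'get_measures')
csvs = glob(os.path.join(node_dir, '*.csv'))
if csvs:
    measures = pd.read_csv(csvs[0])
    print(measures.head())  # one row per label, geometry plus mean thickness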
@binarybottle - what would you think about adding the above to the mindboggle123 script?
this would add joint-fusion labeling of the T1 (across the 20 OASIS-TRT atlases) and per-label thickness measures to the outputs.
if that sounds good to you, i will send a PR to add this in. i would also augment the first transformer to use BSpline interpolation.
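for reference, that augmentation would be a one-line change to the T1 transformer; the label transformer should keep NearestNeighbor so labels stay integer-valued:

# sketch of the proposed change: smoother interpolation for the atlas T1s
transformer.inputs.interpolation = 'BSpline'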
Sounds great, @satra!
Mindboggle already generates a joint-fusion-labeled T1. Would people be confused by multiple joint-fusion-labelings of the same T1?
LabelGeometry is another issue, so including its output would be great!
@binarybottle - i'm not sure mindboggle generates a joint-fusion-labeled T1, since the 20 atlases are not included anywhere - is there a function that downloads them and applies joint fusion?
you may be talking about registering the joint-fusion-created atlas to the individual T1, which is different from applying joint fusion using multiple atlases to the T1.
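to make the distinction concrete, a minimal sketch (the fused-atlas filename is hypothetical):

# (a) registering a single, already-fused atlas to the subject T1 is just one
#     nearest-neighbor resampling of its label volume
single_atlas = ApplyTransforms(
    input_image='fused_atlas_labels.nii.gz',  # hypothetical pre-fused atlas
    reference_image=ref, transforms=T, dimension=3,
    interpolation='NearestNeighbor', invert_transform_flags=[False, False])

# (b) applying joint fusion means warping all N atlas T1/label pairs into
#     subject space and letting AntsJointFusion vote voxel-wise across them,
#     which is what the workflow above does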
Yes, sorry -- thank you for clarifying. Looking forward to a PR!
this is also done.
i've asked the ants folks for approaches they use to label brain structures.
https://github.com/ANTsX/ANTs/issues/587
for option (2), if we want to be as good as possible, we should use the better method (1). i'll implement and test that in the container i'm building.
i'll add container details to #5.