Hey all,

I received an email from @melanieganz about the derivatives meeting, in which she linked to this repo.
Previously, my colleagues used PMOD to (pre)process data, but when I critically evaluated their pipeline, I noticed that it made many errors during co-registration and normalization.
Since I am working on a study with currently 110 FDG-PET scans from patients with hyperkinetic movement disorders and healthy controls (still recruiting, up to 140), I decided to build a robust preprocessing pipeline with nipype. The data I am working with are static 3T T1-weighted and FDG-PET images.
The pipeline makes use of:

- fMRIPrep for the T1 images (`--anat-only`),
- AFNI's `3dAutobox` for cropping the PET images,
- HD-BET for skull-stripping the PET images (which outperformed the other alternatives I tried); I built a custom nipype function for it. The repo (https://github.com/MIC-DKFZ/HD-BET) seems to no longer be maintained, so I was thinking of forking it,
- ANTs for co-registration, done in two steps (inspired by fMRIPrep): first I co-register the skull-stripped mask, then the actual PET image.

Coreg1, Coreg2, and the fMRIPrep normalisation are then merged into a single final transformation, which is applied to the original PET image. The normalized PET image is then smoothed.
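For the HD-BET step, a small command builder plus runner is enough to drop into a nipype Function node. This is a minimal sketch assuming the `hd-bet` CLI from the linked repo (with `-i`/`-o`/`-device` flags) is on the PATH; check the flags against your installed version:

```python
def build_hd_bet_cmd(in_file, out_file, device="cpu"):
    """Assemble an HD-BET skull-stripping call.

    Assumes the `hd-bet` CLI with -i/-o/-device flags; the -device
    flag (CPU vs. GPU id) is an assumption to verify against your
    HD-BET version.
    """
    return ["hd-bet", "-i", in_file, "-o", out_file, "-device", device]


def run_hd_bet(in_file, out_file):
    """Execute HD-BET and return the stripped image path.

    Designed to be wrapped in a nipype Function node, which requires
    imports inside the function body.
    """
    import subprocess
    subprocess.run(build_hd_bet_cmd(in_file, out_file), check=True)
    return out_file
```

Keeping the command assembly separate from the subprocess call makes the node easy to unit-test without HD-BET installed.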
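The merge-and-apply steps above can be sketched as a single `antsApplyTransforms` call that chains Coreg1, Coreg2, and the fMRIPrep normalisation, so the original PET data is resampled only once. A minimal sketch; the file names are placeholders and the interpolation choice is an assumption:

```python
def build_apply_transforms_cmd(pet, reference, transforms, out_file):
    """Assemble an antsApplyTransforms call mapping the original PET
    image into template space in one resampling step.

    ANTs applies the -t stack last-to-first, so list the fMRIPrep
    normalisation warp first and the first co-registration matrix last,
    e.g. ["anat2mni.h5", "coreg2.mat", "coreg1.mat"].
    """
    cmd = [
        "antsApplyTransforms", "-d", "3",
        "-i", pet,                    # original (uncropped) PET image
        "-r", reference,              # template, e.g. the MNI T1
        "-o", out_file,
        "-n", "LanczosWindowedSinc",  # interpolation; adjust for FDG data
    ]
    for t in transforms:
        cmd += ["-t", t]
    return cmd
```

Within nipype, the equivalent is `nipype.interfaces.ants.ApplyTransforms` with the same `transforms` list, which avoids shelling out by hand.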
The pipeline seems to do a good job on our data set, and I would very much like to share it with the community.
What would be a good way to move forward?