Versions with incremental levels of sweetness (more to come):
- [x] `fsl_glm_from_scratch.sh`: no elegance, no advanced tools. This merely provides a starting point for an analysis that goes from DICOMs to an FFA blob in a fully automated way.
- [x] `fsl_glm_w_datalad.sh`: the same analysis as `fsl_glm_from_scratch.sh`, but managed with datalad (data dependency/provenance capture, metadata extraction/search, analysis code export)
- [ ] the same analysis again, but with an automated heudiconv setup; this functionality is coming via https://github.com/datalad/datalad-neuroimaging/pull/17
We could further replace the stupid steps with better ones (maybe even with multiple flavors of the script implementing analogous workflows with different tools -- done as beautifully as you like): datalad + subdatasets, heudiconv + reproin, a nipype workflow, a BIDS app, ...
The structural scan from https://github.com/datalad/example-dicom-structural matches this fMRI scan, so we could add steps like brain extraction or co-registration -- but I guess that is not so much the point here.
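If one did add those steps, they would presumably boil down to standard FSL calls along these lines (filenames are illustrative, not part of the current scripts):

```shell
# Illustrative only -- these steps are not in the demo scripts.
# Brain extraction of the structural scan:
bet structural.nii structural_brain.nii.gz
# Co-registration of the mean functional image to the extracted brain:
flirt -in mean_func.nii.gz -ref structural_brain.nii.gz \
      -out func2struct.nii.gz -omat func2struct.mat
```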
Running the dumb one in the root of this repo should look like this:
```
% bash fsl_glm_from_scratch.sh
Cloning into 'raw'...
done.
sub-02
1
Chris Rorden's dcm2niiX version v1.0.20170923 (OpenJPEG build) GCC6.3.0 (64-bit Linux)
Found 5460 DICOM image(s)
swizzling 3rd and 4th dimensions (XYTZ -> XYZT), assuming interslice distance is 3.300000
Convert 5460 DICOM as sub-02/dicoms_func_task-oneback_run-1_20140425155335_401 (80x80x35x156)
Philips Precise RS:RI:SS = 4.00757:0:0.0132383 (see PMC3998685)
R = raw value, P = precise value, D = displayed value
RS = rescale slope, RI = rescale intercept, SS = scale slope
D = R * RS + RI , P = D/(RS * SS)
D scl_slope:scl_inter = 4.00757:0
P scl_slope:scl_inter = 75.5384:0
Using P values ('-p n ' for D values)
Conversion required 0.299224 seconds (0.296344 for core code).
To view the FEAT progress and final report, point your web browser at 1stlvl_glm.feat/report_log.html
bash fsl_glm_from_scratch.sh 265.07s user 3.38s system 99% cpu 4:29.24 total
```
Under 5 min, as promised.
The fancier one looks like this:
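Under the hood, the datalad-managed variant presumably wraps each processing step in `datalad run`, which executes a command and records it, together with the resulting file changes, as a commit. A minimal sketch of that pattern (dataset name, source URL, and commit messages are illustrative, not necessarily what the script uses):

```shell
# Sketch of datalad-managed execution; paths and messages are illustrative.
datalad create localizerdemo && cd localizerdemo
# Track the raw DICOMs as a subdataset (assumed source repository):
datalad install -d . -s https://github.com/datalad/example-dicom-functional inputs/rawdata
# Each step runs under provenance capture:
datalad run -m "Convert DICOMs" dcm2niix -b y -o sub-02 inputs/rawdata/dicoms
datalad run -m "Run 1st-level GLM" feat sub-02/1stlvl_design.fsf
```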
```
% ./fsl_glm_w_datalad.sh
Demo dataset at /tmp/tmp.U5HOVWLTMP/localizerdemo
[INFO ] Creating a new annex repo at /tmp/tmp.U5HOVWLTMP/localizerdemo
create(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
[INFO ] Cloning /home/mih/dicom_demo/functional to '/tmp/tmp.U5HOVWLTMP/localizerdemo/inputs/rawdata'
add(ok): inputs/rawdata (dataset) [added new subdataset]
add(notneeded): inputs/rawdata (dataset) [nothing to add from /tmp/tmp.U5HOVWLTMP/localizerdemo/inputs/rawdata]
add(notneeded): .gitmodules (file) [already included in the dataset]
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
install(ok): inputs/rawdata (dataset)
action summary:
  add (notneeded: 2, ok: 1)
  install (ok: 1)
  save (ok: 1)
add(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo/code/events2ev3.sh (file) [non-large file; adding content to git repository]
add(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo/code (directory)
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  add (ok: 2)
  save (ok: 1)
[INFO ] == Command start (output follows) =====
sub-02
1
[INFO ] == Command exit (modification check follows) =====
add(ok): sub-02/onsets/run-1/body.txt (file)
add(ok): sub-02/onsets/run-1/face.txt (file)
add(ok): sub-02/onsets/run-1/house.txt (file)
add(ok): sub-02/onsets/run-1/scramble.txt (file)
add(ok): sub-02/onsets/run-1/scene.txt (file)
add(ok): sub-02/onsets/run-1/object.txt (file)
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  add (ok: 6)
  save (ok: 1)
[INFO ] == Command start (output follows) =====
Chris Rorden's dcm2niiX version v1.0.20170923 (OpenJPEG build) GCC6.3.0 (64-bit Linux)
Found 5460 DICOM image(s)
swizzling 3rd and 4th dimensions (XYTZ -> XYZT), assuming interslice distance is 3.300000
Convert 5460 DICOM as sub-02/dicoms_func_task-oneback_run-1_20140425155335_401 (80x80x35x156)
Philips Precise RS:RI:SS = 4.00757:0:0.0132383 (see PMC3998685)
R = raw value, P = precise value, D = displayed value
RS = rescale slope, RI = rescale intercept, SS = scale slope
D = R * RS + RI , P = D/(RS * SS)
D scl_slope:scl_inter = 4.00757:0
P scl_slope:scl_inter = 75.5384:0
Using P values ('-p n ' for D values)
Conversion required 0.327120 seconds (0.323876 for core code).
[INFO ] == Command exit (modification check follows) =====
add(ok): sub-02/dicoms_func_task-oneback_run-1_20140425155335_401.json (file)
add(ok): sub-02/dicoms_func_task-oneback_run-1_20140425155335_401.nii (file)
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  add (ok: 2)
  save (ok: 1)
add(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo/sub-02/1stlvl_design.fsf (file) [non-large file; adding content to git repository]
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  add (ok: 1)
  save (ok: 1)
[INFO ] == Command start (output follows) =====
To view the FEAT progress and final report, point your web browser at /tmp/tmp.U5HOVWLTMP/localizerdemo/1stlvl_glm.feat/report_log.html
[INFO ] == Command exit (modification check follows) =====
add(ok): 1stlvl_glm.feat/absbrainthresh.txt (file)
<snip>
add(ok): 1stlvl_glm.feat/.files/images/fslstart.png (file)
add(ok): 1stlvl_glm.feat/.files/images/tick.gif (file)
add(ok): 1stlvl_glm.feat/.files/images/vert2.png (file)
add(ok): 1stlvl_glm.feat/.ramp.gif (file)
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  add (ok: 344)
  save (ok: 1)
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
[INFO ] Aggregate metadata for dataset /tmp/tmp.U5HOVWLTMP/localizerdemo
Metadata extraction: 100%|███████████████████████| 3.00/3.00 [00:00<00:00, 4.85 extractors/s]/usr/lib/python2.7/dist-packages/nibabel/parrec.py:508: UserWarning: PAR/REC version 'None' is currently not supported -- making an attempt to read nevertheless. Please email the NiBabel mailing list, if you are interested in adding support for this version.
  """.format(version)))
[INFO ] Update aggregate metadata in dataset at: /tmp/tmp.U5HOVWLTMP/localizerdemo
aggregate_metadata(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
[INFO ] Attempting to save 6 files/datasets
save(ok): /tmp/tmp.U5HOVWLTMP/localizerdemo (dataset)
action summary:
  aggregate_metadata (ok: 1)
  save (ok: 1)
[INFO ] Building search index
[INFO ] Search index contains 353 documents
[INFO ] Query completed in 0.000592947006226 sec. Reporting up to 20 top matches.
search(ok): 1stlvl_glm.feat/mask.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe2.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/tstat1.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe12.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe17.nii.gz (file)
search(ok): 1stlvl_glm.feat/filtered_func_data.nii.gz (file)
search(ok): 1stlvl_glm.feat/rendered_thresh_zstat1.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe4.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe10.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe6.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe1.nii.gz (file)
search(ok): 1stlvl_glm.feat/thresh_zstat1.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe18.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe16.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/res4d.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe9.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/sigmasquareds.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe13.nii.gz (file)
search(ok): 1stlvl_glm.feat/stats/pe15.nii.gz (file)
search(ok): 1stlvl_glm.feat/cluster_mask_zstat1.nii.gz (file)
[INFO ] Reached the limit of 20 top matches, there could be more which were not reported.
action summary:
  search (ok: 20)
#!/bin/sh
#
# This file was generated by running (the equivalent of)
#
# datalad rerun --script=myanalysis.sh --since= 83a0e4c4f699b76ee827d1e62ec0bd948fa923f3
#
# in /tmp/tmp.U5HOVWLTMP/localizerdemo
bash code/events2ev3.sh sub-02 inputs/rawdata/events.tsv
dcm2niix -b y -o sub-02 inputs/rawdata/dicoms
feat sub-02/1stlvl_design.fsf
./fsl_glm_w_datalad.sh 518.84s user 72.21s system 194% cpu 5:03.96 total
```
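The exported analysis script above calls a small helper, `code/events2ev3.sh`, whose contents are not shown. A minimal sketch of what such a BIDS-events-to-FSL-EV conversion could look like, assuming a tab-separated `events.tsv` with `onset`, `duration`, and `trial_type` columns (the demo's actual helper and column layout may differ):

```shell
# Sketch only -- hypothetical stand-in for the repo's code/events2ev3.sh.
# Build a toy BIDS events.tsv with onset, duration, trial_type columns:
printf 'onset\tduration\ttrial_type\n2.0\t1.5\tface\n4.0\t1.5\thouse\n' > events.tsv

# For one condition, emit onset<TAB>duration<TAB>1 lines -- the
# three-column custom EV format that FEAT expects:
cond=face
awk -F'\t' -v c="$cond" 'NR > 1 && $3 == c {printf "%s\t%s\t1\n", $1, $2}' \
    events.tsv > "${cond}.txt"
cat "${cond}.txt"
```

Running one such extraction per condition yields the `sub-02/onsets/run-1/*.txt` files seen in the transcript.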