Closed dohmatob closed 8 years ago
To be discussed with @schwarty . In general, I think it is useful to have scripts to pre-process openfMRI datasets.
For example,
elvis@middle-earth:~/CODE/FORKED/pypreprocess$ python examples/openfmri_preproc.py /tmp/ds105 /tmp/ds105_preproc -s sub001 -O
/home/elvis/.local/lib/python2.7/site-packages/pypreprocess/configure_spm.py:147: UserWarning: Setting SPM MCR backend with cmd: spm.SPMCommand.set_mlab_paths(matlab_cmd='/home/elvis/spm8 run script', use_mcr=True)
warnings.warn("Setting SPM MCR backend with cmd: %s" % cmd)
Traceback (most recent call last):
File "examples/openfmri_preproc.py", line 76, in
I am checking this out
ok, great.
DED
The _fetch_files parameters passed were incorrect, which was causing the unpacking problem; that's fixed now. Afterwards the download still wasn't working because of the base URL: it turns out they moved to the Amazon cloud. It's working now. I ran into some problems with SPM configuration, so I changed the openfmri script to work with pure Python for testing purposes; this could also become an additional example.
Nonetheless, the preprocessing still raised some errors, like:
/media/mfpgt/MainDataDisk/PHD_research/open_source_contributions/pypreprocess/pypreprocess/purepython_preproc_utils.py in _do_subject_realign(subject_data, reslice, register_to_mean, caching, hardlink_output, ext, func_basenames, write_output_images, report, func_prefix)
     99
    100     if write_output_images > 1:
--> 101         subject_data.hardlink_output_files()
    102
    103     return subject_data

/media/mfpgt/MainDataDisk/PHD_research/open_source_contributions/pypreprocess/pypreprocess/subject_data.py in hardlink_output_files(self, final)
    463             if not filename is None:
    464                 linked_filename = hard_link(
--> 465                     filename, self.session_output_dirs[sess])
    466                 tmp.append(linked_filename)
    467         if final:

/media/mfpgt/MainDataDisk/PHD_research/open_source_contributions/pypreprocess/pypreprocess/io_utils.py in hard_link(filenames, output_dir)
    577         return hardlinked_filenames[0]
    578     else:
--> 579         return [hard_link(_filenames, output_dir) for _filenames in filenames]
    580
    581

/media/mfpgt/MainDataDisk/PHD_research/open_source_contributions/pypreprocess/pypreprocess/io_utils.py in hard_link(filenames, output_dir)
    560             continue
    561         if not os.path.isfile(src):
--> 562             raise OSError("src file %s doesn't exist" % src)
    563
    564     # unlink if link already exists

OSError: src file /media/mfpgt/MainDataDisk/PHD_research/open_source_contributions/pypreprocess/test_tmp_data/preproc/sub001/tmp/ravol_0 doesn't exist
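For reference, the failing frames boil down to a hard-link helper that raises as soon as a source file is missing. A minimal stdlib sketch of that pattern (the function name is borrowed from io_utils.py, but the body is simplified and, unlike the original, skips missing sources instead of raising):

```python
import os
import shutil

def hard_link(filenames, output_dir):
    """Hard-link each file into output_dir, falling back to a copy.

    Illustrative re-sketch of the helper in the traceback above; the
    actual pypreprocess code raises OSError on a missing source file,
    which is exactly the failure shown.
    """
    if isinstance(filenames, str):
        filenames = [filenames]
    linked = []
    for src in filenames:
        if not os.path.isfile(src):
            continue  # the original raises OSError("src file ... doesn't exist")
        dst = os.path.join(output_dir, os.path.basename(src))
        if os.path.exists(dst):
            os.unlink(dst)  # unlink if a link already exists
        try:
            os.link(src, dst)  # hard link when on the same filesystem
        except OSError:
            shutil.copy(src, dst)  # cross-device fallback
        linked.append(dst)
    return linked
```

The underlying bug is upstream of this helper anyway: ravol_0 was never written, so no amount of linking logic can recover it.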
I also created a mock openfMRI dataset function in _test_utils. The same problems arise.
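A mock-dataset helper along these lines can be sketched with the stdlib alone; the name and directory layout below are illustrative, not the actual _test_utils API:

```python
import os

def make_mock_openfmri_dataset(root, dataset_id="ds105", subjects=("sub001",)):
    """Create an empty openfMRI-style directory tree for tests.

    Hypothetical helper: one anatomy file and one BOLD run per subject,
    with placeholder files instead of real images.
    """
    for sub in subjects:
        sub_dir = os.path.join(root, dataset_id, sub)
        for rel in ("anatomy/highres001.nii.gz",
                    "BOLD/task001_run001/bold.nii.gz"):
            path = os.path.join(sub_dir, rel)
            os.makedirs(os.path.dirname(path), exist_ok=True)
            open(path, "wb").close()  # placeholder file, no real image data
    return os.path.join(root, dataset_id)
```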
I think I also caught an unrelated bug in do_subjects_preproc in spm_utils: the normalize variable was not being created when there was no init file, so calling preproc_params['normalize'] was throwing an exception.
I changed it to: normalize = preproc_params.get("normalize", True)
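For context, the difference between the two lookups is just Python dict semantics:

```python
# Why .get() fixes the crash: plain indexing raises KeyError when the
# key is absent, while .get() returns a default instead.
preproc_params = {}  # e.g. no "normalize" entry because no init file was given

try:
    normalize = preproc_params['normalize']
except KeyError:
    normalize = None  # this is the exception the original code hit

normalize = preproc_params.get("normalize", True)  # the one-line fix
print(normalize)  # → True
```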
This was found and fixed yesterday morning. Please update your repo.
DED
OK. Please consider sending a PR with the things you've done on the ticket so far.
- Good job finding the download bugs.
- But note that pure Python can't do normalization or segmentation. I don't mind having a separate pure-Python example which runs on a single subject, but we still need the scripts running with SPM as they used to.
DED
Confirmed the following bug. After downloading the whole data file, this strange exception comes up. The last message makes no sense to me, since the dataset was actually downloaded.
Downloaded 3343212440 of 3343212440 bytes (100.00%, 0.0s remaining)
...done. (1391 seconds, 23 min)
/home/mperezgu/.local/lib/python2.7/site-packages/nilearn/datasets.py:712: UserWarning: An error occured while fetching ds002_raw
IOError                                   Traceback (most recent call last)
/usr/lib/python2.7/dist-packages/IPython/utils/py3compat.pyc in execfile(fname, where)
    202     else:
    203         filename = fname
--> 204     builtin.execfile(filename, where)

/volatile/martin_local/open_source_contributions/pypreprocess/examples/openfmri_preproc.py in
/volatile/martin_local/open_source_contributions/pypreprocess/pypreprocess/openfmri.pyc in preproc_dataset(data_dir, output_dir, ignore_subjects, restrict_subjects, delete_orient, dartel, n_jobs)
     56
     57     if not os.path.exists(data_dir):
---> 58         fetch_openfmri(parent_dir, dataset_id)
     59
     60     ignore_subjects = [] if ignore_subjects is None else ignore_subjects

/volatile/martin_local/open_source_contributions/pypreprocess/pypreprocess/datasets.pyc in fetch_openfmri(data_dir, dataset_id, force_download, verbose)
    322     output_dir = os.path.join(data_dir, dataset_id)
    323     if not os.path.exists(output_dir) and not force_download:
--> 324         _fetch_files(data_dir, urls, verbose=verbose)
    325     shutil.move(temp_dir, output_dir)
    326     shutil.rmtree(os.path.split(temp_dir)[0])

/home/mperezgu/.local/lib/python2.7/site-packages/nilearn/datasets.pyc in _fetch_files(data_dir, files, resume, mock, verbose)
    719         if os.path.exists(temp_dir):
    720             shutil.rmtree(temp_dir)
--> 721         raise IOError('Fetching aborted: ' + abort)
    722     files.append(target_file)
    723     # If needed, move files from temp directory to final directory.

IOError: Fetching aborted: Target file cannot be found
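The fetch_openfmri frame above shows the intended flow: skip if the output directory already exists, otherwise download into a temporary directory and move it into place. The exception fires when the downloaded files don't end up where the move step expects them. A minimal stdlib sketch of that pattern (function and parameter names here are illustrative, and download is a hypothetical stand-in for the _fetch_files call):

```python
import os
import shutil

def fetch_dataset(data_dir, dataset_id, download):
    """Download into a temp dir, then move the result into place.

    Sketch of the pattern in fetch_openfmri above. `download` must leave
    its files inside the directory it is given; if it doesn't, the final
    move fails, which is the kind of mismatch behind "Target file cannot
    be found".
    """
    output_dir = os.path.join(data_dir, dataset_id)
    if os.path.exists(output_dir):
        return output_dir  # already fetched, skip the download
    temp_dir = os.path.join(data_dir, "tmp_" + dataset_id, dataset_id)
    os.makedirs(temp_dir, exist_ok=True)
    download(temp_dir)
    shutil.move(temp_dir, output_dir)         # promote the finished download
    shutil.rmtree(os.path.dirname(temp_dir))  # drop the temp scaffold
    return output_dir
```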
That message is awfully poor; I improved it. If you want, pull nilearn's master and rerun your script: you should get a more detailed report of your problem.
This issue should be closed... also, openfMRI should be discarded in favor of BIDS.
OK, closing.
The openfmri/ and scripts/ subdirectories are probably broken. These should be cleaned up or perhaps simply deleted from the repo.