Closed MarcRamos closed 8 years ago
Hello Marc,
Thank you for your feedback. Are you using a hemisphere acquisition for your DSI data? If that's the case, I think the error was due to the motion and eddy current corrections trying to correct empty volumes. I've added a range specification to the preprocessing stage in the last commit. The default is the whole dataset (usually the case for DTI or HARDI schemes), but you can now specify the range of volumes to process (the first volume index is 0).
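To make the range semantics concrete (inclusive indices starting at 0, defaulting to the whole dataset), here is a purely illustrative sketch with numpy; the function name is hypothetical and not part of the connectome mapper code:

```python
import numpy as np

def select_volume_range(data, first=0, last=None):
    """Keep only volumes first..last (inclusive) of a 4D dataset.

    Mirrors the range option described above: indices start at 0 and
    the default covers every volume. Illustrative only.
    """
    if last is None:
        last = data.shape[-1] - 1
    return data[..., first:last + 1]

# Toy 4D "dataset": 2x2x2 voxels, 10 volumes
data = np.zeros((2, 2, 2, 10))
subset = select_volume_range(data, first=0, last=4)
print(subset.shape[-1])  # 5 volumes kept
```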
Let me know if this works.
Cheers, David Romascano
Hello, I don't see it in the preprocessing stage; I just downloaded it. Also, is there a way to resize those windows? They are too small, and resizing would be more convenient in the segmentation stage, where a path has to be selected for old FreeSurfer files.
Thanks, Marc.
This seems to be trying to resample the first volume. In eddy_corrected.nii.gz there are 515 volumes and they seem fine; the first is the b0. It shouldn't crash, should it?
mri_convert --frame 0 -voxsize 1.000000 1.000000 1.000000 --input_volume ***1/NIPYPE/diffusion_pipeline/preprocessing_stage/eddy_correct/eddy_corrected.nii.gz --output_volume diffusion_first.nii.gz
Standard error: Killed
Return code: 137
Interface MRIConvert failed to run.
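Return code 137 is worth decoding: the shell reports 128 plus the signal number for a signal-terminated process, so 137 means SIGKILL (9) — together with "Standard error: Killed", this typically points to the kernel's OOM killer running out of memory while the whole 515-volume dataset is loaded. A quick illustrative check:

```python
import signal

return_code = 137
sig = return_code - 128  # shell convention: 128 + signal number
print(sig == signal.SIGKILL)  # True: the process was killed, often by the OOM killer
```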
I think FreeSurfer resamples the whole dataset before extracting the b0 volume. I'm fixing this, but I ran into other errors. I'll let you know when it's running fine for me.
David
Thanks!
Hi again,
Try downloading the last commit; you should now be able to specify the range of volumes over which to perform motion or eddy current correction. I also changed the code so that the b0 volume is extracted before the resampling. There were also a couple of bugs that made the pipeline crash when re-processing the data, so it's best to delete everything in NIPYPE/diffusion_pipeline/diffusion_stage and NIPYPE/diffusion_pipeline/preprocessing_stage before running the connectome mapper again.
I've also made the windows resizable.
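The clean-up described above amounts to something like the following, run from the subject directory (paths taken from the message; adapt them to your layout):

```shell
# Clear the stage outputs so the pipeline re-runs them from scratch
rm -rf NIPYPE/diffusion_pipeline/diffusion_stage
rm -rf NIPYPE/diffusion_pipeline/preprocessing_stage
```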
Let me know if it works now. Thanks for pointing this issue out.
Cheers, David Romascano
https://github.com/LTS5/cmp_nipype/tree/6d47336ebe832a5394a9a44bb0ca1d02e56546d5 Is this the latest version?
Actually no, you should download from https://github.com/LTS5/cmp_nipype, just click on the "Download Zip" button on the right.
Thanks.
There are more issues. In the preprocessing stage:
stderr 2014-01-24T14:31:41.315359: Aborted (core dumped)
140124-14:31:43,278 workflow ERROR:
    ['Node motioncorrection failed to run on host connectome-ripper.']
140124-14:31:43,338 workflow INFO:
    Saving crash info to /ho**master/crash-20140124-143143-marcramos-motion_correction.npz
140124-14:31:43,338 workflow INFO:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/plugins/multiproc.py", line 15, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1128, in run
    self._run_interface()
  File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1226, in _run_interface
    self._result = self._run_command(execute)
  File "/usr/local/lib/python2.7/dist-packages/nipype/pipeline/engine.py", line 1350, in _run_command
    result = self._interface.run()
  File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 823, in run
    runtime = self._run_interface(runtime)
  File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1104, in _run_interface
    self.raise_exception(runtime)
  File "/usr/local/lib/python2.7/dist-packages/nipype/interfaces/base.py", line 1060, in raise_exception
    raise RuntimeError(message)
RuntimeError: Command:
mcflirt -in /home/marcramosp1/NIFTI/DSI.nii.gz -out motion_corrected.nii.gz -refvol 0
Standard output:
Standard error:
Error: failed to open file motion_corrected.nii.gz
Image Exception : #22 :: ERROR: Could not open image motion_corrected
terminate called after throwing an instance of 'RBD_COMMON::BaseException'
Aborted (core dumped)
Return code: 134
Interface MCFLIRT failed to run.
Another question: why does the connectome mapper process the tracks if the preprocessing stage hasn't finished correctly? I saw the tracks and they looked nice, but I assume it uses the non-preprocessed images.
Cheers, Marc
OK, let's try something: delete the whole NIPYPE folder in the subject directory. Then set up and run only the "Preprocessing stage" by clicking the "Custom mapping" button and selecting the "Preprocessing Stage" only. Do you still get an error? Make sure to set the range of volumes to the ones that are not empty.
And if you run the preprocessing stage without motion or eddy current correction (leaving the checkboxes blank), what do you get?
Hi Marc,
Are you processing the pipeline using multi-processing? Does the error still occur if you set the number of processors to 1?
Best regards, David Romascano
Hello, deleting the NIPYPE folder solved the issue, thanks! I set the number of cores to 4. The connectome finished computing, but after checking the output I saw something rather wrong: the corpus callosum was cut, and a lot of fibers that should be there were missing. I checked the white matter mask and it seems fine (fsmask_1mm, right? It's used for seeding), but the dsi_dir.nii looks very noisy.
I guess the spline filter didn't mess it up, but maybe that dsi_dir did.
code: dtb_streamline.inputs.dir_file = self.inputs._dirfile
In the code that's the input, right?
I checked previous dsi_dir.nii files, and they should look like this. So I don't know what happened.
This is a filtered part of what I just got with mrtrix streamtrack in the connectome mapper (DTI diffusion). As you can see, the corpus callosum is cut and fibers are missing. After seeing that, I changed some parameters in the connectome mapper's diffusion stage and tried to run it again (diffusion and connectome stages only), but it says "All stages finished!" and, in the console:
File "/usr/local/lib/python2.7/dist-packages/cmp/pipelines/diffusion/diffusion.py", line 277, in process
    (preproc_flow,diff_flow, [('outputnode.diffusion_preproc','inputnode.diffusion')]),
UnboundLocalError: local variable 'preproc_flow' referenced before assignment
I already have these fibers pre-calculated with the old connectomemapper, and I have set the same parameters, but for mrtrix:
Resample: 2x2x2
Local model: CSD
Inverse: Y
lmax: auto
FA: 0.7
Deterministic
Desired number of tracks: 1000000
Max number of tracks: 1500000
Curvature: 1
Step size: 0.2
Now the outputs are not available. I hope you can answer me soon, thanks! Marc.
Hello, as the title of the issue says, errors appear during the execution of the tracking, and it keeps going. I believe the program keeps running even when something crashes: errors appear, but it doesn't stop, and it thinks the stage is finished. Afterwards, I couldn't select only that stage to run; see the error I posted before. An option to skip previously done stages (as in the old cmp) should be added as well; it was very useful.
Thank you!
Marc.
Hello Marc,
Nipype's default response to a crash is to keep going and process as many nodes as possible. You can change this behaviour by creating a configuration file and setting the stop_on_first_crash variable to true (http://nipy.org/nipype/users/config_file.html). By default, Nipype will process new nodes as long as the crash doesn't prevent it from processing the remaining ones.
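For reference, a minimal configuration file based on the linked Nipype docs would contain just:

```ini
[execution]
stop_on_first_crash = true
```

With this set, the workflow stops at the first crashed node instead of continuing with the remaining ones.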
As for the custom mapping: you pointed out a bug that needs to be fixed. When executing a stage, Nipype checks whether the parameters of all prerequired stages are the same as in the last run. When only the diffusion stage is checked, Nipype looks for the preprocessed data, which doesn't exist in the pipeline because the preprocessing stage is not checked. This is why the pipeline crashes. The custom mapping should be rearranged into an "up to which stage to process" option. For now, you should check all boxes up to the stage you want to compute; Nipype will skip already computed stages automatically. I'll post a new commit this afternoon or tomorrow to fix this.
As for the cut corpus callosum: do you get the same result when processing the data outside of nipype (running the mrtrix command on your own)? Also, did you upload pictures? I'm not able to see anything. If you are comparing DTI fibers obtained with DTK and DTI fibers obtained with MRTrix, the fibers will look different mainly because the white matter mask needs to be resampled to fit the diffusion space for MRTrix. Did you check the wm mask resampled to 2x2x2 mm (NIPYPE/diffusion_pipeline/diffusion_stage/mask_resample/wm_mas_resampled.nii)? DTK doesn't need the wm mask to be in the same space, which allows finer filtering of the fibers in the wm.
Best regards, David Romascano
Hello, thanks for the clear answer; I hope the feedback is helping as much as you are helping me =) Yes, I sent you an image before, didn't you see it? I checked the mask and it looks normal. I ran the exact same command with mrtrix but with fewer fibers, just to see the results. I think they look quite the same, so maybe something is wrong with the parameters. I will try something else.
Best, Marc.
I checked the mask against the fibers again. Can you see the image? There are missing parts, so I think the issue is in the mask.
To post images, write a comment directly on https://github.com/LTS5/cmp_nipype/issues/18 instead of replying from your e-mail. You can drag and drop image files into the typing box.
One thing you could do is leave everything in the 1x1x1 mm size (set the F0, F1 and F2 parameters in the diffusion stage configuration to 1). This way you'll have the white matter mask and the diffusion data in the same space for both the DTK and MRTrix processing.
Another way would be to manually edit the wm matter mask as you like and use the "Custom wm segmentation" option for the MRTrix processing. You should also select the output of the parcellation stage as "Custom Atlas".
And yes, your feedback is of great help, thanks for reporting everything.
Cheers, David Romascano
Can you see them now?
Yes
The problem might be the wm mask indeed. What do you get if you set the resampling size to F0=F1=F2=1 in the diffusion stage configuration?
Yes, I did; it just finished and looks the same. I am trying to run only the diffusion step, but it's not working, although it takes some time. That's the bug you told me about before, right?
The thing is that the red bundle over the corpus callosum should be green, so I assume there is something wrong with the gradients. I will try another option, because I was inverting the Y axis (as I did in the old connectomemapper) and maybe that's the cause.
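For what it's worth, flipping the Y axis of a gradient table is easy to sketch with numpy (the function and the table below are hypothetical; the connectome mapper's "Inverse: Y" option is meant to do this internally):

```python
import numpy as np

def flip_y(grad):
    """Negate the Y component of a gradient table.

    Assumes one row per volume with columns (gx, gy, gz, b).
    Illustrative only.
    """
    flipped = np.array(grad, dtype=float, copy=True)
    flipped[:, 1] *= -1
    return flipped

# Toy two-volume gradient table
grad = np.array([[1.0, 0.5, 0.0, 1000.0],
                 [0.0, -1.0, 0.0, 1000.0]])
print(flip_y(grad)[:, 1])  # [-0.5  1. ]
```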
Hi again,
The last commit now has the option to select until which stage to process the data. If you only modify the parameters of the diffusion stage, Nipype will skip all the previous stages automatically. Let me know if this works and if you run into other errors.
Regarding the corpus callosum, it's a discussion that is more suited for the cmtk-users group on google. You can create a new post there (https://groups.google.com/forum/#!forum/cmtk-users). For now, a solution might be to manually edit the white matter mask, and use it as a custom mask.
Best regards, David Romascano
Hi, thanks first for all the effort and ongoing work; this will be a great tool. I am getting errors in the workflow and am trying to understand how it works. I am processing diffusion DSI for this one.
I get errors in several steps. I think it already crashes in the first one, but it keeps running; I sometimes see errors appearing, but the process doesn't stop. The main error is in motion_correction, and judging from the time stamps, it took 3 hours to end and tell me that there is an error. Does that step take that long? The error tells me that some file doesn't exist, inside "NIPYPE/diffusion_pipeline/preprocessing_stage/motion_correction/0x_2e57599550ab... _unfinished.json"
In segmentation I use existing FreeSurfer data, and Lausanne2008 in parcellation.
Some guidance? Thanks! Marc