Code to pull the Docker image:

```bash
docker pull pennbbl/xcpengine
```

Code to build the Singularity image:

```bash
sudo singularity build xcpengine_latest.simg docker://pennbbl/xcpengine
```
Looks like the error `/share/apps/singularity-3.5.2//matlab/startup.m does not exist` is an issue with running on the SLURM HPC service. I reran the code on a local machine using the same Singularity image and did not get this error. Could you please offer some advice on how to work around this? Or does this error matter?
Looks like the Singularity binding point should be /data/derivatives rather than /data/derivatives/fmriprep; otherwise, the container cannot find the freesurfer dir.
Looks like there is a naming issue with /freesurfer/fsaverage5, which should be /freesurfer/fsaverage.
I have another question about computing a Pearson correlation adjacency matrix on my own. Can I just extract time series from the <sub>_<run>_img_sm6Std.nii.gz file in the norm dir using nilearn and compute the correlation matrix?
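(Not from the thread, just a minimal nilearn sketch of what this could look like, assuming the sm6Std image is in MNI space and using the Power et al. (2011) coordinates as spheres; the filename and sphere radius are placeholders.)

```python
import numpy as np
from nilearn import datasets
from nilearn.maskers import NiftiSpheresMasker  # nilearn >= 0.9; older versions: nilearn.input_data

# Fetch the Power et al. (2011) MNI coordinates and build spheres around them.
power = datasets.fetch_coords_power_2011()
coords = np.vstack((power.rois["x"], power.rois["y"], power.rois["z"])).T

masker = NiftiSpheresMasker(seeds=coords, radius=5.0, standardize=True)

# Placeholder filename: the smoothed, normalized output from the norm dir.
timeseries = masker.fit_transform("sub-01_task-rest_run-1_img_sm6Std.nii.gz")  # (n_TRs, n_nodes)

# Pearson correlation adjacency matrix across nodes.
corr = np.corrcoef(timeseries.T)  # (n_nodes, n_nodes)
```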
To be more specific, I want to run a dynamic connectivity network analysis. Which is the better option: 1) use the file in the norm dir, split it into time windows, extract the time series, and compute the correlation matrices myself, or 2) simply split the time series into time windows using the ts.1D data in the fcon dir?
In other words, I think my question is whether the time-series extraction that xcpEngine runs is independent at each TR or not. If each TR is independent, then I guess I could just split the time series into different time windows using the ts.1D file, which would be much easier.
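(For illustration only, a sliding-window sketch of option 2, assuming ts.1D is a plain-text matrix with one row per TR and one column per node; check the file's orientation and pick window parameters suited to your data before relying on this.)

```python
import numpy as np

# Placeholder path: a ts.1D file from the fcon/<atlas> directory.
ts = np.loadtxt("sub-01_task-rest_run-1_power264_ts.1D")  # assumed shape: (n_TRs, n_nodes)

window = 30  # window length in TRs (illustrative)
step = 5     # window step in TRs (illustrative)

# One correlation matrix per window: dynamic connectivity as a stack of adjacency matrices.
dynamic_conn = np.stack([
    np.corrcoef(ts[start:start + window].T)
    for start in range(0, ts.shape[0] - window + 1, step)
])
print(dynamic_conn.shape)  # (n_windows, n_nodes, n_nodes)
```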
Hi @jasongong11
Yes, the first issue is that you have new fMRIPrep output that doesn't include fsaverage5 in the FreeSurfer output. We have updated xcpEngine; you can pull the latest version and it will be fine.
You can include `singularity run --cleanenv` to clean the environment before running the container.
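(A sketch of the full invocation, combining `--cleanenv` with the /data/derivatives bind point mentioned earlier in the thread; the design, cohort, and output paths are placeholders to adapt to your setup.)

```bash
singularity run --cleanenv \
  -B /data/derivatives:/data/derivatives \
  xcpengine_latest.simg \
  -d /data/derivatives/fc-36p.dsn \
  -c /data/derivatives/cohort.csv \
  -o /data/derivatives/xcpengine \
  -r /data/derivatives \
  -i $TMPDIR
```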
Remember that <sub>_<run>_img_sm6Std.nii.gz has been spatially filtered after regression (_residualized.nii.gz). I would advise extracting time series in native space, not norm space. You can do it with nilearn or possibly with xcpEngine (check the last paragraph here: https://xcpengine.readthedocs.io/utils/roiquants.html#roi-quantification). There are atlases in sub-*_atlas that have already been registered to native space after running xcpEngine.
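(To make the native-space suggestion concrete, a hedged nilearn sketch using a label atlas from the sub-*_atlas directory together with the residualized image; both filenames are placeholders, since the exact names depend on the atlas and your naming scheme.)

```python
import numpy as np
from nilearn.maskers import NiftiLabelsMasker  # nilearn >= 0.9; older versions: nilearn.input_data

# Placeholder paths: a native-space atlas produced by xcpEngine and the
# native-space residualized image; substitute your actual filenames.
atlas_native = "sub-01_atlas/sub-01_power264.nii.gz"
residualized = "sub-01_task-rest_run-1_residualized.nii.gz"

masker = NiftiLabelsMasker(labels_img=atlas_native, standardize=True)
timeseries = masker.fit_transform(residualized)  # (n_TRs, n_regions)
corr = np.corrcoef(timeseries.T)                 # (n_regions, n_regions)
```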
Hi @a3sha2,
Thank you very much for your response! The issues are gone after following your advice, and I really appreciate your suggestions on extracting time series!
I am still puzzled about extracting time series with xcpEngine. I noticed that after running the basic pipeline, xcpEngine has already extracted time series based on the Power et al. (2011) atlas; the result file is fcon/power264/<sub>_<run>_power264_ts.1D. I wonder if I can just use this file directly for further analysis?
Thanks again! Best, Jason
Yes, it can be used.
**Describe the bug**
The crash happened when running the regress module with Singularity on an HPC.

**Command I used**

**Cohort file**

**Design file** (`.dsn`)

**Error message**

**Runtime information**
I was running Singularity on SLURM.

**Additional context**
I wonder if there is an issue with the design file, or whether something is wrong with the Singularity image I am running.