Open WilliamFCB opened 1 year ago
Hi @WilliamFCB Thanks for bringing this up, it's something that could use some discussion.
We typically rsync all the imageqc.json files to a laptop and use dmriprep_viewer on those files. It only needs those json files to function and web browsers on clusters are painful to use.
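For example (a sketch of that collection step, run local-to-local here so it is self-contained; in practice the source would be the cluster, e.g. `user@cluster:/path/to/derivatives/qsiprep/`, and the filename pattern may need adjusting to your outputs):

```shell
# Toy derivatives tree standing in for the cluster-side qsiprep output
mkdir -p derivatives/qsiprep/sub-01/dwi
echo '{}' > derivatives/qsiprep/sub-01/dwi/sub-01_imageqc.json
touch derivatives/qsiprep/sub-01/dwi/sub-01_dwi.nii.gz

# -m prunes directories that end up empty; the include/exclude filters keep
# only the imageqc.json files while preserving the directory layout
rsync -avm \
    --include='*/' --include='*imageqc.json' --exclude='*' \
    derivatives/qsiprep/ local_qc/
```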
Indeed, I would be interested to know what criteria you employ when evaluating the quality of processed data. Cheers
Hi Matt,
Some thoughts on and questions regarding the QC evaluation of qsiprep processed data. I was wondering if the carpet plot provided by qsiprep could be updated to the color scheme/format used in the qc.pdf generated by eddy_quad, which gives a clearer picture of the problem volumes/slices. Alternatively, one could maybe use the one provided by dmriprep_viewer (see pictures below; I noticed that the latter displays the order of slices differently than qsiprep/eddy_quad, which both start with 0 at the top).
At the moment, I extract the following QC data from qsiprep (I also calculate the median, std, min, max, and CV of the measures) as well as the eddy_quad QC measures. For the latter, I save/prevent deletion of the eddy-related data using the nipype.cfg approach (see my qsiprep run script below). I wrote a bash script to extract the eddy QC measures from the qc.json generated by eddy_quad (see the end of this post). Not very pretty, but it does the job ;-) Here, I use synthstrip on topup_imain_corrected_avg_brain.nii.gz to get a better-fitting brain mask for use in eddy_quad.
I tried dmriprep_viewer. However, the motion- and distortion-corrected images are way too small (at least in Firefox on our cluster) to evaluate whether the corrections were successful. Also, the middle slices do not allow discerning more localized problem areas such as orbitofrontal regions.
I now loop through the data using fsleyes (not ideal). First, I loop through datasets with >0 raw_num_bad slices. Next, I go through datasets with an fd_max > voxel size. For the remaining data, I go through the carpet/eddy plots and only visually check images if there are potentially problematic movement patterns. For this, I created an HTML page with only the carpet plots for all subjects. If I can find the time, I will look in more detail at how the visual evaluation compares to the extracted QC measures. At first glance, the latter does not always seem straightforward when problems (bad slices, outliers, large fd_mean/max, etc.) are not very pronounced.
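As a sketch of that triage, assuming the extracted measures have already been collected into one CSV with (hypothetical) subid, raw_num_bad and fd_max columns; the column names/positions and the voxel size must be adapted to your own extraction output:

```shell
# Assumed acquisition voxel size in mm (placeholder value)
voxel_size=1.8

# Toy QC summary standing in for the real per-subject measures
cat > qc_summary.csv <<'EOF'
subid,raw_num_bad,fd_max
sub-01,0,0.4
sub-02,3,0.9
sub-03,0,2.1
EOF

# Pass 1: subjects with >0 raw bad slices
awk -F, 'NR>1 && $2>0 {print $1, "bad slices:", $2}' qc_summary.csv

# Pass 2: subjects whose fd_max exceeds the voxel size
awk -F, -v vox="${voxel_size}" 'NR>1 && $3>vox {print $1, "fd_max:", $3}' qc_summary.csv
```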
Anyway, I was wondering what your preferred QC procedure is.
running qsiprep while preserving eddy output
#!/bin/bash
# Purpose: Running qsiprep with saving eddy related output
# To get all eddy output for QUAD and SQUAD qc measures,
# generate a nipype.cfg file with the following two lines (excluding the #):
# [execution]
# remove_unnecessary_outputs=false
# Place the cfg file in your home directory in: ~/.nipype
# Create the directory if necessary
#SBATCH -J qsi_GPU
#SBATCH --cpus-per-task=3
#SBATCH --mem-per-cpu=8GB
#SBATCH --time=24:00:00
#SBATCH --output logs/slurm-%j.txt
#SBATCH --partition=HPC
#SBATCH --gres=gpu:1
slurmid=${SLURM_ARRAY_TASK_ID}
sub=${1}
ses=${2}
BIDS_DIR=${3}
RAWDATA_DIR=${BIDS_DIR}/rawdata
SCRIPTS_DIR=${4}
site=${5}

qsiprep_sif="/mnt/depot64/singularity/site/qsiprep-0.16.1.sif"
DERIV_DIR="/mnt/scratch/VIA_BIDS/qsiprep/${site}_derivatives_EDDY_output"
LICENSE_DIR="/mnt/depot64/freesurfer/freesurfer.6.0.0"
WORK_DIR="/mnt/scratch/VIA_BIDS/qsiprep/qsiprep_workEDDY${site}"

if [ ! -e ${WORK_DIR} ]; then
    mkdir -p ${WORK_DIR}
fi

if [ ${site} == "DRCMR" ]; then
    res="1.8"
elif [ ${site} == "CFIN" ]; then
    res="2"
fi
BIDS_DIR_TMP="/mnt/scratch/VIA_BIDS/qsiprep/bidsqsiprep${site}_tmp"
echo $sub $ses $RAWDATA_DIR $BIDS_DIR_TMP $SCRIPTS_DIR $WORK_DIR ${slurmid} ${site}
echo module purge
echo module load singularity/3.11.1-ce
echo module load cuda/11.2
module purge
module load singularity/3.11.1-ce
module load cuda/11.2

log=logs/slurm.qsiprep.${sub}.${ses}-gpu_EDDY_output.log
mv logs/slurm-${SLURM_JOBID}.txt $log

# RUN QSIPREP ORIGINAL RESOLUTION
export SINGULARITYENV_ANTS_RANDOM_SEED=999
DERIV_DIR_ORIGRES="${DERIV_DIR}/qsiprep_origres"
if [ ! -e ${DERIV_DIR_ORIGRES} ]; then
    mkdir -p ${DERIV_DIR_ORIGRES}
fi

time singularity run --cleanenv --contain --nv \
    -B ${LICENSE_DIR} \
    -B ${BIDS_DIR_TMP}/:/data_in \
    -B ${RAWDATA_DIR}/:${RAWDATA_DIR} \
    -B ${SCRIPTS_DIR}/:${SCRIPTS_DIR} \
    -B ${DERIV_DIR_ORIGRES}/:/data_out \
    -B ${WORK_DIR}/:/work \
    -B /tmp:/tmp \
    -B /lib:/drcmrlib \
    -B /mrhome/wimb/.nipype/ \
    --env "LD_LIBRARY_PATH=/drcmrlib:\$LD_LIBRARY_PATH" \
    ${qsiprep_sif} \
    /data_in \
    /data_out \
    participant \
    -w /work \
    --participant-label ${sub}${ses/-/} \
    --skip_bids_validation \
    --use-plugin ${SCRIPTS_DIR}/tmp/qsiprepplugin/${sub}${ses}_qsiprep_plugin.yml \
    --fs-license-file ${LICENSE_DIR}/license.txt \
    --unringing-method mrdegibbs \
    --dwi-denoise-window 5 \
    --output-resolution ${res} \
    --hmc-model eddy \
    --eddy_config ${SCRIPTS_DIR}/tmp/eddyparams/${sub}${ses}_eddy_params.json \
    --output-space T1w \
    --nthreads ${SLURM_CPUS_PER_TASK} \
    -vv
# RUN QSIPREP HIGH RESOLUTION
export SINGULARITYENV_ANTS_RANDOM_SEED=999
DERIV_DIR_HIGHRES="${DERIV_DIR}/qsiprep_highres"
if [ ! -e ${DERIV_DIR_HIGHRES} ]; then
    mkdir -p ${DERIV_DIR_HIGHRES}
fi

time singularity run --cleanenv --contain --nv \
    -B ${LICENSE_DIR} \
    -B ${BIDS_DIR_TMP}/:/data_in \
    -B ${RAWDATA_DIR}/:${RAWDATA_DIR} \
    -B ${SCRIPTS_DIR}/:${SCRIPTS_DIR} \
    -B ${DERIV_DIR_HIGHRES}/:/data_out \
    -B ${WORK_DIR}/:/work \
    -B /tmp:/tmp \
    -B /lib:/drcmrlib \
    -B /mrhome/wimb/.nipype/ \
    --env "LD_LIBRARY_PATH=/drcmrlib:\$LD_LIBRARY_PATH" \
    ${qsiprep_sif} \
    /data_in \
    /data_out \
    participant \
    -w /work \
    --participant-label ${sub}${ses/-/} \
    --skip_bids_validation \
    --use-plugin ${SCRIPTS_DIR}/tmp/qsiprepplugin/${sub}${ses}_qsiprep_plugin.yml \
    --fs-license-file ${LICENSE_DIR}/license.txt \
    --unringing-method mrdegibbs \
    --dwi-denoise-window 5 \
    --output-resolution 1 \
    --hmc-model eddy \
    --eddy_config ${SCRIPTS_DIR}/tmp/eddyparams/${sub}${ses}_eddy_params.json \
    --output-space T1w \
    --nthreads ${SLURM_CPUS_PER_TASK} \
    -vv
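For completeness, the nipype.cfg mentioned in the header comments can be written with a couple of lines (same two config lines and ~/.nipype location as described above):

```shell
# Create ~/.nipype and drop in the config that stops nipype from deleting
# the eddy outputs needed by eddy_quad/eddy_squad
mkdir -p ~/.nipype
cat > ~/.nipype/nipype.cfg <<'EOF'
[execution]
remove_unnecessary_outputs=false
EOF
```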
bash script for extracting eddy qc measures
#!/bin/bash
# Script to derive eddy_quad qc measures from qsiprep processed data
# This script uses synthstrip to create a more optimal brain mask for extracting
# eddy snr and cnr measures (see https://surfer.nmr.mgh.harvard.edu/docs/synthstrip/)
# The section extracting outlier data below needs to be adjusted in case you have more than one b-shell

[ "$1" == "" ] && echo "Please specify a site e.g. DRCMR or CFIN" && exit
[ "$2" == "" ] && echo "Please specify a qsiprep output resolution e.g. highres, origres" && exit
site=${1}
res=${2}
BIDS_DIR="/mnt/projects/VIA_BIDS/BIDSMRI${site}"
SCRIPTS_DIR="/mnt/projects/VIA_BIDS/scripts/qsiprep"
DERIV_DIR="/mnt/scratch/VIA_BIDS/qsiprep/${site}_derivatives_EDDYoutput/qsiprep${res}/qsiprep"
WORK_DIR="/mnt/scratch/VIA_BIDS/qsiprep/qsiprep_workEDDY${site}"

module load fsl/6.0.5
module load singularity/3.11.1-ce

if [ ${site} == "DRCMR" ]; then
    ses="ses-01DRCMRprisma"
elif [ ${site} == "CFIN" ]; then
    ses="ses-01CFINskyra"
fi

#for i in ${DERIV_DIR}/sub-via504*${ses/-}
for i in ${DERIV_DIR}/sub*${ses/-}
do
cd ${OUT_QC_DIR}
# define output csv filename
qc_csv="${OUT_QC_DIR}/qc_measures.csv"
# write header line to output csv
# no indent here as otherwise, echo produces spaces after a comma
echo subid,sessionid,b0_mean_snr,b1_mean_cnr,b0_std_snr,b1_std_cnr,\
perc_outliers_tot,perc_outliers_pe,perc_outliers_b1,\
mean_v2v_move_mm_abs,mean_v2v_move_mm_rel,vox_displ_stdv,\
mean_v2v_trans_mm_x,mean_v2v_trans_mm_y,mean_v2v_trans_mm_z,\
mean_v2v_rot_deg_x,mean_v2v_rot_deg_y,mean_v2v_rot_deg_z,\
std_v2v_EC_lin_x,std_v2v_EC_lin_y,std_v2v_EC_lin_z,\
mean_wv_std_trans_mm_x,mean_wv_std_trans_mm_y,mean_wv_std_trans_mm_z,\
mean_wv_std_rot_deg_x,mean_wv_std_rot_deg_y,mean_wv_std_rot_deg_z,\
qsiprep_dir,work_dir > ${qc_csv}
# write extracted qc measures to csv
echo ${subid/ses*},${ses},$b0_mean_snr,$b1_mean_cnr,$b0_std_snr,$b1_std_cnr,\
${perc_outliers_tot},${perc_outliers_pe},${perc_outliers_b1},\
${mean_v2v_move_mm_abs},${mean_v2v_move_mm_rel},${vox_displ_std},\
${mean_v2v_trans_mm_x},${mean_v2v_trans_mm_y},${mean_v2v_trans_mm_z},\
${mean_v2v_rot_deg_x},${mean_v2v_rot_deg_y},${mean_v2v_rot_deg_z},\
${std_v2v_EC_lin_x},${std_v2v_EC_lin_y},${std_v2v_EC_lin_z},\
${mean_wv_std_trans_mm_x},${mean_wv_std_trans_mm_y},${mean_wv_std_trans_mm_z},\
${mean_wv_std_rot_deg_x},${mean_wv_std_rot_deg_y},${mean_wv_std_rot_deg_z},\
${i},${WORK_SUB} >> ${qc_csv}
done
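The extraction itself boils down to pulling fields out of qc.json; a minimal sketch with python3 in place of the longer script (the field names qc_cnr_avg and qc_outliers_tot follow eddy_quad's qc.json output, but verify them against your own files; the values below are a toy stand-in):

```shell
# Toy qc.json standing in for real eddy_quad output; field names are
# assumptions based on eddy_quad's qc.json and should be double-checked
cat > qc.json <<'EOF'
{"qc_cnr_avg": [22.1, 3.4], "qc_outliers_tot": 0.8}
EOF

# First entry of qc_cnr_avg is the b0 SNR, later entries are per-shell CNRs
b0_mean_snr=$(python3 -c "import json; print(json.load(open('qc.json'))['qc_cnr_avg'][0])")
b1_mean_cnr=$(python3 -c "import json; print(json.load(open('qc.json'))['qc_cnr_avg'][1])")
perc_outliers_tot=$(python3 -c "import json; print(json.load(open('qc.json'))['qc_outliers_tot'])")
echo "${b0_mean_snr},${b1_mean_cnr},${perc_outliers_tot}"
```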
script to concatenate individual qc_measures.csv files
#!/bin/bash
[ "$1" == "" ] && echo "Please specify a site e.g. DRCMR or CFIN" && exit
[ "$2" == "" ] && echo "Please specify a qsiprep output resolution e.g. highres origres" && exit

site=${1}
res=${2}
BIDS_DIR="/mnt/projects/VIA_BIDS/BIDSMRI${site}"
SCRIPTS_DIR="/mnt/projects/VIA_BIDS/scripts/qsiprep"
DERIV_DIR="/mnt/scratch/VIA_BIDS/qsiprep/${site}_derivatives_EDDYoutput/qsiprep${res}/qsiprep"
OUT_DIR="/mnt/scratch/VIA_BIDS/qsiprep/${site}_derivatives_EDDY_output"

if [ ${site} == "DRCMR" ]; then
    ses="ses-01DRCMRprisma"
elif [ ${site} == "CFIN" ]; then
    ses="ses-01CFINskyra"
fi
outfile="${OUT_DIR}/VIA11${site}qsiprep${res}_eddy_qc_measures.csv"
echo creating ${outfile}
echo ""
counter=0
list=$(ls ${DERIV_DIR}/sub*${ses/-}/eddy_qc/qc/qc_measures.csv)
for i in ${list}
do
    echo ${i}
    echo ""
    echo "counter: $counter"
    echo ""
    if [[ ${counter} == 0 ]]
    then
        echo in
        rm -f ${outfile}
        cp -v ${i} ${outfile}
        chmod 750 ${outfile}
        let counter++
    else
        tail -n +2 $i >> ${outfile}
        let counter++
    fi
done