spinoza-centre / pRFline

Repository for the pRF in line-scanning project
MIT License

Meeting 04-04-2022 #13

Closed gjheij closed 2 years ago

gjheij commented 2 years ago

Voxel selection

The main concern for me was the problem of averaging runs. Motion affects which voxels we're actually targeting. Translations out of the slice plane are invisible to us, so we currently cannot correct for them. Rotation in the slice plane can be corrected somewhat by manually aligning the slices, and motion in the line direction can be "corrected" by aligning intensity profiles of, e.g., the tissue segmentation (see #11).

We're mostly interested in a particular patch of cortex consisting of about 6-7 voxels. One run may cover exactly these voxels, but the next run can contain more (or different) voxels due to motion. This makes averaging tricky: which voxels do we select? Moreover, this process means we're selecting an ROI a priori. SD suggested selecting the ROI later instead: first, we average the tissue segmentations across runs, then select GM voxels based on a given threshold. This ensures we only select GM voxels that are present in all runs. Basically, we select the ROI a posteriori.
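The a-posteriori selection SD suggested can be sketched as follows (a minimal numpy sketch; the function name, array shapes, and the toy numbers are illustrative assumptions, not the pRFline implementation):

```python
import numpy as np

def select_gm_voxels(gm_probs_per_run, threshold=0.7):
    """Average GM-probability profiles across runs; keep voxels whose
    mean probability exceeds the threshold (the a-posteriori ROI)."""
    avg = np.mean(np.stack(gm_probs_per_run), axis=0)
    return np.where(avg >= threshold)[0], avg

# toy example: two runs, three voxels along the line
run1 = np.array([0.90, 0.80, 0.20])
run2 = np.array([0.80, 0.75, 0.10])
idx, avg = select_gm_voxels([run1, run2])
print(idx)  # → [0 1]: only voxels that are GM-like in both runs survive
```

Because the thresholding happens on the run-averaged probabilities, a voxel that drifts out of GM in one run is excluded automatically, which is exactly what makes averaging across runs safe.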

To do

gjheij commented 2 years ago

Update 06-04

The commit above implements the discussed voxel selection strategy. Basically, we take the manually aligned tissue probabilities from all runs, average them, and create a tissue classification by setting thresholds ourselves (blue: the average probabilities across runs with shaded standard deviation; thresholds manually set at 0.7, but they can be specified separately for WM/GM/CSF; grey: the voxels surviving the thresholding). This ensures consistent voxel selection across runs:

sub-003_ses-4_desc-tissue_classification

So, for a single subject, a complete preprocessing step would look like this:

# fetch all the files for some runs; in this case run-4/5/6 (we exclude run-2 because it's incomplete)
run_files = utils.get_file_from_substring([f"sub-{sub}", f"ses-{ses}", f"{task}"], func_dir, exclude="run-2")

# from run_files, select the functional files ending with .mat
func_file = utils.get_file_from_substring("bold.mat", run_files)

# from run_files, select the single slice (without OVS) images
ref_slices = utils.get_file_from_substring([f"sub-{sub}", f"ses-{ses}", "acq-1slice", ".nii.gz"], anat_dir, exclude="run-2")

# get the registration matrix mapping ses-1 (not FreeSurfer!) to the first high resolution multi-slice anatomy
trafo = utils.get_file_from_substring(f"ses{ses}_rec-motion1", opj(deriv_dir, 'pycortex', f"sub-{sub}", 'transforms'))

# get the manually created registration matrix mapping the first single slice image to the run-specific single slice image
trafo_run = utils.get_file_from_substring(".txt", trafo_dir)

# plop everything in Dataset
data_obj = dataset.Dataset(func_file,
                           verbose=True,
                           acompcor=True,
                           ref_slice=ref_slices,
                           ses1_2_ls=trafo,
                           run_2_run=trafo_run,
                           voxel_cutoff=300, # voxels lower than this are not considered for aCompCor
                           save_as=opj(anat_dir, f"sub-{sub}_ses-{ses}"))

After preprocessing (filtering, standardizing, and aCompCor) each run, the probability maps are averaged, a new voxel classification is made, and a separate dataframe with GM voxels is created (data_obj.gm_df). We can also specify a range in which to look for voxels, creating a dataframe with just the voxels in the vicinity of our ribbon. For instance, given the range [355,375], it looks for GM voxels (based on the new classification) within that range and creates a ribbon dataframe (data_obj.ribbon_df). The latter is very compatible with the Nideconv fitting (especially the plotting part), as it doesn't contain that many voxels.
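The ribbon restriction amounts to intersecting the classified GM voxels with a voxel-index window. A hypothetical sketch (ribbon_voxels and the toy indices are assumptions for illustration, not the Dataset API):

```python
def ribbon_voxels(gm_voxels, vox_range):
    """Keep only GM voxel indices that fall inside the ribbon window."""
    lo, hi = vox_range
    return [v for v in gm_voxels if lo <= v <= hi]

# toy GM classification along the line, windowed to the ribbon range
gm = [120, 356, 360, 371, 400]
print(ribbon_voxels(gm, [355, 375]))  # → [356, 360, 371]
```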

As far as the chunk of code above is concerned, not much has changed, really. The main change was keeping track of the run-specific segmentations and adding a separate function for the averaging/voxel selection.

Scan session 06-04

I scheduled a scan session today, but unfortunately the scanner decided not to fully cooperate. First, I positioned the subject too low in the coil, resulting in power optimization issues. After fixing the subject's position in the coil with Diederick, we still got the same error and decided to turn off the RF power optimization and pick-up coil optimization. This worked, in the sense that the scanner proceeded with scanning. The images, however, were awful: both those acquired with the volume coil (e.g., the low-resolution anatomy) and the surface coil images (partial anatomy, slices). Moreover, by the time we finally got scanning, time ran out, so I had to take the subject out. No functional data from this session. I do have physiology, but for some reason these files are still empty after exporting with gtpackngo..? Gianfranco is getting decent files, so is this a Classic-specific issue?

PS: notes from Luisa about power optimization: 9f3fbed9-6b82-406f-b934-1f566bb87d2f