This is a good task for people without access to GPUs.
Existing work in the Naselaris lab has built voxel-to-voxel mapping models that extract key dimensions of variance corresponding to the signal in the stimulus. Here we want to map vision voxels to imagery voxels; the same approach could also map vision activity patterns to other vision activity patterns, effectively creating a denoising autoencoder for vision, but that is likely a separate research direction.
The task is to use the NSD-Imagery data to build, in a cross-validated fashion (train on 5 or 11, test on 1), a denoising model that produces denoised imagery betas that may be easier for the decoding model to read.
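The "train on 5 or 11, test on 1" scheme reads as a leave-one-out loop. Below is a minimal sketch of that split; whether the held-out unit is a stimulus, a session, or something else depends on how the NSD-Imagery betas are grouped, and the function name is hypothetical.

```python
def loo_splits(n_items):
    """Yield (train_indices, test_indices): train on n_items - 1, test on 1."""
    for held_out in range(n_items):
        yield [i for i in range(n_items) if i != held_out], [held_out]
```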
Training Steps:
Create a linear model to map vision activity patterns to imagery activity patterns of the same stimulus
Run PCA on the outputs of this model (which are vision activity patterns mapped to imagery activity patterns in the brain)
Reconstruct imagery trials with the minimum number of principal components needed to maintain the accuracy of the full model (a sketch of these training steps follows this list).
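A sketch of the training steps, under several assumptions: Ridge regression stands in for "a linear model", the variable names are hypothetical, vision and imagery betas are (n_trials, n_voxels) arrays with rows matched by stimulus, and the correlation-with-the-mapped-patterns criterion and the 0.95 tolerance are stand-ins for "maintain the accuracy of the full model".

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

def fit_denoiser(vision_train, imagery_train, n_components=50, tol=0.95):
    """Training sketch: linear vision-to-imagery map, PCA on its outputs,
    then pick the smallest number of components that preserves accuracy."""
    # Step 1: linear model mapping vision activity patterns to imagery
    # activity patterns of the same stimulus (Ridge is one choice).
    mapper = Ridge(alpha=1.0)
    mapper.fit(vision_train, imagery_train)

    # Step 2: PCA on the model's outputs (vision patterns mapped into
    # imagery space), which should emphasize stimulus-driven variance.
    mapped = mapper.predict(vision_train)
    pca = PCA(n_components=min(n_components, *mapped.shape))
    pca.fit(mapped)

    # Step 3: reconstruct the imagery trials from the leading k components
    # and keep the smallest k whose reconstructions stay within `tol` of the
    # full reconstruction's agreement with the mapped (signal) patterns.
    scores = pca.transform(imagery_train)

    def agreement(k):
        recon = scores[:, :k] @ pca.components_[:k] + pca.mean_
        return np.mean([np.corrcoef(r, m)[0, 1] for r, m in zip(recon, mapped)])

    full = agreement(pca.n_components_)
    k_min = next((k for k in range(1, pca.n_components_ + 1)
                  if agreement(k) >= tol * full), pca.n_components_)
    return mapper, pca, k_min
```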
Testing Steps:
Reconstruct the held-out imagery activity patterns using only the principal components identified in the training step (see the sketch after this list).
Pass this "denoised" imagery activity pattern through a mindeye decoder (can be done later if no GPU).