Closed Caselles closed 7 months ago
Hello,
Regarding applying exactly the same pre-processing: that is a very high bar to clear. The preprocessing performed in NSD is quite custom/tailored, so practically speaking it would require substantial effort to replicate exactly. That being said, the code used is indeed available...
If you wanted to "more or less" replicate the pre-processing to the level of what is practically important, I think that is a much easier bar to cross. Many flavors of pre-processing out there are not that different from what NSD involved. It's just that there are details to consider, and maybe you care a lot about the details?
The NSD paper describes in full detail exactly what was done. For the most part you can think about the analysis in terms of preprocessing (e.g. head motion, spatial distortion, registration, etc.) vs. analysis (e.g. the time-series analysis that we implemented using GLMsingle). Depending on what you care about, maybe you are more concerned with the time-series analysis?
Thanks for the prompt answer!
Thank you for pointing out the difficulty of exactly replicating the pre-processing. I wasn't sure whether it would be that difficult, and hearing it from you definitely confirms that it is not the way to go for me.
In my research, I'm trying the following:
Hence, what do you recommend in terms of pre-processing steps? This is what I had in mind:
Let me know what you think!
Thanks for the context...
Hence, what do you recommend in terms of pre-processing steps? This is what I had in mind:
- motion correction
- spatial distortion correction
- co-registration
- compute a GLM with a simple model like Glover, with a design matrix of the events in my sequence related to my tasks
- compute the betas for each event
- mask the betas using a mask similar to the nsdgeneral mask
- use that as input to my model
The steps you describe here sound generally fine. (Except that I don't quite know what "glover" is...)
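For concreteness, the event-wise beta estimation step could be sketched roughly as below. This is a minimal numpy illustration, not the actual NSD/GLMsingle code: it assumes "glover" means the canonical double-gamma HRF (Glover 1999 parameters), that motion/distortion correction and co-registration have already been applied, and all function names, shapes, and parameter values are placeholders.

```python
import numpy as np

def glover_hrf(t):
    # Canonical double-gamma HRF (Glover 1999 parameters): a positive
    # response peak minus a weighted undershoot term.
    a1, a2, b1, b2, c = 6.0, 12.0, 0.9, 0.9, 0.35
    d1, d2 = a1 * b1, a2 * b2  # times-to-peak of the two gammas
    return ((t / d1) ** a1 * np.exp(-(t - d1) / b1)
            - c * (t / d2) ** a2 * np.exp(-(t - d2) / b2))

def event_design_matrix(onsets_s, n_scans, tr):
    # One column per event: a delta at the event onset, convolved with
    # the HRF on a fine time grid, then sampled at each TR.
    hires = 0.1  # seconds; upsampled grid for the convolution
    hrf = glover_hrf(np.arange(0.0, 32.0, hires))
    n_hires = int(n_scans * tr / hires)
    X = np.zeros((n_scans, len(onsets_s)))
    for j, onset in enumerate(onsets_s):
        stim = np.zeros(n_hires)
        stim[int(onset / hires)] = 1.0
        conv = np.convolve(stim, hrf)[:n_hires]
        X[:, j] = conv[(np.arange(n_scans) * tr / hires).astype(int)]
    return X

def fit_betas(Y, X):
    # Ordinary least squares: one beta per event (row) per voxel (column).
    # Y is (n_scans, n_voxels); X is (n_scans, n_events).
    return np.linalg.lstsq(X, Y, rcond=None)[0]
```

After estimating the betas, masking is just row/column selection, e.g. `betas[:, mask_flat]` for a flattened boolean mask such as one derived from nsdgeneral. A real analysis would also want nuisance regressors (drift, motion parameters) in the design matrix.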
I guess, one question to consider is the space that your data will be prepared in. Like, native volume space, or surface space, or some group space like fsaverage or MNI. Since you are trying to bridge different subjects and datasets, presumably, fsaverage is probably a good choice.
But if you want to lean heavily on the idea you described of mapping to some latent space, then I guess the preprocessed space won't really matter (but then, of course, the details of your mapping procedure are important to think carefully about).
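As a toy illustration of that kind of mapping procedure: one simple, common choice is a linear map from each subject's voxel betas into a shared latent space, fit with ridge regression. The sketch below assumes per-trial betas are already computed and that latent targets (e.g. image embeddings) exist for each trial; the function name and shapes are hypothetical.

```python
import numpy as np

def fit_ridge_map(V, Z, lam=1.0):
    # V: (n_trials, n_voxels) subject betas.
    # Z: (n_trials, latent_dim) shared latent targets for the same trials.
    # Closed-form ridge regression: W = (V^T V + lam*I)^{-1} V^T Z,
    # so that V @ W approximates Z. lam controls shrinkage.
    n_vox = V.shape[1]
    return np.linalg.solve(V.T @ V + lam * np.eye(n_vox), V.T @ Z)
```

Because each subject gets its own W mapping into the common latent space, the voxel space the betas live in (native, fsaverage, MNI) matters less, as long as it is consistent within a subject.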
Thank you for your answers, I'll be working on this.
I'll close the issue for now, and maybe re-open it if there are further discussions to have :)
Thanks again for answering promptly, and congrats again on the great work you did.
👍
Hello,
First of all thanks for the great work! The progress we are seeing in the field of fMRI-to-Image is a direct consequence of your work, so big congrats.
I'm working on a research project where we do our own scanning sessions, with a similar protocol but for different tasks. The scans are done on a 3T MRI.
I wonder what the best practice is to apply exactly the same pre-processing as the NSD paper, but on my own scanning sessions. I don't see why this couldn't be done in theory, but in practice I would like your opinion and advice on how to do it and avoid mistakes.
The goal for me would be to go from my raw NIfTI sessions to a dataset of pre-processed (masked, beta-estimated, etc.) voxels for each event of my protocol.
I hope you'll have a bit of time to provide some advice; otherwise that's completely OK, and don't hesitate to close the issue.
Thanks again!