Closed: tsalo closed this issue 4 years ago.
Is the nilearn dataset only in MNI space? I'd lean toward using data in subject space to compare both methods (original and new).
Yes, it's only in MNI space. I think we need MNI-space data for the first test (original vs. refactor), though. After we move from refactoring to actually improving, we will need both MNI-space and native-space versions of the same data. I think we can probably use any fMRIPrepped (single-echo) data at that point. If you have any on hand, that would be awesome. Otherwise, one of us could download a subject's worth of data from OpenNeuro and run a recent version of fMRIPrep on it.
EDIT: Of course, if anyone has fMRIPrepped data available now, we should use that instead. It just needs the preprocessed data (without AROMA) in both native structural and MNI space, as well as tissue-probability maps. We can try out other methods for deriving our CSF and edge masks (as discussed in #6), but it would be nice to have some ground truth data for the first test.
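For illustration, one way to derive a CSF mask from an fMRIPrep tissue-probability map is to simply threshold it with nilearn. This is only a sketch: the filename and the 0.95 cutoff below are placeholders, not a decision from #6.

```python
from nilearn import image

# Hypothetical fMRIPrep output; the actual filename depends on the dataset.
csf_tpm = "sub-01_space-MNI152NLin2009cAsym_label-CSF_probseg.nii.gz"

# Binarize the CSF tissue-probability map. The 0.95 threshold is only a
# placeholder; the real cutoff should come out of the discussion in #6.
csf_mask = image.math_img("(img > 0.95).astype(int)", img=csf_tpm)
csf_mask.to_filename("csf_mask.nii.gz")
```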
I was thinking we could upload the files to the OSF, since it's so easy to download data for integration tests from there (as we do in tedana). The files may be too large to keep in the repository.
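As a rough sketch of what an OSF-backed test fixture could look like (similar in spirit to what tedana does), assuming the data get uploaded somewhere on OSF; the OSF identifier and filename here are placeholders, not real uploads:

```python
import urllib.request

import pytest

# Placeholder OSF link; a real one would point to the uploaded test files.
OSF_URL = "https://osf.io/abcde/download"


@pytest.fixture(scope="session")
def test_data(tmp_path_factory):
    """Download the test dataset once per test session and return its path."""
    data_dir = tmp_path_factory.mktemp("data")
    out_file = data_dir / "test_data.nii.gz"
    urllib.request.urlretrieve(OSF_URL, str(out_file))
    return out_file
```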
Okay, I see where you're going with this. I agree; we could start with the nilearn dataset and then move on to an fMRIPrepped one.
@smoia might have something? I haven't preprocessed any data with fMRIPrep as of today 😅
Yes 💯 let's keep the repo as simple and small as possible.
This is my first time using AROMA. I'm looking at the nilearn data and I see we don't have all the necessary parameters to run AROMA. Am I missing something?
I thought all we needed was standard space data with motion parameters. What else does AROMA need?
Well, according to the parser, we would need the affine and the warp file too.
How did you run AROMA in the past?
Oh, I think that should be fine. The affine and warp are just there to move data from native to standard space. As long as our data are already in standard space, AROMA should work fine.
The last time I used AROMA directly (which was a while ago), I circumvented the transformation steps. I don't remember if I used custom code to do it though...
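For reference, a minimal sketch of calling the original ICA-AROMA script on data that are already in MNI space, leaving out the registration arguments. The flag names come from the original ICA-AROMA command line, the paths are placeholders, and whether the refactored parser accepts the call without the affine/warp still needs to be checked.

```python
import subprocess

# Placeholder paths; the functional data are assumed to already be in MNI152 space.
cmd = [
    "python", "ICA_AROMA.py",
    "-in", "sub-01_task-rest_space-MNI152_bold.nii.gz",
    "-out", "aroma_out",
    "-mc", "sub-01_task-rest_mcf.par",  # 6-column motion parameter file
    # -affmat and -warp are omitted because no native-to-MNI transform is needed
]
subprocess.run(cmd, check=True)
```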
Okay. I'll give it a try then and see if I manage to write the integration test.
I'm having issues with the installation of the package. Could you please have a look at #13? For some reason the package is not being installed as aroma.
I'm leaning toward using a dataset we can easily download from nilearn, like nilearn.datasets.fetch_development_fmri(), which has (some) adult participants and comes in MNI space with motion parameters. @eurunuela @smoia @CesarCaballeroGaudes do any of you have any other datasets you'd prefer to use?
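For illustration, a minimal sketch of pulling that dataset with nilearn; the number of subjects and the confound handling below are just an example, not a settled choice.

```python
from nilearn import datasets

# Fetch one subject from the development dataset; the data come preprocessed
# in MNI space with an fMRIPrep-style confounds file.
data = datasets.fetch_development_fmri(n_subjects=1, reduce_confounds=False)

func_file = data.func[0]            # preprocessed BOLD in MNI space
confounds_file = data.confounds[0]  # confounds TSV, expected to include motion parameters
print(func_file, confounds_file)
```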