mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

beamformer group level (GSoC) #5195

Closed TommyClausner closed 5 years ago

TommyClausner commented 6 years ago

Hey there MNE community!

For the next ~3 months I'm going to work on implementing some necessary functions for performing group statistics on volumetric (beamformer) data. This will be my Google Summer of Code Project and in case you're interested you could check out the progress here: https://blogs.python-gsoc.org/tommy-clausner/

To maximize the outcome, it would be nice to involve a broad audience from the MNE community. Any broader suggestions or feedback related to the project are very welcome here, while I will open separate issues for separate steps as appropriate (if that's alright?).

The final goal of the project would be to implement

- warping of volumetric grid spaces to another
- making pseudo-individual anatomical MRIs based on head-shapes
- shape the respective output desirably (would be a major thing to be discussed)
- (get into the plotting)

in order to prepare single-subject data for group-level analyses / statistics.

My plan was to start by surveying existing toolboxes (MNE, FreeSurfer, FSL, ANTsPy) to see what can be reused and how, so any experience or hints are very welcome! Afterwards I'd like to write some alpha-stage functions to do the warping in the style of beamformerData.morph('MNI'), e.g. to morph a grid space to MNI space. Finally, the output would be shaped so that it fits into the existing API. Bonus: if time allows I'd like to get into the respective plotting functionality as well!
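To make the proposed call style concrete, here is a minimal sketch of what a `beamformerData.morph('MNI')`-style API could look like. All names here (`BeamformerData`, the `morph` signature) are hypothetical illustrations, not existing MNE API; only a 4x4 affine is applied, standing in for a real registration.

```python
import numpy as np

class BeamformerData:
    """Toy stand-in for a volumetric source estimate (hypothetical)."""

    def __init__(self, points, values):
        self.points = np.asarray(points, float)  # (n, 3) grid locations
        self.values = np.asarray(values, float)  # one value per grid point

    def morph(self, target, affine=None):
        """Map grid points toward ``target`` space via a 4x4 affine.

        ``target`` is kept only for the API sketch; a real implementation
        would look up or estimate the transform to that space.
        """
        if affine is None:
            affine = np.eye(4)  # identity: no real registration here
        homog = np.hstack([self.points, np.ones((len(self.points), 1))])
        moved = (homog @ affine.T)[:, :3]
        return BeamformerData(moved, self.values)

grid = BeamformerData([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]], [1.0, 2.0])
mni = grid.morph('MNI')  # identity affine: points unchanged, values carried over
```

The key design point is that the data values travel with the grid points, so downstream group statistics can assume a common space.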

Over the next few days I'll dig a bit deeper into the MNE workflow / ecosystem and will happily discuss any points that already arise.

A more detailed to-do list will follow soon, but feel free to drop any more general comments / suggestions / complaints / wishes / questions / etc. in here.

For now this issue serves as a kind of "Hello World" issue and the project's wish list :)

Looking forward to working with you!

Cheers,

Tommy

larsoner commented 6 years ago

> I will open separate issues for separate steps as appropriate (if that's alright?).

Agreed, this is where we can converge on API, etc.

> warping of volumetric grid spaces to another

When you say warping do you mean an affine transformation, or something else? Would this be based on coregistering the MRIs (based on brain shape)? I'm out of my depth on this a bit, but yes please start an issue to discuss this. It might help if you start by looking at how setup_volume_source_space is used in examples, and the MRI interpolator it already provides. This is what most of us will be familiar with.
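To illustrate the simpler of the two options larsoner raises: an affine coregistration is a single 4x4 matrix applied to every point of a regular volumetric grid (like the grid that `setup_volume_source_space` defines). This is a plain NumPy illustration, not MNE code; the spacing and the example affine are made up.

```python
import numpy as np

pos = 5.0  # grid spacing in mm, as in a 5 mm volumetric source space
axis = np.arange(0.0, 15.0, pos)  # three positions per axis -> 27 points
grid = np.stack(np.meshgrid(axis, axis, axis, indexing='ij'), -1).reshape(-1, 3)

# Example affine: uniform 1.1 scaling plus a 10 mm translation along x.
affine = np.eye(4)
affine[:3, :3] *= 1.1
affine[0, 3] = 10.0

# Apply in homogeneous coordinates: every grid point moves by the same rule.
homog = np.hstack([grid, np.ones((len(grid), 1))])
warped = (homog @ affine.T)[:, :3]
```

A nonlinear warp would instead move each point by a spatially varying amount, which is exactly what a single affine cannot express.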

> making pseudo-individual anatomical MRIs based on head-shapes

mne coreg already allows this to be done, so feel free to try it and see if it's sufficient for your use cases. In the last few weeks we added support for transforming the MRI files (in addition to the FreeSurfer surfaces) by tweaking the affine.

We do have spherical warping implemented in principle, but in practice it would involve a bit of work (especially in terms of the MRI warping) to get it to work. You are welcome to tackle this if you are motivated to do it!

The current to-do list for mne coreg lives in #3934. So if it seems deficient feel free to comment there and once we converge I can add more to the top-level TODO list.

> shape the respective output desirably (would be a major thing to be discussed)

I have no idea what you mean :) Shape as in spatially deform, or shape as in ndarray.shape, shape as in "craft" / design an API, or ... ?

> (get into the plotting)

Feel free to open an issue for this, too. So far we have used nilearn plotting, and it would be great if we could continue to do that, or extend what they have available. If you want something to start with, take a look at #4496 (and feel free to take over if you have time!)

agramfort commented 6 years ago

I would also list and experiment with these options:

http://nipy.org/dipy/examples_built/affine_registration_3d.html
http://nipy.org/dipy/examples_built/syn_registration_3d.html

TommyClausner commented 6 years ago

@larsoner Basically it needs to be a non-linear transformation, because subjects' brains are arbitrarily different. So I guess it would make sense to first do an affine transformation to get subject spaces roughly aligned and then do the nonlinear transform.

My idea would have been to morph the individual T1s to some reference T1 and use the obtained linear+nonlinear transformations to shift (morph) the source space(s). I opened #5208 to discuss this further ;)
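The two-step idea above (affine pre-alignment, then a nonlinear warp applied to the source points) can be sketched with NumPy/SciPy alone. The displacement field here is all zeros, standing in for a real SyN-style field that would be estimated from the T1s (e.g. with dipy); everything else is illustrative, not project code.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Two example source-space points in individual-subject coordinates.
points = np.array([[2.0, 2.0, 2.0], [6.0, 6.0, 6.0]])

# Step 1: affine pre-alignment (here just a 1 mm translation along x).
affine = np.eye(4)
affine[:3, 3] = [1.0, 0.0, 0.0]
homog = np.hstack([points, np.ones((len(points), 1))])
aligned = (homog @ affine.T)[:, :3]

# Step 2: nonlinear warp stored as a dense displacement field on a 3D grid,
# sampled at each (pre-aligned) source point. Zero field -> no extra motion.
axis = np.arange(0.0, 10.0, 1.0)
disp = np.zeros((len(axis), len(axis), len(axis), 3))
interp = RegularGridInterpolator((axis, axis, axis), disp)
morphed = aligned + interp(aligned)
```

The same two transforms estimated from the T1 registration could then be reused to move the source grid, which is the reuse Tommy describes.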

Thanks also for the hint with mne coreg, I'll check it out!

> I have no idea what you mean :) Shape as in spatially deform, or shape as in ndarray.shape, shape as in "craft" / design an API, or ... ?

... hm ...

Yes, I meant what the output should be in terms of the API: saving as .nii, returning an ndarray, returning an internal NIfTI, VolSource, etc.

@agramfort Thanks for the hint with dipy! It helped a lot (see #5208)

larsoner commented 6 years ago

> My idea would have been to morph the individual T1s to some reference T1 and use the obtained linear+nonlinear transformations to shift (morph) the source space(s). I opened #5208 to discuss this further ;)

This is similar to what we do with surface source estimates where we morph each subject's data to (typically) fsaverage.

Let's proceed with #5208 and then see what sort of API we should use. Right now surface source estimates (SourceEstimate) have a .morph method. We could eventually add this for VolSourceEstimate as well. Source estimates for mixed (surf+vol) source spaces can use both methods under the hood.
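The "both methods under the hood" idea can be sketched as simple delegation. Everything below is a hypothetical illustration of the API direction, not existing MNE code: the class name mirrors the mixed-source-space discussion, and the two morph helpers are placeholders for the surface morph that exists today and the volumetric morph this project would add.

```python
import numpy as np

def morph_surface(data):
    """Placeholder for the existing surface-based morph."""
    return np.asarray(data, float)

def morph_volume(data):
    """Placeholder for the proposed volumetric (beamformer) morph."""
    return np.asarray(data, float)

class MixedSourceEstimate:
    """Toy mixed (surf+vol) estimate that delegates to both morphs."""

    def __init__(self, surf_data, vol_data):
        self.surf_data = np.asarray(surf_data, float)
        self.vol_data = np.asarray(vol_data, float)

    def morph(self, subject_to='fsaverage'):
        # Each part is morphed by its own machinery; the user sees one call.
        return MixedSourceEstimate(
            morph_surface(self.surf_data),
            morph_volume(self.vol_data),
        )

stc = MixedSourceEstimate([1.0, 2.0], [3.0])
out = stc.morph()
```

The user-facing benefit is a single `.morph` call regardless of source-space type, with the dispatch hidden inside the estimate class.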