mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

Morph volumetric source estimate--discrepancy in number of vertices #6265

Closed poojaprabhu9 closed 5 years ago

poojaprabhu9 commented 5 years ago

Describe the problem

We (@virvw and @poojaprabhu9) are trying to morph whole-brain volumetric source estimates to fsaverage (as illustrated in the example). After morphing, the vertex count varies per subject. For example:

subject 1,
<VolSourceEstimate  |  21667 vertices, subject : fsaverage1, tmin : -200.0 (ms), tmax : 1200.0000000000002 (ms), tstep : 4.0 (ms), data size : 21667 x 351>
subject 2,
<VolSourceEstimate  |  21197 vertices, subject : fsaverage1, tmin : -200.0 (ms), tmax : 1200.0000000000002 (ms), tstep : 4.0 (ms), data size : 21197 x 351>

What could possibly have gone wrong? Why am I getting a different number of vertices for every subject after morphing?

Additional context

A snippet of the code is as follows:

import os.path as op

import mne
from mne import setup_volume_source_space, make_forward_solution
from mne.beamformer import make_lcmv, apply_lcmv

src = setup_volume_source_space(listsubj[i], mri=aseg_fname, pos=3.,
                                sphere=sphere, mindist=3.,
                                subjects_dir=subjects_dir)
fname_epochs = (wdir + listsubj[i] +
                "/mne_python/EPOCHS/MEEG_epochs_icacorr_signDT8_" +
                condition[c] + '_' + listsubj[i] + "-epo.fif")
epochs = mne.read_epochs(fname_epochs)
epochs.resample(250, npad='auto', window='boxcar', n_jobs=1, pad='edge',
                verbose=None)
epochs.crop(-0.2,1.2)
picks = mne.pick_types(epochs.info, meg=True, eeg=True, stim=False, eog=False)
evoked = epochs.average(picks=picks)

noise_cov= mne.compute_covariance(epochs, tmin=-0.2, tmax=0.,
                                  method=('shrunk','empirical'))
data_cov= mne.compute_covariance(epochs, keep_sample_mean=True, tmin=0.,
                                 tmax=1.2, method=('shrunk','empirical'))

fname_trans= ('/neurospin/meg/meg_tmp/MTT_MEG_Baptiste/mri/%s/mri/T1-neuromag'
              '/sets/COR.fif') % subj
fname_bem = op.join(subjects_dir, '%s' % subj, 'bem',
                    '%s-5120-5120-5120-bem-sol.fif' % subj)
fwd = make_forward_solution(evoked.info, fname_trans, src, fname_bem,
                            mindist=5.0, meg=True, eeg=True, n_jobs=1)

filters = make_lcmv(evoked.info, fwd, data_cov, reg=0.05,
                    noise_cov=noise_cov, pick_ori='max-power',
                    weight_norm='nai')

stcs = apply_lcmv(evoked, filters, max_ori_out='signed')
morph = mne.compute_source_morph(fwd['src'],
                                 subject_from=subj, subject_to='fsaverage', 
                                 subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stcs)
larsoner commented 5 years ago

During forward calculation, vertices can be removed from the source space. If you look at fwd['src'] for each subject, they will probably already have differing numbers of vertices.

virvw commented 5 years ago

Hi there,

I guess what Pooja means is that the point of morphing to an average brain volume is to end up with a common grid for all participants, so we can perform group stats. An unequal number of vertices prevents this, unless we are missing something in our understanding of the current volume-morphing implementation.

Let us know, we can help document / explicit things eventually.

Best,

Virginie van Wassenhove https://brainthemind.com/


agramfort commented 5 years ago

I think we do have a design issue here. If you morph to fsaverage you should get stcs with the same volume grid, so stc data can be combined for any arithmetic operation.

@larsoner ok for you? I think it's the issue @britta-wstnr reported to me too no?

poojaprabhu9 commented 5 years ago

Thanks Virginie for the explanation. Please check the snippet too, to make sure whether it's a coding issue on my end or a design issue.

larsoner commented 5 years ago

Seems reasonable to me

agramfort commented 5 years ago

it may take some time to be fixed.

@poojaprabhu9 @virvw you can export the stc to a NIfTI image and then mask it with a volumetric atlas, e.g. using nilearn.

would this help already?

what do you want to do with the stc in fsaverage space?
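The masking idea boils down to indexing a 4D volume with a binary atlas volume. Below is a numpy-only sketch of the concept; in practice one would run nilearn's masking utilities on the NIfTI returned by the morph, and the array shapes and the toy "region" here are made up for illustration:

```python
import numpy as np

# Hypothetical stand-ins: `vol` plays the role of the 4D array inside the
# NIfTI that the morph exports, and `atlas_mask` plays the role of a
# binarized volumetric atlas defined once in fsaverage space.
rng = np.random.default_rng(42)
vol = rng.standard_normal((51, 51, 51, 10))    # (x, y, z, n_times)
atlas_mask = np.zeros((51, 51, 51), dtype=bool)
atlas_mask[20:30, 20:30, 20:30] = True         # a toy "region"

# Boolean indexing keeps only the voxels inside the region, giving an
# (n_voxels, n_times) matrix that is comparable across subjects as long
# as the same mask is used for everyone.
masked = vol[atlas_mask]
print(masked.shape)  # (1000, 10)
```

Because the mask lives in the common fsaverage space, the resulting matrices have identical shapes across subjects, which is what group statistics need.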

virvw commented 5 years ago

No, masking won't do. We conducted some hipp source reconstruction to test a priori working hypotheses on evoked data, and replicated/solved the single-trial regression, so we're alright with that.

Now, we wish to :

It would be plain simple and elegant to process the full brain in that manner. I don't see a satisfactory alternative, as the questions we are addressing are sufficiently complex without adding unneeded methodological complexity. We're willing to help for months, but we somehow need feedback / to synchronize on your side or we can't be useful... :)

@poojaprabhu9: how is it implemented in FieldTrip (or SPM or Brainstorm)? Or has @britta-wstnr already worked on this?


britta-wstnr commented 5 years ago

Yes, I got the problems with morphed stc reported by other people, too. I haven't looked into this yet, but it is on my list. However, I am not too familiar with the whole morphing code yet, so I cannot say anything about how easy a fix this would be ... or where things go wrong. @virvw : FieldTrip is taking a different approach, where the source grid is set up in MNI space to begin with and then warped to the individual anatomy via the normalized individual brain volume.

The goal is clear, I think: Have a common space to morph to, where each voxel location is the same across subjects and the count of voxels does not change. If I see it right, @agramfort, this would include figuring out how to drop grid points - or add (?) them - with varying grid counts across subjects: or is there a way to specify the number of grid points in the individual source spaces?

virvw commented 5 years ago

Please keep Pooja and me in the loop (I may be offline for a couple of weeks due to travel), but let us know how we can help advance!


agramfort commented 5 years ago

@poojaprabhu9 can you try to output the morphed volumes as nifti files with:

morph.apply(stc, output='nifti1')

You should get nifti1 images in fsaverage space. I hope this helps, as otherwise I'll need someone to look into this. Not sure if @TommyClausner still has time? Otherwise maybe @larsoner can do black magic?

poojaprabhu9 commented 5 years ago

No @agramfort. This did not help.


poojaprabhu9 commented 5 years ago

Any luck? The morphing of volumetric sources (whole brain) is a vital part of our work; I would appreciate any suggested solution. Thanks

TommyClausner commented 5 years ago

Hey @poojaprabhu9

Sorry, I must have somehow missed this :/ I'll have a look later today ...

TommyClausner commented 5 years ago

Hey @poojaprabhu9

do both subjects have the same number of vertices when you do stcs = apply_lcmv(evoked, filters, max_ori_out='signed')?

Because as far as I can see, the difference in vertices comes from the fact that there are different input vertices. compute_source_morph re-scales the existing vertices into the target space ('fsaverage'); that is, it uses both MRIs (subject + fsaverage) to compute a volumetric morph that is applied to the result of apply_lcmv. Hence the difference in the number of vertices doesn't seem surprising.

To be a bit more clear: the vertices are actually voxel indices where "activity" can be found. Thus len(stc_fsaverage.vertices) should be equal to len(stc_fsaverage.data). Corresponding regions would thus be those where the vertex values are the same. So you could define a vector of vertices that has all indices corresponding to the actual grid space and assign the data using stc_fsaverage.vertices from both subjects. Another way is to convert the data into an actual volume.

Could you try morph.apply(stc, mri_resolution=True, output='nifti1') and see whether the resulting nifti has the shape (256, 256, 256, 1) ? (Kinda what @agramfort suggested)

Cheers, Tommy

Maybe as an additional note: the default grid spacing used for morphing (as in the code you provided) is iso 5 mm. Thus the resulting volume is of shape (51, 51, 51, 1), which means that the values found in stc_fsaverage.vertices refer to a location in range(0, 51**3).
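The vertex bookkeeping described above can be sketched with plain numpy; the vertex set and data below are made up, and only the (51, 51, 51) grid size comes from the thread:

```python
import numpy as np

shape = (51, 51, 51)           # the iso-5mm morph grid mentioned above
n_grid = np.prod(shape)        # 51**3 = 132651 candidate locations
rng = np.random.default_rng(0)

# Hypothetical morphed estimate: `vertices` are flat indices into the
# grid, `data` holds one value per vertex (a single time point here).
vertices = np.sort(rng.choice(n_grid, size=100, replace=False))
data = rng.standard_normal((100, 1))

# Scatter the sparse estimate into the full grid; every row of `full`
# now has a fixed meaning (one grid location), regardless of which
# vertices survived for this particular subject.
full = np.zeros((n_grid, 1))
full[vertices] = data
```

Two subjects scattered into the same full grid this way can be averaged or subtracted row-wise, which is exactly what the differing per-subject vertex sets otherwise prevent.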

larsoner commented 5 years ago

Hence the difference in number of vertices doesn't seem to be surprising.

This was my reaction at first, too, then I realized that while this is true, it doesn't solve a critical use case.

Let's quickly say what's done in an example surface case:

Thus at the end of both stc_fs_1 = morph.apply(stc_subj_1) or stc_fs_2 = morph.apply(stc_subj_2) you have a SourceEstimate with 20484 verts, which is very helpful.

For the volume case, this morph is done but the resample/upsample step is a bit hidden in the zooms/spacing selected during morphing, and doesn't show up explicitly in the VolSourceEstimate. You can do the NIfTI output of shape (256, 256, 256, n_times), but this is challenging for many reasons (poor memory usage, no built-in support for MNE functions like there is for (Vol)SourceEstimate, etc.).

Maybe is an additional note: the default grid space that is used for morphing (as you used in the code you provided is iso 5 mm). Thus the resulting volume is of shape (51, 51, 51, 1), which means that the values found in stc_fsaverage.vertices refer to a location in range(0, 51**3)

Option 1

Assuming you are saying that the data for, say, vertex 12345 for stc_fs_1_vol and stc_fs_2_vol refer to the same voxel in the 5mm-zoom space (right?), if we provided a way to zero pad (or probably better, spatially interpolate) the data for stc_fs_1_vol and stc_fs_2_vol so that they both contained the same set of vertices, then everything would be aligned properly. Meaning stc_fs_1_vol.data rows are the same spatial locations as those in stc_fs_2_vol.data. @TommyClausner do you know the best way to do this interpolation (maybe just nearest neighbor is good enough, or trilinear if some neighbors are almost the same distance away)?

One additional consideration here is that it's sometimes very nice to have the source space corresponding to the source estimate you are using. So if we go this route, we should probably add a morph.get_output_source_space() or similar method that gave the effective MNE volume source space corresponding to the output volume source estimate object.
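A numpy-only sketch of Option 1's alignment step, with hypothetical vertex sets for two subjects; nan (rather than zero) marks the "holes" that an interpolation scheme would later fill:

```python
import numpy as np

n_grid = 51 ** 3
rng = np.random.default_rng(1)

# Hypothetical vertex sets for two subjects after morphing: overlapping
# but not identical, mimicking the per-subject "holes".
verts_1 = np.sort(rng.choice(n_grid, size=120, replace=False))
verts_2 = np.sort(rng.choice(n_grid, size=110, replace=False))
data_1 = rng.standard_normal(len(verts_1))
data_2 = rng.standard_normal(len(verts_2))

# Expand both subjects to the union of their vertices, so that column j
# means the same grid location for everyone. nan keeps the holes
# visible until they are interpolated (or masked out).
union = np.union1d(verts_1, verts_2)
aligned = np.full((2, len(union)), np.nan)
aligned[0, np.searchsorted(union, verts_1)] = data_1
aligned[1, np.searchsorted(union, verts_2)] = data_2
```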

Option 2

Another option would be to say what source space to resample/upsample to when specifying the volume morph. I spoke briefly to @agramfort and, following the surface path, this could be done by:

  1. Adding a standard fsaverage 5mm spacing volumetric source space (easy with src_fs_vol_5mm = setup_volume_source_space('fsaverage', pos=5.) and some dataset updates)
  2. Making compute_source_morph in the volume case allow specifying a destination source space to morph to, such that once you did stc_fs_1_vol = morph.apply(stc_subj_1_vol) you had a VolSourceEstimate whose stc_fs_1_vol.vertices corresponded to vertices in the complete src_fs_vol_5mm space, i.e., it was always the case that np.array_equal(stc_fs_1_vol.vertices, src_fs_vol_5mm[0]['vertno']).

@TommyClausner how hard would it be to get this to use the src_fs_vol_5mm source space locations, rather than generic ones we currently use? (Or at least some full/complete source space, rather than one with holes?) Maybe we could just take the existing transformation (computed at whatever accuracy/spacing/zooms are being used), then add one more linear / sparse matmul that goes from this space to the src_fs_vol_5mm space / vertices.

I'm not sure which option is better/easier, but I think either one would probably allow @poojaprabhu9 and @virvw to proceed.

TommyClausner commented 5 years ago

hm... Thanks @larsoner ! Sounds reasonable to make the output such that it is standardized :)

Assuming you are saying that the data for, say, vertex 12345 for stc_fs_1_vol and stc_fs_2_vol refer to the same voxel in the 5mm-zoom space (right?)

Yes, which means a grid like np.zeros((51, 51, 51)) and then plugging in the values should already do the trick, no ?

What I just wonder is the following: when I call mne.setup_volume_source_space('fsaverage', pos=5.)[0]['shape'] I get (39, 39, 39), and for the standard 5 mm morph (51, 51, 51). So the standard fsaverage source space using pos=5. should be a sub-volume of the morph output, right? (Probably excluding regions outside the brain that are in the MRI-like volumetric morph space.) So wouldn't it be possible to find the corresponding subset, create a new source space (mne.setup_volume_source_space('fsaverage', pos=5.)), and insert the values there / sub-sample morph.apply(stc).vertices to fit the src?

I guess I miss something, but where is the need for an interpolation ?

btw: I guess a trilinear interpolation should do the trick in case we indeed need it, no?

larsoner commented 5 years ago

So actually the standard fs source space using pos=5. should be a sub-volume of the morph output, right ... So wouldn't it just be possible to find the corresponding subset,

It depends on whether or not it's actually a subset with identical X/Y/Z coordinate (centers). It's entirely possible that pos=5. creates the 5x5x5 mm voxel centers in different places from the "standard 5 mm morph". We'd have to dig into the affine to check. But yes, hopefully we don't need any interpolation and can just effectively subselect. @TommyClausner can you look into it a bit? I always get a bit lost looking at the transformations and code in setup_volume_source_space.

britta-wstnr commented 5 years ago

I will just chime in here to re-iterate the two important points:

  • same number of vertices across subjects
  • the vertices should fall onto the same brain tissue across subjects

The latter point might only be possible with non-linearly warped grid positions in individual space, i.e., the FieldTrip way of doing things. Alternatively, the fsaverage brain will be non-equidistant, right? As discussed with @agramfort, this will introduce problems for plotting on MRIs (i.e., never-converging interpolations). I would strongly vote for the fsaverage to be the equidistant brain (then individual subjects can still be plotted on this). I am not sure about interpolating, as this might pose problems for statistics: especially with 5 mm spacing (or coarser), this can place grid points in the average at different places than in the individual subjects.

larsoner commented 5 years ago
  • same number of vertices across subjects
  • the vertices should fall onto the same brain tissue across subjects

It depends on when in the pipeline you want this to occur. The way we have been talking about things (and how it works in master), each subject source space would have a variable number of verts (mostly head-size/shape dependent), and then after morphing they all have the same source space vertices in fsaverage's space (except for holes, which is what we so far are trying to figure out how to fix).

Alternatively, the fsaverage brain will be non-equidistant, right?

In either of the two options I mention above, after morphing the VolSourceEstimate vertices will all be in the same space (fsaverage with some even grid spacing).

The latter point might only be possible with non-linearly warped grid positions in individual space, i.e., the FieldTrip way of doing things.

It sounds like you want to take a fsaverage source space, warp it to subjects, then do source localization. I don't think this solves the problem of "holes" due to forward exclusion -- the fs->subj morphed source space can have vertices culled from it just like the individual subject source spaces can. Either way you need to either interpolate the missing vertices for subjects missing them, or ignore vertices for all subjects that were missing from any individual subject.
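The second branch ("ignore vertices for all subjects that were missing from any individual subject") can be sketched in a few lines of numpy; all vertex sets and data here are made up:

```python
from functools import reduce

import numpy as np

rng = np.random.default_rng(2)
n_grid = 51 ** 3

# Hypothetical per-subject vertex sets (sorted, like stc.vertices) and
# one data value per vertex.
all_verts = [np.sort(rng.choice(n_grid, size=n, replace=False))
             for n in (120, 115, 118)]
all_data = [rng.standard_normal(len(v)) for v in all_verts]

# Keep only vertices that survived for every subject ...
common = reduce(np.intersect1d, all_verts)

# ... and pick the matching data rows per subject, so row i now refers
# to the same grid location for all subjects.
aligned = np.stack([d[np.searchsorted(v, common)]
                    for v, d in zip(all_verts, all_data)])
```

This is the conservative alternative to interpolation: no values are invented, at the cost of dropping any location missing for even one subject.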

FYI, what I think you're suggesting is something we have an equivalent of for surface source spaces in morph_source_spaces, but it doesn't get used often and poses its own challenges. So for consistency it would be nice not to go that route unless we have to.

Also thinking about topology and localization -- it might not be as good for source estimation to use less-evenly-spaced sources during the localization step (which would happen taking an evenly spaced fsaverage source space and warping it). It might be preferable to deal with the spacing differences after localization via proper interpolation in the destination space. But maybe this is backward thinking...?

I am not sure about interpolating, as this might pose problems for statistics: especially with 5 mm spacing (or coarser) this can place grid points in the average at different places than in the individual subjects.

You can always interpolate onto a finer grid if you want (down to even 1 mm). On surfaces we already do this, typically by upsampling 8196 source points onto 20484 on fsaverage. I'd guess that 5 mm is probably "good enough", given that smoothing (with Gaussian FWHMs around this size or bigger) is often used/useful for group studies anyway, is standard in fMRI, which has better resolution than M/EEG, and given the assumed M/EEG source resolution. But either way, the option for finer resolution should be there in either option I outlined above.

agramfort commented 5 years ago

we support both options with surfaces: 1/ morphing by reinterpolation, 2/ using a morphed source space (similar to the FieldTrip approach)

I think we should not treat the volumes differently.

larsoner commented 5 years ago

using a morphed source space (similar to fieldtrip approach)

I think we should not treat the volumes differently.

Okay, for now / in this issue let's focus on fixing the current way of doing things (1/), using one or both of the options I've talked about above, and @britta-wstnr feel free to open an issue for 2/.

TommyClausner commented 5 years ago

I could check whether it's possible to just "sub-sample", and if not, dive a bit into the ifs and hows of options 1 and 2 that @larsoner suggested.

poojaprabhu9 commented 5 years ago

Hi @TommyClausner, after trying stcs = apply_lcmv(evoked, filters, max_ori_out='signed'), each subject has a different number of vertices (len(stcs.vertices)).

I tried morph.apply(stc, mri_resolution=True, output='nifti1'); the resulting nifti has shape (256, 256, 256, n_times). I considered the entire time window, so the number of time points is 281.

TommyClausner commented 5 years ago

Hey @poojaprabhu9

ok cool, but then it should (at least for now) fix the issue of having the data in the same space, because now your data for both subjects should be non-linearly morphed towards the "fsaverage" space and thus be comparable, no?

poojaprabhu9 commented 5 years ago

@TommyClausner Non-linear morphing might help. But I am not sure how to do it?

TommyClausner commented 5 years ago

I mean that after calling morph.apply(stc, mri_resolution=True, output='nifti1'), the output nifti is a non-linear morph of the source data you were inputting when using compute_source_morph - so the result is a nifti in 'fsaverage' space containing the individual, non-linearly morphed subject data

poojaprabhu9 commented 5 years ago

Yes that's right.


TommyClausner commented 5 years ago

Hey @larsoner

I was tinkering around a bit, but I guess I don't fully understand how the source-space spacing is intended... I tried to create a source space that has the size of the morphed output (similar to when I output it as a nifti), but there are issues:

Somehow some axes are swapped (0 and 2), and there seems to be an offset between the source space and the nifti, even though at first glance it seems pretty straightforward, since creating a volume source space and converting to nifti use the same kind of interpolator. So in theory there should be some straightforward way to create a source space after morphing that just suits the morphed source estimate...

Here is what I did

import os
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse, read_inverse_operator
import numpy as np

sample_dir_raw = sample.data_path()
sample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')
subjects_dir = os.path.join(sample_dir_raw, 'subjects')

fname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')
fname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')

fname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',
                                  'brain.mgz')

evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)

# Apply inverse operator
stc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, "dSPM")

# To save time
stc.crop(0.09, 0.09)

morph = mne.compute_source_morph(inverse_operator['src'],
                                 subject_from='sample', subject_to='fsaverage',
                                 subjects_dir=subjects_dir)

stc_fsaverage = morph.apply(stc)

full_mri_size_fs = 256.
size_pos = 5.
sphere_radius = (full_mri_size_fs - 2 * size_pos) / 2
src_fs_vol_5mm = mne.setup_volume_source_space(
    'fsaverage', sphere=(0., 0., 0., sphere_radius),
    subjects_dir=subjects_dir, mri='brain.mgz', pos=size_pos)

src_fs_vol_5mm[0]['inuse'] = np.zeros(src_fs_vol_5mm[0]['inuse'].shape)
src_fs_vol_5mm[0]['inuse'][stc_fsaverage.vertices] = 1

tmp_array = np.zeros((np.prod(morph.shape), 1))
tmp_array[stc_fsaverage.vertices] = stc_fsaverage._data
tmp_array = tmp_array.reshape(morph.shape)
tmp_array = tmp_array[::-1,:,:]
tmp_array = tmp_array[:,:,::-1]
tmp_array = np.swapaxes(tmp_array, 0, 2)

stc_fsaverage._data = tmp_array.reshape(np.prod(morph.shape), 1)[stc_fsaverage.vertices]
stc_fsaverage.data = stc_fsaverage._data

stc_fsaverage.plot(src_fs_vol_5mm, subject='fsaverage',subjects_dir=subjects_dir)

stc.plot(inverse_operator['src'], subject='sample', subjects_dir=subjects_dir)

So as you can see, to get it kinda right, I had to swap some axes and reverse the order of some values, but this of course doesn't seem to be the "correct" way... So maybe you or @agramfort or @britta-wstnr or someone else might have an idea about that... Do you?

I think the solution to the problem in general is:

  • morphing as it is now
  • creating the corresponding fsaverage source space according to the morph settings
  • arranging the morphed data such that vertices (and data) fit with the new "pseudo morphed" space

Think this would be viable?

Unfortunately I don't have too much time to dig much deeper (unless you might be able to give me a hint where to look :) )

larsoner commented 5 years ago

So as you can see, to get it kinda right, I had to swap some axis and reverse the order of some values, but this of course doesn't seem to be the "correct" way... So maybe you or @agramfort or @britta-wstnr or someone else might have an idea about that... Do you ?

In the MNE volume code there are these transforms like mri_voxel->mri etc. One of them probably has such a transformation/axis swap. Swapping 0 and 2 sounds suspiciously like what you'd do to take a SAR (superior, anterior, right) representation of the data to a RAS. The former is maybe how the data are stored on disk, thinking in C-major order (S/z-slices, then A/y-slice within that z, then R/x within that y and z) and the latter is what MNE/Freesurfer prefers (RAS orientation). But this is just speculation on my part.

What might help is if we start annotating the MNE volume source space setup code with what we understand it's doing. If you already have some of this in your head, feel free to open a WIP PR adding comments about it. If not, I can look at the code and try to annotate it a bit to see if it helps you understand what needs to be done. I'm optimistic that doing so will make it clear when axes need to be swapped and why, so that there will be no more mysterious changes in your code.

I think the solution to the problem in general is:

  • morphing as it is now
  • creating the corresponding fsaverage source space according to the morph settings
  • arranging the morphed data such that vertices (and data) fit with the new "pseudo morphed" space

Think this would be viable ?

As long as by "arranging the morphed data" you mean dealing with the problem of subject-by-subject variable "holes" by some suitable interpolation scheme such that in the end you always get a VolSourceEstimate defined in the "complete" fsaverage source space, yes this sounds good to me.

TommyClausner commented 5 years ago

Swapping 0 and 2 sounds suspiciously like what you'd do to take a SAR (superior, anterior, right) representation of the data to a RAS

hm... Sounds plausible :)

What might help is if we start annotating the MNE volume source space setup code with what we understand it's doing

d'accord !

As long as by "arranging the morphed data" you mean dealing with the problem of subject-by-subject variable "holes" by some suitable interpolation scheme such that in the end you always get a VolSourceEstimate defined in the "complete" fsaverage source space

Since the interpolators in morph and src work on the same principle, I hope that there is a way of setting up a source space such that the source space interpolator just maps it "right" (not 100% sure that this is the case / will work, but it seems so)

I'll open a WIP PR asap :)

larsoner commented 5 years ago

FYI @TommyClausner in #6046 I fixed some stuff that was wrong with the stc.vertices that came from SourceMorph, see this comment. Can you see if your script can be simplified now?

poojaprabhu9 commented 5 years ago

I tried it. It seems that the problem is still intact. I used MNE 0.19.0.

larsoner commented 5 years ago

Yes we have not fixed this yet

larsoner commented 5 years ago

Okay @TommyClausner I might have a bit more insight into this now, I think I've come to see some of the things you pointed out above. Sorry in advance for the wall of text. I'll try to make a PR sometime next week to add code comments to the volume source space code to make sure all this is accurate.

If we look at where the sample source space in the inverse comes from:

>>> inverse_operator['src']
<SourceSpaces: [<volume, shape=(21, 26, 24), n_used=3757, coordinate_frame=head>]>

We can recreate this with:

>>> bem = mne.read_bem_solution('/Users/larsoner/mne_data/MNE-sample-data/subjects/sample/bem/sample-5120-bem-sol.fif')
>>> src = mne.setup_volume_source_space('sample', pos=7., bem=bem, subjects_dir=subjects_dir, verbose=True)
...
Surface extent:
    x =  -66.7 ...   68.8 mm
    y =  -88.0 ...   79.0 mm
    z =  -44.5 ...  105.8 mm
Grid extent:
    x =  -70.0 ...   70.0 mm
    y =  -91.0 ...   84.0 mm
    z =  -49.0 ...  112.0 mm
13104 sources before omitting any.
3757 sources remaining after excluding the sources outside the surface and less than    5.0 mm inside.
>>> src
<SourceSpaces: [<volume, shape=[21 26 24], n_used=3757, coordinate_frame=MRI (surface RAS)>]>

Note that np.prod([21, 26, 24]) == 13104 -- in other words, stc.vertices, which come from src[0]['vertno'], are indices into the subset of grid vertices contained within the sub-cube (of what would be the original grid of size (51, 51, 51)) defined by the "grid extent"s given above.
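The indexing convention can be checked with plain numpy; the (21, 26, 24) shape comes from the sample source space above, and the Fortran ordering is the convention referenced later in this comment:

```python
import numpy as np

shape = (21, 26, 24)             # grid shape of the sample volume src
assert np.prod(shape) == 13104   # "13104 sources before omitting any"

# A vertex number is a flat index into this sub-cube, and the ordering
# convention matters: with Fortran order the x axis varies fastest, so
# unraveling with order='F' and order='C' yields different coordinates
# for the same flat index.
vert = 3000
xyz_f = np.unravel_index(vert, shape, order='F')
xyz_c = np.unravel_index(vert, shape, order='C')

# Round-trips are only consistent within one convention.
assert np.ravel_multi_index(xyz_f, shape, order='F') == vert
assert np.ravel_multi_index(xyz_c, shape, order='C') == vert
```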

Then looking at what we want the canonical 5mm-spacing fsaverage destination volumetric source to be we have:

>>> bem_fs = mne.make_bem_model('fsaverage', 5, (0.3,), subjects_dir=subjects_dir)
>>> bem_fs = mne.make_bem_solution(bem_fs)
>>> src_fs = mne.setup_volume_source_space('fsaverage', pos=5., bem=bem_fs, subjects_dir=subjects_dir, verbose=True)
...
Grid extent:
    x =  -80.0 ...   80.0 mm
    y = -115.0 ...   75.0 mm
    z =  -75.0 ...   90.0 mm
...
43758 sources before omitting any.
...
14629 sources remaining after excluding the sources outside the surface and less than    5.0 mm inside.

At the end of the day, we want our stc_morphed_properly.vertices to correspond to src_fs[0]['vertno'], i.e. indices into the subset of the volume within the given grid extent used during creation of src_fs.

In principle IIUC this should "just" require properly reindexing the stc_morphed_currently.vertices that morph.apply produces -- which has stc_morphed_currently.vertices corresponding to the (51, 51, 51) space -- to vertices corresponding to those given by the "grid extent" of src_fs. This assumes that the voxel centers are the same for both spaces (which I'm not sure about) -- if they aren't, we're talking about trilinear interpolation, basically.

If this sounds reasonable, one option would be to:

  1. Do morphing as we do now
  2. Write a reindexing/resampling function -- keeping in mind that vertno is really reshaped in Fortran order, which is a bug I found and fixed in #6046 -- and see if it gives the expected result when plotting. If the voxel centers are the same in src_fs and in the morph/zooms-based code, I think this is just a reindexing problem. If they aren't, then I think it's a trilinear interpolation problem that we can hopefully solve using nibabel or dipy trilinear interpolation code as a first pass, and the built-in MNE trilinear interpolation -> sparse matrix code for speed eventually. Probably call this resample_to_vol_src: take a stc defined on an entire "MRI grid" and turn it into a stc defined on a "source space grid" if you give it the source space to which it should be restricted.
  3. Either allow passing src_to to SourceMorph directly, to have it do this restriction/conversion for you when you call apply, or add a src_to argument to apply, or just make the resample_to_vol_src public.
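Step 2's reindexing idea might look roughly like this in plain numpy (a hypothetical helper, assuming both grids are described by 4x4 voxel-to-world affines and that nearest-neighbor mapping is acceptable; this is a sketch, not MNE code):

```python
import numpy as np

def reindex_vertices(vertices, shape_from, affine_from, shape_to, affine_to):
    """Map flat Fortran-order voxel indices from one grid to another by
    sending voxel centers through both 4x4 affines and snapping to the
    nearest destination voxel. Hypothetical helper, not MNE code."""
    ijk = np.array(np.unravel_index(vertices, shape_from, order='F')).T
    xyz = ijk @ affine_from[:3, :3].T + affine_from[:3, 3]  # voxel -> world
    inv = np.linalg.inv(affine_to)
    ijk_to = np.rint(xyz @ inv[:3, :3].T + inv[:3, 3]).astype(int)
    inside = np.all((ijk_to >= 0) & (ijk_to < np.array(shape_to)), axis=1)
    new_verts = np.ravel_multi_index(tuple(ijk_to[inside].T), shape_to,
                                     order='F')
    return new_verts, inside

# Toy check: two 5 mm grids whose origins differ by exactly one voxel,
# so every source voxel lands one step further along each axis.
aff_a = np.diag([5., 5., 5., 1.])
aff_b = aff_a.copy()
aff_b[:3, 3] = -5.                    # grid B starts one voxel "earlier"
new_verts, inside = reindex_vertices(np.array([0, 1, 10]),
                                     (4, 4, 4), aff_a, (5, 5, 5), aff_b)
```

If the voxel centers of the two grids do not coincide after the affine round-trip, the np.rint snap is exactly where nearest-neighbor would be replaced by trilinear interpolation.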

Alternatively, we could:

  1. Adjust SourceMorph so that the volume it produces is defined in the same space as a stc_fs.as_volume(mri_resolution=False) based on src_fs would be. I think this would require adding src_to as an arg somewhere in the SourceMorph constructor, as we could then inspect it to get 1) the zooms, 2) the domain/shape, and (maybe?) 3) the voxel centers (/affine?) configured correctly, such that the image produced by applying the morph already has the correct grid extents/voxel centers. We currently don't set (2) based on any src but rather on mri. And I'm still not sure about the voxel centers...

My vote is for this second one. In this scheme, if users do src_to=None in SourceMorph, then it's the behavior we have on master. If they pass src_to=src_fs, then people can omit the subject and zooms arg, and everything should "just work" (hopefully) when they do stc_fs = morph.apply(stc) and stc_fs.plot(src_fs)!

If this works, then we can move on to what I think is mostly a separable issue, namely how to deal with any possible "holes" that come up due to vertices near the edges existing for some subjects and not others, i.e. if none of the vertices in the morphed STC actually map into some of the members of src_fs[0]['vertno']. I think this will end up being a pretty easy problem to solve using some form of iterative trilinear extrapolation, but for now we can just live with some values being undefined (maybe set them to nan, and we can change the behavior via kwarg later?).
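A minimal numpy sketch of filling such "holes" by nearest-neighbor extrapolation (brute force here for clarity; scipy.ndimage.distance_transform_edt with return_indices=True would do the same job efficiently, and trilinear extrapolation would be the smoother variant discussed above):

```python
import numpy as np

def fill_holes_nearest(vol):
    """Replace nan voxels with the value of the nearest defined voxel
    (brute-force nearest neighbor; fine for a sketch, not for speed)."""
    defined = np.argwhere(~np.isnan(vol))
    missing = np.argwhere(np.isnan(vol))
    out = vol.copy()
    # Squared Euclidean distance from every missing voxel to every
    # defined voxel, then copy the value from the closest one.
    d2 = ((missing[:, None, :] - defined[None, :, :]) ** 2).sum(-1)
    out[tuple(missing.T)] = vol[tuple(defined.T)][np.argmin(d2, axis=1)]
    return out

# Toy volume: two defined corners, everything else is a "hole".
vol = np.full((4, 4, 4), np.nan)
vol[0, 0, 0] = 1.0
vol[3, 3, 3] = 2.0
filled = fill_holes_nearest(vol)
```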

And from there, hopefully the volumetric morphing will work like the surface-based morphing for people!

larsoner commented 5 years ago

@TommyClausner I adapted your script above into a gist showing that if we:

  1. Set the sphere size so it makes a shape of (51, 51, 51) (what the SourceMorph does), and
  2. Replace the affine

You can get the stc.plot(src), stc.plot(morph), and stc_fs.plot(src_fs) to all match. This is all consistent with the stuff above. So I'm optimistic that the plans above could work.