mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

The adjacency matrix doesn't work for a modified fsaverage source space #11820

Open dasdiptyajit opened 1 year ago

dasdiptyajit commented 1 year ago

Description of the problem

I would like to perform spatio-temporal clustering tests on the template fsaverage data. At the moment, I am trying to calculate the adjacency matrix for a modified source space (i.e., one with the medial-wall dipoles excluded). However, for the modified source space object, mne.spatial_src_adjacency raises an error because src[0]["use_tris"] is an empty array.

Steps to reproduce

# %%
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator
from mne import compute_covariance
import os.path as op
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage

# %%
# Set parameters
# --------------
data_path = sample.data_path()
meg_path = data_path / "MEG" / "sample"

# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)

# The files live in:
subject = "fsaverage"
trans = "fsaverage"  # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, "bem", "fsaverage-ico-5-src.fif")
bem = op.join(fs_dir, "bem", "fsaverage-5120-5120-5120-bem-sol.fif")

# read raw file
(raw_fname,) = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)

# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
    (ch_name, ch_name.rstrip(".").upper().replace("Z", "z").replace("FP", "Fp"))
    for ch_name in raw.ch_names
)
raw.rename_channels(new_names)

# Read and set the EEG electrode locations, which are already in fsaverage's
# space (MNI space) for standard_1020:
montage = mne.channels.make_standard_montage("standard_1005")
raw.set_montage(montage)
raw.set_eeg_reference(projection=True)  # needed for inverse modeling

# create epochs (just for testing)
epochs = mne.make_fixed_length_epochs(raw, duration=1, preload=True)
cov_epochs = epochs.copy().crop(None, 0.05)
noise_cov = compute_covariance(cov_epochs, method='auto')

# evoked
evoked = epochs.average()

# %%
# calculate forward and inverse
# -----------------------------
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src, bem=bem, eeg=True, mindist=5.0, n_jobs=None)
inv = make_inverse_operator(raw.info, fwd, noise_cov, loose=0.2, depth=0.8, fixed=False)
fwd['surf_ori'] = True
print(fwd)

# Modified forward: restrict the forward model, i.e., exclude the medial-wall ('unknown') labels
# -----------------------------------------------------------------------------------------------
fs_labels = mne.read_labels_from_annot(subject='fsaverage', parc='HCPMMP1', subjects_dir=subjects_dir)
fwd_restrict = mne.forward.restrict_forward_to_label(fwd.copy(), fs_labels[2:])
fwd_restrict['surf_ori'] = True
print(fwd_restrict)
inv_restrict = make_inverse_operator(raw.info, fwd_restrict, noise_cov, loose=0.2, depth=0.8, fixed=False)

# Read the original source space
subjects_dir = data_path / "subjects"
src_fname = subjects_dir / "fsaverage" / "bem" / "fsaverage-ico-5-src.fif"

# extract the source spaces in three different ways
src_ori = mne.read_source_spaces(src_fname)
src_inv = inv['src']
src_inv_restrict = inv_restrict['src']

# test the 3 source spaces
adjacency_src_ori = mne.spatial_src_adjacency(src_ori)  # works
adjacency_src_inv = mne.spatial_src_adjacency(src_inv)  # works
adjacency_src_inv_restrict = mne.spatial_src_adjacency(src_inv_restrict)   # doesn't work

Link to data

No response

Expected results

mne.spatial_src_adjacency(src_inv_restrict) should return a sparse matrix.

Actual results

mne.spatial_src_adjacency(src_inv_restrict) raises ValueError: zero-size array to reduction operation maximum which has no identity

Additional information

none

larsoner commented 1 year ago

At the moment, I am trying to calculate the adjacency matrix for a modified source space (i.e., dipoles in the medial walls are excluded).

Rather than using a modified source space, could you use the standard one but pass spatial_exclude to the clustering function? This is exactly the use case that I use this parameter/setup for...
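To illustrate what spatial_exclude expects: it is a flat array of indices into the stacked (lh, rh) vertex array, which can then be passed to e.g. mne.stats.spatio_temporal_cluster_1samp_test. A minimal sketch, using made-up per-hemisphere counts and made-up medial-wall vertex numbers purely for illustration (the real ico-5 fsaverage has 10242 vertices per hemisphere):

```python
import numpy as np

# Hypothetical tiny fsaverage-like setup: 8 vertices per hemisphere
# (real ico-5 fsaverage: 10242 per hemisphere)
n_per_hemi = 8
fsaverage_vertices = [np.arange(n_per_hemi), np.arange(n_per_hemi)]

# Pretend these are the medial-wall ('unknown'/'???') label vertices
medial_lh = np.array([1, 4])
medial_rh = np.array([0, 7])

# spatial_exclude indexes into the stacked (lh, rh) vertex array,
# so right-hemisphere indices are offset by the lh vertex count
spatial_exclude = np.concatenate(
    [medial_lh, medial_rh + len(fsaverage_vertices[0])]
)
print(spatial_exclude)  # [ 1  4  8 15]
```

The resulting array would then be handed to the clustering function, e.g. mne.stats.spatio_temporal_cluster_1samp_test(X, adjacency=adjacency, spatial_exclude=spatial_exclude), while keeping the full fsaverage adjacency intact.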

But yes it's possible that we also have some bug with this reduced source space that we should fix.

dasdiptyajit commented 1 year ago

Yes, spatial_exclude seems like a workaround for these tests. My problem may be rather specific to my case: the medial-wall dipoles were already excluded during forward modeling/source reconstruction, and the data were morphed to 'fsaverage' with the same dipole configuration. So if I switch to different parameter settings for the cluster tests, I will lose consistency between the pipeline and the results.

Is there a workaround to reduce the full spatial_src_adjacency to a set of labels somehow?

larsoner commented 1 year ago

the medial wall dipoles were excluded in the forward modelling/source reconstruction and the data was morphed to 'fsaverage' with similar dipoles config.

I don't see a problem with morphing to the complete fsaverage space then using spatial_exclude. I'd expect the results to be the same as morphing to fsaverage with the dipoles excluded. (The only difference should be that the dipoles right next to the medial wall end up with slightly less activation because they can spread to the medial wall during morphing, but then this should wash out in the stat_fun anyway because it will be the case for any condition(s) you morph and the relevant standard-deviation term should account for it.)

Is there a work around to reduce the full spatial_src_adjacency to a set of labels somehow?

The spatial_src_adjacency for fsaverage should just be a block-diagonal (20484, 20484) sparse matrix. In principle you should be able to subselect the vertices you used via proper use of np.searchsorted and knowing there are 10242 vertices per hemisphere.
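The subselection described above can be sketched with plain scipy/numpy. The sizes and the "used vertex" sets below are toy values for illustration; for fsaverage, src[n]['vertno'] is simply np.arange(10242), so the used vertex numbers are already row indices, whereas a general source space would need np.searchsorted(vertno, used) to map vertex numbers to rows:

```python
import numpy as np
from scipy import sparse


def ring_adjacency(n):
    """Toy symmetric adjacency: vertex i connected to i +/- 1 (mod n)."""
    rows = np.arange(n)
    cols = (rows + 1) % n
    a = sparse.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
    return ((a + a.T) > 0).astype(int)


# Pretend each hemisphere has 6 vertices (real ico-5: 10242 per hemisphere);
# the full adjacency is block-diagonal, one block per hemisphere
n_per_hemi = 6
block = ring_adjacency(n_per_hemi)
adjacency = sparse.block_diag([block, block]).tocsr()

# Vertices retained in the restricted source space (per hemisphere, made up)
used_lh = np.array([0, 2, 3])
used_rh = np.array([1, 4, 5])

# Right-hemisphere rows are offset by the lh vertex count in the stacked matrix
keep = np.concatenate([used_lh, used_rh + n_per_hemi])

# Subselect the same rows and columns to get the restricted adjacency
adjacency_restricted = adjacency[keep][:, keep]
print(adjacency_restricted.shape)  # (6, 6)
```

This keeps only edges between retained vertices, so the result stays symmetric and block-diagonal across hemispheres.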

dasdiptyajit commented 1 year ago

Great! Thanks for the clarification. I will give both options a go.

For the second option this is something I have so far:

import numpy as np
import mne

fs_labels = mne.read_labels_from_annot(subject='fsaverage', parc='HCPMMP1', subjects_dir=subjects_dir)
fsaverage_vertices = [np.arange(10242), np.arange(10242)]
exclude_lh = np.in1d(fsaverage_vertices[0], fs_labels[0].vertices)
exclude_rh = np.in1d(fsaverage_vertices[1], fs_labels[1].vertices)
spatial_exclude = np.concatenate([np.where(exclude_lh)[0], np.where(exclude_rh)[0] + len(fsaverage_vertices[0])])

In [34]: spatial_exclude
Out[34]: array([    8,    23,    36, ..., 20203, 20204, 20205])

In [35]: spatial_exclude.shape
Out[35]: (1742,)