mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

Dipole / source localisation using anatomy template #5665

Closed JoseAlanis closed 5 years ago

JoseAlanis commented 5 years ago

Hey guys,

this is José from Germany. I've been working with MNE for a while now and I'm wondering if there is a way to run dipole / source localisation for EEG data without relying on the actual subject's structural data, for instance by using a standard anatomy template such as Colin27 for co-registration. I'm aware that this type of approach may lack robustness and accuracy in many ways, but I've seen other toolboxes (e.g., EEGLAB, FieldTrip) provide an alternative for this type of analysis and was wondering whether the same is true for MNE. I've gone through a couple of posts related to this issue (e.g., #5579), but was not able to find an exhaustive answer. Last but not least, if others are interested in this kind of approach, could it be worthwhile to provide an example in the MNE documentation for source localisation that makes no use of the subject's actual T1? Those examples are great!

Thanks in advance!

agramfort commented 5 years ago

Hi, unfortunately this is still not supported. Sorry!

cbrnr commented 5 years ago

What is necessary to make this work? I think it would be important to support this kind of workflow, because especially in the EEG community it is not commonplace to have individual MRI scans.

agramfort commented 5 years ago

I know we should support this.

basically we need to "fake" an individual subject, e.g. using fsaverage. That is, be able to project spherical montage positions onto the head surface of fsaverage, so one can then compute the forward solution, source space, etc.

do you have some interest in looking into this? One important thing is to have a way to compare what this workflow gives against alternative software that does this routinely (BESA, Brainstorm, etc.).
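The projection step mentioned above can be illustrated with a toy sketch (this is not MNE code; it assumes a spherical scalp of radius 0.095 m, whereas the real workflow would use the triangulated fsaverage surface):

```python
import numpy as np

def project_to_scalp(positions, head_radius=0.095):
    """Project electrode positions radially onto a spherical scalp.

    Toy sketch: the real fsaverage scalp is a triangulated surface,
    but a sphere illustrates the idea of moving template electrode
    positions onto a head surface. head_radius is in meters.
    """
    positions = np.asarray(positions, dtype=float)
    # Unit direction of each electrode as seen from the head origin
    norms = np.linalg.norm(positions, axis=1, keepdims=True)
    directions = positions / norms
    # Place each electrode on the scalp along its own direction
    return directions * head_radius

# Hypothetical template montage positions on a unit sphere
template = np.array([[0.0, 1.0, 0.0],   # front
                     [1.0, 0.0, 0.0],   # right
                     [0.0, 0.0, 1.0]])  # top
on_scalp = project_to_scalp(template)
print(on_scalp)
```

With the electrodes on (or near) the head surface, the usual BEM / source space / forward machinery can take over.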

cbrnr commented 5 years ago

Interest yes, time not so much, and also I don't really know what's possible in MNE. I do know how this is done in EEGLAB though, so if someone more comfortable with this area would like to tackle this I'd be more than happy to support - but I can't do it by myself.

JoseAlanis commented 5 years ago

Hey guys, thank you very much for the responses. I'm happy to hear that, in principle, you would support this type of workflow. Like @cbrnr mentioned, it wouldn't be much of a problem for me to just run the analysis in EEGLAB, but being able to run the complete analysis pipeline in MNE/Python would certainly be much better.

So, I've been trying to go through some of the steps that @agramfort suggested. Setting up the BEM for fsaverage wasn't too hard and the results look pretty good, I guess. This is how the head surface looks (plotted with mne.viz.plot_alignment() and surfaces='seghead'):

[Figure: fsaverage head surface, surfaces='seghead']

And surfaces='brain':

[Figure: fsaverage with surfaces='brain']

If I understand correctly, the next step would be to try the coregistration of the EEG setup to the scalp surface, and the best way to achieve this is through the coregistration GUI (?). Loading the data into the GUI is not a problem. However, once the data is loaded, it appears as if only the nasion is being plotted on top of the fsaverage BEM:

[Screenshot: coregistration GUI showing only the nasion on the fsaverage BEM]

According to these slides there should be a way of finding a "proper" MRI scaling factor (?). However, if I use the automatic fitting functions in the top right of the GUI, the head and/or the fiducials disappear. I also get an error in the bottom right of the GUI for the LPA, NAS and RPA.

So I guess from here I'm just lost. Do you know what the problem might be, or would someone like to work on this together? It shouldn't be much work once the coregistration actually works.

agramfort commented 5 years ago

how did you provide the electrode locations? Do you have them as dig points in the info of the 01-raw.fif file?

JoseAlanis commented 5 years ago

Hmm, I think so. The raw data come from a .bdf file, which I imported as follows:

# EEG montage
montage = mne.channels.read_montage(kind='biosemi64')
# Import raw data
raw = mne.io.read_raw_edf('./dpx_tt_bdfs/data1.bdf',
                          montage=montage,
                          preload=True,
                          stim_channel=-1,
                          exclude=['EOGH_rechts', 'EOGH_links', 'EOGV_oben', 'EOGV_unten',
                                   'EXG3', 'EXG4', 'EXG5', 'EXG6',
                                   'EXG7', 'EXG8'])

If I look at raw.info I get the following:

<Info | 18 non-empty fields
    bads : list | 0 items
    buffer_size_sec : float | 1.0
    ch_names : list | Fp1, AF7, AF3, F1, F3, F5, F7, FT7, FC5, ...
    chs : list | 65 items (EEG: 64, STIM: 1)
    comps : list | 0 items
    custom_ref_applied : bool | False
    dev_head_t : Transform | 3 items
    dig : list | 67 items
    events : list | 0 items
    highpass : float | 0.0 Hz
    hpi_meas : list | 0 items
    hpi_results : list | 0 items
    lowpass : float | 52.0 Hz
    meas_date : int | 1434539154
    nchan : int | 65
    proc_history : list | 0 items
    projs : list | 0 items
    sfreq : float | 256.0 Hz
    acq_pars : NoneType
    acq_stim : NoneType
    ctf_head_t : NoneType
    description : NoneType
    dev_ctf_t : NoneType
    experimenter : NoneType
    file_id : NoneType
    gantry_angle : NoneType
    hpi_subsystem : NoneType
    kit_system_id : NoneType
    line_freq : NoneType
    meas_id : NoneType
    proj_id : NoneType
    proj_name : NoneType
    subject_info : NoneType
    xplotter_layout : NoneType
>

raw.info['dig'] has 67 entries. I suppose 3 fiducials (kind 1) and 64 channels (kind 3)? I can print the output of raw.info['dig'] and post it here if that helps.
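A quick way to check that guess is to count the entries by kind. A minimal sketch using plain dicts as hypothetical stand-ins for the dig points (in the FIFF convention, kind 1 is a cardinal/fiducial point and kind 3 is an EEG electrode):

```python
from collections import Counter

# Hypothetical stand-in for raw.info['dig']: 3 fiducials + 64 EEG points
dig = [{'kind': 1} for _ in range(3)] + [{'kind': 3} for _ in range(64)]

counts = Counter(point['kind'] for point in dig)
print(counts)
```

If the counts come out as 3 cardinal points and 64 EEG points, the montage and fiducials were attached as expected.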

agramfort commented 5 years ago

when you load the file in the GUI, does it tell you that it found the 67 dig points? You should see them once the fiducial points are loaded and locked.

mmagnuski commented 5 years ago

@JoseAlanis To do the same, I first had to create a "fake" digitization montage and add it to the data (it seems you have already performed this step). Then the scale difference between the MRI and your channel positions may make the channels invisible. In my case the channels were very far away because the MRI was in meters (IIRC) and the channel positions were in cm (IIRC).
But in general: by default, mne scales the MRI to fit the channel positions, which makes perfect sense if you are using digitized positions and a generic MRI. However, if you don't have digitized positions, only the generic spherical channel coordinates distributed with the cap, you would want to scale the channel positions to the MRI, not vice versa. We did this simply by copying the MRI scaling factor found by the coreg GUI and multiplying the DigMontage channel positions by 1/mri_scaling_factor. Once the scale is correct, the rest goes smoothly. :) Currently there are no easy tools to perform this kind of EEG coreg in mne, but it should get easier in the future. I will put together a few functions for this once I have the time. :)
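The arithmetic of that workaround can be sketched as follows (this is not an MNE API, just the unit correction; the channel positions and the scaling factor of 1000, i.e. mm vs. m, are assumptions for illustration):

```python
import numpy as np

# Hypothetical template channel positions, in millimeters
channel_positions_mm = np.array([[85.0, 0.0, 0.0],
                                 [0.0, 85.0, 0.0],
                                 [0.0, 0.0, 85.0]])

# Scaling factor the coreg GUI would find when stretching the MRI to
# the (wrongly scaled) channels; assumed here to be 1000 (mm -> m)
mri_scaling_factor = 1000.0

# Scale the channels to the MRI instead of the MRI to the channels
channel_positions_m = channel_positions_mm * (1.0 / mri_scaling_factor)
print(channel_positions_m)
```

After this, the electrodes sit at a plausible head radius (here 0.085 m) in the MRI's own units, and the coregistration can proceed without rescaling the MRI.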

agramfort commented 5 years ago

ok I will put this on your todo list for the next MNE code sprint :)

JoseAlanis commented 5 years ago

Hey guys, sorry for the delayed response and thanks a lot for the comments! They have really helped me understand (at least a little better) what the coregistration GUI is doing. To keep you posted: after playing around a bit more with the GUI, I noticed that my channels were indeed very far away, just like you explained, @mmagnuski. I also noticed that the coreg GUI shows the scale adjustments in mm (see below, bottom left in the coreg window). So I went with something similar to your approach, @mmagnuski, and changed the scale of raw.info['dig'] manually, just to see what happens:

# copy of raw
raw_new = raw.copy()

# Rescale the digitization points from mm to m
for i in range(len(raw_new.info['dig'])):
    raw_new.info['dig'][i]['r'] = raw.info['dig'][i]['r'] / 1000.

Now I can see the electrodes and fiducials in the coreg GUI. Below you see them scaled by distance to the scalp.

[Screenshot, 2018-11-22: coregistration GUI with electrodes colour-coded by distance to the scalp]

So I think the workaround kind of does the trick. What makes a good coregistration result, though? Do all electrodes have to be equally close to the scalp (do they need to be "touching" the scalp or just above it)?
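One rough way to quantify "how close" is to compute each electrode's distance to the nearest vertex of the scalp mesh. A toy sketch, not an MNE function: the scalp is approximated here by random points on a 9.5 cm sphere, and everything is assumed to be in meters in head coordinates:

```python
import numpy as np

def electrode_scalp_distances(electrodes, scalp_vertices):
    """Distance from each electrode to its nearest scalp vertex."""
    # Pairwise distances: shape (n_electrodes, n_vertices)
    diffs = electrodes[:, None, :] - scalp_vertices[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return dists.min(axis=1)

# Hypothetical scalp: 500 random points on a sphere of radius 0.095 m
rng = np.random.default_rng(0)
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
scalp = directions * 0.095

# Hypothetical electrodes sitting ~2 mm above the scalp
electrodes = np.array([[0.0, 0.0, 0.097],
                       [0.097, 0.0, 0.0]])
print(electrode_scalp_distances(electrodes, scalp))
```

Electrodes a few millimeters above the scalp are usually fine (caps add pad/gel thickness); large or very uneven distances suggest the fit or the scaling is off.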

Anyways, thanks for all the help, and looking forward to seeing the results of the next MNE code sprint!

agramfort commented 5 years ago

to me it looks like you should not have electrodes in front of the eyes, so the EEG cap should be tilted backwards

I don't know where to find the code to map EEG spherical coordinates to the MNI template.

larsoner commented 5 years ago

There is an example of how to do this with fsaverage now, which would be the preferred MNE way (and fsaverage is already in MNI Talairach space):

http://mne-tools.github.io/dev/auto_tutorials/plot_eeg_no_mri.html#sphx-glr-auto-tutorials-plot-eeg-no-mri-py

Once you have the forward operator, you can check out other MNE examples for how to do inverses, etc. Closing since hopefully it will work with less effort than what we talked about here anyway!