mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

Animate ECoG activity on top of the FreeSurfer pial surface of the brain #7787

Closed adam2392 closed 4 years ago

adam2392 commented 4 years ago

Describe the problem

In #7768, an animation is being added to the plot_ecog example; @larsoner suggested including it there. Currently it's put together using matplotlib.animation, but mne-python already has other animation functions (e.g. for topomaps), so I think it would be nice to have a "3D brain animate" function living in mne-python itself.

Describe your solution

A function similar to the topomap animation function (https://mne.tools/dev/generated/mne.Evoked.html#mne.Evoked.animate_topomap) should be possible.

I think a nice feature would be a four-view panel (sagittal, coronal, axial, and "custom"), where "custom" looks at the activity from an optimally selected view that shows as many channels as possible on the surface of the brain. The other three views are traditional.
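One way the "custom" view could be chosen is by averaging the unit vectors from the head center out to each channel: that direction faces the bulk of the electrode grid. This is only a sketch of the idea; best_view_direction is a hypothetical helper, not an existing MNE function.

```python
import numpy as np

def best_view_direction(ch_pos, center=None):
    """Pick a camera direction facing as many channels as possible.

    Hypothetical helper: the mean of the unit vectors from the head
    center to each channel points roughly "at" the electrode grid,
    so a camera placed along that direction sees most channels.
    """
    ch_pos = np.asarray(ch_pos, float)
    if center is None:
        center = ch_pos.mean(axis=0)
    vecs = ch_pos - center
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    direction = vecs.mean(axis=0)
    return direction / np.linalg.norm(direction)
```

For a grid clustered on one side of the head, the returned direction points straight at that side, which could then be handed to the 3D renderer as the camera azimuth/elevation.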

Describe possible alternatives

@larsoner suggested this might go into the 3D brain viewer that @GuillaumeFavelier is developing? I'm not familiar with it, but happy to help where possible.

If the 3D brain is really nice... I would be interested in discussing how visualization of SEEG activity might look.

Additional context

See: https://github.com/mne-tools/mne-python/pull/7768#issuecomment-628797523

larsoner commented 4 years ago

We recently added add_volume to _Brain in #8064. That PR is big, but only because of the complications involved in volumetric rendering.

We could add an add_sensor function that takes sensor locations from a supplied info, plus data containing time courses, and colors the sensors according to data. For EEG these would be discs projected onto the scalp; for ECoG, discs projected onto the brain surface; for MEG, the sensors at their locations determined by dev_head_t (all of these probably require info + trans input). Actually, I don't think this would be too difficult, because it's mostly code reuse from plot_alignment.
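The projection step for the ECoG case could be as simple as snapping each sensor to its nearest surface vertex. A minimal brute-force sketch (project_to_surface is hypothetical, and real MNE code would use head↔MRI transforms and a spatial index rather than a dense distance matrix):

```python
import numpy as np

def project_to_surface(sens_pos, surf_verts):
    """Snap each sensor to its nearest surface vertex (brute force).

    sens_pos : (n_sensors, 3) sensor locations in meters
    surf_verts : (n_verts, 3) surface vertex locations in meters

    Returns the snapped positions and the vertex index per sensor.
    """
    sens_pos = np.asarray(sens_pos, float)
    surf_verts = np.asarray(surf_verts, float)
    # Dense (n_sensors, n_verts) distance matrix; fine for a sketch.
    d = np.linalg.norm(sens_pos[:, None, :] - surf_verts[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return surf_verts[idx], idx
```

The snapped positions could then be rendered as discs colored by the corresponding column of data at each animation frame.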

I can take a stab at implementing it if it seems useful.

hoechenberger commented 4 years ago

@larsoner Sounds like a great idea!!

larsoner commented 4 years ago

Another option would be to come up with a function that takes an Evoked instance, subject, and trans, and then projects activation directly onto the pial surface to yield an STC. Then you can just do stc.plot(smoothing_steps='nearest') and it should at least look okay. An API like:

stc = stc_from_sensors(evoked, trans, subject, surface='pial', subjects_dir=None, distance=0.01, project=True)

where distance makes any vertex on the given surface within that many meters of a given electrode be colored according to that electrode's value. If project=True (the default), the sensor locations are first projected onto the given surface (which seems reasonable). This will actually be pretty easy to code, and easier than modifying _Brain. I can try it to see whether it looks reasonable.
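The distance-thresholded assignment described above could look roughly like this. This is a sketch only: sensor_data_to_vertices is a hypothetical helper, and a real stc_from_sensors would work with MNE Evoked/SourceEstimate objects and coordinate transforms rather than bare arrays.

```python
import numpy as np

def sensor_data_to_vertices(verts, sens_pos, sens_data, distance=0.01):
    """Give each vertex within `distance` meters of an electrode that
    electrode's time course; all other vertices stay at zero.

    verts : (n_verts, 3) surface vertices in meters
    sens_pos : (n_sensors, 3) electrode positions in meters
    sens_data : (n_sensors, n_times) electrode time courses
    """
    verts = np.asarray(verts, float)
    sens_pos = np.asarray(sens_pos, float)
    sens_data = np.atleast_2d(np.asarray(sens_data, float))
    # (n_verts, n_sensors) distances, nearest electrode per vertex
    d = np.linalg.norm(verts[:, None, :] - sens_pos[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    within = d.min(axis=1) <= distance
    out = np.zeros((len(verts), sens_data.shape[1]))
    out[within] = sens_data[nearest[within]]
    return out
```

The resulting (n_verts, n_times) array is exactly the shape of STC data, so wrapping it in a SourceEstimate and calling stc.plot(smoothing_steps='nearest') would follow naturally.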

WDYT @adam2392 ?

larsoner commented 4 years ago

And we could add an interp='linear' | 'constant' parameter or so, where the linear mode scales the value by 1 when the distance is 0 and by 0 when the distance is distance, so you end up with something like an "activation cone" in the amplitudes around the given location. Again, pretty easy to code, and it would give some flexibility.
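The two interpolation modes amount to a per-vertex weight as a function of vertex-to-electrode distance. A minimal sketch (interp_weights is a hypothetical helper name):

```python
import numpy as np

def interp_weights(d, distance, interp="linear"):
    """Weight applied to an electrode's value at vertex distance d.

    'constant' : 1 inside the radius, 0 outside (hard threshold).
    'linear'   : 1 at d=0, falling to 0 at d=distance -- the
                 "activation cone" described above.
    """
    d = np.asarray(d, float)
    if interp == "constant":
        return (d <= distance).astype(float)
    # Linear falloff, clipped so vertices beyond the radius get 0.
    return np.clip(1.0 - d / distance, 0.0, 1.0)
```

Multiplying each nearby vertex's assigned electrode value by this weight would produce the cone-shaped amplitude profile.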