Closed: adam2392 closed this issue 3 years ago
It occurred to me that for surfaces we don't need any sort of source space because we can just directly interpolate onto the surfaces. Due to how we visualize volumes, though, that's not going to be the case. So I think we'll want to add a `src` argument to `stc_near_sensors` that's required for volume mode (and we can make surface mode use it if it's provided).
> It occurred to me that for surfaces we don't need any sort of source space because we can just directly interpolate onto the surfaces. Due to how we visualize volumes, though, that's not going to be the case. So I think we'll want to add a `src` argument to `stc_near_sensors` that's required for volume mode (and we can make surface mode use it if it's provided).
Sorry, do you mind elaborating a little bit on what you mean here?
How should I best tackle making this PR, in your opinion? It seems that I want to perhaps first document and code-trace what's going on in `source_estimate.py::stc_near_sensors()`, and then separate the code into if/else blocks based on whether it's surface (i.e., pial) or volumetric (i.e., 3 MRI slices) visualization?
> Sorry, do you mind elaborating a little bit on what you mean here?
We need to get sensor data into the brain, i.e., into source space. One way to do this is on the surface (already implemented), and in this case it's fine just to interpolate directly onto the high-density FreeSurfer mesh (100k+ vertices per hemisphere) because `stc.vertices` always refer to vertex numbers in this high-density mesh.
For volumes it's different: `vertices` do not refer to voxels in the 256×256×256 1 mm isotropic T1, but rather to a lower-density (usually 5 mm or 7 mm isotropic) volumetric grid covering the same domain.
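To make that vertex-numbering distinction concrete, here is a toy NumPy sketch (not MNE internals; the grid extent, spacing, and sphere mask are all made up): volume vertices index into a sparse regular grid, so they are meaningless without the grid that defined them.

```python
import numpy as np

# Toy sketch: a volume source space is a regular grid at e.g. 5 mm spacing;
# VolSourceEstimate.vertices index into this grid, not into the 256**3
# T1 voxel array.
pos = 5.0  # grid spacing in mm (the 'pos' idea from setup_volume_source_space)
# build a small 3D grid of candidate source positions (in mm)
coords = np.array([[x, y, z]
                   for x in np.arange(-10, 15, pos)
                   for y in np.arange(-10, 15, pos)
                   for z in np.arange(-10, 15, pos)])
# suppose only points inside a 12 mm sphere are "in use" (i.e. inside the brain)
inuse = np.linalg.norm(coords, axis=1) < 12
vertices = np.where(inuse)[0]  # what .vertices would store: grid indices
# a vertex number only makes sense together with the grid that defined it:
print(coords[vertices[0]])  # position (in mm) of the first used grid point
```

The point of the sketch: without `coords` (i.e., without the source space), the integers in `vertices` cannot be mapped back to physical locations.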
In other words, for a (surface) SourceEstimate, the `.vertices` attribute is unambiguous for the given subject's FreeSurfer reconstruction, whereas the `.vertices` attribute of a VolSourceEstimate only makes sense in conjunction with a volume source space, chosen by the `pos` parameter of `setup_volume_source_space`. So I would expect the use case for interpolating into a volume to be something like:
```python
from mne import setup_volume_source_space, stc_near_sensors

# set up a 5 mm isotropic source space
# (evoked, trans, and subjects_dir are assumed to be defined already)
vol_src = setup_volume_source_space(
    subject='sample', pos=5., mri='aseg.mgz', subjects_dir=subjects_dir)
# interpolate sEEG data into this source space
vol_stc = stc_near_sensors(evoked, trans, 'sample', src=vol_src)
brain = vol_stc.plot_3d(src=vol_src)
```
> How should I best tackle making this PR, in your opinion?
To really push sEEG analysis forward, we need some data to play with and showcase these changes. It looks like we have this:
https://github.com/mne-tools/mne-misc-data/tree/master/seeg
Any way we could pretend that these data are from `sample` or `fsaverage`, and start just by making an example like our `tutorials/misc/plot_ecog.py`, named `tutorials/misc/plot_seeg.py`? If not, let's get some data + electrode locations to work with. It doesn't even have to be real; it just has to be recognizable to sEEG folks as sEEG-like, so that they can understand from our analysis of the fake data how to analyze real data.
I would start with what you can already do with MNE to analyze some data like this, then move on to this 3D business by adding another section to the tutorial...
> It seems that I want to perhaps first document and code-trace what's going on in `source_estimate.py::stc_near_sensors()` and then separate the code into if/else blocks based on whether it's surface (i.e., pial) or volumetric (i.e., 3 MRI slices) visualization?
... sure, the if/else should be based on whether the `stc` is an instance of SourceEstimate/VectorSourceEstimate or VolSourceEstimate/VolVectorSourceEstimate. I would say get as far as you can on your own, open up a WIP PR with `# XXX huh?` comments or whatever, and we can iterate. I can also push commits that way to help. But as I said earlier, it would help immensely to work with an existing example for this.
Okay, I am going to try to make a PR here related to a `plot_seeg.py` tutorial, which will also involve my attempt at what you stated above.
Now that I'm re-reading your comment here: https://github.com/mne-tools/mne-python/issues/8388#issuecomment-712402587
I actually don't know what the difference between the mni, mri, and fs coordinate frames really means in the context of MNE-Python. What are the consequences, per se, of these?
> I actually don't know what the difference between the mni, mri, and fs coordinate frames really means in the context of MNE-Python. What are the consequences, per se, of these?
The only one you should need to deal with in this context is the MRI coordinate frame. I'll comment in your PR as needed...
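As context for the coordinate-frame question: in practice, moving between frames (head, MRI surface RAS, MNI, etc.) amounts to applying a 4×4 affine, so the same physical point gets different numeric coordinates in each frame. A toy NumPy sketch with a made-up affine (not a real MRI→MNI transform, and not MNE's transform machinery):

```python
import numpy as np

# A made-up 4x4 affine: identity rotation/scale plus a translation.
# Real frame-to-frame transforms also rotate and scale, but the idea
# is the same.
affine = np.array([[1., 0., 0., 2.],
                   [0., 1., 0., -3.],
                   [0., 0., 1., 5.],
                   [0., 0., 0., 1.]])

def apply_affine(affine, pts):
    """Apply a 4x4 affine to an (n, 3) array of points."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (homog @ affine.T)[:, :3]

# the same physical point, expressed in the "other" frame:
print(apply_affine(affine, [[0., 0., 0.]]))  # -> [[ 2. -3.  5.]]
```

The practical consequence: sensor positions and source grids must be expressed in the same frame (here, the MRI frame) before any distances or interpolations between them make sense.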
Describe the problem
In https://github.com/mne-tools/mne-python/pull/8190, @larsoner suggests that we can add 3D translucent brain visualization of a time series for depth electrode data (i.e. SEEG).
Describe your solution
"if we improve stc_near_sensors to allow returning a VolSourceEstimate object then nothing new needs to be done at the viz end. In other words, we "just" need to come up with a suitable way to interpolate the sEEG sensor activity into a volume, and once that's done, stc.plot and stc.plot_3d should just work."
Seems like the work, in sequence, comprises:

- `stc_near_sensors`: allow returning a `VolSourceEstimate` object
- `stc_near_sensors`: add an interpolation scheme for SEEG sensor data into a 3D volume (possibly just via 1/r^2 propagation?)
- `stc.plot` and `stc.plot_3d`: similar to how Eric has done in https://github.com/mne-tools/mne-python/pull/8190

Any other suggestions to the above are appreciated! Will try to tackle this soon.
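The 1/r^2 propagation idea floated above could be sketched with plain NumPy as inverse-square distance weighting (purely illustrative, not MNE's actual implementation; the function name and the `eps` regularizer are assumptions):

```python
import numpy as np

def inv_square_interp(sens_pos, sens_data, grid_pos, eps=1e-3):
    """Spread sensor values onto grid points with 1/r**2 weights.

    sens_pos: (n_sens, 3), sens_data: (n_sens,), grid_pos: (n_grid, 3).
    """
    # pairwise distances between every grid point and every sensor
    d = np.linalg.norm(grid_pos[:, None, :] - sens_pos[None, :, :], axis=-1)
    w = 1.0 / (d ** 2 + eps)           # eps avoids divide-by-zero at a sensor
    w /= w.sum(axis=1, keepdims=True)  # normalize weights per grid point
    return w @ sens_data

# two sensors 10 mm apart with opposite values
sens_pos = np.array([[0., 0., 0.], [10., 0., 0.]])
sens_data = np.array([1.0, -1.0])
# three grid points: on each sensor and at the midpoint
grid = np.array([[0., 0., 0.], [5., 0., 0.], [10., 0., 0.]])
print(inv_square_interp(sens_pos, sens_data, grid))  # approximately [1, 0, -1]
```

Grid points land near each sensor's value close to that sensor and blend toward zero at the midpoint, which is the qualitative behavior one would want from this scheme.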
Describe possible alternatives
TBD
Additional context
Logging this as an issue so as not to forget the conversation Eric brought up.