mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

curvature of pial surface in plot_alignment #8748

Closed jasmainak closed 11 months ago

jasmainak commented 3 years ago

Describe the new feature or enhancement

When I run the example in MNE called plot_seeg.py, it gives me something like this:

[screenshot: output of the plot_seeg.py example]

It seems not too different from what I see on the website. The problem with this kind of image is that you can't really orient yourself and figure out where the electrodes are. The brain structures are hardly visible.

I don't know what needs to be done to improve the situation but here is an example of what looks better (figure from MMVT):

[screenshot: example sEEG visualization from MMVT]

I think a combined plot like this would be killer for sEEG :) But even if not that, improving the contrast somehow will really help. Currently, I have no clue where I am in the brain ...

jasmainak commented 3 years ago

cc @GuillaumeFavelier

GuillaumeFavelier commented 3 years ago

Interesting... I had a very similar feature request somewhere on my Trello :thinking: I'll see what I can do and link it in here

Also, I think we should work on your rendering issues :sweat_smile:

larsoner commented 3 years ago

Someone should dig into the MMVT code to figure out what kind of viz that is. But offhand I wonder if adding a vtkPolyDataSilhouette actor for the glass brain would be enough to get this "edge highlighting" behavior. If it's enough then it shouldn't be too many lines to add it. @GuillaumeFavelier do you want to experiment with this? If it works we can come up with an API.
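
For anyone who wants to experiment, here is a minimal, self-contained sketch of that idea in raw VTK (a sphere stands in for the pial surface; the real integration would go through the pyvista-based 3D backend, so everything below is illustrative only):

```python
import vtk

renderer = vtk.vtkRenderer()
renderer.SetBackground(1.0, 1.0, 1.0)

# Stand-in for the pial surface mesh
sphere = vtk.vtkSphereSource()
sphere.SetThetaResolution(64)
sphere.SetPhiResolution(64)

surf_mapper = vtk.vtkPolyDataMapper()
surf_mapper.SetInputConnection(sphere.GetOutputPort())
surf_actor = vtk.vtkActor()
surf_actor.SetMapper(surf_mapper)
surf_actor.GetProperty().SetOpacity(0.2)  # "glass brain" transparency

# Camera-dependent outline of the same mesh
silhouette = vtk.vtkPolyDataSilhouette()
silhouette.SetInputConnection(sphere.GetOutputPort())
silhouette.SetCamera(renderer.GetActiveCamera())
silhouette.SetEnableFeatureAngle(False)

sil_mapper = vtk.vtkPolyDataMapper()
sil_mapper.SetInputConnection(silhouette.GetOutputPort())
sil_actor = vtk.vtkActor()
sil_actor.SetMapper(sil_mapper)
sil_actor.GetProperty().SetColor(0.0, 0.0, 0.0)
sil_actor.GetProperty().SetLineWidth(2)

renderer.AddActor(surf_actor)
renderer.AddActor(sil_actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```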

GuillaumeFavelier commented 3 years ago

> I wonder if adding a vtkPolyDataSilhouette actor for the glass brain would be enough

I was focusing more on the slices to be honest but I think both could help

jasmainak commented 3 years ago

slicing is 2nd level IMO :) For sEEG, the support in MNE is a bit ad hoc. I'm not yet sure myself what the right kinds of visualizations are, but it's certainly different from MEG. My feeling is that the design needs to be thought through a little before jumping into implementation.

larsoner commented 3 years ago

> I'm not yet sure myself what the right kinds of visualizations are, but it's certainly different from MEG.

Indeed -- to me the two things that help guide the viz in the image you posted are the slicing and the edge visibility of the pial surface. @GuillaumeFavelier prototyped multi-views to some extent; my suggestion had to do with edge visibility.

> My feeling is that the design needs to be thought through a little before jumping into implementation.

For viz our workflow has more or less been to try different things and see how well they work in practice. It doesn't take too much effort, and as long as the idea is clearly explained and has some chance of working, it's worth trying. But feel free to list other ideas you think could be helpful; we can discuss those, too, and rough-draft implement the ones that seem promising.

jasmainak commented 3 years ago

right, but I do think folks should look at the data a bit more closely to see whether what is being plotted makes sense :)

Take a look here -- I basically hacked plot_alignment with my limited knowledge of the renderer backends and have started using this for my own purposes (fun challenge: adding a time slider to this).

[screenshot: modified plot_alignment view of the MNE example sEEG data]

This is the MNE example data. A few observations:

- You'll notice that all the "fire" in the example is on channels outside the brain. This can lead to potential misinterpretation.
- Furthermore, sEEG is typically analyzed after doing a bipolar reference. I don't see that being done here? Related to #8718 ... you'll suddenly realize how slow our function is.
- Also, a bad channel has not been marked.
- Not sure what this data is? Was there a particular task? Do we expect some kind of response?
- The example is a good start but we need to simplify the story. All the stuff about transforms almost seems like a distraction. If the sEEG electrodes are in MRI coordinates, you should be able to directly plot them without doing any transformations.
- Also, why are two shafts in the air??

jasmainak commented 3 years ago

Regarding the visualization quality/contrast, Matti passes on the message: can you spot the central sulcus using your visualization? ;-)

larsoner commented 3 years ago

> right, but I do think folks should look at the data a bit more closely to see whether what is being plotted makes sense :)

Sure, we should have these discussions in the context of what we already have and where it fits in with the viz plans we have...

> (fun challenge: adding a time slider to this).

... for example, you'll get this for free if we follow the proposals in #8382, which AFAIK is "in the todo list" / pipeline. Rather than hacking something together specific to sEEG, #8382 proposes a general framework that will be useful for any channel type (sEEG, ECoG, and DBS primarily, but you could also see M/EEG electrode activations corresponding to MNE/MxNE/LCMV/whatever localizations, which is pretty cool).
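
(Not the #8382 design itself, just a rough sketch to illustrate the time-slider concept with PyVista's slider widget; the random points and values below are stand-ins for sEEG contact positions and activations.)

```python
import numpy as np
import pyvista as pv

rng = np.random.default_rng(0)
positions = rng.uniform(-0.07, 0.07, size=(20, 3))   # fake contact locations (m)
activations = rng.standard_normal((20, 100))          # fake (n_channels, n_times)

cloud = pv.PolyData(positions)
cloud['activation'] = activations[:, 0]

plotter = pv.Plotter()
plotter.add_mesh(cloud, render_points_as_spheres=True, point_size=20,
                 scalars='activation', cmap='hot', clim=(-3, 3))

def update_time(value):
    # Map the slider value to a time index and re-color the "sensors"
    cloud['activation'] = activations[:, int(round(value))]

plotter.add_slider_widget(update_time, [0, activations.shape[1] - 1],
                          title='time sample')
plotter.show()
```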

> Furthermore, sEEG is typically analyzed after doing a bipolar reference. I don't see that being done here?

This is a data processing / set_bipolar_reference issue, not a viz one, so please open a separate issue for it; we shouldn't discuss it in this issue, which is about being able to identify 3D landmarks and structures properly.
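
(For whoever opens that separate issue: a minimal sketch of what bipolar referencing looks like with mne.set_bipolar_reference, using a toy Raw object with made-up contact names.)

```python
import numpy as np
import mne

# Toy sEEG-like Raw: four contacts on a hypothetical shaft "LPM"
info = mne.create_info(['LPM1', 'LPM2', 'LPM3', 'LPM4'], sfreq=1000.,
                       ch_types='seeg')
raw = mne.io.RawArray(np.random.randn(4, 1000), info)

# Re-reference neighbouring contacts against each other; the new channels are
# named like 'LPM1-LPM2' and the cathode channels are dropped by default
raw_bip = mne.set_bipolar_reference(raw, anode=['LPM1', 'LPM3'],
                                    cathode=['LPM2', 'LPM4'], copy=True)
print(raw_bip.ch_names)
```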

> can you spot the central sulcus using your visualization? ;-)

This is more on point, and concretely I think having the edges highlighted like in MMVT would likely fix this problem (one idea to get this is probably the vtkPolyDataSilhouette approach I mentioned). But if you have other ideas for how to deal with this in 3D viz, we should continue to discuss them here.

> All the stuff about transforms almost seems like a distraction. If the sEEG electrodes are in MRI coordinates, you should be able to directly plot them without doing any transformations.

This seems unrelated to pial / gyral identification, so please open a separate issue for this and the other issue(s) you've brought up.

jasmainak commented 3 years ago

> proposes a general framework that will be useful for any channel type

A brain.add_sensor_data would be great for sEEG. Again, I am not fully convinced of the value of having every function general enough for every data type and trying to shoehorn old workflows and functions to fit sEEG data analysis. It leads to issues like the one I highlighted above. I'll open a separate issue though.

jasmainak commented 3 years ago

okay, another usability comment on visualization: in the Mayavi backend there are keyboard shortcuts for changing the view to be aligned along the X/Y/Z axes. This is pretty useful if I'm comparing two sEEG plots ... however, it's missing in the PyVista backend. Or is there a way to do this easily?
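
One programmatic route that seems to exist with the pyvista backend is mne.viz.set_3d_view on the figure returned by plot_alignment; a rough sketch with the sample dataset (the azimuth/elevation values below are just guesses for a roughly lateral view, and whether they match the Mayavi shortcuts exactly is untested):

```python
import os.path as op
import mne
from mne.datasets import sample

data_path = str(sample.data_path())
raw = mne.io.read_raw_fif(
    op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif'))
trans = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw-trans.fif')

fig = mne.viz.plot_alignment(raw.info, trans=trans, subject='sample',
                             subjects_dir=op.join(data_path, 'subjects'),
                             surfaces=['pial'])
# Snap the camera to (roughly) axis-aligned orientations
mne.viz.set_3d_view(fig, azimuth=180, elevation=90)  # e.g. a lateral view
```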

agramfort commented 3 years ago

let's make it work and then we'll make it nice.

I also fear that having everything in one function may make it harder to discover the features it implements.

larsoner commented 11 months ago

I think we have implemented at least some of the stuff here (a time slider for colormapped sensors) and fixed the depth-rendering issues, so I'll close this, but let's reopen if I'm mistaken.