larsoner opened 1 year ago
@larsoner I'm hitting this issue as well. Wondering if a first easy step would be to add the ability to visualize MEG sensor locations in the coreg GUI. Currently, one has to use `plot_alignment` to verify that the coregistration is correct ...
With some Kernel OPM data I have I can do:
```
$ python -c "import mne; mne.datasets.fetch_phantom('otaniemi', verbose=True)"
$ mne coreg --subject phantom_otaniemi --fif 674cbda631d4477babffd04cacfee21b_meg.fif
```
and if I click "Show MEG Helmet" on the left I get the convex hull of the sensor positions (which is the "helmet" according to MNE-Python when no proper MEG helmet is found):
Can you start with this? From there we can tweak appearances, etc.
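For context, the convex-hull "helmet" can be built from the sensor positions alone. Here is a minimal standalone sketch using scipy, with synthetic points standing in for real `info['chs'][ii]['loc']` values (this is an illustration, not MNE's internal code):

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
# Fake sensor positions scattered over the upper half of a unit sphere,
# roughly helmet-shaped (stand-ins for real MEG sensor locations).
pts = rng.normal(size=(64, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts[:, 2] = np.abs(pts[:, 2])  # keep z >= 0

# The convex hull gives vertices (rr) plus a triangulation (tris)
# that can be rendered as a surface mesh.
hull = ConvexHull(pts)
rr, tris = pts, hull.simplices
print(rr.shape, tris.shape)
```

Rendering `rr`/`tris` as a triangulated surface (e.g. with pyvista) yields the faceted "helmet" seen in the screenshot.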
Let's continue in #11405
That's exactly what I want! But I can't seem to reproduce. It just loads the generic helmet for me ... how can I get `674cbda631d4477babffd04cacfee21b_meg.fif` for testing?
> It just loads the generic helmet for me
By "generic" do you mean VectorView? That suggests your `info` is wrong. If you run with `verbose=True` (or `--verbose` from the command line) using #11405, what does it tell you about the helmet it's loading? If it loads VectorView, your `info['chs'][ii]['coil_type']` is wrong in your data and you should fix your file. If you use a coil type we don't have a helmet for, like `FIFFV_COIL_FIELDLINE_OPM_MAG_GEN1` or the even simpler `FIFFV_COIL_POINT_MAGNETOMETER` (though ideally you should choose the correct coil definition), then you should get a reasonable plot.
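Concretely, the `coil_type` fix is just a loop over the MEG entries of `info['chs']`. A standalone sketch using plain dicts shaped like `info['chs']` entries (with real data you would load the file with `mne.io.read_raw_fif`, use the constants from `mne.io.constants.FIFF`, and re-save; the numeric values below are illustrative):

```python
# Illustrative stand-ins for the FIFF coil-type codes that
# mne.io.constants.FIFF provides; use the real constants in practice.
FIFFV_COIL_VV_MAG_T3 = 3024           # a VectorView magnetometer code
FIFFV_COIL_POINT_MAGNETOMETER = 2000  # generic point magnetometer

# Fake info['chs'] entries; real ones carry many more keys.
chs = [
    {"ch_name": "MEG 001", "coil_type": FIFFV_COIL_VV_MAG_T3},
    {"ch_name": "MEG 002", "coil_type": FIFFV_COIL_VV_MAG_T3},
]

# Rewrite the mis-tagged coil type on every MEG channel.
for ch in chs:
    ch["coil_type"] = FIFFV_COIL_POINT_MAGNETOMETER

print(all(ch["coil_type"] == FIFFV_COIL_POINT_MAGNETOMETER for ch in chs))
```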
This is what is done in the existing OPM tutorial using their own coil def, which produces the convex hull helmet seen here:
If it's already producing the convex hull of the sensors, that's the best we can do currently.
At some point we might want to take the convex hull surface and try to smooth it somehow... that could probably be done with the spherical spline interpolator. But we can think about that later; first let's make sure you can get the convex hull "helmet" to show up...
Indeed, I fixed the coil_type and that did it. Thank you!
One piece of feedback: it might be helpful to see the actual sensor locations in addition to (or instead of) the convex hull itself. Many users do not have whole-head systems ... and are using only subsets of sensor locations. One could then check that the locations match those from a photograph taken during the experiment.
Want to try adding a "Show MEG sensors" checkbox? The logic will be very similar to the helmet stuff, and the code for plotting sensors should already be reasonably well refactored from `plot_alignment` to accomplish it, IIRC.
From a dev-meeting discussion with @jasmainak, one idea would be to add an API to visualize subject-specific (e.g., 3D-printed) OPM helmets, as they also work with them at MGH. I haven't thought about an API for this, but I think the idea would be to support passing a `dict(rr=..., tris=...)` for helmet vertices and triangulation, in meters, in the MEG device coordinate frame (the same frame as `info['chs']['loc']`). The responsibility would then be on the user to get `rr, tris` from their mesh format, e.g. with `pymesh`.
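A toy illustration of what such a user-supplied `dict(rr=..., tris=...)` might look like. A tetrahedron stands in for a real 3D-printed helmet surface, and the shapes/units are the only contract assumed here (with a real mesh you would read `rr`/`tris` from e.g. an STL file via a mesh library):

```python
import numpy as np

# Toy "helmet" mesh: a tetrahedron with vertices in meters,
# expressed in the MEG device coordinate frame.
rr = np.array([[0.00, 0.00, 0.12],
               [0.10, 0.00, 0.00],
               [-0.05, 0.09, 0.00],
               [-0.05, -0.09, 0.00]])
# Each row indexes three vertices of rr, forming one triangle.
tris = np.array([[0, 1, 2], [0, 2, 3], [0, 3, 1], [1, 3, 2]])

helmet = dict(rr=rr, tris=tris)  # the proposed user-supplied format
print(helmet["rr"].shape, helmet["tris"].shape)
```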
To get started, @georgeoneill @neurofractal, do you have the subject-specific mesh for the `ucl_opm_auditory` dataset that you could share publicly? If you could share it with me directly, I could try hacking in support, and we can look at `mne.viz.plot_alignment` to make sure things look okay. Then we could update `ucl_opm_auditory` to include the mesh, and I could add proper support for it in `mne coreg` and `mne.viz.plot_alignment`.
Hey good to hear from you @larsoner - do you mean the participant's headshape or a mesh of the actual 3D-printed helmet? I can generate the former but not the latter.
I was hoping for the 3D-printed helmet mesh (though the participant's headshape would be a nice addition as well). Do you usually have those available? If so, and you have another already-publicly-accessible dataset ready to go, we could create a new MNE dataset.
Friendly ping @neurofractal as I'm starting to think about this issue again... do you have a mesh of the 3D-printed OPM helmet for an open dataset that we could use (especially the existing UCL auditory OPM dataset)?
Hey @larsoner good to hear from you. We don't have this information - the manufacturer just sends position of the sensors in relation to the MRI mesh. I could generate headshape information for you?
Any chance you have an anonymized MRI (or an un-anonymized one with permission to share the original) for the participant from that dataset? I could run FreeSurfer's recon-all etc. (which would give the headshape) and update the dataset. Then we could source localize the auditory response, which would be nice. I'd also need the transformation from the sensor positions to MRI space, though, in whatever format you all use (which it sounds like is at most a translation?). If this is too much work to track down, that's alright!
I'll send you an email with the link - no worries :)
The MRI should be in the same space as the sensors, so no need for any translations.
Got it, thanks! :+1:
Continuing from https://github.com/mne-tools/mne-python/issues/11257#issuecomment-1288857665 with @georgeoneill
Yes we'll have to think about this. Let's just consider the rigid-helmet case for now maybe to make our lives easier :)
One thing to know is that, in MNE-Python, all sensor locations (for EEG) are supposed to live in the "head" coordinate frame, defined by the line between LPA and RPA (which become -X and +X) and the line perpendicular to this one through the nasion (+Y) in a right-handed coordinate system (making +Z up).
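The convention above can be sketched as code. This is an illustrative construction of the head-frame origin and axes from the three fiducials, not MNE's internal implementation:

```python
import numpy as np

def head_frame(lpa, rpa, nasion):
    """Origin and axes of the "head" frame from fiducial points (meters)."""
    lpa, rpa, nasion = map(np.asarray, (lpa, rpa, nasion))
    # +X runs from LPA toward RPA.
    x = rpa - lpa
    x /= np.linalg.norm(x)
    # Origin: projection of the nasion onto the LPA-RPA line.
    origin = lpa + np.dot(nasion - lpa, x) * x
    # +Y points from the origin through the nasion.
    y = nasion - origin
    y /= np.linalg.norm(y)
    # Right-handed system, so +Z = X x Y points up.
    z = np.cross(x, y)
    return origin, np.stack([x, y, z])

# Symmetric fiducials: axes come out as the identity matrix.
origin, axes = head_frame([-0.08, 0, 0], [0.08, 0, 0], [0, 0.1, 0])
print(origin, axes)
```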
`mne coreg` is really meant to coregister points in this head coordinate frame with the MRI coordinate frame defined during MRI acquisition. For MEG data, each system can additionally have its own "MEG device" coordinate frame (usually near the center of the sensor "sphere" of the helmet). The `info['dev_head_t']` is usually set during acquisition to say how to translate from MEG to head, and then `mne coreg` gets you from MRI to head, so you can go from any frame to any other one.

One way I think we could get this all to work in this framework is:

1. Mark the N sensor positions in a point cloud visualization in a simple GUI (maybe the iEEG GUI could be repurposed, but if not, I don't think it's hard using pyvista)
2. Update the `info` of the raw to contain the extra head shape points in `info['dig']`, including some dummy/wrong LPA/Nasion/RPA (this will just make things easier in MNE-Python), i.e., present but in an anatomically incorrect "head" coordinate frame
3. Use `mne coreg` to coregister the MEG sensors to the MRI, i.e., obtain the MEG<->MRI transform
4. Use `mne coreg` to use the "MRI fiducials" -- which are easily and accurately manually marked on the MRI, or estimated from the MNI<->MRI transform given by FreeSurfer -- to overwrite the existing dummy fiducials in the head coordinate frame, which will then overwrite/update the `info['dev_head_t']` and also adjust all existing dig points to be in an anatomically correct head coordinate frame

At this point we'd have all the transforms we need for things to be defined according to MNE-Python's conventions.
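The frame-hopping described above is just composition and inversion of 4x4 rigid transforms. A minimal numpy sketch with made-up transforms (the variable names mirror MNE's conventions, but these are not real transforms from any dataset):

```python
import numpy as np

def rigid(rot_deg, trans):
    """4x4 rigid transform: rotation about z (degrees) plus a translation."""
    t = np.eye(4)
    a = np.deg2rad(rot_deg)
    t[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    t[:3, 3] = trans
    return t

dev_head_t = rigid(10, [0, 0, 0.04])         # MEG device -> head (like info['dev_head_t'])
mri_head_t = rigid(-5, [0.002, 0.01, 0.03])  # MRI -> head (from coregistration)

# MEG device -> MRI: go device -> head, then head -> MRI (inverse of MRI -> head).
dev_mri_t = np.linalg.inv(mri_head_t) @ dev_head_t

# Round-tripping device -> MRI -> head -> device recovers the identity.
print(np.allclose(np.linalg.inv(dev_head_t) @ mri_head_t @ dev_mri_t, np.eye(4)))
```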
It's a few hoops to jump through, but if we do this then all viz functions should behave properly, and things like BIDS anonymization and uploading should "just work", etc.
One way to move forward with this would actually be for me to try this with our existing OPM dataset, because IIRC its head coordinate frame is not defined correctly. So I could try to make these adjustments to the dataset, and re-upload it.