mne-tools / mne-python

MNE: Magnetoencephalography (MEG) and Electroencephalography (EEG) in Python
https://mne.tools
BSD 3-Clause "New" or "Revised" License

ENH: Support OPM coreg #11276

Open larsoner opened 1 year ago

larsoner commented 1 year ago

Continuing from https://github.com/mne-tools/mne-python/issues/11257#issuecomment-1288857665 with @georgeoneill

coreg -- we plan to support OPM coregistration, probably through the existing mne coreg GUI. I'm still coming to understand the details of how this stuff works, so it's not easy for me to say the scope here, but I've worked with the 3D plotting stuff there, so I could help add the parts we need (I assume the big part would be allowing you to supply a helmet mesh).

We are still trying to understand where to go with this ourselves. Currently in our lab we all possess bespoke scanner-casts derived from the anatomy of each participant, so the transformation between the sensors and the anatomy is either the identity or a translation (depending on whether the cRAS information of the anatomical was read correctly).

HOWEVER, we've started to collect data from a site where generic helmets are used, so optical scans and point-cloud/rigid-body registrations are required; we'll have a better handle on this going forward. Rigid helmets will certainly be the case for the Cerca Magnetics systems. The meshes for the Cerca helmets are provided in OPyM.

Yes, we'll have to think about this. Maybe let's just consider the rigid-helmet case for now to make our lives easier :)

One thing to know is that, in MNE-Python, all sensor locations (for EEG) are supposed to live in the "head" coordinate frame, defined by the line between the LPA and RPA (which become -X and +X) and the line perpendicular to it through the nasion (+Y) in a right-handed coordinate system (making +Z up). mne coreg is really meant to coregister points in this head coordinate frame with the MRI coordinate frame defined during MRI acquisition. For MEG data, each system can additionally have its own "MEG device" coordinate frame (usually near the center of the sensor "sphere" of the helmet). info['dev_head_t'] is usually set during acquisition to say how to transform from MEG to head, and mne coreg then gets you from MRI to head, so you can go from any frame to any other.
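
For concreteness, here is a minimal sketch of chaining these frames in code, assuming raw already has a valid dev_head_t and trans_fname (a placeholder name) points to a -trans.fif produced by mne coreg:

```python
import mne
from mne.io.constants import FIFF

# head<->MRI from mne coreg; trans files can store either direction,
# so normalize to head -> MRI
head_mri_t = mne.read_trans(trans_fname)
if head_mri_t["from"] != FIFF.FIFFV_COORD_HEAD:
    head_mri_t = mne.transforms.invert_transform(head_mri_t)

dev_head_t = raw.info["dev_head_t"]  # MEG device -> head, set at acquisition

# chain them to go MEG device -> MRI
dev_mri_t = mne.transforms.combine_transforms(
    dev_head_t, head_mri_t, fro="meg", to="mri"
)
```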

One way I think we could get this all to work in this framework is:

  1. Read the point-cloud data (we have some functions for this, could add more)
  2. Coregister it to the canonical sensor positions, maybe using ICP after manually marking the N sensor positions in a point-cloud visualization in a simple GUI (maybe the iEEG GUI could be repurposed, but if not, I don't think it's hard using pyvista) -- see the rigid-fit sketch after this list
  3. Set the info of the raw to contain the extra head shape points in info['dig'], including some dummy/wrong LPA/Nasion/RPA (this will just make things easier in MNE-Python), i.e., present but in an anatomically incorrect "head" coordinate frame
  4. Use mne coreg to coregister the MEG sensors to the MRI, i.e., obtain the MEG<->MRI transform
  5. Add a new option to mne coreg to use the "MRI fiducials" -- which are easily and accurately marked manually on the MRI, or estimated from the MNI<->MRI transform given by FreeSurfer -- to overwrite the existing dummy fiducials in the head coordinate frame, which will then overwrite/update info['dev_head_t'] and also adjust all existing dig points to be in an anatomically correct head coordinate frame
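
For step 2, here is a minimal sketch of the rigid-body fit (the standard Kabsch algorithm -- not an existing MNE-Python function), with synthetic stand-ins for the N manually-marked scan points and the matching canonical sensor positions:

```python
import numpy as np


def fit_rigid(src, dst):
    """Least-squares rotation R and translation t so that dst ~ src @ R.T + t."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - src_mean).T @ (dst - dst_mean))
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return rot, dst_mean - rot @ src_mean


# synthetic check: recover a known rotation and translation
rng = np.random.default_rng(0)
marked = rng.normal(size=(8, 3))  # stand-in for marked point-cloud points
true_rot, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(true_rot) < 0:  # make it a proper rotation
    true_rot = -true_rot
canonical = marked @ true_rot.T + np.array([0.01, 0.02, 0.03])
rot, t = fit_rigid(marked, canonical)
assert np.allclose(marked @ rot.T + t, canonical, atol=1e-8)
```

An ICP loop would then alternate this fit with nearest-neighbor matching against the full point cloud.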

At this point we'd have all the transforms we need, with everything defined according to MNE-Python's conventions.

It's a few hoops to jump through, but if we do this then all viz functions should behave properly, things like BIDS anonymization and uploading should "just work", etc.

One way to move forward with this would actually be for me to try this with our existing OPM dataset, because IIRC its head coordinate frame is not defined correctly. So I could try to make these adjustments to the dataset, and re-upload it.

jasmainak commented 1 year ago

@larsoner I'm hitting this issue as well. Wondering if a first easy step would be to add the ability to visualize MEG sensor locations in the coreg GUI. Currently, one has to use plot_alignment to verify that the coregistration is correct ...

larsoner commented 1 year ago

With some Kernel OPM data I have I can do:

```console
$ python -c "import mne; mne.datasets.fetch_phantom('otaniemi', verbose=True)"
$ mne coreg --subject phantom_otaniemi --fif 674cbda631d4477babffd04cacfee21b_meg.fif
```

and if I click "Show MEG Helmet" on the left I get the convex hull of the sensor positions (which is the "helmet" according to MNE-Python when no proper MEG helmet is found):

[Screenshot from 2023-01-03: mne coreg with "Show MEG Helmet" enabled, showing the convex hull of the OPM sensor positions]

Can you start with this? From there we can tweak appearances, etc.

larsoner commented 1 year ago

Let's continue in #11405

jasmainak commented 1 year ago

That's exactly what I want! But I can't seem to reproduce it -- it just loads the generic helmet for me ... how can I get 674cbda631d4477babffd04cacfee21b_meg.fif for testing?

larsoner commented 1 year ago

> It just loads the generic helmet for me

By "generic" do you mean VectorView? That suggests your info is wrong. If you run with verbose=True (or --verbose from the command line) using #11405, what does it tell you about the helmet it's loading? If it loads VectorView, your info['chs'][ii]['coil_type'] is wrong in your data and you should fix your file. If you use something we don't have a helmet for, like FIFFV_COIL_FIELDLINE_OPM_MAG_GEN1 or the even simpler FIFFV_COIL_POINT_MAGNETOMETER (though ideally you should choose the correct coil definition), then you should get a reasonable plot.
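
For reference, fixing the coil type amounts to something like this sketch (the filenames are placeholders, and FIFFV_COIL_POINT_MAGNETOMETER is the "even simpler" fallback mentioned above -- ideally pick the correct definition for your sensors):

```python
import mne
from mne.io.constants import FIFF

raw = mne.io.read_raw_fif("my_opm_raw.fif")  # placeholder filename
for ch in raw.info["chs"]:
    if ch["kind"] == FIFF.FIFFV_MEG_CH:
        ch["coil_type"] = FIFF.FIFFV_COIL_POINT_MAGNETOMETER
raw.save("my_opm_fixed_raw.fif", overwrite=True)
```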

This is what is done in the existing OPM tutorial using its own coil def, which produces the convex-hull helmet seen here:

https://mne.tools/dev/auto_examples/time_frequency/source_power_spectrum_opm.html#alignment-and-forward

If it's already producing the convex hull of the sensors, that's the best we can do currently.

At some point we might want to take the convex-hull surface and try to make it smoother somehow... that could probably be done with the spherical spline interpolator. But we can think about that later; first let's make sure you can get the convex-hull "helmet" to show up...

jasmainak commented 1 year ago

Indeed, I fixed the coil_type and that did it. Thank you!

One piece of feedback I have is that it might be helpful to see the actual sensor locations in addition to, or instead of, the convex hull itself, because many users do not have whole-head systems and are using only subsets of sensor locations. One could then check that the locations appear to match those from a photograph taken during the experiment.
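
In the meantime this check can be done outside the GUI with plot_alignment; a sketch with placeholder names for the raw, trans file, and FreeSurfer subject:

```python
import mne

mne.viz.plot_alignment(
    raw.info,
    trans=trans_fname,          # head<->MRI -trans.fif from mne coreg
    subject="sub-01",           # hypothetical FreeSurfer subject
    subjects_dir=subjects_dir,
    surfaces=("head",),
    meg=("helmet", "sensors"),  # draw the hull/helmet and the sensor points
    dig=True,
)
```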

larsoner commented 1 year ago

Want to try adding a Show MEG sensors checkbox? The logic will be very similar to the helmet stuff, and the code for plotting sensors should already be reasonably well refactored from plot_alignment to accomplish it IIRC

larsoner commented 8 months ago

From a dev-meeting discussion with @jasmainak, one idea would be to add an API to visualize subject-specific (e.g., 3D-printed) OPM helmets, as they also work with them at MGH. I haven't thought about an API for this, but I think the idea would be to support passing a dict(rr=..., tris=...) for the helmet vertices and triangulation, in meters, in the MEG device coordinate frame (the same frame as info['chs'][ii]['loc']). The responsibility would then be on the user to get the rr and tris from their mesh format, e.g., with pymesh.
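
A sketch of what the user-side conversion could look like, here using pyvista to read an STL (the filename and the mm-to-m scaling are assumptions, and the dict(rr=..., tris=...) API itself is only proposed, not implemented):

```python
import numpy as np
import pyvista as pv

mesh = pv.read("subject_helmet.stl").triangulate()  # placeholder filename
rr = np.asarray(mesh.points) / 1000.0  # assuming the STL is in mm; MNE uses m
tris = mesh.faces.reshape(-1, 4)[:, 1:]  # pyvista faces are flat [3, i0, i1, i2]
helmet = dict(rr=rr, tris=tris)  # the proposed helmet-mesh argument
```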

To get started, @georgeoneill @neurofractal, do you have the subject-specific mesh for the ucl_opm_auditory dataset that you could share publicly? If you could share it with me directly, I could try hacking in support and we could look at mne.viz.plot_alignment to make sure things look okay; then we could update ucl_opm_auditory to include the mesh, and then I could add proper support for it in mne coreg and mne.viz.plot_alignment.

neurofractal commented 8 months ago

Hey good to hear from you @larsoner - do you mean the participant's headshape or a mesh of the actual 3D-printed helmet? I can generate the former but not the latter.

larsoner commented 8 months ago

I was hoping for the 3D-printed helmet mesh (though the participant's headshape would be a nice addition as well). Do you usually have those available? If so, and you have another already-publicly-accessible dataset ready to go, we could create a new MNE dataset.

larsoner commented 3 months ago

Friendly ping @neurofractal as I'm starting to think about this issue again... do you have a mesh of the 3D-printed OPM helmet for an open dataset that we could use (especially the existing UCL auditory OPM dataset)?

neurofractal commented 3 months ago

Hey @larsoner, good to hear from you. We don't have this information -- the manufacturer just sends the positions of the sensors in relation to the MRI mesh. I could generate headshape information for you?

larsoner commented 3 months ago

Any chance you have an anonymized (or un-anonymized, with permission to share the original) MRI for the participant from that dataset? I could run FreeSurfer's recon-all etc. (which would give the headshape) and update the dataset. Then we could source-localize the auditory response, which would be nice. I'd also need the transformation from the sensor positions to MRI space, though, in whatever format you all use (which it sounds like is at most a translation?). If this is too much work to track down, that's alright!
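
If the transform really is just a translation (or the identity), it could be written as a standard trans file; a sketch with a placeholder translation vector in meters:

```python
import numpy as np
import mne

affine = np.eye(4)
affine[:3, 3] = [0.0, 0.0, 0.0]  # placeholder translation (meters)
head_mri_t = mne.transforms.Transform("head", "mri", affine)
mne.write_trans("subject-trans.fif", head_mri_t, overwrite=True)
```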

neurofractal commented 3 months ago

I'll send you an email with the link - no worries :)

The MRI should be in the same space as the sensors, so no need for any translations.

larsoner commented 3 months ago

Got it, thanks! :+1: