mne-tools / mne-nirs

Process Near-Infrared Spectroscopy Data in MNE
https://mne.tools/mne-nirs/
BSD 3-Clause "New" or "Revised" License

Read data from GowerLabs Lumo device in HDDOT format #419

Closed · rob-luke closed this 2 years ago

rob-luke commented 2 years ago

UPDATE

Gowerlabs now exports data in SNIRF format. See https://github.com/Gowerlabs/lumomat

This makes the specific efforts below unnecessary, as SNIRF is a standardised format.
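
With a standards-compliant export, loading reduces to the standard SNIRF reader (a one-line sketch; the filename is hypothetical, and optode_frame="head" follows the usage later in this thread):

import mne

raw = mne.io.read_raw_snirf("lumo_export.snirf", optode_frame="head")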

To view the progress on importing Gowerlabs SNIRF files into MNE, see:

Old Info. Retained for historical context

Describe the new feature or enhancement

I would like to read data stored in the HDDOT format that was acquired using a GowerLab Lumo device.

The data format is described at: https://github.com/DOT-HUB/DOT-HUB_toolbox/blob/master/DOT-HUB_toolbox_Manual.pdf

Essentially:

Describe your proposed implementation

  1. Convert the .nirs file to .snirf using the Homer3 function Nirs2Snirf.m. By default Homer3 throws away the 3D coordinates, so you need to use my branch at https://github.com/rob-luke/DataTree/tree/lumo. This will produce a .snirf file, but it loses the landmark positions, so you need to note these down from the MATLAB command window.
  2. Import the snirf file and then manually code in the landmark positions.

Describe possible alternatives

Additional comments

Am I doing this correctly? How can we make this simpler?

rob-luke commented 2 years ago

Using this approach, the following code can be used to import data.

import mne
import numpy as np
import os.path as op
from mne.channels import make_dig_montage
import mne_nirs

# %%
# Read data

raw = mne.io.read_raw_snirf("/some/lumo/data/file.snirf", optode_frame="head")
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')

# %%
# Landmarks taken from nirs.SD3D.Landmarks

nasion = np.array([0, 184, 0]) / 1000
rpa = np.array([66.2270, 87.7913, -2.6014]) / 1000  # flipped
lpa = np.array([-71.0591, 92.7675, -2.4886]) / 1000  # flipped

# %%
# Make Dig Montage with landmarks

srcs = mne_nirs.channels.list_sources(raw)
dets = mne_nirs.channels.list_detectors(raw)
ch_pos = dict()
for src in srcs:
    # Keep one channel that uses this source; loc[3:6] holds the source position
    raw_s1 = mne_nirs.channels.pick_sources(raw.copy(), int(src)).pick(1)
    ch_pos[f"S{src}"] = raw_s1.info['chs'][0]['loc'][3:6]
for det in dets:
    # Likewise, loc[6:9] holds the detector position
    raw_s1 = mne_nirs.channels.pick_detectors(raw.copy(), int(det)).pick(1)
    ch_pos[f"D{det}"] = raw_s1.info['chs'][0]['loc'][6:9]
dm = make_dig_montage(ch_pos=ch_pos, nasion=nasion, lpa=lpa, rpa=rpa)
raw.set_montage(dm)

# %%
# Plot result

brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5', alpha=0.1)
# Fiducials (dig[0]=LPA, dig[1]=nasion, dig[2]=RPA); positions are in m, Brain wants mm
brain.add_foci(raw.info['dig'][1]['r'] * 1000, hemi="lh", color='w')  # nasion
brain.add_foci(raw.info['dig'][0]['r'] * 1000, hemi="lh", color='b')  # LPA
brain.add_foci(raw.info['dig'][2]['r'] * 1000, hemi="lh", color='r')  # RPA
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['sources', 'detectors'])
brain.add_head()

# %%
# Plot result just brain

brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5', alpha=0.1)
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['sources', 'detectors'])

produces...

[images: the two renderings produced by the code above]

which looks approximately correct. It should match this figure from the paper: Evaluating a new generation of wearable high-density diffuse optical tomography technology via retinotopic mapping of the adult visual cortex

[figure: montage illustration from the cited paper]

This seems to roughly match what MNE creates above. Some optodes sit slightly inside the head surface, but I guess this is due to discrepancies between the participant's head shape + digitisation and the generic head model used here.

Any thoughts on the python script above and the result, @larsoner? Is there a cleaner way to achieve this? Can the coregistration functions in MNE-Python be used to tweak the final positions so they sit nicely on the head surface, or is that not their intended use?

larsoner commented 2 years ago

Any thoughts on the python script above and the result, @larsoner? Is there a cleaner way to achieve this? Can the coregistration functions in MNE-Python be used to tweak the final positions so they sit nicely on the head surface, or is that not their intended use?

I have not read in detail what you actually did above, but if what you have is realistic / digitized sensor locations on a surrogate MRI head model... then yes I'd expect them not to sit properly on the head surface.

The best solution is to have an individualized MRI from that subject, and then use mne coreg or something similar to align the points with the MRI.

Without that, you could use mne coreg to take a surrogate MRI like fsaverage and rescale it to the digitized locations. We do this a lot for infant MEG for example because we rarely have individualized MRIs of 6 month olds, but have good templates and good / reliable Polhemus digitizations of their head shapes.

Regardless, with EEG at least what we do in forward modeling is take whatever digitized EEG locations we end up with via this alignment procedure (really realistic, or guesstimated using 10-20 + fsaverage) and project them to the head surface. In principle we could similarly project the fNIRS optodes to the head surface (or to the head surface + some distance, if you know for example the sensors have some standardized standoff from the surface). We're probably going to need to build this sort of thing for MEG eventually with the advent of OPMs and flexible geometries, so it's definitely in scope of MNE-Python to do these things.
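
For illustration, here is a minimal sketch of such a projection, assuming the sensor positions and the scalp surface are already in the same coordinate frame and units (metres); the project_to_scalp helper and standoff parameter are hypothetical, and the surface path assumes mne.datasets.fetch_fsaverage() has been run:

import os.path as op

import numpy as np
from scipy.spatial import cKDTree
import mne

fs_dir = mne.datasets.fetch_fsaverage()
head = mne.read_bem_surfaces(op.join(fs_dir, 'bem', 'fsaverage-head.fif'))[0]
tree = cKDTree(head['rr'])  # scalp vertices, in metres


def project_to_scalp(pos, standoff=0.0):
    """Snap (N, 3) positions to the nearest scalp vertex, plus a standoff."""
    _, idx = tree.query(pos)
    vert = head['rr'][idx]
    # Crude outward direction (surface centroid -> vertex) as a stand-in
    # for proper vertex normals
    direction = vert - head['rr'].mean(axis=0)
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)
    return vert + standoff * direction


# e.g. pos = np.array([d['r'] for d in raw.info['dig']])
#      pos_on_scalp = project_to_scalp(pos, standoff=0.0)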

larsoner commented 2 years ago

... and if those are fiducial points you plotted, your alignment appears to be off by quite a bit (needs to move/translate down and rotate/pitch forward)

rob-luke commented 2 years ago

Thanks @larsoner!

yes I'd expect them not to sit properly on the head surface

Good, then my intuition was ok.

what we do in forward modeling is take whatever digitized EEG locations we end up with via this alignment procedure (really realistic, or guesstimated using 10-20 + fsaverage) and project them to the head surface

What is this process called? Is there a function name I can search for, or an internal function?

... and if those are fiducial points you plotted, your alignment appears to be off by quite a bit (needs to move/translate down and rotate/pitch forward)

Yes, those are the fiducials. So I think something is off in what I am doing. I haven't done any individualised alignment before, so I am struggling to grasp how this works. Do the fiducials being in the wrong spot mean I have imported the data incorrectly? Is there a way I can programmatically generate a transformation? Or is it something you do visually, typing the numbers out manually?

We're probably going to need to build this sort of thing for MEG eventually with the advent of OPMs and flexible geometries, so it's definitely in scope of MNE-Python to do these things.

Ok, good to know. I will try and make any code I write generic then.

larsoner commented 2 years ago

Background

In M/EEG we call this "coordinate frame alignment" / "coregistration". See for example

https://mne.tools/dev/auto_tutorials/forward/25_automated_coreg.html
https://mne.tools/dev/auto_tutorials/forward/30_forward.html#visualizing-the-coregistration

The TL;DR is that your MRI is in some native scanner space (generally) and your electrodes start out in some native digitization space (generally). If you have LPA/Nasion/RPA digitized (in your digitization space), you can trivially transform all digitized points to what we call the "head" coordinate frame, which is defined by these three points. Then mne coreg is designed to let you manually (or automatically, though less accurately) determine the rotation+translation known as the head_mri_t, which defines the transformation from the head coordinate frame to the MRI coordinate frame.

Do the fiducials being in the wrong spot mean I have imported the data incorrectly?

No, generally it means 1) you probably need to convert them from their arbitrary digitization coordinate frame to the standard (Neuromag) anatomical "head" coordinate frame, if you haven't already, and then 2) you need to define the head_mri_t (trans as we usually call it) properly.

If you go to the MNE sample dataset folder and call something like this, you can start playing with these things in a standard dataset:

$ cd ~/mne_data/MNE-sample-data/
$ mne coreg -s sample -d subjects/ --fif MEG/sample/sample_audvis_raw.fif 

You'll see that the identity transform trans / head_mri_t is a terrible alignment of the head and MRI coordinate frames (you might need to click "lock fiducials" to accept the MNI-based MRI fiducial estimates):

[screenshot: identity-transform alignment, badly misaligned]

Clicking "fit fiducials" gets you to something closer to reasonable:

[screenshot: alignment after "Fit Fiducials"]

But if you want a good coreg, you have to manually adjust the MRI fiducials (unlock and click on the head a bit, then re-lock them) and re-fit to fiducials. Now we're close enough that ticking "orient glyphs" generates something that is not terrible:

[screenshot: alignment after adjusting the MRI fiducials and re-fitting]

Finally, using "Fit ICP" (iterative closest point) after setting fiducial matching to "matched" (not "nearest") mode, we get something pretty good:

[screenshot: alignment after "Fit ICP"]

And it matches pretty well to the one that was actually originally created for this dataset:

$ mne coreg -s sample -d subjects/ --fif MEG/sample/sample_audvis_raw.fif --trans MEG/sample/sample_audvis_raw-trans.fif
[screenshot: the originally shipped sample trans, for comparison]
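
For reference, roughly the same workflow can be scripted with the mne.coreg.Coregistration API from the automated-coreg tutorial linked above (a sketch, assuming info comes from the loaded recording and fsaverage is available in subjects_dir):

import mne

coreg = mne.coreg.Coregistration(info, 'fsaverage', subjects_dir,
                                 fiducials='estimated')
coreg.fit_fiducials()           # rough alignment from the landmarks
coreg.fit_icp(n_iterations=20)  # refine with iterative closest point
trans = coreg.trans             # this is the head_mri_t / trans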

What you should do

  1. Ensure all these dig points and sensor locations you have are actually in head coordinates. In practice, all this means is using dig_head_t = mne.transforms.get_ras_to_neuromag_trans(nasion, lpa, rpa) to get a 4x4 transform, then applying dig['r'] = mne.transforms.apply_trans(dig_head_t, dig['r']) to all points (see the sketch after this list).
  2. Use mne coreg as above -- or an equivalent automated approach eventually -- to align your digitized points to the head surface
  3. Someday, add sensor-to-scalp position projection functions to MNE-Python (maybe with an offset in mm that defaults to zero?)
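
A minimal sketch of step 1, assuming nasion, lpa, and rpa are the digitized landmark positions in metres (as in the snippets earlier in this thread):

import mne

# Build the 4x4 digitizer-to-head transform from the three landmarks
dig_head_t = mne.transforms.get_ras_to_neuromag_trans(nasion, lpa, rpa)
# Apply it to every digitized point
for dig in raw.info['dig']:
    dig['r'] = mne.transforms.apply_trans(dig_head_t, dig['r'])
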
larsoner commented 2 years ago

Stores the landmark coordinates for Nz, Iz, Ar, Lr, Cz

For M/EEG (and ECoG/sEEG) we have so far standardized around LPA/Nz(asion)/RPA, so it would be great if you could infer where LPA/RPA are from these other landmarks... but if not, then we might have to "fake it" and live with the "head" coord frame definition being off. And you'll have to manually use Nz/Iz/Ar/Lr/Cz (not sure what Ar/Lr are?) along the MRI scalp surface to do the mne coreg rather than relying on stuff like "Fit Fiducials".

rob-luke commented 2 years ago

Thanks @larsoner! That's amazing feedback.

So I think I've managed steps one and two (after a slight modification to coreg.py). The only thing I can't get to work is the scaling mode in the GUI. Is it meant to scale something in the GUI (either the head or the sensors)? I have tried both uniform and 3-axis modes, and varied the sX, sY, and sZ parameters, and nothing in the figure changes.

not sure what Ar/Lr are

I am assuming they are LPA and RPA; everything makes sense if I treat them that way. I took that terminology from the vendor manual.
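
Under that assumption, the vendor landmarks map straight onto make_dig_montage (a sketch; landmarks is a hypothetical dict of the five vendor positions in metres, and ch_pos is built as in the earlier snippet):

from mne.channels import make_dig_montage

dm = make_dig_montage(ch_pos=ch_pos,
                      nasion=landmarks['Nz'],
                      lpa=landmarks['Lr'],  # assumption: Lr is LPA
                      rpa=landmarks['Ar'])  # assumption: Ar is RPA
raw.set_montage(dm)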

Now the fit looks like below... (to get the sensors outside the head I had to shift LPA/RPA further back than I would like, but at least now I know how to!)

[image: the resulting coregistration fit]

larsoner commented 2 years ago

Argh yes scale mode should scale the MRI but it's currently broken

rob-luke commented 2 years ago

Argh yes scale mode should scale the MRI but it's currently broken

Good to know. I wasn't sure whether it was because I was using fNIRS and whether I should hunt down the cause. But I'll leave the bug fix to upstream.

rob-luke commented 2 years ago

Argh yes scale mode should scale the MRI but it's currently broken

Looks like this is getting fixed in https://github.com/mne-tools/mne-python/pull/10117

So I will put this issue on pause until that's resolved.

samuelpowell commented 2 years ago

@rob-luke I'm keen to ensure that users can get their LUMO data into MNE-NIRS without any significant effort.

As such, we've put together a small MATLAB package to convert native LUMO output files to standards-compliant SNIRF files. The package is still in beta, as it requires some more testing, but master should work fine. This package should obviate the need for the first few steps you describe.

When exporting to SNIRF we provide an option for MNE-NIRS 'style' output. When choosing this option, the SNIRF file is written with some modifications based upon my (possibly naïve) interpretation of what your code wants to see:

These files load into MNE-NIRS just fine; however, registration does not seem to work out of the box. It's very possible I'm doing something wrong on the MNE-NIRS side, or that the SNIRF files aren't exactly what you expect. In any case, if you're able to assist, I can show you what we're doing and/or provide some sample SNIRF files here to make this work.

rob-luke commented 2 years ago

Hi @samuelpowell,

Thanks for taking the time to ensure your data works with MNE-NIRS. I just checked out your package; it looks great!

This package should obviate the need for the first few steps you describe.

Awesome!

When exporting to SNIRF we provide an option for MNE-NIRS 'style' output.

I am keen to ensure that MNE-NIRS can read any SNIRF-compliant file. So far I have been pragmatic and only implemented reading of specification features that people actually use. I am happy to make modifications to MNE-NIRS so that it accepts your SNIRF files in their native format, without conversion to a particular style.

For the two changes you highlight:

Let's chat this afternoon and discuss in more detail. But I am excited to get the data working smoothly with MNE!

samuelpowell commented 2 years ago

I will produce some sample files to make this work.

As discussed, the files will contain anatomical landmarks from a digitiser (which should be RAS, but I will double-check). I'm assuming that in MNE parlance the optode frame should be "head" when the files are loaded.

To allow you to programmatically determine the optode frame, and for us to guarantee the presence of the requisite landmarks, would you like us to include an entry in /nirs(i)/probe/coordinateSystem, and if so, what would be appropriate?
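
For what it's worth, here is a hedged sketch of checking that (optional) field with h5py; the dataset path follows the SNIRF spec path quoted above, the filename is hypothetical, and whether the field is present depends on the writer:

import h5py

with h5py.File("sample.snirf", "r") as f:
    key = "/nirs/probe/coordinateSystem"
    if key in f:
        val = f[key][()]
        # SNIRF strings may be stored as bytes or str depending on the writer
        coord_system = val.decode() if isinstance(val, bytes) else str(val)
        print(coord_system)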

samuelpowell commented 2 years ago

@rob-luke further to the above, I attach a sample SNIRF file.

As per the question above, if there's anything we can do to get MNE-NIRS to register automatically, including an appropriate setting in /nirs(i)/probe/coordinateSystem, just let me know.

mne_sample_54_F18_B_20_04_22_20_59_59.zip

[images: cap layout diagrams cap54_18_B and mne_cap_54_18_B]

samuelpowell commented 2 years ago

@rob-luke Note that the above file uses our mne-nirs output style, which does the channel reordering that you expect. You mentioned you'd like to get rid of that requirement. I can produce another sample file that doesn't reorder, but is that something to be handled here, or in the loader in mne-python?