mne-tools / mne-nirs

Process Near-Infrared Spectroscopy Data in MNE
https://mne.tools/mne-nirs/
BSD 3-Clause "New" or "Revised" License

Meta Issue: Image-space analysis #405

Open rob-luke opened 2 years ago

rob-luke commented 2 years ago

Describe the new feature or enhancement

With the release of MNE-Python 0.24 and MNE-NIRS v0.1.2 the core sensor-space functionality is complete (many improvements are still required, but the API and minimum functionality are present). Next, development will focus on implementing fNIRS-specific image-space analysis. This meta issue will track high-level progress toward that goal.
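For context, the sensor-space pipeline that is already in place looks roughly like this (a minimal sketch; the file path is a placeholder):

```python
import mne

# Load a SNIRF recording, then apply the standard sensor-space
# conversions: raw intensity -> optical density -> haemoglobin.
raw = mne.io.read_raw_snirf('measurement.snirf', preload=True)
raw_od = mne.preprocessing.nirs.optical_density(raw)
raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od)
```

Image-space analysis would pick up where this pipeline ends.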

An fNIRS-specific solution is required, rather than simply applying the EEG/MEG techniques in MNE-Python. The analysis should be based on methods from these researchers:

Describe your proposed implementation

This will be expanded as a more concrete plan emerges, but some high-level steps are:

Topics that need further thought

Additional comments

Please add comments below with any specific requirements you have for image-space analysis, or useful resources, relevant papers or source code, use-case examples, etc. I will spend quite some time reading and formulating a plan before diving into coding (the code is the easy part).

Note: there are likely to be several API and possibly backend changes along this path (for context, the GLM API took about 3-4 iterations before I was happy). So please provide feedback at any stage; I am always happy to improve all aspects of the project.

larsoner commented 2 years ago

See DOT-HUB

This is GPL 3, so we'd have to ask permission to relicense under BSD 3-Clause. I didn't check other libs, but keep this in mind.

Export image space analysis in a standard format for analysis/visualisation etc in any other program

By "image space" do you mean "on the brain surface" and/or "in brain volumetric space"? Either of these are considered "source spaces" in MNE-Python, and what we're really talking about at this point is "source space analysis", and we already have lots tools for this sort of stuff (e.g., label extraction, spatio-temporal clustering, etc.).

In other words, to me the high-level view is that:

In all of these cases, at the end of the process you should end up with an STC object in MNE-Python. Then you can use all of our tools for visualization, processing, and statistics as you wish. With this in mind, I think many of the questions above are immediately answered:

How to do statistics correctly in image space?

See any tutorial / function we have for this sort of thing already. Spatio-temporal clustering, FDR, label extraction then FDR, etc. are all (non-exhaustive) options. Basically you have all the established fMRI and M/EEG statistical tools to choose from at this point, I think (subject to meeting the assumptions of those methods, which is likely, especially for something with very few assumptions like spatio-temporal clustering).
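For example, a rough sketch of two of those options with existing MNE functions (X is a stand-in for per-subject image-space effects, e.g. GLM betas; in practice you'd pass an adjacency built from the source space via mne.spatial_src_adjacency):

```python
import numpy as np
from mne.stats import fdr_correction, spatio_temporal_cluster_1samp_test

X = np.random.randn(12, 20, 500)  # subjects x times x vertices (synthetic)

# Spatio-temporal clustering; with adjacency=None a regular lattice is
# assumed, so supply a real source-space adjacency for surface data:
T_obs, clusters, cluster_pv, H0 = spatio_temporal_cluster_1samp_test(
    X, threshold=2.0, n_permutations=100)

# Or FDR correction over per-vertex p-values:
reject, pvals_corrected = fdr_correction(np.random.rand(500), alpha=0.05)
```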

At what stage in the pipeline is it most appropriate to move to image space?

If the sensor-to-source space transformation is linear (fingers crossed?) then it doesn't matter. If it's nonlinear, then you probably need to do the sensor-to-source space transformation, then apply your GLM :(
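To make the linear case concrete: an OLS GLM fit is itself a linear operation on the data, so a fixed linear sensor-to-source mapping commutes with it. A synthetic check (all names here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 8))    # linear sensor -> image mapping
Y = rng.standard_normal((8, 100))   # channels x times data
X = rng.standard_normal((100, 3))   # times x regressors design matrix

pinvX_T = np.linalg.pinv(X).T       # OLS fit as a matrix product
betas_then_map = M @ (Y @ pinvX_T)  # GLM in channel space, then map
map_then_betas = (M @ Y) @ pinvX_T  # map to image space, then GLM
assert np.allclose(betas_then_map, map_then_betas)  # identical betas
```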

Plotting

Look at our M/EEG source space examples and in particular things like

larsoner commented 2 years ago

It would also be great if the source space / image space data were in units of Am, like you get from M/EEG inverse models. But even if this isn't what you get (e.g., you get some other "activation" measure), we can still consider using all the existing fMRI and M/EEG tools as above, I think.
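And whatever the units end up being, the container can be the same. A minimal sketch of wrapping arbitrary image-space values in an STC (vertex numbers and data are placeholders; real vertices would come from the mne.SourceSpaces used):

```python
import numpy as np
import mne

n_verts, n_times = 10242, 50          # e.g. one ico-5 hemisphere
vertices = [np.arange(n_verts),       # left-hemisphere vertex numbers
            np.arange(n_verts)]       # right-hemisphere vertex numbers
data = np.random.randn(2 * n_verts, n_times)  # any color-mappable values
stc = mne.SourceEstimate(data, vertices, tmin=0.0, tstep=0.1,
                         subject='fsaverage')
# stc now works with MNE's plotting, label extraction, statistics, etc.
```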

RJCooperUCL commented 2 years ago

Hi-

Just to provide a brief response- I am happy for any code from our toolbox to be used if this is acknowledged in the source code somewhere. Happy to do what is needed on the license front.

The Jacobian @rob-luke describes is a linearized forward operator, which is then inverted, completely analogously to EEG. The Jacobian is calculated using a model of light transport in an FEM or voxel space.

We talk about ‘image’ space rather than source space because there are not discrete sources that generate our measurements, but a distributed, continuous image of haemoglobin concentrations. Perhaps this is just nomenclature. Images are in molar concentration, so definitely not in Am units.

Statistical handling in the image/source space is somewhat complicated by the spatially varying sensitivity of fNIRS measurements, which means different locations in the image have different statistical properties. This is likely a solved problem; however, I am just not sure what the best solution is.

larsoner commented 2 years ago

We talk about ‘image’ space rather than source space because there are not discrete sources that generate our measurements, but a distributed, continuous image of haemoglobin concentrations. Perhaps this is just nomenclature. Images are in molar concentration, so definitely not in Am units.

At least on the viz front I don't think it will matter much. "(Time-varying) values defined on surfaces / volumetric grids" is what all of our 3D viz is geared toward, whether that be currents, noise-normalized estimates, statistical t-values, or any other arbitrary color-mappable thing.
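Continuing the STC sketch above, the rendering call is the same no matter what the values represent (subjects_dir is assumed to point at a FreeSurfer subjects directory containing fsaverage):

```python
brain = stc.plot(subject='fsaverage', subjects_dir=subjects_dir,
                 hemi='both', initial_time=0.1,
                 clim=dict(kind='percent', lims=[90, 95, 99]))
```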

The Jacobian @rob-luke describes is a linearized forward operator, which is then inverted, completely analogously to EEG... different locations in the image have different statistical properties. This is likely a solved problem; however, I am just not sure what the best solution is.

In M/EEG, forward sensitivity varies as a function of space as well. To some extent the different inverse methods (depth weighting, noise normalization, etc.) account for this in different ways. Maybe similar techniques could be used with fNIRS data. I assume people have thought about this sort of thing, though. It would certainly be cool if we could just pack this Jacobian into an mne.Forward object and have it work with our suite of inverse methods. TBD whether or not it's a valid thing to do :)

But in any case, even with the techniques to combat sensitivity differences I doubt for M/EEG we ever totally achieve statistical uniformity anyway, so we're at least in a somewhat similar boat!

rob-luke commented 2 years ago

Hi @RJCooperUCL and @larsoner, thank you both for your comments (and I am quite pleased you two have now [at least virtually] met). I expect I will lean on both of you quite heavily during the next steps toward implementing fNIRS image/source-space analysis. There is a very nice complementary skill set involved here.

Just to provide a brief response- I am happy for any code from our toolbox to be used if this is acknowledged in the source code somewhere. Happy to do what is needed on the license front.

Thanks @RJCooperUCL! And we will definitely have this acknowledged. Before we merge any code I will show you where the acknowledgment will appear (in the code, the documentation, the website, etc.; details not figured out yet) and get your feedback/approval before moving forward.

We talk about ‘image’ space rather than source space

I have found that many similar concepts (of course with nuances) in different fields use domain-specific terminology. I will attempt to use the fNIRS-specific language where possible and also refer to the M/EEG analogs at first definition. Hopefully this will be correct for fNIRS researchers, while also helping M/EEG users and Google searches find things.

At least on the viz front I don't think it will matter much. "(Time-varying) values defined on surfaces / volumetric grids" is what all of our 3D viz is geared toward, whether that be currents, noise-normalized estimates, statistical t-values, or any other arbitrary color-mappable thing.

This is the news I wanted to hear! The plan is to utilise as much core MNE-Python code as possible. We may need to do some tweaking along the way for small domain-specific details, but based on our previous fNIRS integration, I am certain we can do this with a minimum-touch approach.

It would certainly be cool if we could just pack this Jacobian into an mne.Forward object and have it work with our suite of inverse methods

This is where I am planning to start, once I have figured out how to generate the Jacobian from TOAST++ or NIRFAST. I will ping you when I have made some progress (I go on leave today, so probably not much in the next few weeks).

rob-luke commented 2 years ago

@dboas @sstucker @mayucel please see above for initial thoughts on this topic. Of particular interest may be the discussion between MNE and fMRI developers on a consistent surface data API format: https://nipy.discourse.group/c/surface-api/10 and the existing source visualisation examples: https://mne.tools/dev/auto_tutorials/inverse/60_visualize_stc.html

samuelpowell commented 2 years ago

I've just had a conversation with @rob-luke on this topic and would like to contribute.

General

As you've discussed, the first question is where in the analysis you move from channel to image space. The options are:

  1. Undertake 'standard' fNIRS analysis as performed in MNE-NIRS, and then use a light transport model to invert the channel-wise data back to the image space. There are some variations here; for example, there are reasons for reconstructing absorption coefficients first, before performing spectroscopy in the image space. This is the approach that @RJCooperUCL takes.

  2. Move to the image space earlier in the pipeline, performing filtering, GLM etc., in the image space.

There are some really good points raised above which impact on how you want to approach this:

At what stage in the pipeline is it most appropriate to move to image space?

If the sensor-to-source space transformation is linear (fingers crossed?) then it doesn't matter. If it's nonlinear, then you probably need to do the sensor-to-source space transformation, then apply your GLM :(

As @RJCooperUCL noted, we're reconstructing a parameter of a model, rather than its source. Light transport is linear with respect to the sources, but strongly non-linear with respect to the parameters. Most practical approaches assume a linearisation of the problem around an assumed baseline. Linearisation ameliorates a lot of experimental problems (such as optical coupling), but prevents true quantitation.
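In symbols, that linearisation is (a sketch; $y$ is the measured data, $\mu$ the optical parameters, $\mu_0$ the assumed baseline, and $J$ the Jacobian of the forward model at $\mu_0$):

$$y(\mu) \approx y(\mu_0) + J(\mu_0)\,(\mu - \mu_0) \quad\Rightarrow\quad \Delta y \approx J\,\Delta\mu$$

so measured changes in the data map linearly to changes in the parameters, but only near the baseline.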

Assuming a linearised approach (the alternative is another story entirely), the problem with moving the transform through the pipeline is that the inverse (the mapping of changes in the data to changes in the parameters of interest) is regularised. This is necessary owing to the ill-posedness of the inverse problem, which leads on to....

How to do statistics correctly in image space?

See any tutorial / function we have for this sort of thing already. Spatio-temporal clustering, FDR, label extraction then FDR, etc. are all (non-exhaustive) options. Basically you have all established fMRI and M/EEG statistical tools to choose from at this point I think (subject to meeting the assumptions of those methods, which is likely especially for something with very few assumptions like spatio-temporal clustering).

The physics of fNIRS / DOT (a lossy diffusion process) is such that the forward operator is smoothing, which is why inversion requires regularisation. Consequently, the number of degrees of freedom in the image space is different to what would be assumed for the same image in, e.g., MRI. This aspect of the statistics is not my area of expertise, but the problem has been explored: see, for example, NIRS-SPM: Statistical parametric mapping for near-infrared spectroscopy. I assume you have similar approaches for EEG.

So, to link these two things together: yes, one can build a linear mapping that can be moved through the pipeline at one's discretion, but there are infinitely many different mappings one can reasonably choose. In a hand-wavy sense, you're implicitly filtering the back-projection of your data into the image space before you even begin. The mapping you select depends upon the linearisation point, the prior knowledge you include in the regularisation (e.g. "I expect piecewise constant changes"), and (say, from a Bayesian perspective) the covariance of the data.
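As a concrete (and deliberately naive) instance of "one mapping among infinitely many", here is a Tikhonov-regularised pseudoinverse, where the choice of regularisation level stands in for the prior. Everything below is synthetic; a real Jacobian would come from the light-transport model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_vox = 32, 5000                   # channels x image-space elements
J = rng.standard_normal((n_chan, n_vox))   # stand-in for a real Jacobian
dy = rng.standard_normal(n_chan)           # channel-wise change in the data

# Heavily underdetermined, so regularise and solve in channel space:
lam = 0.1 * np.trace(J @ J.T) / n_chan     # ad-hoc regularisation level
dmu = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(n_chan), dy)
print(dmu.shape)                           # (5000,): image-space change
```

A different lam (or a non-identity regulariser) gives a different, equally 'reasonable' mapping, which is exactly the point above.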

Practical

All that technical nonsense to one side, it's still possible to build something 'reasonable'. I'd suggest that it would be prudent to start by exploring approach (1); anything more advanced will require the same tooling anyway.

Assuming the goal is a simple model in which one goes from (data in) -> (parameters out), the following will be required:

  1. a forward model and a smattering of linear algebra
  2. baseline optical properties
  3. a model of the geometry (e.g. a mesh)
  4. definition of the source and detector locations

Excuse my naivety with MNE (-NIRS) here... but if we assume that I've loaded a big SNIRF file, and it's all been magically registered to a generic head model, can we get (3) and (4) from MNE? If so:

We can take (2) from the literature, and I can help with (1). To determine the appropriate solution to (1):
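Regarding whether (4) can come from MNE, I believe so. If I've read the MNE docs correctly, something like this should pull the source and detector positions from a loaded recording (a hedged sketch; the file path is a placeholder):

```python
import mne

raw = mne.io.read_raw_snirf('measurement.snirf')
picks = mne.pick_types(raw.info, fnirs=True)
for pick in picks:
    loc = raw.info['chs'][pick]['loc']
    # MNE's fNIRS convention: loc[0:3] is the channel midpoint,
    # loc[3:6] the source position, loc[6:9] the detector position
    # (metres, head coordinate frame).
    print(raw.ch_names[pick], 'source:', loc[3:6], 'detector:', loc[6:9])
```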