Closed — HanBnrd closed this issue 3 years ago
Ping @rob-luke
I think this is too complex to solve, as we need 3D positions to calculate distances for the Beer-Lambert law.
> I think this is too complex to solve, as we need 3D positions to calculate distances for the Beer-Lambert law.
I agree. I think it's not a priority now, but I would keep this issue open in case we ever modify the Beer-Lambert law implementation to use source-detector distances instead of the 3D positions.
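For context, in the textbook form of the modified Beer-Lambert law (not a quote of MNE's implementation), the optode geometry enters only through the source-detector separation $d$:

$$
\Delta \mathrm{OD}(\lambda) = \bigl(\varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta[\mathrm{HbO}] + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta[\mathrm{HbR}]\bigr) \cdot d \cdot \mathrm{DPF}(\lambda)
$$

where $\varepsilon$ are the extinction coefficients and DPF is the differential pathlength factor, so any representation of the positions that recovers $d$ correctly is sufficient for this particular step.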
This may become a bigger issue that we may want to address if every vendor decides to export data with only 2D positions and no sourcePos3D.
> if every vendor decides to export data with only 2D positions
In that case I will move to M/EEG research.
Hi, I'm Kristoffer, I work for Artinis, and I felt that I should comment a little on this issue.
I agree that it would be nice to always have 3D positions, but I don't see the issue with using 2D positions. MNE just takes the Euclidean distance between optodes:
We have a MATLAB toolbox for converting our proprietary format to .nirs, .snirf, etc. The SNIRF implementation was written by Jeonghoon Choi & Jay Dubb from David Boas's lab. They wrote it to only output 2D coordinates, and we assumed that was fine since the SNIRF specification has it as an optional field (as already mentioned by Johann).
We now have some customers trying to use MNE by converting their data to .snirf, so I had to help them around this issue. What I did was simply to insert "fake" 3D coordinates by setting the z coordinate to 0 for all optodes. MNE can then get the distances and run the MBLL conversion without issues. I verified that I get exactly the same concentration values in our software and in MNE.
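A minimal sketch of that workaround in standalone NumPy (illustrative only, not the actual converter code; the coordinates are made up):

```python
import numpy as np

# Made-up 2D optode positions as they might appear in
# sourcePos2D / detectorPos2D (units: metres)
src_2d = np.array([[0.00, 0.00], [0.06, 0.00]])
det_2d = np.array([[0.03, 0.01], [0.09, 0.01]])

# The workaround: pad a zero z coordinate so downstream code sees "3D" positions
src_3d = np.hstack([src_2d, np.zeros((len(src_2d), 1))])
det_3d = np.hstack([det_2d, np.zeros((len(det_2d), 1))])

# The Euclidean source-detector distance is unchanged by the padding,
# which is why the MBLL output is identical
d_2d = np.linalg.norm(det_2d - src_2d, axis=1)
d_3d = np.linalg.norm(det_3d - src_3d, axis=1)
assert np.allclose(d_2d, d_3d)
```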
I don't see why 2D positions can't be used by MNE for determining the distances between optodes. It would make most sense to check for sourcePos3D first: if 3D coordinates are available, use those; if not, look for the 2D positions.
For the case of 2D positions, you could either do the fake Z coordinate thing, or you'd have to rewrite https://github.com/mne-tools/mne-python/blob/df6115ead7474c0374eda5958266fd02c9b5b1e0/mne/preprocessing/nirs/nirs.py#L15
I know nothing about MNE though, so I can't contribute unfortunately.
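For illustration, a rough sketch of the fallback described above (the `optode_positions` helper and the `snirf_probe` mapping are hypothetical, not MNE's actual reader code):

```python
import numpy as np

def optode_positions(snirf_probe):
    """Return (sources, detectors) as Nx3 arrays, preferring 3D over 2D.

    ``snirf_probe`` is assumed to be a dict-like view of the SNIRF
    /nirs/probe group (hypothetical; adapt to the HDF5 wrapper in use).
    """
    if "sourcePos3D" in snirf_probe and "detectorPos3D" in snirf_probe:
        return (np.asarray(snirf_probe["sourcePos3D"]),
                np.asarray(snirf_probe["detectorPos3D"]))

    # Fall back to the required 2D fields, padding z = 0 as sketched above
    def pad(a):
        a = np.asarray(a, float)
        return np.hstack([a, np.zeros((len(a), 1))])

    return pad(snirf_probe["sourcePos2D"]), pad(snirf_probe["detectorPos2D"])
```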
As a side note, I just checked how the Homer3 nirs-to-snirf converter handles .nirs files with only 2D coordinates. It seems to do the same thing as I did, placing all the optodes on the same z plane, so that might be the way to go in MNE in cases where a SNIRF file doesn't contain 3D coordinates.
The only problem I see with using "fake" 3D coordinates is that the user will have to know whether the 3D positions are real or not, but once you visually inspect the positions it would be obvious whether the 3D coordinates are from a real measurement on a head or not.
Hi @kdarti, thanks for commenting on this issue! @rob-luke, we've been discussing it a bit by email with Kristoffer; here's my take on this:
> The only problem I see with using "fake" 3D coordinates is that the user will have to know whether the 3D positions are real or not, but once you visually inspect the positions it would be obvious whether the 3D coordinates are from a real measurement on a head or not.
I agree that it would be confusing to fake 3D positions because we wouldn't know whether we have a SNIRF file with fake or real 3D positions. So I think it wouldn't be good to have SNIRF files with fake 3D z coordinates, or to have the MNE loader create fake 3D positions if the SNIRF file only has 2D.
In the end I think it is probably better to support SNIRF files with only 2D positions, as this follows the official SNIRF specification (2D is required and 3D is optional).
However, this would lead to an issue with the MBLL which only works with 3D positions.
I think we should modify source_detector_distances to work with 2D positions. Using 2D positions would lead to the same source-detector distances we would get with 3D positions, because the SNIRF 2D positions are a flattened version of the 3D positions and not a 2D projection of them.
Another solution I thought about is to change the API of the MBLL to add a distances parameter like this that could short-circuit the use of positions altogether, which would have the advantage of working in cases (outside of SNIRF) where there are no positions at all, such as loading from CSV. But encouraging the use of legacy formats is probably not a good idea, and SNIRF, which is now the standard, requires at least 2D positions. Besides, it will soon be possible to set 3D montages with fNIRS thanks to #9141.
The plan I suggest is:
I'm happy to work on this when I have time if you think it's a good idea @rob-luke.
> However, this would lead to an issue with the MBLL which only works with 3D positions.
In MNE in general we assume that 3D locations are known, so I expect only having 2D to create all sorts of issues down the line. For example, when people follow our fNIRS tutorials, they'll do a plot_alignment with their own data to sanity check things in 3D, and they'll see a flat disc of electrodes over the head (or worse and more likely, through it) and be rightly confused. The 2D plotting functions expect 3D locations when they do their own flattening. Any future physical modeling will also be wrong. There are probably other cases, too.
All of these problems can be solved if we can transform the electrode locations from 2D to 3D. This is what we do with template EEG positions that only give azimuth and elevation. I'd prefer to figure out a suitable way to do this in the least painful way for fNIRS electrodes if possible, rather than change our code to accommodate 2D positions.
@kdarti what is the transformation that takes the 3D locations and transforms them to 2D? Ideally I think we would just invert this to get 3D locations back.
If this transformation is unknown and all that matters is the distance, there is probably some 2D-plane-to-3D-sphere (or sphere-ish) transform that preserves distances as much as possible, and we should just use that. If we're willing to accept that the 2D approximation for 3D positions is acceptably "close enough" for MBLL, then presumably we can get the 2D back to 3D transformation to be similarly "close enough".
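One standard candidate for such a transform is the inverse of an azimuthal equidistant projection: treat the planar distance from the cap centre as arc length on a sphere of head-like radius. A minimal sketch, where the radius and the centring at the top of the head are assumptions (this is not anything MNE currently does):

```python
import numpy as np

def plane_to_sphere(pos_2d, radius=0.095):
    """Map 2D cap positions onto a sphere, preserving radial arc lengths.

    Inverse azimuthal equidistant projection: a point at planar distance
    rho from the origin lands at polar angle rho / radius from the pole.
    radius ~ 9.5 cm is a generic adult head radius (an assumption).
    """
    pos_2d = np.asarray(pos_2d, float)
    rho = np.linalg.norm(pos_2d, axis=1)
    theta = np.arctan2(pos_2d[:, 1], pos_2d[:, 0])
    phi = rho / radius  # polar angle measured from the top of the head
    return np.column_stack([
        radius * np.sin(phi) * np.cos(theta),
        radius * np.sin(phi) * np.sin(theta),
        radius * np.cos(phi),
    ])
```

Radial distances from the centre point are preserved exactly, and distances between nearby optodes only approximately, which matches the "as much as possible" caveat above.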
> @kdarti what is the transformation that takes the 3D locations and transforms them to 2D? Ideally I think we would just invert this to get 3D locations back.
There's no transformation; we specify 2D locations, and these are then written to SNIRF when converting. If users want, they can use a Polhemus digitiser to get 3D positions.
> There's no transformation; we specify 2D locations, and these are then written to SNIRF when converting. If users want, they can use a Polhemus digitiser to get 3D positions.
To ask it differently, how are the 2D positions defined? Electrodes exist in 3D space, so the 2D representation must be some idealized approximation of the physical positions, no?
> To ask it differently, how are the 2D positions defined? Electrodes exist in 3D space, so the 2D representation must be some idealized approximation of the physical positions, no?
You set the 2D positions; they are relative and only function to provide the distances between optodes (sources/detectors) for the MBLL. You then match the relative positions/distances when placing optodes on the head.
I think that having only 2D positions will cause lots of problems downstream in MNE.
I understand that other software works in 2D (Homer .nirs). But that is one of the reasons I moved to MNE; we shouldn't be held back by the choices made in other software packages. Locations are used for more than just the Beer-Lambert law. And we wish to move beyond sensor space to source space, and that will require 3D positions.
I think a better option is for Artinis to provide a tool/script/converter for your users which takes your approximate 2D locations and creates approximate 3D locations. The SNIRF format can hold both 2D and 3D positions.
And I just read @larsoner's response. It is much more eloquent than mine, but I think we are both of the same mind on this issue.
> You set the 2D positions; they are relative and only function to provide the distances between optodes (sources/detectors) for the MBLL. You then match the relative positions/distances when placing optodes on the head.
I read a tiny bit about doing these transformations and I had forgotten (shame on me) that a plane is not isometric with a sphere, and it sounds like users are implicitly forced to do some form of sensor-2D-plane to head-3D-sphere (the head is approximately a sphere, at least!) transformation during acquisition. This seems like it's not going to yield very accurate positions, but if experts think it's good enough for the MBLL then I'm happy to defer to them.
> I think a better option is for Artinis to provide a tool/script/converter for your users which takes your approximate 2D locations and creates approximate 3D locations. The SNIRF format can hold both 2D and 3D positions.
Agreed this would be ideal.
If that's not an option -- or not happening anytime soon -- I no longer like the idea of us doing any transformation of these 2D positions to 3D ones ourselves, lest we make things worse. @rob-luke I think I can live with us upsampling 2D positions by setting z=0 during read, with a big warning that various things probably will not work, accuracy might be compromised, etc. That way people can at least carry out an equivalent sensor-space analysis in MNE. It seems better (even though perhaps less safe?) than not allowing any analysis at all.
I guess this is also future compatible in the sense that, if a reasonable 2D-to-3D algorithm comes forth, we can write a little mne.channels.fnirs_2d_to_3d(inst) that checks that (z == 0).all() and applies the transformation.
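A sketch of what that hypothetical helper could look like (fnirs_2d_to_3d does not exist in MNE; note that real fNIRS channels in MNE also carry source/detector positions in loc[3:9], which would need the same treatment — this sketch only handles the channel midpoint):

```python
import numpy as np

def fnirs_2d_to_3d(inst, transform):
    """Hypothetical helper: lift z = 0 fNIRS channel positions to 3D.

    ``transform`` maps an Nx2 array to an Nx3 array, e.g. the
    plane_to_sphere sketch shown earlier in this thread.
    """
    pos = np.array([ch["loc"][:3] for ch in inst.info["chs"]])
    if not (pos[:, 2] == 0).all():
        raise ValueError("Positions already appear to be 3D (z != 0).")
    for ch, p in zip(inst.info["chs"], transform(pos[:, :2])):
        ch["loc"][:3] = p
    return inst
```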
> Agreed this would be ideal.
Agreed, best case is for the vendor to create a 2D to 3D converter as they know how their 2D positions were generated.
Also, I think it needs to be made clear that MNE has not promised to support every corner case of the SNIRF specification. There are many optional fields which we cannot support. As an example, the specification provides for time-gated spectroscopy, which is not even a data type in MNE. We do our best to support as much as possible, but only what fits within the MNE framework.
The problem with an fnirs_2d_to_3d() function is: how do we know that the transformation is the same from one vendor to another? I'd bet it isn't.
Is the Artinis 2D coordinate system and its transformations clearly defined in a document on a public website that we can access? I think this is a minimum requirement to start an informed discussion.
The way I see it is that
Thus it is not the responsibility of MNE to drop our standards to meet the lowest common denominator of spatial representation, and it is the responsibility of the vendor to contribute the engineering effort to overcome this issue.
So is Artinis willing to make a separate 2D-to-3D converter? If they are unwilling to invest the effort in this, only then should we invest our personal resources into supporting 2D (it seems asymmetrical to me). But I think it's in the best interest of the wider fNIRS community (beyond MNE) that all software packages have access to the 3D locations, and this is best achieved with a vendor tool.
Worst case, we do as @larsoner describes.
Another option could be to enforce that a 3D montage file must be passed in with a SNIRF file if used with 2D locations.
Thanks to @HanBnrd's great work, users can now read in a fNIRS montage file.
I think a better option than supporting 2D would be to say: if you are using a data file with 2D locations, you must also pass in an elc file which has the 3D locations. These elc files are easily created by hand; we did it for the other PR (maybe tools already exist, as it's a common format). It would only need to be done once per study, and the same elc would be passed in for each participant.
Doesn't need to be elc. Just any montage file.
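For reference, applying a separately supplied montage already works along these lines in MNE (file names here are placeholders; for fNIRS the montage must contain positions for the optode names, e.g. S1, D1, ...):

```python
import mne

raw = mne.io.read_raw_snirf("recording.snirf")  # placeholder path

# read_custom_montage handles several formats (.elc, .sfp, .bvef, ...)
montage = mne.channels.read_custom_montage("positions.elc")  # placeholder path
raw.set_montage(montage)
```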
Hi all,
The thing is, you misunderstand how the fNIRS world handles optode positions. Traditionally, optode positions are defined in an arbitrary coordinate system, NOT in MNI space or any other space related to the human brain. Most researchers, at least 90% in my experience, work with 2D coordinates only. This means that any 3D projection would be arbitrary, as this 2D coordinate system is unrelated to whatever you guys want to have in your toolbox.
That said, speaking for Artinis here, we do provide options for our customers to a) use a 3D digitizer for 3D positions or b) manually specify 3D placement of their optodes in our software. We also have 3D template positions for the most commonly used optode arrangements. However, this is not required, because we cannot and will not dictate to fNIRS users how to do their research. I assume the same holds for other manufacturers. We do encourage them, however, to be as precise and accurate as possible by providing such possibilities in our software (even when they do not have the money to buy a 3D digitizer).
Long story short: if you want to force users to use 3D positions, it is fine with me/us, but be aware that a) SNIRF does not enforce this and b) most fNIRS researchers will then be unable to use MNE-Python.
> we do provide options for our customers to a) use a 3D digitizer for 3D positions or b) manually specify 3D placement of their optodes in our software. We also have 3D template positions for the most commonly used optode arrangements.
Fantastic. We can actively encourage users of MNE to use this option in your software then. Great news.
> We do encourage them, however, to be as precise and accurate as possible by providing such possibilities in our software (even when they do not have the money to buy a 3D digitizer).
Good. We are all on the same page. (I certainly can't afford a digitiser.)
> you misunderstand how the fNIRS world handles optode positions ... most fNIRS researchers will then be unable to use MNE-Python.
It's precisely my understanding of the current situation that motivated me to invest my time into this software to do something exciting and new, not to follow the crowd (the aim was not to write a Python port of Homer). No one is obliged to use this software, and I personally have no expectation that the majority of users will jump across to MNE and Python (Homer is a great tool and well suited to most labs who wish to work in 2D). I just hope that the adventurous/motivated few users can use your hardware (with the 3D placement software as you described above) and get the associated benefits in MNE. And for legacy data they could either modify the data field post hoc based on data from your tool, or use the new set_montage code. I remember having to manually edit 3D locations for AtlasViewer too.
Since the Artinis software supports 3D locations I don't think there is a need. But if the other MNE maintainers vote to support 2D locations then I'll be happy to help with the PR.
> [you can] manually specify 3D placement of their optodes in our software. We also have 3D template positions for the most commonly used optode arrangements.
We should add this to our docs. I'm about to revamp our SNIRF docs, so I can add the info as a note. Can you provide some files created by your software with the 3D template or manual positions so I can add them to our test data to ensure we support your data? Preferably just a 5 s data recording, so it's sufficient to ensure reading but doesn't take up too much disk space. Much appreciated.
> And for legacy data they could either modify the data field post hoc based on data from your tool, or use the new set_montage code. I remember having to manually edit 3D locations for AtlasViewer too.
Indeed I'm thinking what we can do is allow reading these 2D positions, and set z=0 with a warning. In that warning we can encourage people to use 3D positions and set them using set_montage or so.
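A sketch of that read-time behaviour (illustrative only; the helper name and the warning text are made up, not the code that ended up in #9338):

```python
import warnings
import numpy as np

def _pad_2d_positions(pos_2d):
    """Pad 2D optode positions with z = 0 and warn the user."""
    warnings.warn(
        "SNIRF file contains only 2D optode positions; z=0 will be used. "
        "3D-dependent functionality (e.g. plot_alignment, source analysis) "
        "will be unreliable. Consider supplying real 3D positions with "
        "set_montage.",
        RuntimeWarning,
    )
    pos_2d = np.asarray(pos_2d, float)
    return np.hstack([pos_2d, np.zeros((len(pos_2d), 1))])
```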
> Can you provide some files created by your software with the 3D template or manual positions so I can add them to our test data to ensure we support your data?
It would be great if we could get these @Horschig @kdarti ! Usually what we try to have is some tiny (few seconds?) recording of data with these positions, then we can add it to our test data for unit testing. So if you have or could make a tiny recording with associated manual and/or template 3D positions, that would be great. But if that's not feasible, at least having the manual and/or template position files (without an associated recording) would be a good start.
> Indeed I'm thinking what we can do is allow reading these 2D positions, and set z=0 with a warning
OK, I'll implement it with #9338.
Thanks all for the helpful inputs on this issue.
> Indeed I'm thinking what we can do is allow reading these 2D positions, and set z=0 with a warning. In that warning we can encourage people to use 3D positions and set them using set_montage or so.
@larsoner I think that's probably the best call. I also had in mind another option, which would be to add some kind of boolean extrapolate_2d parameter to read_raw_snirf that would add fake z=0 3D positions if set to True. But maybe doing it automatically is better actually.
> So if you have or could make a tiny recording with associated manual and/or template 3D positions, that would be great.
@kdarti from Artinis sent me example SNIRF files which have only 2D positions, so those could be good to test this. And we have the 3D positions for them with #9141. It is a whole recording, but I can crop it to 5 s.
> OK, I'll implement it with #9338.
@rob-luke do you plan to solve this issue together with #9338 in one PR? As you wrote in the other issue:

> This will not address #9308

I wasn't sure. Let me know if you want to split the work.
Check the time stamps and email me the 2D file. Actually, do not send a cropped file; sometimes things change in the cropping (or a field is not updated). I would like an untouched recording that was only 5 s to start with.
> It would be great if we could get these @Horschig @kdarti!
I understood that @HanBnrd already received them (with #9141), so no action is required from us, right?
@Horschig @HanBnrd @larsoner
To ensure we support Artinis-created SNIRF files, we would need some test data in the testing repository. We would like a short (10-15 s) recording exactly as created by your software, not edited afterwards (e.g. cropped, as this might not be faithful to how your software creates the file).
We also try to keep a record of the settings used to take the measurement, so that we can include additional tests in our code. It also helps us understand when someone else brings a file that doesn't behave well.
If the data has been converted after collection, as I think happens here, please include a copy of the script used to convert the data, so that we can reproduce the steps if required.
Some examples of PRs for adding files...
https://github.com/mne-tools/mne-testing-data/pull/51
https://github.com/mne-tools/mne-testing-data/pull/70
https://github.com/mne-tools/mne-testing-data/pull/57
It would be great to get an Artinis file in the test battery so we could claim comparability. I look forward to a PR.
Compatibility! I can't spell
Describe the bug
The function read_raw_snirf does not work if the optional field sourcePos3D does not exist in the SNIRF file.
Steps to reproduce
Given a nback.snirf SNIRF file with only sourcePos2D and no sourcePos3D:
Expected results
Actual results
Additional information
This may be complex to solve as the beer_lambert_law function requires the 3D positions of optodes.