**Closed** — larsoner closed this issue 1 year ago
I've been thinking that it would be interesting to re-implement the coregistration model using more general transformation and object classes, but unfortunately I don't have the resources to do so currently...
Why are the rotations done around the nasion instead of e.g. the head coordinate frame center?
I don't remember exactly what the reason was during the implementation, so maybe there are other reasons, but from what I recall the nasion seemed like the best point for initial coregistration. I am usually most confident that the nasion is accurate for both models, especially when using the average brain. If the rotation was around the center, then manually rotating the head would always move the nasions apart and require manual correction using translation to bring them to match again. Similarly, the front of the head is most distinct and has more features to compare between head shape and MRI, and this is particularly relevant for NYU where head shapes include hair, making the back of the head much less reliable, and so it was easier to focus on the front first and then rotate/scale to adjust the back while leaving the nasion in place.
Maybe a better transformation model could make the center an option?
The camera center is currently not set
I just used mayavi's mouse based camera controls to adjust the camera as needed, but there might well be a better setting
For me, changing the MRI scaling only translates the MRI, it does not scale it
That is strange, maybe a platform issue? Or a recent library update? Scaling has been used a lot at NYU, I think always on OS X, and I've also used it as recently as two or so months ago.
> That is strange, maybe a platform issue? Or a recent library update? Scaling has been used a lot at NYU, I think always on OS X, and I've also used it as recently as two or so months ago.
I'm on latest conda and Linux.
Command on master:

```
mne coreg -s fsaverage -f ~/mne_data/MNE-sample-data/MEG/sample/sample_audvis_raw.fif
```
Although looking at it now, the LPA/RPA solid balls did get twice as far out. So maybe only some things are updating properly?
I can reproduce it on OS X, and it must have something to do with the mayavi update. I can fix it by reverting to my old env:
```
$ conda create -n mayavi44 numpy scipy matplotlib mayavi=4.4 h5py pyqt=4.10 pygments=1
```
Do you have time to look soon (in the next week)? This is probably a 0.14 release blocker.
Yes this is a problem, I'll look into it
In this line, it'd be great to add the color changing of the dots as a function of whether they are inside or outside the surface (as in MNE C)
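One way this could be sketched (an assumption about the approach, not the MNE C implementation): classify each dig point by the sign of its offset from the nearest scalp vertex along that vertex's outward normal, then feed the boolean result to mayavi as per-point scalars with a two-color colormap. The helper name and inputs are hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

def points_outside(points, surf_rr, surf_nn):
    """Return a boolean array: True where a dig point lies outside the surface.

    points : (n, 3) dig points; surf_rr : (m, 3) surface vertices;
    surf_nn : (m, 3) outward vertex normals. All in the same coordinate frame.
    """
    tree = cKDTree(surf_rr)
    _, idx = tree.query(points)
    # A point is outside if the vector from its nearest vertex to the point
    # has a positive component along that vertex's outward normal.
    return np.einsum('ij,ij->i', points - surf_rr[idx], surf_nn[idx]) > 0
```

The boolean array could then drive a two-entry colormap (e.g., red inside, green outside) on the glyph source.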
> Yes this is a problem, I'll look into it
I actually edited that in as a point above a bit ago, and implementing it isn't as trivial as I'd hoped. MayaVi is hard :(
@christianbrodbeck as part of #3935 I managed to enable another test without requiring FreeSurfer that appears to have uncovered a Windows bug (because AppVeyor runs the test now):
https://ci.appveyor.com/project/Eric89GXL/mne-python/build/1.0.8729
I'm going to put a `SkipTest` in there for now in #3935, but if you have time to fix that one it would help, too.
@christianbrodbeck I think we should factor `trans` out of `PointObject`. It's mixing the model (data) with the view in a counterproductive way. For example, to determine whether points are inside/outside the scalp, the model should really be applying the `trans` and storing the result somewhere, instead of `PointObject` doing it. Also, if there were ever some more complex transformation going on with the data, we wouldn't want that to live in `PointObject`, either. So I'll remove it as part of the general rescaling refactoring.
Yes completely agree
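A minimal sketch of the separation being agreed on here (the class and property names are hypothetical, not the actual MNE code):

```python
import numpy as np

class HeadShapeModel:
    """Hypothetical model class: owns both the raw points and the trans."""

    def __init__(self, points, trans=np.eye(4)):
        self.points = np.asarray(points, float)  # raw data, model-owned
        self.trans = np.asarray(trans, float)    # 4x4 affine, model-owned

    @property
    def transformed_points(self):
        """Points with the current trans applied (what views should display)."""
        homo = np.column_stack([self.points, np.ones(len(self.points))])
        return (homo @ self.trans.T)[:, :3]

# A view (PointObject) would then just render model.transformed_points and
# never apply, or even know about, the transform itself.
```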
@christianbrodbeck it occurred to me that, when optimizing MRI scaling to fit the headshape, we might benefit from adding a constraint (e.g., with `fmin_cobyla`) that all points must be outside the surface, or at most `x` mm inside the surface. Using the current algorithm I found that many times it found a solution that is impossible (e.g., roughly half the points inside the head). WDYT?
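A rough sketch of the proposed constraint, assuming `scipy.optimize.fmin_cobyla` and substituting a spherical stand-in for the real scalp mesh (the scale-only parameterization and all names here are illustrative, not the actual fitting code):

```python
import numpy as np
from scipy.optimize import fmin_cobyla

x_mm = 2.0  # allowed penetration depth (mm)

def signed_dists(scale, points, scalp_radius=100.0):
    # Signed distance to a spherical "scalp": > 0 outside, < 0 inside (mm).
    return np.linalg.norm(points / scale, axis=1) - scalp_radius

def cost(scale, points):
    # Least-squares misfit of the points to the surface.
    return np.sum(signed_dists(scale[0], points) ** 2)

def constraint(scale, points):
    # COBYLA requires constraint(x) >= 0: every point at most x_mm inside.
    return signed_dists(scale[0], points).min() + x_mm

# Fake head-shape points on a ~95 mm sphere.
rng = np.random.default_rng(0)
points = rng.normal(size=(50, 3))
points *= 95.0 / np.linalg.norm(points, axis=1, keepdims=True)

scale = fmin_cobyla(cost, x0=[1.0], cons=[constraint], args=(points,),
                    consargs=(points,), rhobeg=0.1, rhoend=1e-6)
```

With an infeasible-looking initial fit, COBYLA would then trade off misfit against the "at most `x` mm inside" requirement instead of burying half the points.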
Yes that sounds great. I also just noticed recently that when there are additional points in the head shape at the nose which are not in the MRI, they cause undesirable shifts, pulling the head shape into the MRI.
@christianbrodbeck I'm looking at implementing the points-must-be-outside algorithm, which will mean eventually changing `fit_point_cloud` in e.g. `fit_scale_hsp_points`, which finds the scaling and rotation to match the MRI and head shape points. Two issues come up:

1. Why `translate=False`? Stability? It seems like it would work better not to constrain this if possible.
2. Using `nasion` as the effective "center of the coordinate frame" here is a bit weird. It means that any rotation about the `x` or `z` axes will cause a huge swing in the locations of the points in the back of the head, and a tiny one in the locations in the front.

I know you originally formulated the code to treat the nasion in a special way (due to its accuracy IIRC?) but the more I think about implementing these things, the more it seems like we should change it to use a head-coordinate-frame-centric approach instead. Do you agree?
... and the reason that these two things are issues for the fitting code is that, when combined, they will commonly lead to scenarios that prevent the optimizer from finding any suitable solution. Thinking about cases where there are points near the `nasion` slightly inside the head (or the nasion itself being slightly inside), I can see this being a problem. (This is even the case for the `sample` dataset `trans` file IIRC.)
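A toy numpy illustration (made-up coordinates) of the rotation-center point: a rotation about a center `c` maps `p -> R @ (p - c) + c`, so points far from the center swing a lot and points near it barely move.

```python
import numpy as np

def rotate_about(points, R, center):
    """Rotate point(s) by R about an arbitrary center."""
    return (points - center) @ R.T + center

theta = np.deg2rad(5.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.],
               [np.sin(theta),  np.cos(theta), 0.],
               [0., 0., 1.]])

nasion = np.array([0.0, 0.09, 0.0])  # ~9 cm forward of the head-frame origin
back = np.array([0.0, -0.09, 0.0])   # a point at the back of the head

# Rotating about the nasion moves the back of the head twice as far as
# rotating about the head-frame origin (the chord length scales with the
# distance from the rotation center: 18 cm vs. 9 cm here).
d_nasion = np.linalg.norm(rotate_about(back, Rz, nasion) - back)
d_origin = np.linalg.norm(rotate_about(back, Rz, np.zeros(3)) - back)
```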
Yeah the reason for using the Nasion as center (and avoiding translation) was that with the optical scanner at NYU we have high confidence in the nasion, slightly less in RPA and LPA (because they are not as visually distinctive sometimes), but all the points at the back of the head will be highly uncertain due to hair. I agree that for other situations this is not as clear cut, although the nasion should still be much less ambiguous than matching two points at the back of the head, no?
Maybe a parameter that weights the nasion relative to other points?
> I agree that for other situations this is not as clear cut, although the nasion should still be much less ambiguous than matching two points at the back of the head, no?
Maybe with two points, but the standard procedure in Neuromag + Polhemus systems is generally to digitize 100+ additional headshape points. There are computer-vision-based systems in deployment at some sites that give (tens of) thousands of points, too. In both of these cases the cloud-to-MRI fit can be (and might usually be) more accurate than relying on the single nasion point, depending on the measurement setup.
As an extreme example, at UW our Polhemus glasses make it difficult to get to the actual nasion for some subjects / studies.
> Maybe a parameter that weights the nasion relative to other points?
The way that MNE-C addresses this in ICP fitting is to allow fixing the nasion in place.
I suspect that this is the sort of thing you had in mind from the start to motivate the nasion-centric code. But it leads to a consistency problem: everywhere else in MNE we adhere to Neuromag head coordinates as the coordinate frame to use for dig/head-centric measurements (including the transformation we eventually save with the coreg GUI!). So users (and devs) have to deal with this new definition the way things are currently structured. (As a `mne coreg` user I was confused when my X/Z rotations were about the nasion, for example.)

I think the cleanest thing to do would be to re-code things to use Neuromag head coords, and then add a constraint back in for the nasion where appropriate. It will change the interaction with `mne coreg` for people who have used it, but hopefully in not-too-painful ways.
I am not opposed to changing the coordinate system, especially if there will be an option to keep the nasion in place (or increase its weight for automatic fits).
@christianbrodbeck one more idea that I think could help some things I have in mind. I'm thinking of always trying to load the low-res head (and might as well always try to load the high-res one, too, for simplicity). This will allow faster computations, such as finding the point on the surface nearest each dig point, which `mne_analyze` does and is nice. Any thoughts/objections?
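For example, the nearest-point distances could be computed cheaply with a KD-tree over the low-res vertices (a hypothetical helper, not existing MNE code; with only a few thousand low-res vertices this is effectively instant):

```python
import numpy as np
from scipy.spatial import cKDTree

def dig_to_scalp_dists(dig_points, scalp_rr):
    """Distance from each dig point to its nearest scalp vertex.

    dig_points : (n, 3) digitizer points; scalp_rr : (m, 3) low-res scalp
    vertices, both in the same coordinate frame and units.
    """
    tree = cKDTree(scalp_rr)
    dists, _ = tree.query(dig_points)  # nearest-neighbor distances
    return dists
```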
... oh, and "dig point nearest point on surface" is also useful for iterative-closest-point (ICP) fitting, which is another feature `mne coreg` doesn't have that `mne_analyze` does.
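A minimal rigid-only ICP sketch of the kind of loop being discussed, alternating nearest-vertex matching with a least-squares (Kabsch) rigid fit; these helpers are illustrative, not `mne_analyze` code:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_rigid(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - src_c).T @ (dst - dst_c))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(head_pts, scalp_rr, n_iter=20):
    """Iteratively align head_pts to the scalp cloud (rigid-only)."""
    tree = cKDTree(scalp_rr)
    pts = head_pts.copy()
    for _ in range(n_iter):
        _, idx = tree.query(pts)          # match each point to nearest vertex
        R, t = fit_rigid(pts, scalp_rr[idx])
        pts = pts @ R.T + t               # apply the incremental rigid fit
    return pts
```

Fixing the nasion in place (as MNE-C allows) would amount to an extra constraint on the `fit_rigid` step.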
@agramfort BTW do you think something like local outlier factor could be useful for removing dig points that are unreasonably far from the surface (e.g., points that were errantly digitized)?
the metric you are interested in is the distance from dig point to the scalp surface. so each point is summarized by a positive float (distance). No need for fancy outlier detection if you just have this. You can just threshold the distribution with some contamination factor. Let's say 10% of bad dig points max.
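That thresholding idea is simple to sketch (hypothetical helper; the 10% contamination default is the figure suggested above):

```python
import numpy as np

def keep_dig_points(dists, contamination=0.10):
    """Boolean mask keeping the (1 - contamination) fraction of closest points.

    dists : (n,) per-point distance to the scalp surface (positive floats).
    """
    thresh = np.quantile(dists, 1.0 - contamination)
    return dists <= thresh
```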
@christianbrodbeck with the `head_to_mni` PR #4344 recently merged, it occurred to me that we are not scaling the MNI transform file `talairach.xfm`. Have you thought about it and not done it for some reason, or just didn't consider / know about it when implementing originally?

This would give us a way to transform any points in head coords (e.g., from dipole fits) to MNI coordinates once you fit a surrogate model like `fsaverage` to the head shape points, which would be neat.
I did not consider it – for simplicity what I did was only scale the geometry needed for source estimates, and for anything else like plotting I used the unscaled fsaverage model (since the source spaces are scaled versions of the same vertices, the unscaled geometry can be used without any additional transformation). The same logic can give you vertex coordinates in MNI space, but there might be applications where it's easier with a scaled `talairach.xfm`.
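A sketch of how the composition could look, assuming a simple uniform/3-axis scale about the MRI origin (a simplification of what the coreg scaling actually does, and a hypothetical helper, not MNE code): points in the scaled MRI are first unscaled back to the original MRI space, then mapped to MNI by the original affine.

```python
import numpy as np

def scale_xfm(xfm, scale):
    """Compose a 4x4 MRI->MNI affine with an inverse MRI scaling.

    xfm : (4, 4) original talairach.xfm affine;
    scale : scalar or (3,) scale factors applied to the surrogate MRI.
    """
    unscale = np.eye(4)
    unscale[:3, :3] = np.diag(1.0 / (np.ones(3) * scale))  # undo the scaling
    return xfm @ unscale
```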
We got through most of the stuff at the top comment, so closing this for targeted issues as people find things they need or are broken
I'm starting to use the Coreg GUI because I want to allow spherical warping instead of just 3-axis scaling. A few potential enhancements/questions have come to mind @christianbrodbeck .
- [ ] constrain MRI scaling optimization to force all points to be at most `x` mm inside the MRI head surface, or possibly by adding weighting for points inside vs outside
- [ ] unify cHPI matched-point fitting (`cobyla`) with ICP routine (`leastsq`)?
- [ ] visualization options could be saved to / loaded from JSON (e.g., background color), as could other options (e.g., ICP fitting options)
- [x] allow instance of Info / DigMontage (issue #2772)
- [x] ~~Implement spherical warping.~~ This will probably not be done unless someone is highly motivated because a lot of code now depends on the transform being affine.
- [x] scale the MNI transform and MRIs, too (#5132)
- [x] add coordinate frame axes (head and MRI) as in `plot_alignment` (#4971)
- [x] do not require MRI fiducials for transforms to take effect (#4971)
- [x] allow HPI and EEG electrodes in fitting (#4971)
- [x] Refactor `trans` out of `PointObject` (#4971)
- [x] change coordinate system and transformations to use Neuromag head coordinates instead of being nasion-centric (#4971)
- [x] add option to add extra weight to the nasion (#4971)
- [x] show HPI coils (#4966)
- [x] color-code hsp points based on whether they are inside or outside the head, which can be done ~~easily~~ with a custom colormap + scalar values. (#4966)
- [x] always load low- and high-res head geometry, use low-res head for computations (#4966)
- [x] orient points to the head (#4966)
- [x] scale head points based on distance from scalp (#4966)
- [x] Points need to be culled based on location (e.g., points outside the back of the head should not be visible from the front). The easiest way to do this is probably to 1) have a translucent head with `cull_face='back'` and 2) an opaque head with `cull_face='front'`. (#4938)
- [x] I found a few little bugs (face culling, surface normals, argument-passing not giving expected behavior) but I will open a PR for those.
- [x] Why are the rotations done around the nasion instead of e.g. the head coordinate frame center? The latter seems like a more natural choice, as it's the center of a standard coordinate frame.
- [x] The camera center is currently not set, which means it takes whatever `mayavi` defaults to. We could allow setting it to 1) "auto" (current), 2) MRI origin, or 3) head coordinate frame center by radio button.
- [x] For me, changing the MRI scaling only translates the MRI, it does not scale it, i.e. the head shape points just move relative to the MRI (because the MRI is not quite centered) but do not get wider/thinner relative to the MRI.
- [x] EDIT: Need to fix coreg on Windows: https://ci.appveyor.com/project/Eric89GXL/mne-python/build/1.0.8729
- [x] EDIT: We should add EEG electrodes (#3991)
- [x] EDIT: We should add terrain/arcball-style interaction, with the "up" given by the head coordinate system. (#3991)
- [x] EDIT: We should allow non-FIF instances -- anything that has a proper `info['dig']` should work. (#4444)
- [x] Add a button to project the EEG electrodes (#3700; #4937)
- [x] Fix points update for viewer (in `_viewer.py`, e.g. when `self.src.data.points` is updated, the normals need to be updated, too)
- [x] I've started trying to get the spherical warping implemented. The GUI changes are easy, but getting the actual parts working is a big pain because the MRI `scale` parameters are ingrained in many parts of the code. Would it be better to first refactor this into a generic `PointTransformation` class or something? The None/1/3-axis cases are just special cases of mapping 3-space points to 3-space points (with no, uniform, or 3-axis scaling respectively). It seems like the sanest way to allow for TPS warping is to get this refactoring working first; then the warp should drop in more easily.