Closed: kingjr closed this issue 6 years ago
Good point - I remember a conversation about that from the GSoC, but cannot remember the reason for keeping it, @agramfort ?
It must have been carried over from a historical example where this was done to make things look good. I remember that results used to look bad when using all sensors. We should check whether this is still the case.
I can have a look at that ASAP - probably tomorrow.
The rationale is the following: beamformers are known to fail when you have very correlated sources, which is possible with binaural stimulation. Although it works OK on all sensors on sample, I wanted to show how to restrict the LCMV to the temporal channels that only see the left auditory source.
Shall we just clarify the doc?
IMO, examples should be restricted to a unique aim, which here is how to do volume source modeling. The issue of synchronized sources for beamformers belongs in a doc/tutorial, provided the results aren't affected.
Unless you have time to develop this in a PR, I would suggest starting with a clear comment in this example.
@agramfort actually, the correlated-sources problem was most prominent when (for computational reasons) the covariance matrix was calculated on the average of the trials. Now that we use epochs, we rarely run into that problem anymore - and if the results look good in this example, I would rather vote for showing that it works than keeping an old "hack" :smiley:
are you saying that LCMV with bilateral auditory stim works if you compute the data cov on Epochs?
most of the time it does, yes.
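The epochs-vs.-average point above can be illustrated with a toy numpy sketch (purely illustrative, not MNE code; the signal model and numbers are invented for the demo): two sources that share the same evoked template look almost perfectly correlated in the average, but much less so across single trials, because trial-to-trial noise survives in epoch-based estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_epochs, n_times = 50, 200
t = np.linspace(0, 1, n_times)
# a shared evoked template (windowed oscillation), hypothetical shape
template = np.sin(2 * np.pi * 5 * t) * np.exp(-((t - 0.3) ** 2) / 0.02)

# two "sources" share the evoked template but have independent
# trial-to-trial noise
s1 = template + 0.5 * rng.standard_normal((n_epochs, n_times))
s2 = template + 0.5 * rng.standard_normal((n_epochs, n_times))

# correlation computed on the averaged (evoked) data: near-perfect
r_avg = np.corrcoef(s1.mean(axis=0), s2.mean(axis=0))[0, 1]
# correlation computed across all single-trial samples: much lower
r_epochs = np.corrcoef(s1.ravel(), s2.ravel())[0, 1]

print(r_avg, r_epochs)
```

The averaging washes out the independent noise, so the average-based estimate sees only the shared (phase-locked) part and inflates the apparent inter-source correlation.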
can we just add a note then?
If I see it right, this example only uses the left-ear auditory stimulation trials anyway. Was the approach used to get the (weaker) ipsilateral activity then?
I don't think there was more to it than showing how to select sensors before running LCMV...
I'd prefer not to propagate the myth that one can't see auditory activations with a beamformer without special tricks. The beamformer seems to perform excellently on this dataset using all the sensors, so why should we give people the impression otherwise?
You really think it's a myth? Maybe I've hung out too much with MNE people? :)
Perhaps better described as obsolete knowledge, and yes, clearly. :-) As @britta-wstnr pointed out above, it came up during the days of taking the covariance of the average, and even then it was only occasionally. We went through ~15 participants in our 2006 IEEE paper before finding one that failed "enough" in order to demonstrate our workaround!
The reason is that the sources need to be very highly temporally correlated (>0.85) for it to become a problem, but in most people, the two auditory cortices aren't quite so synchronized, and have some delay between them.
Now, with covariance based on epochs (and optionally time-frequency approaches), it's even less likely that the correlations will approach that level. I suspect that moving from spherical head models to more accurate ones has also helped... I literally have not seen it come up since those days. In fact, I'll challenge anyone to contribute an auditory MEG dataset that fails on this pipeline! And of course we'll see what can be done about that, if somebody manages to break it. ;-)
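The source-cancellation effect described above can be sketched with a hand-rolled unit-gain LCMV on simulated data (a toy numpy illustration, not MNE code; the leadfields, noise level, and dimensions are all made up): when the two sources are perfectly correlated, the beamformer can cancel them against each other while honoring its unit-gain constraint, and the output power at one source's location collapses.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_times = 20, 5000
# fixed (random, hypothetical) leadfields for two distinct sources
g1 = rng.standard_normal(n_sensors)
g2 = rng.standard_normal(n_sensors)

def lcmv_power(src1, src2):
    """Output power of a unit-gain LCMV filter aimed at source 1."""
    data = (np.outer(g1, src1) + np.outer(g2, src2)
            + 0.1 * rng.standard_normal((n_sensors, n_times)))
    C = np.cov(data)                      # sensor covariance
    Cinv = np.linalg.inv(C)
    # w = C^-1 g / (g^T C^-1 g): minimum variance, unit gain at g1
    w = Cinv @ g1 / (g1 @ Cinv @ g1)
    return np.var(w @ data)

s = rng.standard_normal(n_times)
p_uncorr = lcmv_power(s, rng.standard_normal(n_times))  # correlation ~ 0
p_corr = lcmv_power(s, s)                               # correlation = 1

print(p_uncorr, p_corr)
```

With uncorrelated sources the filter recovers roughly the true source power; with perfectly correlated sources the minimum-variance criterion drives the combined contribution toward zero, which is the cancellation problem discussed in this thread. In real data the correlation has to be very high (as noted above, roughly >0.85) before this starts to bite.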
OK, fair enough. Feel free to send a PR to remove this selection, then.
I will have a look at it! PR follows.
Was fixed in #4887
In the `plot_lcmv_beamformer_volume` example, why is there a selection of left temporal channels before source reconstruction? I don't understand the rationale of removing good channels before a source reconstruction.
cc @agramfort @britta-wstnr