PaavoHietala opened 3 years ago
What MNE version are you using?
What does mne sys_info tell you?
Did you just run the script / example online?
I'm using MNE 0.22.0.
sys_info:
Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Python: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Executable: /share/apps/anaconda/2019.11-gpu/bin/python
CPU: x86_64: 12 cores
Memory: 125.8 GB
mne: 0.22.0
numpy: 1.17.3 {blas=mkl_rt, lapack=mkl_rt}
scipy: 1.3.1
matplotlib: 3.1.1 {backend=Qt5Agg}
sklearn: 0.21.3
numba: 0.46.0
nibabel: 2.5.1
nilearn: Not found
dipy: Not found
cupy: 8.4.0
pandas: 0.25.2
mayavi: 4.7.2
pyvista: Not found
vtk: 8.2.0
PyQt5: 5.9.2
I'm running a custom script based on the example available online. Instead of the parallel wrapper in the example, I run prepare_fwds directly:
fwds = prepare_fwds(fwds_, src_ref, copy=False)
fwds_ is a list of the subjects' forward models, based on the fsaverage source space morphed to each subject's anatomy, and src_ref is the fsaverage source space.
I cannot replicate the crash. Are you sure the datasets are up to date?
Is there some specific detail in the data that could be outdated? The data is formed in the same way as in the example, with mne.morph_source_spaces and mne.make_forward_solution, and runs fine through basic MNE analysis. I created the fsaverage source space and all forward models from scratch and reinstalled groupmne, but the problem persists.
Are we talking about the same version of the groupmne code? I'm using 0.0.1dev.
Any possible data issues aside, looking at the code at https://hichamjanati.github.io/groupmne/_modules/groupmne/group_model.html#prepare_fwds, I don't understand how it can run without an error.
The problem is the left-hand side of this line, which tries to add an int (col_0) to a list (pos), which is not possible in plain Python:
gain[:, col_0 + pos] = full_gain[:, col_1 + permutation]
If the idea is to offset the column indices by n_sources for the right hemisphere, then either col_0 or pos has to be a numpy array. pos is formed with a list comprehension from dictionary values, so it is a list regardless of the data. It would probably be more intuitive to typecast pos instead of col_0 as I proposed first, but changing either of them produces the same result.
permutation, on the other hand, is an ndarray as returned by np.argsort, so it should not be the culprit.
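A minimal sketch of the failure mode described above, using hypothetical values for col_0 and pos (the real values come from the forward models):

```python
import numpy as np

pos = [0, 1, 2, 3]   # built by a list comprehension -> always a plain list
col_0 = 4            # a plain Python int (e.g. i * n_sources[0])

# int + list is not defined in plain Python:
try:
    col_0 + pos
except TypeError as err:
    print(err)       # unsupported operand type(s) for +: 'int' and 'list'

# Typecasting either operand to an ndarray turns `+` into element-wise broadcasting:
print(col_0 + np.asarray(pos))   # [4 5 6 7]
print(np.asarray(col_0) + pos)   # [4 5 6 7]
```

This is why converting either col_0 or pos to a numpy array produces the same result: as soon as one operand is an ndarray, NumPy handles the addition.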
Apparently this problem only appears if the source spaces are saved to disk and loaded again with mne.read_source_spaces. If they are kept in memory once they have been created, no error is raised.
OK, this is a known issue then, but it's not documented: groupmne does not guarantee that the vertex indices are sorted. This would require a fix in mne, which I am not sure is really possible.
@hichamjanati thoughts?
I'm looking into this. I thought groupmne covered this sorting problem, but apparently not. What is odd to me is that @cnopicilin gets an (obvious, from the code) TypeError that (1) has nothing to do with sorting and (2) should be caught by the tests 🤔
Ah, the sorting issue is good to know about; for the time being I can circumvent it by always recreating the source spaces and forward models.
Thank you very much for taking the time to help me!
It's not the sorting (phew). When kept in memory, src[0]["nuse"] is of type np.int; after writing and reloading src, it becomes a plain int. Anyway, as @cnopicilin suggested, this is an easy fix: make n_sources a numpy array. Thank you @cnopicilin for pointing this out.
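A sketch of the scalar-type difference described here, assuming the NumPy scalar in question is np.int64 (np.int itself was only an alias for the builtin and has been removed in recent NumPy); the values are hypothetical:

```python
import numpy as np

pos = [10, 20, 30]             # stand-in for the list of column positions

nuse_in_memory = np.int64(3)   # like src[0]["nuse"] freshly created
nuse_reloaded = 3              # like nuse after writing and reloading the source space

# A NumPy integer scalar delegates `+` to NumPy, which broadcasts the list:
print(nuse_in_memory + pos)    # [13 23 33]

# A plain int has no such fallback, so the very same expression raises:
try:
    nuse_reloaded + pos
except TypeError as err:
    print(err)                 # unsupported operand type(s) for +: 'int' and 'list'
```

This explains why the in-memory pipeline runs silently while the save/reload pipeline crashes on the same line.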
Following the example in the manual, https://hichamjanati.github.io/groupmne/auto_examples/plot_mtw.html, with my data I got an error from group_model.prepare_fwds().
Line 129:
gain[:, col_0 + pos] = full_gain[:, col_1 + permutation]
causes a
TypeError: unsupported operand type(s) for +: 'list' and 'int'
I assume this is caused by line 124, col_0 = i * n_sources[0], where n_sources is a list, hence col_0 is a list as well. I assume the gain matrix contains values for both hemispheres and needs to offset the incoming values by col_0 to fit rh after lh. For an element-wise addition on line 129, col_0 has to be a numpy array, so I fixed it by modifying line 124 to col_0 = i * np.array(n_sources[0]).
To my knowledge this fixed the issue, and the package now produces logical output.
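A minimal sketch of the proposed change, with hypothetical values for i, n_sources, and pos: wrapping n_sources[0] in np.array makes col_0 a zero-dimensional ndarray, so the later addition broadcasts over pos instead of raising.

```python
import numpy as np

i = 1
n_sources = [4, 4]        # hypothetical per-hemisphere source counts
pos = [0, 1, 2, 3]        # hypothetical column positions (a plain list)

# Proposed fix for line 124: a 0-d ndarray instead of a plain int.
col_0 = i * np.array(n_sources[0])

cols = col_0 + pos        # ndarray + list -> element-wise ndarray
print(cols)               # [4 5 6 7]
```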