Closed dengemann closed 5 years ago
I suppose the digitization points are the ones in subj002_11022011.pos. The Python version reads them correctly, but the C version ignores them. So we just need to convert the files using Python and update the dataset.
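For illustration, the .pos side of this can be sketched with a tiny parser. This is a hedged sketch only: the one-record-per-line `name x y z` layout and the sample values are assumptions, not the actual contents of subj002_11022011.pos (real Polhemus exports may carry a leading point count or extra columns).

```python
# Illustrative sketch: parse a Polhemus-style .pos text file, assuming one
# whitespace-separated "name x y z" record per line. The real file layout
# may differ -- check before relying on this.
def read_pos(text):
    points = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) == 4:  # skip headers / malformed lines
            name, x, y, z = parts
            points.append((name, float(x), float(y), float(z)))
    return points

# Made-up sample values, for demonstration only
sample = """nasion 0.0 10.2 0.0
lpa -7.1 0.0 0.0
rpa 7.1 0.0 0.0"""
print(read_pos(sample)[0])
```

The actual conversion would then go through the Python CTF reader and save to FIF so the digitization survives in the shipped files.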
Sounds like a terrific plan! On Wed, 12 Apr 2017 at 07:35, jaeilepp notifications@github.com wrote:
I am not sure how/where you did the packaging. I will download the original data anyway, redo things, and add a coregistration. Let me ping you when I am done.
On Wed, 12 Apr 2017 at 07:45, Denis-Alexander Engemann < denis.engemann@gmail.com> wrote:
@jaeilepp currently the CTF reader gives me this when trying to read the noise files:
ds directory : ./Data/subj002_noise_20111104_02.ds
res4 data read.
hc data read.
Separate EEG position data file read.
Quaternion matching (desired vs. transformed):
0.00 80.00 0.00 mm <-> 0.00 80.00 0.00 mm (orig : -56.57 56.57 -270.00 mm) diff = 0.000 mm
0.00 -80.00 0.00 mm <-> 0.00 -80.00 0.00 mm (orig : 56.57 -56.57 -270.00 mm) diff = 0.000 mm
80.00 0.00 0.00 mm <-> 80.00 -0.00 0.00 mm (orig : 56.57 56.57 -270.00 mm) diff = 0.000 mm
Coordinate transformations established.
Polhemus data for 3 HPI coils added
Device coordinate locations for 3 HPI coils added
Measurement info composed.
Finding samples for ./Data/subj002_noise_20111104_02.ds/subj002_noise_20111104_02.meg4:
System clock channel is available, checking which samples are valid.
5 x 120000 = 600000 samples from 299 chs
Current compensation grade : 0
ValueErrorTraceback (most recent call last)
<ipython-input-17-7c26c30e9a9a> in <module>()
4 '.ds', '_raw.fif')
5 raw = mne.io.read_raw_ctf(fname)
----> 6 if 'spontaneous' in raw:
7 assert len(raw.info['dig']) == 101
/Users/dengemann/github/mne-python/mne/channels/channels.pyc in __contains__(self, ch_type)
176 _contains_ch_type(self.info, 'grad'))
177 else:
--> 178 has_ch_type = _contains_ch_type(self.info, ch_type)
179 return has_ch_type
180
/Users/dengemann/github/mne-python/mne/channels/channels.pyc in _contains_ch_type(info, ch_type)
79 if ch_type not in valid_channel_types:
80 raise ValueError('ch_type must be one of %s, not "%s"'
---> 81 % (valid_channel_types, ch_type))
82 if info is None:
83 raise ValueError('Cannot check for channels of type "%s" because info '
ValueError: ch_type must be one of ['bio', 'chpi', 'dipole', 'ecg', 'ecog', 'eeg', 'emg', 'eog', 'exci', 'fnirs', 'gof', 'grad', 'hbo', 'hbr', 'ias', 'mag', 'misc', 'planar1', 'planar2', 'ref_meg', 'resp', 'seeg', 'stim', 'syst'], not "spontaneous"
@jaeilepp never mind, it was my bad.
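For the record, the ValueError above comes from the fact that `in` on a Raw object tests channel *types* ('mag', 'eeg', ...), not the recording name. Branching on the file name string works instead; a minimal sketch (the path below is hypothetical):

```python
# `'spontaneous' in raw` asks whether the recording contains a channel
# type called 'spontaneous', hence the ValueError in the traceback above.
# To branch on which run is being processed, test the file name instead.
fname = './Data/subj002_spontaneous_20111102_01.ds'  # hypothetical path
is_resting = 'spontaneous' in fname
print(is_resting)
```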
Hi @dengemann, @jaeilepp! I was working with the resting-state dataset (http://neuroimage.usc.edu/brainstorm/Tutorials/Resting) and got to the point where I need to specify the transform between MRI and head coordinates in make_forward_solution. For now I've passed None, but we should be able to build a transform from the digitization data. I think this is related. Any update on that?
Yes, you need a -trans.fif file; you need to do the coregistration.
We did not include it in the resting-state Brainstorm dataset. We have it for the auditory dataset but not the resting one.
I have everything locally. We can add it. On Tue 20 Feb 2018 at 13:56, Alexandre Gramfort notifications@github.com wrote:
Thanks! Since we are on that, notice also that mne_data/MNE-brainstorm-data/bst_resting/subjects/bst_resting/bem/ contains files with the prefix bst_resting- (e.g. bst_resting-inner_skull.surf). It seems that the current mne.make_bem_model doesn't support that, so I guess it would be worth either adding a prefix option or renaming the BEM files in this dataset.
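As a stop-gap until the dataset is repackaged, the surfaces could simply be renamed to the names make_bem_model looks for. A dry-run sketch over example names (no files are touched; the target names assume the standard FreeSurfer-style bem/ layout):

```python
# Strip the 'bst_resting-' prefix so the surfaces match the standard
# bem/ file names (inner_skull.surf, outer_skull.surf, outer_skin.surf).
# Dry run over example names; for real files, pair this with os.rename.
prefix = 'bst_resting-'
names = ['bst_resting-inner_skull.surf',
         'bst_resting-outer_skull.surf',
         'bst_resting-outer_skin.surf']
renamed = [n[len(prefix):] if n.startswith(prefix) else n for n in names]
print(renamed)
```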
I have done all that was needed to get source connectivity results out of the bst rest data. Let's discuss tomorrow.
On Tue, Feb 20, 2018 at 2:49 PM dokato notifications@github.com wrote:
@agramfort for updating the brainstorm resting dataset shall we:
Yes, please use the original .ds files.
We should add a parameter in read_raw_ctf, False by default, that calls the _clean_names function internally.
Thanks a lot for looking into this.
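To sketch what such a parameter would do (an illustration, not MNE's actual _clean_names implementation): CTF channel names typically carry a site suffix after a dash, e.g. 'MLC11-2622', and cleaning strips it so names line up across systems.

```python
# Illustration of the kind of cleaning a clean_names-style option could
# apply: drop the '-<site>' suffix CTF appends to channel names. This
# mimics, but is not, MNE's internal _clean_names helper.
def clean_name(ch_name):
    return ch_name.split('-')[0]

print([clean_name(n) for n in ['MLC11-2622', 'MRO44-2622', 'STIM']])
```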
Ok. Let's do that.
On Wed, Mar 7, 2018 at 9:44 AM Alexandre Gramfort notifications@github.com wrote:
@dengemann what's the status of that?
According to the old tutorials there should be a rich digitization using Polhemus (http://neuroimage.usc.edu/brainstorm/Tutorials/Resting#Download_and_installation). The fif files, however, come only with fiducials and landmarks. It seems the head-shape points did not make it into the fif files that we ship. I think we need to do some updates here.
cc @jasmainak @jaeilepp
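A quick way to verify a repackaged file would be to count the 'extra' (head-shape) points in raw.info['dig']. Sketched here on a stand-in list of plain dicts; the dict structure and the counts are assumptions for illustration only, and the real check would run on raw.info['dig'] from the shipped fif files.

```python
# Stand-in for raw.info['dig'] as currently shipped: just the 3 cardinal
# fiducials and the HPI coils, with no 'extra' head-shape points.
dig = [{'kind': 'cardinal'}] * 3 + [{'kind': 'hpi'}] * 3  # assumed structure
extra = [d for d in dig if d['kind'] == 'extra']
print(len(extra))  # a rich Polhemus digitization should give dozens here
```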