Closed: Objay99 closed this issue 5 years ago.
That is a warning that can happen when obtaining an ICA decomposition. Commonly, ICA decompositions consist of exactly the same number of ICs as there are channels. Whenever this is not the case, a warning is produced -- usually however, this is not actually a problem.
Still, this warning hints that you do not actually have 11 components (or do not actually have 11 channels). Note that source locations are not the same as components. Please have a look at my reply here, where a user mistakenly used multiple source locations to create only a single component.
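As a minimal sketch of the distinction (assuming an 11-channel lead field lf and a noise class are already defined, as in the coding tutorial): picking 11 distinct sources and creating one component per source gives you exactly 11 ICs, matching the 11 channels.

```matlab
% sketch, assuming lf (11 channels) and a noise class already exist;
% lf_get_source_spaced picks 11 distinct sources at least 25 mm apart
sources = lf_get_source_spaced(lf, 11, 25);

% one component per source: numel(c) is then 11, matching the
% 11 channels, so the warning is not produced
c = utl_create_component(sources, noise, lf);
```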
If that does not help, please paste a working example that exhibits the issue.
Thanks, it did work. I want to ask if it is possible to simulate continuous EEG data and then add event markers at specified points in time. I tried using my simulated data in BCILAB; it doesn't allow resampling, and there's no need for epoch extraction since all my epochs are known and labelled. But even after I left the resampling and epoch extraction options unchecked, I keep getting the error that "the data set does not contain the required field .epoch.target", and that has left me confused for a while now. What can I do?
SEREEGA is explicitly intended to simulate event-related data, not continuous data. BCILAB indeed generally expects continuous data.
One way to deal with this is to force your data into a continuous format. I just uploaded the function utl_epoch2continuous, written a while back by Javier Lopez-Calderon. This turns an epoched EEGLAB dataset into continuous data (with boundary events between the original epochs). With this you can thus give BCILAB "continuous" data. This is the method I used for the paper. Note that I'm not sure about the exact influence of the boundary events on BCILAB's filters -- I would recommend to e.g. simulate longer epochs with a long prestimulus interval so that any ringing effects happen before your actual event.
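For example (a sketch, reusing SEREEGA's usual epoch configuration fields; the specific values here are illustrative), you could pad the prestimulus interval before converting:

```matlab
% sketch: a long prestimulus interval, so that filter ringing around
% the boundary events decays before the actual event of interest
epochs = struct('n', 100, 'srate', 1000, 'length', 2000, 'prestim', 1000);

% ... simulate as usual, then flatten the epochs into continuous data:
EEG = utl_epoch2continuous(EEG);  % assumes EEG is an epoched EEGLAB set
```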
I have not tried switching off BCILAB's epoch extraction, but that might work too. The error you see apparently means that you need to tell BCILAB what your target markers are. This sometimes happens after epoch2continuous as well. If your events are 'target' and 'nontarget', for example, the line for this would be

EEG = set_targetmarkers(EEG, {'target','nontarget'});

if you continue processing the dataset with BCILAB, or

EEG = exp_eval(set_targetmarkers(EEG, {'target','nontarget'}));

to process it immediately, if for example you want to save it. This generates the EEG.epoch.target field.
I used the epoch2continuous function and it created a continuous dataset, but I can't even edit the events or add new events in EEGLAB. As for the epoched data, I added new events in EEGLAB and chose 2 of the events as my target, and now it worked, but I get an error that "NaN or inf not allowed". My guess is that the error comes in during editing the event values; I've checked for empty events but there are none. I then simulated continuous data with just 1 epoch of about 100 seconds, and I expected to have my normal event marker from the code plus a boundary event, but it doesn't have any, and that makes it impossible for me to add any event to it, so no target marker. I am now wondering: is there a way I can add events to a truly continuous (1-epoch) simulated dataset, or do you have other ideas on how I can solve the inf or NaN errors? I don't get replies in BCILAB's tracker, which is why I've mostly been asking questions here. @lrkrol, I hope you don't get annoyed by my many questions.
Yeah, active maintenance of BCILAB is unfortunately not a high priority at the moment... We'll see how far we get here. :)
Could you tell me more precisely what you're doing and when you're getting this error? What do you mean by adding events in EEGLAB, and why are you doing this?
If you want to have epochs of two different classes in your dataset (for BCILAB to classify between them), I would simulate two different EEGLAB datasets and use EEGLAB's pop_mergeset to merge them into one dataset. Then, keeping in mind my previous comments, BCILAB should be able to load that dataset and classify the events. That is how I did it for the SEREEGA paper.
Note that merging datasets puts one dataset after the other; by default, if you're using cross-validation, BCILAB cuts up your dataset into five pieces chronologically. It can thus happen that the first and last pieces only have one class in them. In that case, utl_reorder_eeglabdataset works with epoched data to mix up the dataset and distribute the classes more equally.
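Put together, a sketch of that pipeline (assuming EEG1 and EEG2 are two simulated single-condition datasets with different event markers):

```matlab
% sketch: merge two condition datasets, then interleave their epochs
% so every chronological cross-validation fold contains both classes
EEG = pop_mergeset(EEG1, EEG2);
EEG = utl_reorder_eeglabdataset(EEG, 'mode', 'interleave');
```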
Okay. I am trying to model my simulated EEG data using BCILAB. After you created the epoch2continuous function, I was able to simulate a continuous EEG of 100 seconds which contains 199 events, half of them being boundary events. Then I also simulated a single-epoch EEG dataset of 100 seconds; my reason for doing this is that I want to see if I can get a dataset that looks similar to the BCILAB tutorial data, because I keep getting one error after the other in BCILAB. So I load these datasets in EEGLAB and then try to edit the event field, but it is not allowed, unlike for epoched data. After a few Google searches, I discovered that ERPLAB could help with those events, so I exported the event list and then resaved the dataset; then I was able to edit the events (I was only trying to have 3-4 different events spread through the continuous EEG data so that I can target one or two of them). After doing this, I used the ParadigmSIFT approach and started training a new model. The preprocessing, model order selection, fitting, and validation are all successful, but the consistency value is very low (around 50%). Then, when computing the connectivity estimates, I get the latest error that I attached to this mail; it says "the given epoch indices could not be extracted from the time series field .S with error: index exceeds matrix dimensions. Field size was: 1, epoch indices were: [1 2 3]". I don't know how to solve this error because the problem, I think, emanated from my data. The urevent structure in EEGLAB that contains the index of the event field shows an empty array [] when I try the EEG.urevent command; it is the same with my other simulated datasets and it cannot be modified. So I was wondering if the error was due to my coding.
% THIS IS MY CODE
epochs = struct();
epochs.n = 100;
epochs.srate = 1000;
epochs.length = 1000;
epochs.marker = 'dbt';
epochs.prestim = 200;

lf = lf_generate_fromnyhead('labels', {'AF3', 'AF4', 'F3', 'F4', 'F7', 'F8', 'FC6', 'FC5', 'Fp1', 'Fp2', 'Fz'});

lf = lf_add_source(lf, [-48 26 4], rand(11,3));
lf = lf_add_source(lf, [-38 28 38], rand(11,3));
lf = lf_add_source(lf, [48 24 -8], rand(11,3));
lf = lf_add_source(lf, [24 60 0], rand(11,3));
lf = lf_add_source(lf, [4 62 0], rand(11,3));
lf = lf_add_source(lf, [2 32 54], rand(11,3));
lf = lf_add_source(lf, [-18 62 0], rand(11,3));
lf = lf_add_source(lf, [42 30 34], rand(11,3));

for i = 1:11
    source(1) = lf_get_source_nearest(lf, [-27 83 -3]);
    source(2) = lf_get_source_nearest(lf, [-36 76 24]);
    source(3) = lf_get_source_nearest(lf, [-48 59 44]);
    source(4) = lf_get_source_nearest(lf, [-71 51 3]);
    source(5) = lf_get_source_nearest(lf, [-78 30 27]);
    source(6) = lf_get_source_nearest(lf, [0 63 61]);
    source(7) = lf_get_source_nearest(lf, [27 83 -3]);
    source(8) = lf_get_source_nearest(lf, [36 76 24]);
    source(9) = lf_get_source_nearest(lf, [48 59 44]);
    source(10) = lf_get_source_nearest(lf, [71 51 -3]);
    source(11) = lf_get_source_nearest(lf, [78 30 27]);
end

erp = struct();
erp.peakLatency = 300;
erp.peakWidth = 100;
erp.peakAmplitude = .7;
erp = utl_check_class(erp, 'type', 'erp');
erp.peakLatencyDv = 50;
erp.peakAmplitudeDv = .2;
erp.peakAmplitudeSlope = -.55;

noise = struct( ...
    'type', 'noise', ...
    'color', 'brown', ...
    'amplitude', 1);
noise = utl_check_class(noise);

ersp = struct( ...
    'type', 'ersp', ...
    'frequency', [12 15 27 30], ...
    'amplitude', .25);
ersp.modulation = 'burst';
ersp.modLatency = 300; % ersp.modWidth = 80; ersp.modTaper = 0.5;
ersp = utl_check_class(ersp);

c = utl_create_component(source, noise, lf);
utl_add_signal_tocomponent(erp, c);
utl_add_signal_tocomponent(ersp, c);
utl_check_component(c, lf)

[w, winv] = utl_get_icaweights(c, lf);
EEG = utl_create_eeglabdataset(generate_scalpdata(c, lf, epochs), ...
    epochs, lf);
EEG = utl_epoch2continuous(EEG);
EEG = utl_add_icaweights_toeeglabdataset(EEG, c, lf);

pop_eegplot(EEG, 1, 1, 1);
pop_saveset(EEG);
A couple of general things with your code:
Note that lf_add_source adds a new, previously non-existent source to the lead field; it does not get you one of the existing sources from the lead field to simulate data from. That would be lf_get_source_*. Your current code adds eight sources with random projection patterns, which are never used again.

When you use utl_add_signal_tocomponent, don't forget to assign the function's output. The correct line would be c = utl_add_signal_tocomponent(erp, c);, but note that this would add the ERP to all sources in c. Alternatively, do something like c(1) = utl_add_signal_tocomponent(erp, c(1)); to only add the ERP to the first source. The same goes for the line after that, with the ERSP.
Now more specifically to what you're trying to do:
You are using noise, ERPs, and ERSPs. BCILAB's ParadigmSIFT is looking for connectivity measures. Noise, ERP, and ERSPs are not reflected in connectivity measures, as they are generated independently of the activity happening in other sources. If you want to use ParadigmSIFT, look at using autoregressive models in SEREEGA.
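As a rough sketch of the direction only: the field names below follow the pattern of SEREEGA's other signal classes and are my assumption, not a confirmed ARM API, so please verify them against the SEREEGA documentation.

```matlab
% sketch only: an autoregressive signal class; 'order' and 'amplitude'
% are assumed field names patterned after the other classes above --
% check the SEREEGA documentation for the actual ARM class definition
arm = struct('type', 'arm', 'order', 10, 'amplitude', 1);
arm = utl_check_class(arm);
```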
Note, by the way, that BCILAB hasn't been updated in quite a while and has issues on newer versions of MATLAB. I myself keep a copy of R2014a around just for BCILAB. Also, BCILAB probably needs more than just a handful of epochs, especially when cross-validating.
I still don't quite get the purpose of editing events manually. If you need different events, then just simulate different events. As I said, if you want to classify between two different conditions, you need a dataset that reflects those two conditions both by having a) different event markers indicating the relevant epochs, and b) different EEG activity that the classifier can differentiate between. The following example code does that. It generates two different conditions, each with an ERP coming from the same source, but its amplitude is inverted between conditions.
% general config
lf = lf_generate_fromnyhead('montage', 'S64');
epochs = struct( ...
'n', 50, ...
'srate', 1000, ...
'length', 2000, ...
'prestim', 500);
% ERP for condition 1
erp_c1 = struct( ...
'type', 'erp', ...
'peakLatency', 300, ...
'peakWidth', 400, ...
'peakAmplitude', 10, ...
'peakAmplitudeSlope', -5);
erp_c1 = utl_set_dvslope(erp_c1, 'dv', .2);
erp_c1 = utl_shift_latency(erp_c1, epochs.prestim);
% ERP for condition 2: same, but with inverted amplitude
erp_c2 = erp_c1;
erp_c2.peakAmplitude = -10;
% noise
noise = struct( ...
'type', 'noise', ...
'color', 'brown', ...
'amplitude', 5);
% sources: ERP source at [0 10 10], rest random but with at least 25 mm between them
sources = lf_get_source_spaced(lf, 64, 25, 'sourceIdx', lf_get_source_nearest(lf, [0 10 10]));
% components condition 1
comp_c1 = utl_create_component(sources, noise, lf);
comp_c1(1) = utl_add_signal_tocomponent(erp_c1, comp_c1(1));
% components condition 2
comp_c2 = utl_create_component(sources, noise, lf);
comp_c2(1) = utl_add_signal_tocomponent(erp_c2, comp_c2(1));
% generating scalp data
data_c1 = generate_scalpdata(comp_c1, lf, epochs);
data_c2 = generate_scalpdata(comp_c2, lf, epochs);
% creating EEGLAB datasets: first one for each condition, then one that combines the two into one
EEG1 = utl_create_eeglabdataset(data_c1, epochs, lf, 'marker', 'event1');
EEG2 = utl_create_eeglabdataset(data_c2, epochs, lf, 'marker', 'event2');
EEG = utl_reorder_eeglabdataset(pop_mergeset(EEG1, EEG2), 'mode', 'interleave');
As you can see, this gives us a dataset with two different events. And if we plot the ERP relative to these two different events, we indeed see rather extreme differences between the two conditions.
These differences can easily be classified by BCILAB. (You will need an approach that looks for ERPs in this case -- not for connectivity. In this case, that would be ParadigmWindowmeans.)
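A sketch of what that could look like in BCILAB (the epoch window, approach parameters, and target markers here are illustrative values, not prescriptive ones):

```matlab
% sketch: train a window-means model on the merged dataset EEG from
% the example above; 'event1'/'event2' match the markers used there
approach = {'Windowmeans', 'SignalProcessing', {'EpochExtraction', [0 0.8]}};
[loss, model, stats] = bci_train('Data', EEG, 'Approach', approach, ...
    'TargetMarkers', {'event1', 'event2'});
```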
I am trying to simulate 11-channel EEG data, and I added 11 different source locations with a for loop using lf_get_source_nearest(lf, pos). I followed the instructions in the coding tutorial, but I keep getting that warning whenever I run the code.