How to tune parameters for lag estimation? (#24)
currymj opened this issue 7 years ago
Dear Michael,
I am sorry to hear that TRENTOOL's lag estimation is causing trouble. From my experience, the reasons may be fourfold:
An insufficient amount of data; however, this typically makes the estimated lag variable, whereas you are seeing a bias towards too-short delays.
A low entropy (rate) of the source process, i.e. not much information is produced in the first place that could then be transferred.
Wrong embedding parameters (for the target). Try to run the Ragwitz optimization with a much larger range of tau and dim (see the configuration sketch after this list).
A pathological case that truly requires the use of embedded source states instead of scalar values from the source.
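Regarding the third point: the Ragwitz scan ranges are set in the cfgTEP structure that you pass to TEprepare. A minimal sketch is below; I am writing the field names and values from memory, so please check them against the TRENTOOL manual before use.

cfgTEP = [];
cfgTEP.optimizemethod = 'ragwitz';  % embedding optimization via the Ragwitz criterion
cfgTEP.ragdim         = 2:8;        % scan a much larger range of embedding dimensions
cfgTEP.ragtaurange    = [0.2 0.5];  % scan a wider range of embedding delays tau
cfgTEP.ragtausteps    = 15;         % number of tau values tested within that range

A wider scan will of course make TEprepare take noticeably longer to run.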
What kind of simulated processes do you use?
Best,
Michael
On 25.07.2017 20:35, Michael J. Curry wrote:
The group I work for is trying to use transfer entropy to compare some simulated signals, in order to eventually apply the same computational pipeline to real data once it is available. In particular, our focus is on estimating the lags. We've found that, in a variety of simple and complicated simulated datasets, TRENTOOL consistently and severely underestimates the delay time. Usually it picks the smallest lag from its search range, no matter what.
This may be a problem with our choice of parameters for the estimator. We're considering running a grid search over the parameter space, but figured we should first ask:
- Has this problem been observed before? Do you have any speculation as to what caused it?
- For successful uses of delay estimation, what parameters did people choose?
- When doing a grid search, are there any parameters that you think might be more important than others?
Prof. Dr. rer. nat. Michael Wibral, MEG Labor, Brain Imaging Center, Goethe Universität
Heinrich Hoffmann Strasse 10, 60528 Frankfurt am Main
We had been using trajectories sampled from a set of stochastic differential equations that are supposed to simulate something like the dynamics observed when measuring EEG. In the interest of figuring out where we're going wrong, though, I've tried to simplify things.
When simulating a simple AR(1) process generated by the code below (the output is just a channels x time points x trials matrix, which is later converted to FieldTrip format), I can't even get the code to run, because the data fail to meet the autocorrelation threshold even after quite a bit of tinkering (see the quick check after the code). Are there parameters you recommend raising or lowering here? Or is this just a time series that shouldn't be expected to work?
n_trials = 50;                          % example values only; set to match your dataset
n_time   = 1000;
af_output = zeros(2, n_time, n_trials);
for trial = 1:n_trials
    % source channel: random walk started at a random value
    af_output(1, 1, trial) = 5*rand();
    for i = 2:30
        af_output(1, i, trial) = af_output(1, i-1, trial) + (rand() - .5);
    end
    % target channel: zero until the 30-sample coupling delay has elapsed
    af_output(2, 1:30, trial) = 0;
    for i = 31:n_time
        af_output(1, i, trial) = af_output(1, i-1, trial) + (rand() - .5);
        % target follows its own random walk plus the source delayed by 30 samples
        af_output(2, i, trial) = .2*af_output(1, i-30, trial) + af_output(2, i-1, trial) + (rand() - .5);
    end
end
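As a rough sanity check on the threshold failure, the snippet below estimates the autocorrelation decay time (ACT) of the source channel in one trial, i.e. the first lag at which the normalized autocorrelation drops below 1/e (xcorr needs the Signal Processing Toolbox, and I'm assuming this is roughly the quantity TEprepare compares against cfgTEP.actthrvalue).

x = squeeze(af_output(1, :, 1));     % source channel, first trial
x = x - mean(x);
[c, lags] = xcorr(x, 'coeff');       % normalized autocorrelation
c = c(lags >= 0);
act = find(c < exp(-1), 1) - 1;      % first lag below 1/e
if isempty(act)
    act = numel(x) - 1;              % never decays within the trial
end
fprintf('estimated ACT: %d samples\n', act);

Since the source is essentially a random walk, I would expect this value to come out on the order of the trial length, which would explain why no reasonable cfgTEP.actthrvalue lets the data through.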
While this problem is far simpler than the original process we were using, I think it may help me understand where we're going wrong and/or what the limitations of transfer entropy are.
Thanks for your help,
Michael