f4bD3v / humanitas

A price prediction toolset for developing countries
BSD 3-Clause "New" or "Revised" License

Reservoir Computing (ESN) - Training #24

Closed f4bD3v closed 10 years ago

f4bD3v commented 10 years ago

Echo State Networks, as shown in this paper.

Fully-connected reservoir network trained with the BPDC (backpropagation-decorrelation) algorithm.
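For reference, a minimal ESN sketch (a hypothetical illustration, not this repo's code): a random reservoir rescaled to a spectral radius below 1, a leaky-tanh state update, and a linear readout fit by ridge regression. All sizes and constants (`n_res`, `leak`, `ridge`) are assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res=100, spectral_radius=0.9):
    """Random reservoir matrix, rescaled to the target spectral radius."""
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    return W

def run_esn(u, W, W_in, leak=0.3):
    """Collect reservoir states for a 1-D input sequence u (leaky-tanh update)."""
    n_res = W.shape[0]
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20, 300))   # input series
y = np.roll(u, -1)                    # target: next value
W = make_reservoir()
W_in = rng.standard_normal(100)
X = run_esn(u, W, W_in)

# Linear readout via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(100), X.T @ y)
pred = X @ W_out
```

Only the readout `W_out` is trained; the reservoir weights stay fixed after initialization, which is what makes multiple random initializations worth averaging over.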

No matter which training algorithm we use, we should implement a form of 'bagging' by averaging the results obtained from multiple different reservoir initializations.
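The averaging step above could look like the following sketch (hypothetical helper, not project code): given one forecast per reservoir initialization, the bagged forecast is their mean, and the spread across initializations gives a rough stability check.

```python
import numpy as np

def bag_predictions(preds):
    """Average forecasts from independently initialized reservoirs.

    preds: array-like of shape (n_reservoirs, n_timesteps).
    Returns the mean forecast and the per-timestep spread.
    """
    preds = np.asarray(preds, dtype=float)
    return preds.mean(axis=0), preds.std(axis=0)

# e.g. three reservoir initializations producing slightly different forecasts
forecasts = [[1.0, 2.1, 2.9],
             [1.2, 1.9, 3.1],
             [0.9, 2.0, 3.0]]
mean_pred, spread = bag_predictions(forecasts)
```

A large `spread` at some timestep would flag predictions that depend heavily on the random initialization and so deserve less trust.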

Bootstrapping vs Cross-validation

Cross-validation and bootstrapping are both methods for estimating generalization error based on "resampling". The resulting estimates of generalization error are often used for choosing among various models, such as different network architectures. Bootstrapping seems to work better than cross-validation in many cases (Efron, 1983).

In the simplest form of bootstrapping, instead of repeatedly analyzing subsets of the data, you repeatedly analyze subsamples of the data. Each subsample is a random sample with replacement from the full sample. Depending on what you want to do, anywhere from 50 to 2000 subsamples might be used. There are many more sophisticated bootstrap methods that can be used not only for estimating generalization error but also for estimating confidence bounds for network outputs. There is more on this topic on this website.
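A sketch of the simple form described above (the function and the toy line-fit model are assumptions for illustration): draw each subsample with replacement, train on it, and score on the points that were left out.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_error(X, y, fit, predict, n_boot=200):
    """Simple bootstrap estimate of generalization error: train on a
    resample drawn with replacement, test on the left-out points."""
    n = len(y)
    errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # sample WITH replacement
        oob = np.setdiff1d(np.arange(n), idx)   # "out-of-bag" test points
        if oob.size == 0:
            continue
        model = fit(X[idx], y[idx])
        errs.append(np.mean((predict(model, X[oob]) - y[oob]) ** 2))
    return float(np.mean(errs))

# Toy example: least-squares line fit on noisy linear data.
x = np.linspace(0, 1, 50)
y = 2 * x + 0.1 * rng.standard_normal(50)
err = bootstrap_error(
    x, y,
    fit=lambda xs, ys: np.polyfit(xs, ys, 1),
    predict=lambda coef, xs: np.polyval(coef, xs),
)
```

Swapping the `fit`/`predict` lambdas for an ESN trainer would give the corresponding bootstrap estimate for the reservoir models, at the cost of retraining once per subsample.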