SciML / ReservoirComputing.jl

Reservoir computing utilities for scientific machine learning (SciML)
https://docs.sciml.ai/ReservoirComputing/stable/
MIT License

Variations of the Echo State Networks #7

Closed MartinuzziFrancesco closed 4 years ago

MartinuzziFrancesco commented 4 years ago

After working for a while on issue SciML/NeuralNetDiffEq.jl#34 and on the implementation of an echo state network, I have found a number of interesting papers on possible modifications and improvements to the model.
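For reference, this is the standard state update that all of the variants below start from (a minimal sketch with placeholder weights, not the package API):

```julia
# Standard (leak-free) ESN state update: x(t+1) = tanh(W*x(t) + W_in*u(t)).
# W, W_in and the input are placeholders for illustration.
using Random

res_size, in_size = 100, 1
W    = 0.1 .* randn(res_size, res_size)   # reservoir weights
W_in = 0.1 .* randn(res_size, in_size)    # input weights

esn_step(x, u) = tanh.(W * x + W_in * u)

x = zeros(res_size)
u = [0.5]            # one input sample
x = esn_step(x, u)
```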

A direct improvement has been shown by Lun et al. with the proposal of the Double Activation Function Echo State Network (DAF-ESN), which improves on the standard ESN in time series prediction tasks.
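I have not reproduced the paper's equations here; purely as an illustrative sketch, assuming "double activation" means blending two nonlinearities in the state update (the mixing weight and the choice of sigmoid are my placeholders, not the paper's formulation):

```julia
# Illustrative only: one way to combine two activation functions in the
# state update. α and the logistic sigmoid are assumptions, not the exact
# formulation from Lun et al.
σ(z) = 1 / (1 + exp(-z))   # logistic sigmoid
α = 0.5                    # mixing weight (hypothetical)

function daf_step(W, W_in, x, u)
    pre = W * x + W_in * u
    return α .* tanh.(pre) .+ (1 - α) .* σ.(pre)
end
```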

One of the problems of the original implementation is the prediction of noisy data, examined in this paper by Prater. There it is shown that methods other than the classical ridge regression for constructing the output layer yield better results on noisy datasets.
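For context, ridge (Tikhonov) regression is the closed-form readout these alternatives compete with (a minimal sketch; the variable names are mine):

```julia
# Ridge readout: W_out = Y*X'*(X*X' + β*I)^(-1), where X collects the
# reservoir states column-wise and Y the corresponding targets.
using LinearAlgebra

function ridge_readout(X, Y; beta = 1e-6)
    # X: res_size × T states, Y: out_size × T targets
    return Y * X' * inv(X * X' + beta * I)
end

# usage: W_out = ridge_readout(X, Y); y_pred = W_out * x
```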

Another problem tackled in the literature is robustness in the presence of outliers. Of all the proposed models, three in particular have shown interesting results in this direction.

The creation of the reservoir itself is also an interesting area of study. For an ESN to work properly, the echo state property (ESP) has to be guaranteed; the necessary condition for the ESP is that the spectral radius of the reservoir matrix be less than 1. The standard procedure is to create the matrix under a chosen scheme and then rescale it so that its spectral radius is < 1. In this paper, Yang et al. propose a new method for meeting the ESP without rescaling the reservoir weights, based on singular value decomposition: since the spectral radius is bounded above by the largest singular value, controlling the singular values at construction time guarantees the condition directly.
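For concreteness, the standard rescaling procedure looks like this (a minimal sketch; the size, sparsity, and target radius are arbitrary):

```julia
# Build a sparse random reservoir, then rescale it so that its spectral
# radius hits a target value below 1 (the step Yang et al. avoid).
using SparseArrays, LinearAlgebra

function scaled_reservoir(res_size; sparsity = 0.1, radius = 0.9)
    W = sprandn(res_size, res_size, sparsity)
    rho = maximum(abs, eigvals(Matrix(W)))   # current spectral radius
    return W .* (radius / rho)
end

W = scaled_reservoir(200)
```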

It is also known that the reservoir performs best when it operates at the edge of criticality. An interesting algorithm to guarantee that this condition is met is proposed in the following paper, which also shows the improvements this control yields.
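The paper's algorithm is not reproduced here; as a rough, hedged sketch, one common way to probe this regime is a Lyapunov-style estimate: run two copies of the reservoir whose states differ by a tiny perturbation and measure whether the perturbation grows or dies out:

```julia
# Rough probe of the edge-of-criticality regime (not the paper's algorithm):
# track the growth rate of a small state perturbation over an input sequence.
# Rate > 0 suggests a chaotic reservoir, < 0 an ordered one, ≈ 0 criticality.
using LinearAlgebra

function divergence_rate(W, W_in, us; eps = 1e-8)
    x1 = zeros(size(W, 1))
    x2 = x1 .+ eps .* randn(size(W, 1))
    logsum, steps = 0.0, 0
    for u in us
        x1 = tanh.(W * x1 + W_in * u)
        x2 = tanh.(W * x2 + W_in * u)
        d = norm(x2 - x1)
        d == 0 && break
        logsum += log(d / eps)
        x2 = x1 .+ (eps / d) .* (x2 .- x1)   # renormalize the perturbation
        steps += 1
    end
    return logsum / steps
end
```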

All the approaches illustrated so far use a sparse matrix of random numbers as the reservoir. A curious alternative has been proposed by Yilmaz, in which the reservoir consists of Cellular Automata rules. Both one-dimensional and two-dimensional rules are explored, with good results; a minimal sketch of a one-dimensional rule update follows.
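This is only a sketch of the substrate, not Yilmaz's full architecture (which also involves input encoding and state collection):

```julia
# One step of an elementary (one-dimensional, binary) cellular automaton
# with periodic boundaries. `rule` is the Wolfram rule number, e.g. 90 or 110.
function ca_step(state::Vector{Int}, rule::Int)
    n = length(state)
    next = similar(state)
    for i in 1:n
        l = state[mod1(i - 1, n)]
        c = state[i]
        r = state[mod1(i + 1, n)]
        idx = 4l + 2c + r          # neighborhood as a 3-bit number
        next[i] = (rule >> idx) & 1
    end
    return next
end

state = rand(0:1, 32)
state = ca_step(state, 110)
```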

These are just a few examples of the improvements that can be found in the literature on Echo State Networks. The study of this model is still in relatively early days, but it is already showing better results in chaotic time series prediction than other machine learning methods, as shown by Chattopadhyay et al.

MartinuzziFrancesco commented 4 years ago

I am going to close this issue, since almost all of the variations described here were implemented during GSoC 2020. I will open new issues for interesting new implementations that I come across.