# Overview

Readthrough and analysis related to *Quantum reservoir processing*. Work in progress.
Neural networks can be thought of as composed of 2 main architectural elements: a feature map that projects the input into a representation space, and a readout that performs the final task (e.g. classification) on top of that representation.
Traditional neural networks used in Machine Learning do not show this distinction very clearly, as the two roles are blended across the same jointly trained layers rather than implemented by separate parts of the architecture.
Deep Neural Networks used in Deep Learning show this distinction clearly: in a CNN, for example, the convolutional backbone acts as the feature extractor while the final fully connected layers act as the readout/classifier, as in the sketch below.
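As a minimal sketch of this split (a hypothetical toy model, not from the paper), a small PyTorch CNN can be written with the two roles made explicit:

```python
import torch
import torch.nn as nn

# Hypothetical toy CNN: the two architectural roles are made explicit.
# Feature extractor: maps the raw image into a representation space.
features = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),          # -> 32-dimensional representation
)
# Readout: a simple (here linear) classifier on top of that representation.
readout = nn.Linear(32, 10)

x = torch.randn(8, 1, 28, 28)   # batch of fake 28x28 grayscale images
logits = readout(features(x))   # shape (8, 10)
```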
The distinction is also clear in Reservoir Computing: the Reservoir is a fixed (untrained) dynamical system that maps the input into a high dimensional state space, and the Readout is a simple, typically linear, trained layer operating on that state.
The underlying idea is very similar to the SVM Kernel Trick: the Readout performs classification, hence it essentially searches for linear separations in its input space, which is the Reservoir output space; finding such separations is facilitated when that space is high dimensional (cf. Cover's theorem), as illustrated below.
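A toy illustration of this effect (my own example, not from the paper): XOR-like data is not linearly separable in its original 2D space, but after a fixed, untrained random nonlinear expansion into a higher dimensional space, a plain linear classifier separates it well:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# XOR-like dataset: not linearly separable in the original 2D space.
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

# Fixed random nonlinear expansion into a high dimensional space
# (an untrained feature map, playing the role of the reservoir/kernel).
W = rng.normal(size=(2, 200))
b = rng.uniform(0, 2 * np.pi, size=200)
Z = np.tanh(X @ W + b)

lin_raw = LogisticRegression(max_iter=1000).fit(X, y)
lin_hd = LogisticRegression(max_iter=1000).fit(Z, y)
print("accuracy in 2D:  ", lin_raw.score(X, y))   # ~ chance level
print("accuracy in 200D:", lin_hd.score(Z, y))    # close to 1.0
```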
The idea seems to be: if the Reservoir output space is high dimensional enough, there is no need to train the Reservoir itself, hence no need to fit this mapping to the data; training can focus entirely on the linear discriminator in that space, which makes it much cheaper and easier, as in the sketch below.
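A minimal Echo State Network style sketch of this training scheme (illustrative; the task and all hyperparameters are my assumptions): the recurrent reservoir weights are random and stay fixed, and the only trained component is a linear readout, fit in closed form via ridge regression:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_res = 1, 300
# Fixed random reservoir: never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius < 1

def reservoir_states(u):
    """Run the input sequence through the fixed reservoir."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x)
    return np.array(states)

# Toy task: one-step-ahead prediction of a noisy sine wave.
t = np.linspace(0, 60, 1500)
u = np.sin(t) + 0.05 * rng.normal(size=t.size)
X = reservoir_states(u[:-1])
y = u[1:]

# The only trained part: a linear readout, fit in closed form (ridge).
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Since only `W_out` is learned, training reduces to solving a single linear system, no matter how complex the reservoir dynamics are; this is the cost advantage the note above refers to.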