PlasmaControl / PyRCN

A Python 3 framework for Reservoir Computing with a scikit-learn-compatible API.
BSD 3-Clause "New" or "Revised" License

Is this package available for Multiple Time Series and Extendable to Deep RC #12

Closed · zohrehansari closed this issue 2 years ago

zohrehansari commented 3 years ago

Hi, I want to use this package for a speech recognition task. Since we have multiple time series in speech datasets, we need to implement the code differently from datasets such as MNIST, where we can pass only one matrix for training. So, would you please indicate whether this package is capable of handling multiple time series or not?
Moreover, I did not see any deep RC structure in the examples. My second question is whether you have any example of extending this package to deep RC structures? I appreciate your kind help.

renierts commented 3 years ago

Hi, thanks for this question. I think you mean that your speech dataset contains many feature vector sequences, which were extracted from multiple audio files. PyRCN is capable of handling sequences. You can basically use the following outline:

esn = ESNClassifier()
for f in file_list:
    X, y = extract_features(f)
    esn.partial_fit(X, y)

X and y are the features and labels of one audio file. In this way, you make sure that each time series is treated independently of the others.

We are currently working on extending this package so that deep RC structures can be implemented in a straightforward way. One option is to create a scikit-learn Pipeline including multiple NodeToNode instances. Another way would be to do it manually: train a first ESN, compute its output, and train a second ESN that receives the output of the first one as input (see the sketch below).
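For illustration, here is a rough sketch of the manual stacking, using toy random data in place of real features; it assumes ESNClassifier exposes predict_proba in the usual scikit-learn style:

```python
import numpy as np
from pyrcn.echo_state_network import ESNClassifier

rng = np.random.default_rng(42)
X = rng.standard_normal((500, 18))   # toy feature sequence: (n_frames, n_features)
y = rng.integers(0, 3, size=500)     # toy frame-wise class labels

# 1) Train a first ESN on the raw features.
esn_1 = ESNClassifier().fit(X, y)

# 2) Train a second ESN that receives the output of the first one as input.
#    Here the class probabilities of the first ESN serve as input features.
X_stacked = esn_1.predict_proba(X)
esn_2 = ESNClassifier().fit(X_stacked, y)

y_pred = esn_2.predict(X_stacked)
```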

Does that help you?

zohrehansari commented 3 years ago

Thank you for your clear response. It was very helpful. As I have already extracted the MFCC features from the database with the HTK toolbox, I do not need the extract_features step. However, could you please explain how X and y should be structured to be fed to esn.partial_fit?

renierts commented 3 years ago

Sounds good.

Well, X and y follow the scikit-learn API: X is a 2D array of shape (n_samples, n_features) with one feature vector per time step, and y is a 1D array of shape (n_samples,) with one class label per time step.
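For example (a minimal illustration; the sizes are placeholders):

```python
import numpy as np

n_frames, n_features = 100, 18               # placeholder sizes for one audio file
X = np.random.randn(n_frames, n_features)    # 2D: one feature vector per time step
y = np.random.randint(0, 3, size=n_frames)   # 1D: one class label per time step
```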

renierts commented 3 years ago

Dear @zohrehansari, is this still an open issue or can it be closed?

zohrehansari commented 3 years ago

Dear @renierts, unfortunately, I ran into a problem when I attempted to do as you suggested. Since I had the feature vectors and labels of the audio files in .mat file format, I wrote the code so that the feature vectors (X) and labels (y) of each audio file are loaded one at a time, followed by a call to esn.partial_fit(X, y). For the first audio file, X is a numpy.ndarray of shape (782, 18), where the feature vectors have dimension 18, and y is a numpy.ndarray of shape (782,) containing the numeric class values. However, when running esn.partial_fit(X, y), I receive this error:

ValueError: Expected array-like (array or non-string sequence), got None

Would you please guide me on what the problem is? Thank you in advance.

renierts commented 3 years ago

Dear @zohrehansari, are you using the ESNClassifier? And can you provide me with an example of your training code?

Have you remembered to pass classes=range(n_classes) during the first call of partial_fit if you use the ESNClassifier? You can find an example in Cell 5 of the following notebook: https://github.com/TUD-STKS/PyRCN/blob/master/examples/digits.ipynb

I will definitely update the documentation so that this becomes clear.
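For illustration, a minimal sketch of such a training loop; the file paths, .mat key names, and number of classes below are placeholders for whatever your data actually uses:

```python
import numpy as np
from scipy.io import loadmat
from pyrcn.echo_state_network import ESNClassifier

file_list = ["file_001.mat", "file_002.mat"]   # placeholder paths
n_classes = 10                                 # placeholder number of classes

esn = ESNClassifier()
for i, f in enumerate(file_list):
    data = loadmat(f)
    X = np.asarray(data["X"], dtype=float)          # (n_frames, n_features); key name is a placeholder
    y = np.asarray(data["y"]).ravel().astype(int)   # (n_frames,); key name is a placeholder
    if i == 0:
        # classes must be passed on the first call to partial_fit
        esn.partial_fit(X, y, classes=range(n_classes))
    else:
        esn.partial_fit(X, y)
```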

renierts commented 2 years ago

I guess that this issue can be closed. Feel free to reopen it if necessary.