I am quite interested in using LSM. However, I have some questions regarding it and the BEE simulator.
As I understand it, we do not train the weights of the neurons inside the "liquid" part of the LSM and only train the weights of the final hidden layer. How are those weights trained, though: with STDP or with another learning algorithm?
Is the "Liquid" part of the LSM where the weights are not updated, is only one layer, or it could be several layers stacked together?
What loss function does BEE use, and what is its input: the membrane potentials of the last hidden layer?
Do you follow the LSM with a separate classification model (for example, an SVM)? If so, do you train the LSM first and then the SVM, or do you train both at the same time?
If I have misunderstood the whole idea of the training, could you please give a short illustration of how an LSM is trained?
The example Jupyter notebook is not very clear to me. Could you provide a Colab version of it, which might be a better way to get started with the BEE simulator?
Is it possible to pip install the BEE simulator directly into an existing virtual environment?
Thanks a lot and looking forward to your response.
Sorry for my delay, but I was in the middle of a continental change in my life :)
BEE works on the principle that the reservoir ("liquid") never changes after it is created. Therefore, there is no plasticity inside the reservoir.
The reservoir is a 3D-shaped neural network, and you can set its dimensions, so it could be only one layer if you want, but in my opinion that wouldn't make much sense.
Back to 1: the reservoir is generated following a specific distribution (small-world networks) and the parameters you set. After that, nothing changes besides the membrane potentials and the other variables related to the spiking neuron model's internal workings.
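To make the "fixed reservoir" idea concrete, here is a minimal NumPy sketch of the general construction (this is NOT BEE's actual API; the dimensions, constants `C` and `lam`, and the Gaussian weight draw are illustrative assumptions): neurons sit on a 3D grid, connections are drawn with a distance-dependent probability, and the resulting weight matrix is then frozen.

```python
# Minimal sketch of a fixed LSM-style reservoir (not BEE's API).
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical reservoir dimensions (x, y, z): 5x5x5 = 125 neurons
dims = (5, 5, 5)
coords = np.array([(x, y, z)
                   for x in range(dims[0])
                   for y in range(dims[1])
                   for z in range(dims[2])], dtype=float)
n = len(coords)

# Distance-dependent connection probability: C * exp(-(d / lam)^2)
C, lam = 0.3, 2.0
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
p_connect = C * np.exp(-(d / lam) ** 2)
np.fill_diagonal(p_connect, 0.0)  # no self-connections

# Draw the synaptic weights once; after this, W is never trained.
mask = rng.random((n, n)) < p_connect
W = np.where(mask, rng.normal(0.0, 1.0, (n, n)), 0.0)
```

Only the neuron state variables (membrane potentials, synaptic currents, etc.) evolve during simulation; `W` stays exactly as it was drawn.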
You generate a reservoir ("liquid") once and can then reuse the same reservoir for multiple tasks. For each task you train an external classifier (it could be anything; in my PhD I used a simple linear regression - Ridge).
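As a hedged sketch of that readout step, the snippet below fits scikit-learn's Ridge on reservoir activity. The `liquid_states` matrix (one row of reservoir activity per input sample) and the labels are placeholder data standing in for whatever you record from the simulator; only this external readout is fit, never the liquid itself.

```python
# Sketch of training an external Ridge readout on fixed liquid states.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
liquid_states = rng.normal(size=(200, 125))  # placeholder reservoir activity
targets = rng.integers(0, 2, size=200)       # placeholder task labels

X_train, X_test, y_train, y_test = train_test_split(
    liquid_states, targets, test_size=0.25, random_state=0)

readout = Ridge(alpha=1.0)  # simple linear readout, no STDP involved
readout.fit(X_train, y_train)
print("test R^2:", readout.score(X_test, y_test))
```

For a different task you would keep the same reservoir, collect new liquid states, and fit a new readout.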
I suggest you have a look at the papers cited at the bottom of the readme, because they contain examples. Also check the repositories from those papers (also linked in the readme), because they have notebooks to reproduce everything (assuming Numpy etc. didn't break anything since the papers were published).
The Colab version is a good idea. I will try to do it in the future, but you are welcome to do it yourself, and I can help as much as my schedule allows.
No, I didn't prepare BEE for that. In fact, I only tested it on macOS and Linux (Ubuntu), so I don't even know whether it would work elsewhere, since you need to compile it using gcc.