laurentperrinet opened 3 years ago
I will start off with a notebook describing what we have done so far in our model with @albertoarturovergani.
done! (at last!)
The way we transform an analog movie to a SpikeSourceArray: https://github.com/SpikeAI/2020-11_brainhack_Project7/blob/main/input/B_SpikeSourceArray.ipynb
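For reference, here is a minimal sketch of the general idea: turning a stack of movie frames into per-pixel spike times and feeding them to a pyNN `SpikeSourceArray`. The frame rate, the intensity-threshold coding and the `pyNN.nest` backend are my own assumptions here, not necessarily what the notebook does.

```python
# Minimal sketch (assumptions: pyNN.nest backend, 100 Hz frame rate,
# simple intensity-threshold coding -- not the notebook's exact method).
import numpy as np
import pyNN.nest as sim

def frames_to_spike_times(frames, frame_duration_ms=10.0, threshold=0.5):
    """Return one list of spike times (ms) per pixel.

    frames: array of shape (n_frames, height, width), values in [0, 1].
    A pixel emits one spike per frame in which its intensity exceeds
    the threshold, at the onset time of that frame.
    """
    n_frames, height, width = frames.shape
    flat = frames.reshape(n_frames, -1)            # (n_frames, n_pixels)
    spike_times = []
    for pixel in range(flat.shape[1]):
        active = np.nonzero(flat[:, pixel] > threshold)[0]
        spike_times.append((active * frame_duration_ms).tolist())
    return spike_times

# toy movie: 20 frames of 8x8 random intensities
movie = np.random.rand(20, 8, 8)
spike_times = frames_to_spike_times(movie)

sim.setup(timestep=0.1)
source = sim.Population(len(spike_times),
                        sim.SpikeSourceArray(spike_times=spike_times),
                        label="movie_input")
```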
I have forked tonic into https://github.com/SpikeAI/tonic to be able to fix some errors...
I can now import tonic datasets into pyNN:
check out https://github.com/SpikeAI/2020-11_brainhack_Project7/blob/main/input/D_tonic2SpikeSourceArray.ipynb
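A minimal sketch of that tonic-to-pyNN direction is below. It assumes tonic's structured event arrays (fields `x`, `y`, `t` in microseconds, `p`), uses NMNIST as the example dataset, and picks `pyNN.nest` and one neuron per (x, y, polarity) address as my own choices; see the notebook above for the actual conversion.

```python
# Minimal sketch (assumptions: tonic structured event arrays with fields
# x, y, t (microseconds), p; NMNIST as example dataset; pyNN.nest backend).
import numpy as np
import tonic
import pyNN.nest as sim

dataset = tonic.datasets.NMNIST(save_to="./data", train=False)
events, target = dataset[0]                    # one event stream + its label

width, height, n_polarities = 34, 34, 2        # NMNIST sensor size
n_neurons = width * height * n_polarities

# one neuron per (x, y, polarity) address
address = (events["p"].astype(int) * width * height
           + events["y"].astype(int) * width
           + events["x"].astype(int))
times_ms = events["t"] / 1000.0                # microseconds -> milliseconds

spike_times = [[] for _ in range(n_neurons)]
for addr, t in zip(address, times_ms):
    spike_times[addr].append(float(t))

sim.setup(timestep=0.1)
source = sim.Population(n_neurons,
                        sim.SpikeSourceArray(spike_times=spike_times),
                        label="NMNIST_input")
```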
Now, the goal is to generate events within tonic from a video (i.e., an event-based camera simulator) and use that output as input to pyNN.
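As a placeholder for such a simulator, here is a naive sketch of what frame-to-event conversion could look like: simple log-brightness-change thresholding that emits ON/OFF events in the same layout as tonic's event arrays. This is a toy stand-in, not tonic's API and not a faithful event-camera model.

```python
# Naive frame-to-event conversion sketch: emit an ON/OFF event whenever a
# pixel's log-intensity changes by more than a threshold between frames.
# A toy stand-in for an event-camera simulator, not tonic's API.
import numpy as np

def video_to_events(frames, frame_duration_us=10_000, threshold=0.2):
    """frames: (n_frames, height, width) array of intensities in (0, 1].

    Returns a structured array with fields x, y, t (microseconds), p,
    mirroring the layout of tonic's event arrays.
    """
    log_frames = np.log(np.clip(frames, 1e-3, None))
    events = []
    reference = log_frames[0].copy()
    for i in range(1, len(log_frames)):
        diff = log_frames[i] - reference
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            polarity = 1 if diff[y, x] > 0 else 0
            events.append((x, y, i * frame_duration_us, polarity))
            reference[y, x] = log_frames[i][y, x]   # reset the reference level
    dtype = [("x", "<i8"), ("y", "<i8"), ("t", "<i8"), ("p", "<i8")]
    return np.array(events, dtype=dtype)

events = video_to_events(np.random.rand(20, 8, 8))
```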
One open question is whether to add different layers to the input.
Event-based cameras are becoming increasingly available and bring a new way to transform visual input from a dense (frame-based) representation to an event-based one.
In this issue, we will try to use existing event-based datasets, through the tonic library, to provide the network with a SpikeSourceArray.