NeuromorphicProcessorProject / snn_toolbox

Toolbox for converting analog to spiking neural networks (ANN to SNN), and running them in a spiking neuron simulator.
MIT License
360 stars · 105 forks

Is there anything wrong with the `plot_pearson_coefficients`? #4

Closed duguyue100 closed 8 years ago

duguyue100 commented 8 years ago

I was running some experiments. Although the activation figures and other plots are fine, the Pearson coefficients figure and the activity distribution figure are totally different from before.

As an example, this is what I got from an average pooling experiment. pearson activity_distribution

And this is what I got from max pooling last night when I tried to improve the memory usage: pearson activity_distribution

Did anyone change how the model calculates the activations?

duguyue100 commented 8 years ago

I guess the problem is caused either by `get_activations_batch` or by the `snn_precomp` that is available in `INI_target_sim`.

I don't see any logical difference between the old `get_activations_batch` and the new one, so maybe there is something fishy in the `snn_precomp` variable?

rbodo commented 8 years ago

Please specify what change you are concerned about in those plots. In one case you are using average pooling, in the other max pooling, so of course there will be differences. I'm surprised that in both cases the activations extend to negative values. How can this be, since we are using ReLUs? I haven't pushed anything since yesterday at noon, because I'm currently doing major refactoring that will also affect `get_activations_batch` and `snn_precomp`. I will push my changes on Monday or Tuesday next week. On my side the plots are fine.
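For context, the reasoning here can be made concrete with a minimal sketch (plain Python, not snn_toolbox code): a ReLU clamps every value to be non-negative, so an activations plot that extends below zero can only come from reading a layer's output *before* its ReLU is applied.

```python
def relu(values):
    """Element-wise rectified linear unit: clamps negatives to zero."""
    return [max(0.0, v) for v in values]

pre_activation = [-1.5, -0.2, 0.0, 0.7, 2.3]  # raw conv output, can be negative
post_activation = relu(pre_activation)         # what an activations plot should show

print(post_activation)  # [0.0, 0.0, 0.0, 0.7, 2.3]
```

No value after the ReLU can be negative, so negative tails in the distribution plots point to the activations being taken from the wrong place in the graph.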

duguyue100 commented 8 years ago

The first two figures are from a model I trained yesterday, named `cnn_avg_pool.py` in the CIFAR-10 folder. It is a direct copy of the CNN model in CIFAR-10; I just changed the number of filters in the third and fourth convolution layers, and there is a ReLU activation after every convolution layer. And yes, it is strange to have negative values. 0activations

I will check out `SpikeConv2DReLU`, because it doesn't make sense at all...

rbodo commented 8 years ago

OK, thanks. But if you don't find something right away, I would suggest you wait for my commit next week, because I simplified lots of things in the whole workflow and structure. Since it is working fine on my side, I expect it will fix this too.

duguyue100 commented 8 years ago

Yeah, thanks.

duguyue100 commented 8 years ago

OK, I will wait for your change next week then; somehow the plots are still messed up even after I cloned a fresh copy to the server.

The 99.9th percentile trick works beautifully, by the way.
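For readers unfamiliar with the trick being referenced: it refers to normalizing activations by a high percentile rather than by their maximum, so a single outlier does not dominate the scale. A minimal sketch of that idea (a hypothetical helper, not the toolbox's actual implementation):

```python
def percentile(values, q):
    """Return the q-th percentile (0..100) using linear interpolation."""
    xs = sorted(values)
    rank = (q / 100.0) * (len(xs) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(xs) - 1)
    frac = rank - lo
    return xs[lo] * (1.0 - frac) + xs[hi] * frac

def normalize(activations, q=99.9):
    """Scale activations by a high percentile instead of the maximum."""
    scale = percentile(activations, q)
    return [a / scale for a in activations]

# One extreme outlier (1000.0) barely shifts the 99.9th percentile,
# whereas a max-based scale would be determined entirely by it.
acts = [0.1] * 999 + [1000.0]
print(percentile(acts, 99.9) < 1000.0)  # True
```

The robustness to outliers is exactly why a percentile-based scale tends to give better-behaved normalization than the raw maximum.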

duguyue100 commented 8 years ago

@rbodo Alright, I found the problem: in `keras_input_lib`, when you assign the `get_activ` variable for convolution and dense layers, you take the layer's own output instead of the output of its following activation layer. I changed this and it works fine now.

https://github.com/NeuromorphicProcessorProject/snn_toolbox/blob/spikemp/snntoolbox/model_libs/keras_input_lib.py#L158-L171

rbodo commented 8 years ago

I see. Thanks for fixing it, but this part will be gone in the new version anyway; I'm simplifying things a lot. Will push on Monday.