duguyue100 closed this issue 8 years ago.
I guess the problem is caused either by `get_activations_batch` or by the `snn_precomp` that is available in `INI_target_sim`. Since I don't see any logical difference between the old `get_activations_batch` and the new one, maybe there is something fishy in the `snn_precomp` variable?
Please specify what change you are concerned about in those plots. In one case you are using average pooling, in the other max pooling, so of course there will be differences. I'm surprised that in both cases the activations extend to negative values. How can this be, since we are using ReLUs?
I have not pushed anything since yesterday at noon, because I'm currently doing major refactoring, which will also affect `get_activations_batch` and `snn_precomp`. I will push my changes on Monday or Tuesday next week. On my side the plots are fine.
The first two figures are from a model I trained yesterday, named `cnn_avg_pool.py` in the CIFAR-10 folder. It is a direct copy of the CNN model for CIFAR-10; I only changed the number of filters in the third and fourth convolution layers, and there is a ReLU activation after every convolution layer. And yes, it is strange to have negative values.
I will check out `SpikeConv2DReLU`, because this doesn't make sense at all.
OK, thanks. But if you don't find something right away, I would suggest you wait for my commit next week, because I simplified lots of things in the whole workflow and structure. Since it is working fine on my side, I expect it will fix this too.
Yeah, thanks.
OK, I will wait for your change next week then; somehow the plots are still messed up even after I cloned a fresh copy to the server.
The 99.9 percentile trick works beautifully by the way.
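For readers following along, the percentile trick refers to normalizing by a high percentile of the activations rather than their maximum, so a handful of outliers don't scale down all firing rates. A minimal sketch of the idea (the function name is illustrative, not the toolbox API):

```python
import numpy as np

# Normalize by the 99.9th percentile instead of the maximum, so that a
# single extreme activation does not dominate the scale factor.
def norm_factor(activations, percentile=99.9):
    return np.percentile(activations, percentile)

acts = np.concatenate([np.full(999, 1.0), [100.0]])  # one extreme outlier
print(norm_factor(acts))   # close to 1, whereas acts.max() would give 100
```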
@rbodo Alright, I found the problem: in `keras_input_lib`, when you assign the `get_activ` variable for convolution and dense layers, you assign it to the layer's own output instead of to the output of its activation layer. I changed this and it works fine now.
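For reference, the effect of that bug can be illustrated with a tiny NumPy sketch (the layer functions below are hypothetical stand-ins, not the actual `keras_input_lib` code): reading the output of the convolution/dense layer itself gives pre-activation values, which can be negative, while the output of the following activation layer is what the ReLU plots should show.

```python
import numpy as np

# Hypothetical stand-ins for a Dense/Conv layer and its separate
# Activation layer (illustrative only, not the actual Keras API):
def dense_layer(x, w):
    return x @ w                  # pre-activation output: can be negative

def activation_layer(x):
    return np.maximum(x, 0.0)     # ReLU output: never negative

x = np.array([[1.0, -2.0],
              [0.5,  1.0]])
w = np.array([[1.0],
              [1.0]])

pre = dense_layer(x, w)           # what the buggy get_activ returned
post = activation_layer(pre)      # what it should return

print(pre.min(), post.min())      # -1.0 0.0
```

This would explain the negative values in the activation plots above.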
I see. Thanks for fixing it, but this part will be gone in the new version anyway, I'm simplifying things a lot. Will push on Monday.
I was running some experiments. Although the activation figures and other plots are fine, the Pearson coefficient figure and the activity distribution figure are totally different from before.
As an example, this is what I got from an average pooling experiment.
And this is what I got from max pooling last night when I tried to improve the memory usage:
Did anyone change how the model calculates the activations?
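For context, a change in how activations are computed would show up directly in a Pearson plot like the one above, since it correlates ANN activations with SNN spike rates layer by layer. A minimal sketch of that comparison (variable names are illustrative):

```python
import numpy as np

# Pearson correlation between ANN activations and SNN spike rates of one
# layer; well-converted layers should give coefficients close to 1.
def pearson(ann_acts, snn_rates):
    return np.corrcoef(ann_acts.ravel(), snn_rates.ravel())[0, 1]

ann = np.array([0.0, 0.5, 1.0, 2.0])   # made-up ANN activations
snn = np.array([0.0, 0.4, 1.1, 1.9])   # made-up, nearly proportional rates
print(pearson(ann, snn))               # close to 1
```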