I would suggest introducing new variables which keep track of the corresponding information and are updated inside the `fs` function.
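Roughly something like this (the `fs()` signature, the `compute_fs_output` placeholder, and the variable names are only illustrative, not the actual `fs_coding.py` code):

```python
import tensorflow as tf

# Illustrative accumulators; these names do not exist in fs_coding.py.
total_spikes = tf.Variable(0.0, trainable=False, name="total_spikes")
max_spikes = tf.Variable(0.0, trainable=False, name="max_spikes")

def fs(x):
    # Placeholder for the actual FS-coding computation producing the spike tensor z.
    z = compute_fs_output(x)

    # Accumulate the total spike count and keep a running maximum.
    update_total = total_spikes.assign_add(tf.reduce_sum(z))
    update_max = max_spikes.assign(tf.maximum(max_spikes, tf.reduce_max(z)))

    # Make sure both updates run whenever fs() is evaluated.
    with tf.control_dependencies([update_total, update_max]):
        return tf.identity(z)
```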
1) `n_neurons` in `fs_coding.py` (under the `print_n_neurons` part) only collects Activation layer neurons. I assume that was intended, as the code only focuses on these layers. Correct?
2) `n_neurons` seems to be over-counting the neurons. For example, say the activation layers have x neurons in total. When just one image is predicted via the model, it outputs 2x, and it ends up at 3x neurons once the model is evaluated. Logically, I'd expect it to stay at x the whole time, since the overall neuron count in the model doesn't change. Is this a bug? If not, can you please explain the reason behind it?
3) Considering the line `python extract_spikes.py --file_name=fs_spikes.txt --n_neurons=? --n_images=?`, what should the value of `n_neurons` be there? x, 2x, or 3x, as in the output of `n_neurons` in the question above?
4) The spike counts printed with `print_spikes = True` differ across different batch sizes in `model.evaluate(...)`. I can't reason about that and it doesn't make sense to me. Could you shed light on that behaviour as well?

Sorry for the late response:
Building the model multiple times (without resetting the `n_neurons` variable to zero) would lead to an over count in the way you described. The same is true if TensorFlow retraces the graph. Have you written some custom code there?
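As a toy illustration of the retracing effect (a made-up Keras example, not the FS-neurons code):

```python
import tensorflow as tf

n_neurons = 0  # global counter, bumped whenever the layer's Python code runs

def count_neurons(x):
    # This runs once per *trace* of the graph, not once per model,
    # so every retrace adds the layer size again.
    global n_neurons
    n_neurons += int(x.shape[-1])
    return x

inputs = tf.keras.Input(shape=(8,))
outputs = tf.keras.layers.Lambda(count_neurons)(tf.keras.layers.Dense(8)(inputs))
model = tf.keras.Model(inputs, outputs)
model.compile(loss="mse")
print(n_neurons)                                    # 8 after building the model once

model.predict(tf.zeros((1, 8)))                     # predict() traces its own graph
model.evaluate(tf.zeros((1, 8)), tf.zeros((1, 8)))  # evaluate() traces yet another graph
print(n_neurons)                                    # typically 24 by now (x, 2x, 3x)
```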
The spike counts are printed once per batch (this happens in `fs_coding.py`). So if the batch size is 16, the number of spikes printed will be roughly 16 times higher. This is why `extract_spikes.py` has an `n_images` flag, which can be used to compensate for this.
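Conceptually the compensation is just a normalization; a simplified sketch (not the literal `extract_spikes.py` code, and the numbers are placeholders):

```python
import numpy as np

# Per-batch sums as printed by tf.Print and collected in fs_spikes.txt
# (dummy values, just for illustration).
spikes = np.array([48773.0, 48652.0, 48901.0])

n_neurons = 6500   # placeholder: total number of Activation-layer neurons
n_images = 48      # placeholder: batch_size * number of batches that were evaluated

# Dividing the grand total by n_images (and n_neurons) removes the
# dependence on how the images were split into batches.
avg_spikes_per_neuron_per_image = np.sum(spikes) / (n_neurons * n_images)
print(avg_spikes_per_neuron_per_image)
```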
Many thanks for the detailed answers.
4) Oh, I see. I reckoned `n_images` to be the test-set size, based on `test_resnet_cifar_spikes.sh`. What should the value of `n_images` be to compensate for the batch size then?
5) New question: Is it possible or plausible that the accuracy increases after the evaluation with FS-neurons? It is only 0.01%, but it is there.
`n_images` should be the `batch_size` times the number of batches that have been evaluated.

Here are my test results of the spike count:
| Batch size | Spikes count | Spikes sum | Average number of spikes |
|---|---|---|---|
| 1 | 400000 | 48773019 | 0.7494317609096496 |
| 16 | 25000 | 48773033 | 0.7494319760295022 |
| 32 | 12520 | 48773026 | 0.7494318684695759 |
Spikes count is the result of `z = tf.Print(z, [tf.reduce_sum(z)])`, so it is actually the "count of sums".
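As a quick sanity check on the numbers above: one sum is printed per batch, so the count times the batch size should stay roughly constant:

```python
# "Spikes count" * batch_size for the three runs above:
print(400000 * 1)   # 400000
print(25000 * 16)   # 400000
print(12520 * 32)   # 400640 (slightly higher because the last batch is smaller)
```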
`n_neurons` and `n_images` are kept the same for all of them. Because the sums come out (almost) the same in every case, it seems solid to me to keep `n_images` as it is, without adding the batch size to the equation, for this use case. What is your take?
To make it clearer: for the above tests, `n_neurons` is the count of all Activation layer neurons and `n_images` is the size of the test (evaluation) set.
I've run both the `n_neurons` and spike count experiments on TF 1.14.0 as well. The spike count is the same, yet `n_neurons` comes out as 2x this time. FYI, I use a custom model, and seemingly the model's innate `call` function is called twice, hence the 2x.
It's not quite clear to me what you mean by `Spikes count` and `Spikes sum`.

If you use the whole test data set you will always have 10,000 images (in the case of CIFAR10), hence it is independent of the batch size; that's why the results are (almost) the same, I think.
I see, thanks for letting me know. `spikes` is as in `extract_spikes.py`; `count` is its length, and `sum` is the output of `np.sum(spikes)`.
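In other words (dummy values, just to illustrate the terms):

```python
import numpy as np

spikes = np.array([48773.0, 48652.0])  # per-batch sums, as used in extract_spikes.py
count = len(spikes)      # "Spikes count" above
total = np.sum(spikes)   # "Spikes sum" above
```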
How could I gather the max. spike count / number of neurons and the total spike count / number of neurons? Currently, the spike count in particular seems rather scattered.