google-research / lottery-ticket-hypothesis

A reimplementation of "The Lottery Ticket Hypothesis" (Frankle and Carbin) on MNIST.
https://arxiv.org/abs/1803.03635
Apache License 2.0
707 stars 136 forks

Getting activations instead of weights #11

Open ZohrehShams opened 4 years ago

ZohrehShams commented 4 years ago

I'm trying to get the activation values of the neurons in each layer and store them. I have tried many variations of the `dense_layer` method in `model_base`, so that I can store activations in a dictionary, the same way weights are stored for each layer. But unlike the weights, an activation tensor's leading dimension depends on the batch size and so isn't known before training, and all my attempts have failed, including those that define a TF variable with a dynamic shape. Is there another way to extract activations layer by layer that I could try? Thanks.
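One way around the batch-size problem is to skip the fixed-size TF variable entirely: fetch each layer's activation tensor per batch (e.g. via `sess.run` in TF1) and accumulate the resulting numpy arrays in a Python dict, concatenating only at the end. A minimal numpy stand-in for that storage pattern (layer names and shapes here are hypothetical, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer dense network: weights have fixed shapes,
# but activations carry a leading batch dimension.
weights = {
    "layer1": rng.standard_normal((4, 8)),
    "layer2": rng.standard_normal((8, 3)),
}

def forward(x, weights, activations):
    """Forward pass that appends each layer's activations to a dict of lists."""
    h = np.maximum(x @ weights["layer1"], 0.0)  # ReLU
    activations.setdefault("layer1", []).append(h)
    out = h @ weights["layer2"]
    activations.setdefault("layer2", []).append(out)
    return out

activations = {}
# Batches of different sizes are fine: each is stored as-is.
for batch in (rng.standard_normal((16, 4)), rng.standard_normal((5, 4))):
    forward(batch, weights, activations)

# Concatenate across batches only once all batches have been seen.
stacked = {k: np.concatenate(v, axis=0) for k, v in activations.items()}
print(stacked["layer1"].shape)  # (21, 8): batch dimension varies, weights do not
```

In a TF1 graph the `forward` body would instead be `sess.run` fetches of the layer output tensors, but the dict-of-arrays accumulation on the Python side is the same.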