Closed: sooyoungcha closed this issue 3 years ago
When calculating neuron coverage, I had initially expected that each element of a layer's full activation tensor would count as a neuron. However, in adapt/network/network.py line 69, you take a tensor of shape H×W×C and turn it into a vector of dimension C by taking the mean over the first two dimensions. I believe DeepXplore does this as well, but I'm not sure I understand why.

Neurons are the computational units of a neural network. In both fully-connected and convolutional layers, the last dimension indexes these units: in a fully-connected layer each output element is one unit, while in a convolutional layer each channel corresponds to one filter applied across every spatial position, so the H×W activations of a channel are repeated applications of the same unit rather than distinct neurons. Therefore, we reduce all dimensions except the last one by averaging over them, obtaining one activation value per neuron.
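A minimal sketch of the reduction being discussed, using NumPy (the array shapes and variable names here are illustrative assumptions, not taken from the Adapt codebase):

```python
import numpy as np

# Hypothetical activation map of a convolutional layer, shape (H, W, C).
# Each of the C channels is produced by one filter, so the channels are
# treated as the "neurons"; the H x W spatial positions are repeated
# applications of the same filter.
activations = np.arange(24, dtype=float).reshape(2, 3, 4)  # H=2, W=3, C=4

# Average over every axis except the last (channel) axis, yielding one
# scalar activation per neuron.
neuron_values = activations.mean(axis=tuple(range(activations.ndim - 1)))

print(neuron_values.shape)  # (4,)
```

For a fully-connected layer the activation is already a vector, so the same reduction is a no-op over an empty set of leading axes and each element is its own neuron.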