Closed: saihttam closed this issue 6 years ago
Thank you so much for your comment, @saihttam. You are right, Reshape and Permute should be excluded as well where needed. I will try to update soon. You are also right about dropout; I agree it should not be included, since it does nothing after training. And you are also right about the case where the last layer is split into a Dense and a Lambda/softmax layer (currently I think all the DNNs are defined this way?) simply to avoid computing gradients that are too small: they should only be counted once. Thanks!
First of all, thanks for providing the code for your paper; it really helps to understand the approach better.
As far as I can see, neuron coverage is calculated from the outputs of all layers except input and flatten layers, e.g. init_dict in utils.py. So neuron coverage is essentially activation coverage for each "relevant" layer, i.e. each layer that performs some computation. As such, Flatten is excluded, and I guess similar layers such as Reshape and Permute should be excluded as well where needed. For Dropout, I don't think it should be considered either: it only drops inputs randomly during training and is used purely for regularization, so I can't see the benefit of including it in the coverage.
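Just to illustrate what I mean, here is a minimal sketch along the lines of init_dict, assuming standard Keras layer naming; the name `init_coverage_dict` and the exact exclusion set are mine for illustration, not the repository's actual code:

```python
from collections import defaultdict

# Layers that perform no computation of their own; extends the
# flatten/input exclusion from utils.py with reshape, permute and dropout.
EXCLUDED_LAYER_PREFIXES = ('flatten', 'input', 'reshape', 'permute', 'dropout')


def init_coverage_dict(model):
    """Map (layer_name, neuron_index) -> covered? for all relevant layers."""
    coverage = defaultdict(bool)
    for layer in model.layers:
        if any(p in layer.name for p in EXCLUDED_LAYER_PREFIXES):
            continue
        # One entry per "neuron", i.e. per channel of the layer's output.
        for neuron_idx in range(layer.output_shape[-1]):
            coverage[(layer.name, neuron_idx)] = False
    return coverage
```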
Finally, there seems to be a minor issue in the code w.r.t. the coverage calculation. The last "layer" of the models is currently separated into a Dense and a Lambda layer. This last layer should probably be treated like all the other layers, where the Dense and its activation are joined and only the output of the joined layer is used for coverage. So the final Dense layer should be excluded from coverage as well, and only the Lambda one should be considered.
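To make that concrete, a rough sketch of how the final Dense layer could be skipped so that only the Lambda (softmax) output counts toward coverage; the helper name `relevant_layers` and the penultimate-Dense check are hypothetical, and `EXCLUDED_LAYER_PREFIXES` is the exclusion set from the sketch above:

```python
def relevant_layers(model):
    """Layers whose outputs should count toward neuron coverage.

    If the network ends with a Dense layer immediately followed by a
    Lambda (softmax) layer, the Dense is skipped as well, so the split
    output layer is only counted once.
    """
    layers = [l for l in model.layers
              if not any(p in l.name for p in EXCLUDED_LAYER_PREFIXES)]
    if (len(layers) >= 2
            and 'dense' in layers[-2].name
            and 'lambda' in layers[-1].name):
        layers.pop(-2)  # treat Dense + Lambda as a single output layer
    return layers
```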