Open wanghs09 opened 7 years ago
This really depends on the total number of parameters in the network, and on whether you assume access to the exact class probabilities or only the predicted class labels. In some of our experiments, we achieved high extraction accuracy (>98%) with 10 times fewer predictions than there were parameters in the network. My guess is that with a much smaller number of predictions you could still get non-trivial extraction accuracy (maybe around 80%), but we haven't tried that on deep networks.
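As a rough illustration of the scale involved, here is a sketch of the arithmetic behind the "10 times fewer predictions than parameters" rule of thumb. The layer sizes and the helper name `mlp_param_count` are hypothetical, chosen only to match the 200-neurons-per-layer example from the question:

```python
def mlp_param_count(layer_sizes):
    """Count weights + biases of a fully connected MLP.

    layer_sizes: e.g. [inputs, hidden1, hidden2, ..., outputs]
    Each layer contributes n_in * n_out weights plus n_out biases.
    """
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical network: 100 inputs, two hidden layers of 200, 10 classes.
params = mlp_param_count([100, 200, 200, 10])
print(params)            # 62410 parameters

# Query budget at ~10x fewer predictions than parameters:
print(params // 10)      # 6241 predictions
```

So even for a modest two-hidden-layer MLP, the budget implied by that rule is on the order of thousands of queries, which is why the answer depends so strongly on the total parameter count.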
Thanks, Florian! Actually, 90%+ extraction accuracy without the training data is surprisingly good. I just want to make sure I understand you correctly :D
Dear Florian,
May I ask how many predictions the code needs to extract a black-box ANN (artificial neural network) or MLP from your paper? Suppose there are 200 neurons per layer; the total number of parameters is quite large.
Looking forward to your reply! Thanks a lot!