gongzhimin opened 3 years ago
I have the same issue with your code. I also dug a little deeper into the code and found some problems, and I wonder if I am misinterpreting something.
`optimize.py`, which I assume implements the core algorithm of the paper, contains part of the triplet loss (Equation 10 in your paper). However, I could not find the corresponding computation of $M$ (Equation 11) in `optimizer.py`.
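For reference, here is what I understood Equation 10 to look like: a minimal NumPy sketch of the *standard* triplet-loss formulation, max(d(a,p) − d(a,n) + M, 0). The function name, distance choice, and margin value are my own assumptions, not taken from the repository, and M here may not match what Equation 11 actually defines.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss (illustrative, not the repo's code):
    pull the positive toward the anchor, push the negative at least
    `margin` farther away. `margin` plays the role I read M (Eq. 11)
    as playing, which may differ from the paper."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared L2 distance anchor-positive
    d_neg = np.sum((anchor - negative) ** 2)  # squared L2 distance anchor-negative
    return max(d_pos - d_neg + margin, 0.0)
```

If the repository computes only `d_pos - d_neg` somewhere in `optimize.py`, the missing piece would be exactly this margin/clamping step.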
Also, the code is a mixture of Python 2 and Python 3, which makes it harder to configure the proper running environment.
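As a workaround for the mixed Python 2/3 code, a common first step (my suggestion, not something the repository does) is to add `__future__` imports so the Python 2-style modules adopt Python 3 semantics, then fix the remaining incompatible calls by hand:

```python
# Illustrative shim, not from the repository: under Python 2 these imports
# make `print` a function and `/` true division, matching Python 3;
# under Python 3 they are harmless no-ops.
from __future__ import print_function, division

# Same result on both interpreters once the shim is in place.
print(7 / 2)  # true division -> 3.5, instead of Python 2's floor result 3
```

Calls like `dict.iteritems()` or `xrange()` still need manual replacement afterwards.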
@JiminKung It seems that the author links, in `featurefool`'s README, to code from another paper that describes how to generate adversarial examples.
@HMiao-Ian Got it, thank you.
Hi,
you proposed a novel idea for stealing cloud-based models with great performance [1]. I'm very interested and am trying to reproduce the results obtained in [1], but two problems are blocking me.
1. I'm a newcomer and got stuck installing Caffe (>_<). Could you kindly share more details about the dependencies of this project? (I think it would be easy to export the requirements list on your machine with `conda list -e > requirements.txt` or `pip freeze > requirements.txt`.)
2. I wonder if you forgot to upload the generator of adversarial examples, `FeatureFool`, to this repository (since there is only a README file in the folder `featurefool`), or if the functionality is implemented by other code and I've overlooked it.

Looking forward to your reply.
[1] CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples (https://www.ndss-symposium.org/wp-content/uploads/2020/02/24178.pdf)
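For what it's worth, the export-and-reinstall workflow I had in mind looks like this (a sketch assuming a plain pip setup; conda users would use `conda env export` instead):

```shell
# On the author's machine: record exact installed package versions.
python -m pip freeze > requirements.txt

# On a reproducer's machine: install those same versions.
python -m pip install -r requirements.txt
```

Note this only covers Python packages; Caffe's system-level dependencies (BLAS, protobuf, CUDA, etc.) would still need to be documented separately.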