Evolving-AI-Lab / fooling

Code base for "Deep Neural Networks are Easily Fooled" CVPR 2015 paper

Mnist experiment: generated images and prediction probabilities don't change over generations #13

Open TeslaH2O opened 7 years ago

TeslaH2O commented 7 years ago

I tried to run both the direct- and indirect-encoding MNIST examples using the default parameters for the evolutionary algorithm. The problem is that the prediction probabilities for each class do not change over generations, and consequently the generated images stay the same from generation 0 until the last. Any idea what I am doing wrong?

anguyen8 commented 7 years ago

@TeslaH2O : That seems like something you'd have to debug. But can you clarify whether the problem is in Caffe or in the EA? Does Caffe/LeNet give you the same probabilities even for different images, or does the EA fail to change the population from one generation to the next?
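To separate the two failure modes, the second one (the EA never varying the population) can be checked in isolation. The sketch below is hypothetical and does not use the actual Sferes operators; it just applies a simple per-pixel mutation, the kind of variation the direct encoding relies on, and verifies that individuals actually change between generations. If the analogous check fails inside the real run, the bug is in the variation/selection wiring rather than in Caffe.

```python
import random

def mutate(image, rate=0.1, scale=0.3):
    """Perturb each pixel with probability `rate` by a uniform random
    amount in [-scale, scale], clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.uniform(-scale, scale)))
            if random.random() < rate else p
            for p in image]

random.seed(0)
# A toy population of ten random 28x28 "MNIST" images.
population = [[random.random() for _ in range(28 * 28)] for _ in range(10)]

# Run a few generations of mutation only (no selection) and verify that
# every individual changes. If it does not, the variation operators are
# the problem, not the network evaluation.
for gen in range(5):
    new_population = [mutate(ind) for ind in population]
    changed = sum(a != b for a, b in zip(population, new_population))
    assert changed == len(population), "population did not change at gen %d" % gen
    population = new_population
```

The complementary check on the Caffe side is simply to forward two clearly different images through LeNet and confirm the class probabilities differ; if they do not, the integration (e.g. the input blob never being updated) is the culprit.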

TeslaH2O commented 7 years ago

I tried another image, and the Caffe/LeNet output does change... so I guess the problem is related to Sferes. I will attach my mmm folder with the results across generations, along with the .cpp files: mmm.tar.gz source_exp.tar.gz

Could you please take a look to see what I am doing wrong, or at least give me a hint about the possible cause?

anguyen8 commented 7 years ago

@TeslaH2O : I don't see anything strange in the attached code. Did you have to modify the source code somehow to make the Caffe/Sferes integration work, or did you run the code in this repo out of the box?

TeslaH2O commented 7 years ago

I downloaded Caffe from the innovation-engine GitHub repo to have the latest working version. I can safely say that I did not change anything in the Caffe/Sferes code to make the integration work. Unlike the tutorial, I used CMake to build Caffe. The generated images should differ from one generation to another, right? I don't see that happening, though. I have the same problem on the two different machines where I managed to get the toolbox working.