bogdan-kulynych / textfool

Plausible looking adversarial examples for text classification
MIT License
92 stars 32 forks

Fooling success rate is zero #1

Open kgramm9026 opened 6 years ago

kgramm9026 commented 6 years ago

Hi guys, when I run the script (run_demo) with the model file generated by run_training, I get:

Model accuracy on adversarial examples: 0.6645
Fooling success rate: 0.0

Most of the generated adversarial examples are identical to the original examples. I haven't changed any of the default flags in the run_demo script. Please let me know if I am doing something incorrectly.
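For reference, here is a minimal sketch of how one could check how many of the generated examples actually differ from their originals; the two lists are placeholders for whatever run_demo produces, not textfool's actual data structures:

```python
# Rough diagnostic: count how many "adversarial" examples actually
# differ from their source texts. The lists below are placeholders for
# whatever run_demo produces; textfool's real outputs may be structured
# differently.
original_texts = ["the movie was great", "terrible acting throughout"]
adversarial_texts = ["the movie was great", "terrible acting throughout"]

changed = sum(
    orig.strip() != adv.strip()
    for orig, adv in zip(original_texts, adversarial_texts)
)
print(f"{changed}/{len(original_texts)} examples were actually modified")
```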

minfeixia commented 5 years ago

> Hi guys, when I run the script (run_demo) with the model file generated by run_training, I get:
>
> Model accuracy on adversarial examples: 0.6645
> Fooling success rate: 0.0
>
> Most of the generated adversarial examples are identical to the original examples. I haven't changed any of the default flags in the run_demo script. Please let me know if I am doing something incorrectly.

I have the same problem. Were you able to solve it?

kgramm9026 commented 5 years ago

@minfeixia I haven't had any success so far.

bogdan-kulynych commented 5 years ago

Folks, thanks for your interest. It seems that a number of dependencies (I suspect spacy is one of the culprits) have broken some of the code in this repo. I am not sure, but running on an older version of Ubuntu might help.
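In the meantime, a quick way to see which spacy release is installed and compare it against what the repo originally targeted; the suggestion that a pre-2.0 version is what textfool expects is an assumption, not something that has been verified:

```python
# Print the installed spacy version. The idea that a pre-2.0 release is
# needed is an assumption based on the age of the repo, not a verified
# requirement.
import spacy
print(spacy.__version__)
```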

We are currently working on a better, extended, and more generic version of the textfool framework. Since my time is very limited right now, I would rather spend it on the new, improved version, which should be out soon-ish.