Thanks for opening up this amazing trained model to everyone!
I was wondering whether there is a way to defend against adversarial examples or malicious training like this:
https://github.com/tjwei/play_nsfw
since its algorithm reverses the model's classification results (effectively attacking this model).
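For context, tools like that typically rely on gradient-based perturbations. A minimal sketch of the idea (not the linked repo's actual code) on a toy linear classifier, where the fast-gradient-sign trick nudges each input feature against the score gradient until the prediction flips:

```python
import numpy as np

# Toy linear "classifier": score > 0 means class "NSFW".
# (Hypothetical weights for illustration only.)
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return int(x @ w > 0)

def fgsm_flip(x, eps):
    # For a linear model, the gradient of the score w.r.t. x is just w,
    # so stepping against sign(w) lowers the score the fastest per feature.
    return x - eps * np.sign(w)

x = np.array([2.0, 0.1, 1.0])   # score = 2.3, classified as "NSFW"
x_adv = fgsm_flip(x, eps=1.0)   # score drops to -1.2, prediction flips

print(predict(x), predict(x_adv))
```

With a deep network the same step is taken along the sign of the loss gradient computed by backpropagation, which is why small, nearly invisible pixel changes can reverse the classification.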
Thanks again for this model!