The author says:

> This post is targeted at people who already have significant experience with deep learning (e.g. people who have read chapters 1 through 8 of the book). We assume a lot of pre-existing knowledge.

So I will revisit this article when I acquire more knowledge.
Article
Notes
"Adversarial examples" could easily trick deep learning models to misclassification since they don't have any understanding of input as humans do
Humans can do "extreme generalization", while deep nets can only map inputs to outputs ("local generalization"), which fails when inputs differ even slightly from what was seen at training time.
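A toy sketch (my own, not from the article) of what "local generalization" looks like in practice: a small network fit to y = x² on [-1, 1] interpolates well inside that range, but its predictions degrade quickly just outside it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x_train = torch.linspace(-1, 1, 200).unsqueeze(1)  # training inputs in [-1, 1]
y_train = x_train ** 2

net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x_train), y_train)
    loss.backward()
    opt.step()

# Inside the training range the fit is close; outside it diverges from x**2.
x_test = torch.tensor([[0.5], [1.5], [3.0]])
print("predicted:", net(x_test).detach().squeeze().tolist())
print("true     :", (x_test ** 2).squeeze().tolist())
```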