yusukekyokawa / paper_list


[2020]Informative Dropout for Robust Representation Learning: A Shape-bias Perspective #37

Open yusukekyokawa opened 4 years ago

yusukekyokawa commented 4 years ago

Bibliographic information

Paper link

http://arxiv.org/abs/2008.04254v1

Authors / Affiliations

Conference / Journal

Year

2020

What is this paper about?

Novelty

Method

Results

Comments

yusukekyokawa commented 4 years ago

Abstract

Convolutional Neural Networks (CNNs) are known to rely more on local texture than on global shape when making decisions. Recent work also indicates a close relationship between a CNN's texture bias and its robustness against distribution shift, adversarial perturbation, random corruption, etc. In this work, we attempt to improve various kinds of robustness universally by alleviating the CNN's texture bias. With inspiration from the human visual system, we propose a light-weight, model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias. Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture. Through extensive experiments, we observe enhanced robustness under various scenarios (domain generalization, few-shot classification, image corruption, and adversarial perturbation). To the best of our knowledge, this work is one of the earliest attempts to improve different kinds of robustness in a unified model, shedding new light on the relationship between shape bias and robustness, as well as on new approaches to trustworthy machine learning algorithms. Code is available at https://github.com/bfshi/InfoDrop.
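The two steps the abstract describes — estimating local self-information from patch statistics, then applying a Dropout-like mask whose rate depends on it — can be sketched in NumPy. This is a minimal illustration of the idea, not the authors' implementation (see the linked InfoDrop repo for that); the Gaussian kernel density estimator, the patch/neighborhood sizes, and the exponential drop-probability schedule are all assumptions made for the sketch.

```python
import numpy as np

def self_information_map(img, patch=3, radius=2, bandwidth=0.5):
    """Per-pixel self-information -log p(patch), where p(patch) is a Gaussian
    kernel density estimate over patches in a local neighborhood (an assumed
    estimator). Repetitive texture -> high p -> low information; edges and
    shape boundaries -> low p -> high information."""
    h, w = img.shape
    r = patch // 2
    pad = r + radius
    padded = np.pad(img, pad, mode='edge')
    info = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            ci, cj = i + pad, j + pad
            center = padded[ci - r:ci + r + 1, cj - r:cj + r + 1]
            # squared distances to every patch in the (2*radius+1)^2 window
            dists = [
                np.sum((center - padded[ci + di - r:ci + di + r + 1,
                                        cj + dj - r:cj + dj + r + 1]) ** 2)
                for di in range(-radius, radius + 1)
                for dj in range(-radius, radius + 1)
            ]
            p = np.mean(np.exp(-np.array(dists) / (2 * bandwidth ** 2)))
            info[i, j] = -np.log(p + 1e-12)
    return info

def info_dropout(feat, info, drop_rate=0.5, temperature=1.0, seed=0):
    """Dropout-like mask: positions with low self-information (texture-like)
    are dropped with higher probability, decorrelating the output from local
    texture. The exp(-info/T) schedule is an illustrative choice."""
    rng = np.random.default_rng(seed)
    weight = np.exp(-info / temperature)        # texture-like -> large weight
    p_drop = drop_rate * weight / weight.max()  # scale into [0, drop_rate]
    mask = (rng.random(feat.shape) >= p_drop).astype(feat.dtype)
    return feat * mask
```

On a toy image with a flat half and a vertical edge, edge pixels receive markedly higher self-information than the flat interior, so under `info_dropout` the shape-carrying positions survive masking more often than the uniform texture.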