Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction, NeurIPS 2020. [paper]
The Context and Motivation sections of this paper are very well written, clearly explaining why current models adapt poorly. Starting from the manifold hypothesis, it designs a new learning framework.
The paper points out two limitations of current model learning:
1) It aims only to predict the labels $y$ even if they might be mislabeled. Empirical studies show that deep networks, used as a "black box," can even fit random labels.
2) With such an end-to-end data fitting, despite plenty of empirical efforts in trying to interpret the so-learned features, it is not clear to what extent the intermediate features learned by the network capture the intrinsic structures of the data that make meaningful classification possible in the first place.
These issues mean that the features learned by such models typically lack interpretability, and generalizability, robustness, and transferability cannot be guaranteed. The goal of this paper is to reformulate the learning objective so that the label $y$ serves only as auxiliary information helping the model learn more robust features.
The precise geometric and statistical properties of the learned features are also often obscured, which leads to the lack of interpretability and subsequent performance guarantees (e.g., generalizability, transferability, and robustness, etc.) in deep learning. Therefore, the goal of this paper is to address such limitations of current learning frameworks by reformulating the objective towards learning explicitly meaningful representations for the data $x$.
The paper also mentions the disadvantage of existing model learning on multi-modal data:
When the data contain complicated multi-modal structures, naive heuristics or inaccurate metrics may fail to capture all internal subclass structures or to explicitly discriminate among them for classification or clustering purposes.
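As a concrete anchor for "reformulating the objective": the paper maximizes the coding rate reduction of the learned features $Z = f(X, \theta) \in \mathbb{R}^{d \times m}$, where the label-derived diagonal membership matrices $\mathbf{\Pi} = \{\Pi_j\}_{j=1}^{k}$ are exactly the auxiliary role of $y$ noted above:

$$
\Delta R(Z, \mathbf{\Pi}, \epsilon) = R(Z, \epsilon) - R_c(Z, \epsilon \mid \mathbf{\Pi}),
$$

$$
R(Z, \epsilon) = \frac{1}{2} \log\det\!\Big(I + \frac{d}{m\epsilon^2} Z Z^{\top}\Big), \qquad
R_c(Z, \epsilon \mid \mathbf{\Pi}) = \sum_{j=1}^{k} \frac{\operatorname{tr}(\Pi_j)}{2m} \log\det\!\Big(I + \frac{d}{\operatorname{tr}(\Pi_j)\,\epsilon^2} Z \Pi_j Z^{\top}\Big).
$$

A minimal NumPy sketch of this objective, just to make the formula concrete (function names and the $\epsilon$ default here are illustrative, not taken from the authors' released code):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps): rate needed to code the d x m feature matrix Z up to distortion eps."""
    d, m = Z.shape
    return 0.5 * np.linalg.slogdet(np.eye(d) + d / (m * eps**2) * Z @ Z.T)[1]

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - R_c(Z | labels): expand the whole feature set, compress each class."""
    d, m = Z.shape
    Rc = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]  # columns of class c (the role of Pi_j)
        mc = Zc.shape[1]
        Rc += (mc / (2 * m)) * np.linalg.slogdet(
            np.eye(d) + d / (mc * eps**2) * Zc @ Zc.T
        )[1]
    return coding_rate(Z, eps) - Rc
```

Maximizing $\Delta R$ pushes features of different classes apart (large $R$) while compressing each class toward a low-dimensional subspace (small $R_c$), which is precisely how the paper handles the multi-modal subclass structures quoted above.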
On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence
On the Principles of Parsimony and Self-Consistency for the Emergence of Intelligence. [paper]
Motivation
Principles
The paper proposes two basic principles, 1) Parsimony and 2) Self-Consistency, aimed at answering two fundamental questions about intelligence:
Parsimony
Taking the modeling of visual data as an example, the goal of parsimony is to find a transformation $f$ that satisfies the following requirements: