One-sentence summary:
For NMT, this paper proposes layer-wise relevance propagation (LRP) to compute the contribution of each contextual word to arbitrary hidden states in the attention-based encoder-decoder framework (i.e., to quantify how much each word contributes to a given hidden state).
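As a quick reference, here is a minimal sketch of the generic epsilon-LRP redistribution rule for a single linear layer, assuming NumPy; `lrp_linear` and the toy tensors are illustrative names, not the paper's code. The paper applies this kind of rule layer by layer through the attention-based encoder-decoder to trace relevance back to the contextual words.

```python
import numpy as np

def lrp_linear(x, w, r_out, eps=1e-6):
    """Epsilon-LRP for one linear layer: redistribute the relevance
    r_out of each output unit back onto the inputs, in proportion to
    each input's weighted contribution z_ij = x_i * w_ij."""
    z = x[:, None] * w                      # (in, out) contributions
    zsum = z.sum(axis=0)                    # total contribution per output
    denom = zsum + eps * np.where(zsum >= 0, 1.0, -1.0)  # stabilizer
    return (z / denom * r_out).sum(axis=1)  # relevance per input

# Toy usage: relevance of 3 inputs to the first of 2 hidden units
x = np.array([0.5, -1.0, 2.0])
w = np.random.default_rng(0).normal(size=(3, 2))
r_hidden = np.array([1.0, 0.0])             # probe hidden unit 0
print(lrp_linear(x, w, r_hidden))
```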
Resources:
Paper information:
Notes:
Model diagram:
Results:
Papers to read next: