richardtomsett opened 6 years ago
From previous review: Lei et al. (2016) developed a local explanation approach that reveals the most relevant sentences in sentiment prediction from text documents. Their method combines two modular components, a generator and an encoder, that are trained to operate together and learn candidate rationales for a prediction. Rationales are simply subsets of the words from the input text that satisfy two properties: the selected words form short, coherent pieces of text (e.g., phrases), and the selected words alone must yield the same prediction as the whole original text. For a given input text, the generator specifies a distribution over possible rationales; the encoder then maps a rationale to task-specific values. The rationale distribution that minimizes the regularized encoder loss is used as the explanation.
Rationalizing Neural Predictions (abstract):

Prediction without justification has limited applicability. As a remedy, we learn to extract pieces of input text as justifications -- rationales -- that are tailored to be short and coherent, yet sufficient for making the same prediction. Our approach combines two modular components, generator and encoder, which are trained to operate well together. The generator specifies a distribution over text fragments as candidate rationales and these are passed through the encoder for prediction. Rationales are never given during training. Instead, the model is regularized by desiderata for rationales. We evaluate the approach on multi-aspect sentiment analysis against manually annotated test cases. Our approach outperforms attention-based baseline by a significant margin. We also successfully illustrate the method on the question retrieval task.
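The "desiderata for rationales" mentioned above (short and coherent selections) can be sketched as a simple regularizer on a binary selection mask. This is a minimal toy illustration of the shape of the objective, not the paper's actual implementation: the function names, weights, and plain-Python masks here are illustrative stand-ins for the neural generator/encoder modules.

```python
def rationale_regularizer(mask, sparsity_weight=0.1, coherence_weight=0.05):
    """Penalize long rationales (sparsity) and scattered ones (coherence).

    `mask` is a binary selection z over input words: z_t = 1 keeps word t.
    Sparsity term: sum_t z_t, favoring short rationales.
    Coherence term: sum_t |z_t - z_{t-1}|, favoring contiguous phrases.
    Weights are illustrative hyperparameters, not values from the paper.
    """
    sparsity = sum(mask)
    coherence = sum(abs(a - b) for a, b in zip(mask, mask[1:]))
    return sparsity_weight * sparsity + coherence_weight * coherence


def rationale_cost(encoder_loss, mask, **reg_kwargs):
    # Overall cost the generator/encoder pair minimizes in expectation:
    # encoder prediction loss on the masked text plus the regularizer.
    return encoder_loss + rationale_regularizer(mask, **reg_kwargs)


# Two candidate rationales of equal length for a 6-word review,
# assumed here to produce the same encoder loss (0.2):
contiguous = [0, 1, 1, 1, 0, 0]   # a single phrase
scattered = [1, 0, 1, 0, 1, 0]    # same length, but fragmented
print(rationale_cost(0.2, contiguous))  # lower cost: fewer 0/1 transitions
print(rationale_cost(0.2, scattered))
```

With equal sparsity and encoder loss, the coherence term alone makes the contiguous selection cheaper, which is how the regularizer steers the generator toward phrase-like rationales.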
Bibtex:

@misc{1606.04155,
  Author = {Tao Lei and Regina Barzilay and Tommi Jaakkola},
  Title = {Rationalizing Neural Predictions},
  Year = {2016},
  Eprint = {arXiv:1606.04155},
}