Check out e-SNLI-VE, our new dataset of natural language explanations for vision-language understanding, and our e-ViL benchmark for evaluating natural language explanations: e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks, accepted at ICCV 2021.
New work on e-SNLI: Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations, accepted as a short paper at ACL 2020.
New dataset of visual-textual entailment with natural language explanations taken from e-SNLI: e-SNLI-VE-2.0: Corrected Visual-Textual Entailment with Natural Language Explanations, presented at the IEEE CVPR Workshop on Fair, Data Efficient and Trusted Computer Vision, 2020.
If you are also interested in feature-based explanations besides natural language explanations, check out our new works on:
The train set is split into two files due to GitHub's file-size restrictions; please simply merge them.
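Merging the two splits amounts to concatenating the CSV files while keeping only one header row. A minimal sketch using the standard library (the file names passed in the example are assumptions; substitute the actual split file names from this repository):

```python
import csv

def merge_train_splits(part_paths, out_path):
    """Concatenate train-set CSV splits into a single file.

    The header row is written once, from the first split; later splits
    contribute only their data rows.
    """
    header_written = False
    with open(out_path, "w", newline="") as out:
        writer = csv.writer(out)
        for path in part_paths:
            with open(path, newline="") as f:
                reader = csv.reader(f)
                header = next(reader)
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                writer.writerows(reader)

# Hypothetical usage (adjust file names to the ones in this repo):
# merge_train_splits(["esnli_train_1.csv", "esnli_train_2.csv"],
#                    "esnli_train.csv")
```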
Clarification on the two potentially confusing headers:
Please ignore the SentenceHighlighted fields and instead retrieve the highlighted words from the Sentencemarked fields, as stated above.
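As a sketch of retrieving highlighted words from a marked-sentence field, assuming the convention that annotators' highlighted words are wrapped in asterisks (verify this against the actual CSV data before relying on it):

```python
import re

def highlighted_words(marked_sentence):
    """Extract highlighted words from a marked-sentence field.

    Assumes highlighted words are wrapped in asterisks, e.g.
    "A *dog* is *running* outside." yields ["dog", "running"].
    """
    # Non-greedy match so each *...* pair is captured separately.
    return re.findall(r"\*(.+?)\*", marked_sentence)
```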
Trained models can be downloaded at:
If you use this dataset or code in your work, please cite our paper:
@incollection{NIPS2018_8163,
title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
booktitle = {Advances in Neural Information Processing Systems 31},
editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
pages = {9539--9549},
year = {2018},
publisher = {Curran Associates, Inc.},
url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
}