This repo contains the source code for Invariant Grounding for Video Question Answering (CVPR 2022 Oral, Best Paper Finalist). In this work, we propose a new learning framework, Invariant Grounding for VideoQA (IGV), to ground the question-critical scene, whose causal relations with answers are invariant across different interventions on the complement. With IGV, VideoQA models are forced to shield the answering process from the negative influence of spurious correlations, which significantly improves their reasoning ability.
See requirements.txt for the other required packages. We use MSVD-QA as an example to help you get familiar with the code. Please download the dataset in dataset.zip and the pre-computed features here.
After downloading the data, please modify your data path and feature path in run.py.
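As a minimal sketch of that path edit (the variable names below are hypothetical — check run.py for the actual names and argument style used in this repo):

```python
import os

# Hypothetical path settings; match these to the names actually used in run.py.
dataset_dir = os.path.expanduser("~/data/MSVD-QA")    # where dataset.zip was extracted
feature_dir = os.path.join(dataset_dir, "features")   # where the pre-computed features live

# Fail early if the paths are wrong, before a long training run starts.
for path in (dataset_dir, feature_dir):
    if not os.path.isdir(path):
        print(f"warning: {path} does not exist yet")
```

A quick sanity check like this avoids the common failure mode where training crashes only after the data loader first touches a missing feature file.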
Simply run train.sh to reproduce the results in the paper. We have saved our checkpoint here (acc 41.42% on MSVD-QA) for your reference.
@InProceedings{Li_2022_CVPR,
author = {Li, Yicong and Wang, Xiang and Xiao, Junbin and Ji, Wei and Chua, Tat-Seng},
title = {Invariant Grounding for Video Question Answering},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2022},
pages = {2928-2937}
}
Our reproduction of the baseline methods is based on the respective official repositories and NExT-QA; we thank the authors for releasing their code.