Official repository for A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge.
Links: [Paper] [Website] [Leaderboard]
The Visual Question Answering (VQA) task aspires to provide a meaningful testbed for the development of AI models that can jointly reason over visual and natural language inputs. Despite a proliferation of VQA datasets, this goal is hindered by a set of common limitations. These include a reliance on relatively simplistic questions that are repetitive in both concepts and linguistic structure, little world knowledge needed outside of the paired image, and limited reasoning required to arrive at the correct answer. We introduce A-OKVQA, a crowdsourced dataset composed of a diverse set of about 25K questions requiring a broad base of commonsense and world knowledge to answer. In contrast to the existing knowledge-based VQA datasets, the questions generally cannot be answered by simply querying a knowledge base, and instead require some form of commonsense reasoning about the scene depicted in the image. We demonstrate the potential of this new dataset through a detailed analysis of its contents and baseline performance measurements over a variety of state-of-the-art vision–language models.
git clone --single-branch --recurse-submodules https://github.com/allenai/aokvqa.git
cd aokvqa
export PYTHONPATH=.
conda env create --name aokvqa
conda activate aokvqa
export AOKVQA_DIR=./datasets/aokvqa/
mkdir -p ${AOKVQA_DIR}
curl -fsSL https://prior-datasets.s3.us-east-2.amazonaws.com/aokvqa/aokvqa_v1p0.tar.gz | tar xvz -C ${AOKVQA_DIR}
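Later steps (loading images via get_coco_path and extracting CLIP features) also expect COCO 2017 images under a ${COCO_DIR} directory. A minimal preparation sketch, assuming you want the standard COCO 2017 train/val/test image zips, is:

export COCO_DIR=./datasets/coco/
mkdir -p ${COCO_DIR}

for split in train val test; do
    wget "http://images.cocodataset.org/zips/${split}2017.zip"
    unzip "${split}2017.zip" -d ${COCO_DIR} && rm "${split}2017.zip"
done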
Loading our dataset is easy! Just grab our load_aokvqa.py file and refer to the following code.
import os
from load_aokvqa import load_aokvqa, get_coco_path

aokvqa_dir = os.getenv('AOKVQA_DIR')
train_dataset = load_aokvqa(aokvqa_dir, 'train')  # also 'val' or 'test'
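Each loaded example is a dictionary of annotations. Continuing from the snippet above, the following sketch shows one way to inspect an example; the field names and the get_coco_path signature reflect our annotation format (treat them as a sketch if your copy differs), and coco_dir is assumed to have been set during data preparation.

coco_dir = os.getenv('COCO_DIR')  # assumed: set during data preparation above

example = train_dataset[0]
print(example['question'])
print(example['choices'])                                 # multiple-choice options
print(example['choices'][example['correct_choice_idx']])  # the correct option
print(example['direct_answers'])                          # free-form answers
print(example['rationales'][0])                           # one supporting rationale

# Resolve the associated COCO image on disk.
print(get_coco_path('train', example['image_id'], coco_dir))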
Please prepare predictions_{split}.json files (for split: {val,test}) in the format below. You may omit either the multiple_choice or direct_answer field if you only want to evaluate one setting.
{
    "<question_id>": {
        "multiple_choice": "<prediction>",
        "direct_answer": "<prediction>"
    }
}
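As a sketch of how such a file can be produced (the placeholder answers below stand in for your model's actual predictions, and the annotation field names are assumptions based on our format):

import json
import os
from load_aokvqa import load_aokvqa

aokvqa_dir = os.getenv('AOKVQA_DIR')
val_dataset = load_aokvqa(aokvqa_dir, 'val')

predictions = {}
for example in val_dataset:
    predictions[example['question_id']] = {
        'multiple_choice': example['choices'][0],  # placeholder: replace with your model's chosen option
        'direct_answer': example['choices'][0]     # placeholder: replace with your model's free-form answer
    }

with open('./predictions_val.json', 'w') as f:
    json.dump(predictions, f)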
You can run evaluation on the validation set as follows.
python evaluation/eval_predictions.py --aokvqa-dir ${AOKVQA_DIR} --split val --preds ./predictions_val.json
You may submit predictions_test.json to the leaderboard.
We provide all code and pretrained models necessary to replicate our experiments for Large-Scale Pretrained Models (sec. 5.2) and Rationale Generation (sec. 5.3).
export FEATURES_DIR=./features/
mkdir -p ${FEATURES_DIR}
You can compute CLIP features for our vocabulary and dataset; these are the features most commonly used by our other experiments.
python data_scripts/encode_vocab_clip.py --vocab ${AOKVQA_DIR}/large_vocab_train.csv --model-type ViT-B/32 --out ${FEATURES_DIR}/clip-ViT-B-32_large_vocab.pt
for split in train val test; do
    python data_scripts/extract_clip_features.py --aokvqa-dir ${AOKVQA_DIR} --coco-dir ${COCO_DIR} --split ${split} --model-type ViT-B/32 --out ${FEATURES_DIR}/clip-ViT-B-32_${split}.pt
done
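To sanity-check the extracted features, you can load them back with torch. The structure of the saved object (we assume a mapping keyed by question ID) should be confirmed against extract_clip_features.py; treat this as a sketch.

import os
import torch

features_dir = os.getenv('FEATURES_DIR', './features/')
val_features = torch.load(os.path.join(features_dir, 'clip-ViT-B-32_val.pt'))

# Inspect the container before building on it; the keying scheme is an assumption.
print(type(val_features), len(val_features))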
export LOG_DIR=./logs/
export PREDS_DIR=./predictions/
export PT_MODEL_DIR=./pretrained_models/
mkdir -p ${LOG_DIR} ${PREDS_DIR} ${PT_MODEL_DIR}
We have included instructions for replicating each of our experiments (see README.md files below).
All Python scripts should be run from the root of this repository. Please be sure to first run the installation and data preparation as directed above.
For each experiment, we follow this prediction file naming scheme: {model-name}_{split}-{setting}.json (e.g. random-weighted_val-mc.json or random-weighted_test-da.json). As examples in these README files, we produce predictions on the validation set.
We unify predictions for each split before evaluation. (You can omit either the --mc or --da prediction file if you only want to evaluate one setting.)
python evaluation/prepare_predictions.py --aokvqa-dir ${AOKVQA_DIR} --split val --mc ./predictions_val-mc.json --da ./predictions_val-da.json --out ./predictions_val.json
# repeat for test split ...
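For reference, the analogous command for the test split, assuming your prediction files follow the naming scheme above, would be:

python evaluation/prepare_predictions.py --aokvqa-dir ${AOKVQA_DIR} --split test --mc ./predictions_test-mc.json --da ./predictions_test-da.json --out ./predictions_test.json

The resulting predictions_test.json is the file you submit to the leaderboard, as described above.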