A lightweight, scalable, and general framework for visual question answering research

OpenVQA


OpenVQA is a general platform for visual question answering (VQA) research that implements state-of-the-art approaches (e.g., BUTD, MFH, BAN, MCAN, and MMNasNet) on several benchmark datasets, including VQA-v2, GQA, and CLEVR. Support for more methods and datasets will be added continuously.

Documentation

Get started and learn more about OpenVQA here.

Benchmark and Model Zoo

Supported methods and benchmark datasets are listed below. Results and trained models are available in the MODEL ZOO.

- Supported methods: BUTD, MFB, MFH, BAN, MCAN, MMNasNet
- Supported datasets: VQA-v2, GQA, CLEVR

News & Updates

- v0.7.5 (30/12/2019)
- v0.7 (29/11/2019)
- v0.6 (18/09/2019)
- v0.5 (31/07/2019)

License

This project is released under the Apache 2.0 license.

Contact

This repo is currently maintained by Zhou Yu (@yuzcccc) and Yuhao Cui (@cuiyuhao1996).

Citation

If this repository is helpful for your research, or you refer to the results provided in the model zoo, please cite the work using the following BibTeX entry:

```
@misc{yu2019openvqa,
  author = {Yu, Zhou and Cui, Yuhao and Shao, Zhenwei and Gao, Pengbing and Yu, Jun},
  title = {OpenVQA},
  howpublished = {\url{https://github.com/MILVLG/openvqa}},
  year = {2019}
}
```