Microsoft COCO Caption Evaluation
Evaluation code for MS COCO caption generation.
No longer maintained.
The SPICE metric has been incorporated into the official COCO caption evaluation code, so this repo is no longer maintained.
Requirements
Files
./
- cocoEvalCapDemo.py (demo script)
./annotation
- captions_val2014.json (MS COCO 2014 caption validation set)
- Visit MS COCO download page for more details.
./results
- captions_val2014_fakecap_results.json (an example of fake results for running the demo)
- Visit MS COCO format page for more details.
./pycocoevalcap: The folder where all evaluation code is stored.
- eval.py: contains the COCOEvalCap class used to evaluate results on COCO (see the usage sketch after this list)
- tokenizer: Python wrapper of the Stanford CoreNLP PTBTokenizer
- bleu: BLEU evaluation code
- meteor: METEOR evaluation code
- rouge: ROUGE-L evaluation code
- cider: CIDEr evaluation code
- spice: SPICE evaluation code
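Each metric folder exposes a scorer with a compute_score(gts, res) method, and the tokenizer prepares the caption strings those scorers consume. Below is a minimal sketch of that interface using BLEU on made-up captions; it assumes the repository root is on the Python path and that Java is available for the PTBTokenizer, and the image id and captions are purely illustrative.

```python
from pycocoevalcap.tokenizer.ptbtokenizer import PTBTokenizer
from pycocoevalcap.bleu.bleu import Bleu

# Toy data: captions keyed by image id (id and captions are made up).
# gts holds the reference captions, res the single generated caption per image.
gts = {1: [{'caption': 'a dog runs across the grass'},
           {'caption': 'a brown dog running in a field'}]}
res = {1: [{'caption': 'a dog running through a field'}]}

# The PTBTokenizer (a wrapper around Stanford CoreNLP, requires Java) turns the
# annotation dicts into lists of tokenized caption strings.
tokenizer = PTBTokenizer()
gts = tokenizer.tokenize(gts)
res = tokenizer.tokenize(res)

# Bleu(4) reports BLEU-1 through BLEU-4 averaged over all evaluated images.
score, _ = Bleu(4).compute_score(gts, res)
print('BLEU-1..4: %s' % (score,))
```

COCOEvalCap wires the scorers together in exactly this way, averaging each metric over the evaluated images.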
Setup
- You will first need to download the Stanford CoreNLP 3.6.0 code and models for use by SPICE. To do this, run:
./get_stanford_models.sh
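Once the CoreNLP models are in place, the full evaluation can be run the way cocoEvalCapDemo.py does. The sketch below assumes pycocotools is installed and that the annotation and results files sit in the folders listed above; COCOEvalCap computes BLEU, METEOR, ROUGE-L, CIDEr, and SPICE and stores the overall scores in its eval dictionary.

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

# Ground-truth captions and generated-caption results (paths follow the listing above).
coco = COCO('annotation/captions_val2014.json')
coco_res = coco.loadRes('results/captions_val2014_fakecap_results.json')

coco_eval = COCOEvalCap(coco, coco_res)
# Restrict evaluation to the images that actually appear in the results file.
coco_eval.params['image_id'] = coco_res.getImgIds()
coco_eval.evaluate()

# Print the overall score for each metric.
for metric, score in coco_eval.eval.items():
    print('%s: %.3f' % (metric, score))
```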
References
Developers
- Xinlei Chen (CMU)
- Hao Fang (University of Washington)
- Tsung-Yi Lin (Cornell)
- Ramakrishna Vedantam (Virginia Tech)
Acknowledgement
- David Chiang (University of Notre Dame)
- Michael Denkowski (CMU)
- Alexander Rush (Harvard University)