This repository contains a PyTorch implementation of [Training for Diversity in Image Paragraph Captioning](). Our code is based on Ruotian Luo's implementation of Self-Critical Sequence Training for Image Captioning, available here.
To clone this repository with submodules, use:

```bash
git clone --recurse-submodules https://github.com/lukemelas/image-paragraph-captioning
```

If training from scratch, you also need `spacy`; install its English tokenizer with `python -m spacy download en`.

Download the captions by running `download.sh` in `data/captions`, then preprocess the text:

```bash
cd scripts && python prepro_text.py
```
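`prepro_text.py` tokenizes the raw paragraphs before the vocabulary is built. The script itself relies on `spacy`; purely as an illustration of what tokenization produces (a hypothetical `tokenize` helper, not the repo's code):

```python
import re

def tokenize(paragraph):
    """Lowercase a paragraph and split it into word and punctuation
    tokens. A hypothetical stand-in for the spacy-based tokenization
    performed by scripts/prepro_text.py."""
    # \w+ matches words/numbers; [^\w\s] matches punctuation marks.
    return re.findall(r"\w+|[^\w\s]", paragraph.lower())

print(tokenize("A man is walking. He carries a red umbrella."))
# ['a', 'man', 'is', 'walking', '.', 'he', 'carries', 'a', 'red', 'umbrella', '.']
```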
Next, build the vocabulary and caption labels (infrequent words are replaced with the `UNK` token) with the following command. Note that image/vocab information is stored in `data/paratalk.json` and caption data is stored in `data/paratalk_label.h5`:

```bash
python scripts/prepro_labels.py --input_json data/captions/para_karpathy_format.json --output_json data/paratalk.json --output_h5 data/paratalk
```
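The `UNK` replacement amounts to thresholding word counts. A minimal sketch of the idea (the `min_count` threshold here is illustrative, not necessarily the script's default):

```python
from collections import Counter

def build_vocab(tokenized_captions, min_count=5):
    """Keep words that appear at least `min_count` times; all other
    words will be emitted as the UNK token."""
    counts = Counter(w for caption in tokenized_captions for w in caption)
    return {w for w, c in counts.items() if c >= min_count}

def encode(caption, vocab):
    """Map out-of-vocabulary words to UNK."""
    return [w if w in vocab else "UNK" for w in caption]

caps = [["a", "dog", "runs"], ["a", "dog", "sleeps"]]
vocab = build_vocab(caps, min_count=2)        # keeps 'a' and 'dog'
print(encode(["a", "dog", "barks"], vocab))   # ['a', 'dog', 'UNK']
```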
See `scripts/prepro_captions.py` for details of the caption preprocessing. You can comment out `(Spice(), "SPICE")` in `coco-caption/pycocoevalcap/eval.py` to disable Spice testing.

Next, preprocess the n-grams for self-critical training:

```bash
python scripts/prepro_ngrams.py --input_json data/captions/para_karpathy_format.json --dict_json data/paratalk.json --output_pkl data/para_train --split train
```
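`prepro_ngrams.py` precomputes n-gram statistics over the training captions so that CIDEr rewards can be evaluated quickly during self-critical training. Conceptually, this means counting document frequencies of n-grams, which CIDEr uses for tf-idf weighting (a simplified pure-Python sketch, not the script itself):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def document_frequencies(captions, max_n=4):
    """Count in how many reference captions each n-gram (n = 1..max_n)
    appears at least once."""
    df = Counter()
    for caption in captions:
        seen = set()
        for n in range(1, max_n + 1):
            seen.update(ngrams(caption, n))
        df.update(seen)
    return df

caps = [["a", "dog", "runs"], ["a", "cat", "runs"]]
df = document_frequencies(caps)
print(df[("a",)], df[("dog",)], df[("a", "dog")])  # 2 1 1
```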
Download `parabu_fc` and `parabu_att` from here into `data/bu_data`, then run `scripts/make_bu_data.py` to convert the image features to `.npz` files for faster data loading.

As explained in Self-Critical Sequence Training, training occurs in two steps: the model is first trained with a cross-entropy loss and then fine-tuned with self-critical training.
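In the self-critical step, each sampled caption's reward is baselined by the reward of the greedily decoded caption. A schematic of the loss (illustrative numbers; the real implementation works on CIDEr scores and per-token log-probabilities in PyTorch):

```python
def self_critical_loss(sample_logprob, sample_reward, greedy_reward):
    """REINFORCE with the greedy caption's reward as a baseline:
    loss = -(r_sample - r_greedy) * log p(sample).
    Minimizing this loss raises the log-probability of samples that
    beat the greedy baseline and lowers it for samples that fall short.
    """
    advantage = sample_reward - greedy_reward
    return -advantage * sample_logprob

# Sample scores above the greedy baseline -> positive advantage, so
# gradient descent increases its log-probability.
loss_good = self_critical_loss(sample_logprob=-2.0, sample_reward=1.2, greedy_reward=0.8)
# Sample scores below the baseline -> negative advantage, so gradient
# descent decreases its log-probability.
loss_bad = self_critical_loss(sample_logprob=-2.0, sample_reward=0.5, greedy_reward=0.8)
print(loss_good, loss_bad)
```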
Training hyperparameters may be viewed with `python train.py --help`. A reasonable set of hyperparameters is provided in `train_xe.sh` (for cross-entropy) and `train_sc.sh` (for self-critical).
First, train with cross-entropy:

```bash
mkdir log_xe
./train_xe.sh
```
You can then copy the model:

```bash
./scripts/copy_model.sh xe sc
```
And train with self-critical:

```bash
mkdir log_sc
./train_sc.sh
```
You can download a pretrained captioning model here.
In case you would like to cite our paper/code (no obligation at all):
```bibtex
@article{melaskyriazi2018paragraph,
  title={Training for diversity in image paragraph captioning},
  author={Melas-Kyriazi, Luke and Rush, Alexander and Han, George},
  journal={EMNLP},
  year={2018}
}
```
And Ruotian Luo's code, on which this repo is built:
```bibtex
@article{luo2018discriminability,
  title={Discriminability objective for training descriptive captions},
  author={Luo, Ruotian and Price, Brian and Cohen, Scott and Shakhnarovich, Gregory},
  journal={CVPR},
  year={2018}
}
```