
# Training for Diversity in Image Paragraph Captioning

This repository contains a PyTorch implementation of [Training for Diversity in Image Paragraph Captioning]() (EMNLP 2018). Our code is based on Ruotian Luo's implementation of Self-Critical Sequence Training for Image Captioning, available here.

## Requirements

If training from scratch, you also need:

To clone this repository with submodules, use:
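
For example, assuming the standard GitHub URL implied by the repository name:

```bash
# --recursive also fetches the repository's submodules
git clone --recursive https://github.com/lukemelas/image-paragraph-captioning.git
```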

## Train your own network

### Download and preprocess captions

### Train the network

As explained in Self-Critical Sequence Training, training occurs in two steps:

  1. The model is first trained with a cross-entropy loss (~30 epochs)
  2. The model is then fine-tuned with a self-critical loss (30+ epochs); a sketch of this objective is given below
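
As a rough illustration of step 2, the self-critical loss is a REINFORCE-style objective that uses the model's own greedy decode as a reward baseline. The snippet below is a minimal sketch under that assumption, not this repository's exact implementation; the function name is a hypothetical placeholder, and the per-sentence rewards (e.g. CIDEr scores) are assumed to be computed elsewhere:

```python
import torch

def self_critical_loss(
    sample_logprobs: torch.Tensor,  # (batch, seq_len): log-probs of sampled words
    sample_reward: torch.Tensor,    # (batch,): e.g. CIDEr of each sampled caption
    greedy_reward: torch.Tensor,    # (batch,): e.g. CIDEr of each greedy caption
    mask: torch.Tensor,             # (batch, seq_len): 1 for real tokens, 0 for padding
) -> torch.Tensor:
    # Advantage: how much better sampling did than the greedy baseline.
    advantage = (sample_reward - greedy_reward).unsqueeze(1)  # (batch, 1)
    # REINFORCE-style loss: raise the log-probs of sampled words when the
    # advantage is positive, lower them when it is negative.
    loss = -advantage * sample_logprobs                       # (batch, seq_len)
    # Average over real (non-padding) tokens only.
    return (loss * mask).sum() / mask.sum()
```

Because the baseline is the model's own greedy output, sampled captions are rewarded only for outscoring what the model would otherwise produce, which avoids training a separate value network.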

The available training hyperparameters can be listed with `python train.py --help`.

A reasonable set of hyperparameters is provided in `train_xe.sh` (for cross-entropy training) and `train_sc.sh` (for self-critical training).

```bash
mkdir log_xe
./train_xe.sh
```

You can then copy the model:

```bash
./scripts/copy_model.sh xe sc
```
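
This presumably copies the cross-entropy checkpoint from `log_xe` into `log_sc`, so that self-critical training resumes from it.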

And train with self-critical:

```bash
mkdir log_sc
./train_sc.sh
```

## Pretrained Network

You can download a pretrained captioning model here.

## Citation

In case you would like to cite our paper/code (no obligation at all):

```bibtex
@inproceedings{melaskyriazi2018paragraph,
  title={Training for diversity in image paragraph captioning},
  author={Melas-Kyriazi, Luke and Rush, Alexander and Han, George},
  booktitle={EMNLP},
  year={2018}
}
```

And Ruotian Luo's code, on which this repo is built:

```bibtex
@inproceedings{luo2018discriminability,
  title={Discriminability objective for training descriptive captions},
  author={Luo, Ruotian and Price, Brian and Cohen, Scott and Shakhnarovich, Gregory},
  booktitle={CVPR},
  year={2018}
}
```