Problem
GANs have enjoyed considerable success in generating real-valued data. However, they have limitations when the goal is to generate sequences of discrete tokens:
First, the discrete outputs from the generative model make it difficult to pass the gradient update from the discriminative model back to the generative model.
Second, the discriminative model can only assess a complete sequence; for a partially generated sequence, it is nontrivial to balance the current score against the future score it will receive once the entire sequence has been generated.
Approach
SeqGAN: a sequence generation framework.
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing the policy gradient update.
The RL reward signal comes from the GAN discriminator judged on a complete sequence and is passed back to the intermediate state-action steps using a Monte Carlo search.
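To make this concrete, here is a minimal PyTorch sketch of the policy-gradient update with Monte Carlo rollouts. It is an illustration, not the paper's implementation: the `Generator`, `Discriminator`, vocabulary and hidden sizes, and the BOS-token convention are all assumptions, and the full training loop (MLE pretraining, discriminator updates, a separate rollout policy) is omitted.

```python
import torch
import torch.nn as nn

VOCAB, EMB, HID, SEQ_LEN = 50, 32, 64, 8  # illustrative sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch, prefix=None):
        """Sample sequences token by token; if `prefix` is given, force its
        tokens first and let the policy complete the rest (MC rollout)."""
        tok = torch.zeros(batch, 1, dtype=torch.long)  # assume token 0 = BOS
        h, toks, logps = None, [], []
        for t in range(SEQ_LEN):
            out, h = self.gru(self.emb(tok), h)
            logits = self.out(out[:, -1])
            if prefix is not None and t < prefix.size(1):
                nxt = prefix[:, t]
            else:
                nxt = torch.distributions.Categorical(logits=logits).sample()
            logps.append(torch.log_softmax(logits, -1).gather(1, nxt[:, None]))
            toks.append(nxt)
            tok = nxt[:, None]
        return torch.stack(toks, 1), torch.cat(logps, 1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, 1)

    def forward(self, seq):
        _, h = self.gru(self.emb(seq))
        return torch.sigmoid(self.out(h[-1])).squeeze(-1)  # P(seq is real)

def policy_gradient_step(G, D, opt, batch=16, n_rollouts=4):
    seqs, logps = G.sample(batch)  # generate a batch of sequences
    loss = 0.0
    for t in range(SEQ_LEN):
        # Monte Carlo search: complete the length-(t+1) prefix several
        # times and average D's scores as the reward for step t.
        with torch.no_grad():
            r = torch.stack([D(G.sample(batch, prefix=seqs[:, :t + 1])[0])
                             for _ in range(n_rollouts)]).mean(0)
        loss = loss - (logps[:, t] * r).mean()  # REINFORCE objective
    opt.zero_grad()
    loss.backward()
    opt.step()

G, D = Generator(), Discriminator()
policy_gradient_step(G, D, torch.optim.Adam(G.parameters(), lr=1e-3))
```

Because the reward is a score on sampled completions rather than a differentiable function of the generator's outputs, no gradient needs to flow through the discrete sampling step, which is exactly how the differentiation problem is bypassed.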
However, SeqGAN struggles to generate coherent sentences when the text is long (more than about 20 words in English), because the end-of-sequence reward signal attenuates as it is propagated back over many intermediate steps.
The LeakGAN model addresses this by leaking the discriminator's high-level feature representation of the partial sentence to the generator at every generation step, rather than providing only a scalar reward at the end. (Jiaxian Guo et al., 2018, Long Text Generation via Adversarial Training with Leaked Information)
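The sketch below shows only the leaked-information interface of this idea; the class names, sizes, and `Manager` design here are illustrative assumptions, and the full LeakGAN additionally trains a goal-conditioned Worker and the Manager with hierarchical RL, which is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, GOAL = 50, 32, 64, 16  # illustrative sizes

class LeakyDiscriminator(nn.Module):
    """A discriminator that, besides its real/fake score, exposes
    ("leaks") the feature vector it extracts from the partial sequence."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.gru = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, 1)

    def forward(self, seq):
        _, h = self.gru(self.emb(seq))
        feat = h[-1]                                    # leaked feature f_t
        return torch.sigmoid(self.out(feat)).squeeze(-1), feat

class Manager(nn.Module):
    """Receives the leaked feature at every step and emits a goal vector
    that conditions the word-level generator (the Worker)."""
    def __init__(self):
        super().__init__()
        self.cell = nn.GRUCell(HID, GOAL)

    def forward(self, feat, h=None):
        h = self.cell(feat, h)
        return F.normalize(h, dim=-1), h                # goal g_t
```

At each step the Worker's next-token distribution is conditioned on the goal g_t, so the generator receives dense, step-by-step guidance derived from the discriminator's features instead of a single sparse reward once the sentence is complete.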