
Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus

This is the implementation of the paper "Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus", accepted at NAACL 2019.

  1. Train: python3 style_transfer_RL.py --data_type DATA
    • DATA: gyafc_family / yelp

yelp data pretrain:

```
CUDA_VISIBLE_DEVICES=0,1 python3 style_transfer_RL.py --data_type yelp --max_sent_len 18 --lm_seq_length 18 --lm_epochs 5 --style_epochs 1 --pretrain_epochs 2 --beam_width 1 --pretrained_model_path best_pretrained_model --batch_size 32
```

yelp data RL:

```
CUDA_VISIBLE_DEVICES=0,1 python3 style_transfer_RL.py --data_type yelp --max_sent_len 18 --lm_seq_length 18 --use_pretrained_model --pretrained_model_path best_pretrained_model --rollout_num 2 --beam_width 1 --rl_learning_rate 1e-6 --batch_size 16 --epochs 1
```

gyafc_family pretrain:

```
CUDA_VISIBLE_DEVICES=0,1 python3 style_transfer_RL.py --data_type gyafc_family --max_sent_len 30 --lm_seq_length 30 --lm_epochs 8 --style_epochs 3 --pretrain_epochs 4 --beam_width 1 --pretrained_model_path best_pretrained_model --batch_size 32
```

gyafc_family RL:

```
CUDA_VISIBLE_DEVICES=2,3 python3 style_transfer_RL.py --data_type gyafc_family --max_sent_len 30 --lm_seq_length 30 --use_pretrained_model --pretrained_model_path best_pretrained_model --rollout_num 2 --beam_width 1 --rl_learning_rate 1e-6 --batch_size 16 --epochs 1
```

  2. Test, e.g. on yelp data:

```
CUDA_VISIBLE_DEVICES=2 python3 style_transfer_test.py --data_type yelp --max_sent_len 18 --lm_seq_length 18 --use_beamsearch_decode --beam_width 1 --model_path MODEL_PATH --output_path OUTPUT_PATH --batch_size 32
```

Hyperparameters:

In reinforcement learning, we use a combination of rewards from the style discriminator, the semantic discriminator, and the language model as the training reward. You may want to change style_weight, semantic_weight, and lm_weight in params.py to tune the model: the larger a weight is, the more dominant the corresponding metric becomes.
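For intuition, here is a minimal sketch of such a weighted reward combination. Only the weight names mirror params.py; the function and reward variables are hypothetical, not the repository's actual API:

```python
# Illustrative sketch: combining the three reward signals into one
# scalar training reward. Only the weight names mirror params.py;
# the rest is hypothetical.

def combine_rewards(style_reward, semantic_reward, lm_reward,
                    style_weight=1.0, semantic_weight=1.0, lm_weight=1.0):
    """Weighted sum of per-sentence rewards used for the policy update."""
    return (style_weight * style_reward
            + semantic_weight * semantic_reward
            + lm_weight * lm_reward)

# Example: emphasizing style over fluency.
reward = combine_rewards(0.8, 0.6, 0.4,
                         style_weight=2.0, semantic_weight=1.0, lm_weight=0.5)
print(reward)  # 2.0*0.8 + 1.0*0.6 + 0.5*0.4 = 2.4
```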

Also, both pretrain.py and style_transfer_RL.py support model selection. The default method in this implementation selects the model with the highest semantic reward among checkpoints whose style reward exceeds a preset threshold. You may want to try other methods for model selection.
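As a rough illustration of that default rule (not the repository's exact code), the selection could look like the sketch below; the checkpoint structure and threshold value are assumptions made for the example:

```python
# Hypothetical sketch of the default selection rule: among checkpoints
# whose style reward clears a preset threshold, keep the one with the
# highest semantic reward.

STYLE_THRESHOLD = 0.9  # assumed value for illustration

def select_best(checkpoints):
    """checkpoints: list of dicts with 'style_reward', 'semantic_reward', 'path'."""
    eligible = [c for c in checkpoints if c["style_reward"] >= STYLE_THRESHOLD]
    if not eligible:
        return None  # no checkpoint meets the style constraint
    return max(eligible, key=lambda c: c["semantic_reward"])
```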


If you use our code, please cite our paper:

```
@article{gong2019reinforcement,
  title={Reinforcement Learning Based Text Style Transfer without Parallel Training Corpus},
  author={Gong, Hongyu and Bhat, Suma and Wu, Lingfei and Xiong, Jinjun and Hwu, Wen-mei},
  journal={arXiv preprint arXiv:1903.10671},
  year={2019}
}
```
