# ParlAI Agent examples with PyTorch, Chainer and TensorFlow

ParlAI is a unified platform for training and evaluating dialog models across many tasks.
Currently, the following agents are implemented in this repository:

- RNNAgent (PyTorch, Chainer, TensorFlow)
- AttentionAgent (PyTorch)
- MemN2NAgent (Chainer, PyTorch)
- SaveAgent

More agents will be implemented soon.

I also wrote an introductory article about ParlAI in Japanese; please see here.
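
For orientation, every agent in this repository follows ParlAI's Agent interface: it receives observation dicts through observe() and returns reply dicts from act(). The toy agent below is only a sketch of that interface; the class name and behavior are illustrative and it is not one of the agents in this repo.

```python
from parlai.core.agents import Agent

class EchoAgent(Agent):
    """Toy sketch of the ParlAI Agent interface: echoes the last observed text."""

    def __init__(self, opt, shared=None):
        super().__init__(opt, shared)
        self.id = 'EchoAgent'

    def act(self):
        # The base class's observe() stores the latest observation dict
        # (fields such as 'text', 'labels', 'episode_done') in self.observation.
        obs = self.observation or {}
        return {'id': self.id, 'text': obs.get('text', '')}
```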

## Usage

Please download and install ParlAI first.

```bash
git clone https://github.com/facebookresearch/ParlAI.git ~/ParlAI
cd ~/ParlAI
pip install -r requirements.txt
sudo python setup.py develop
```

Then clone this repository and put it in ~/ParlAI/parlai/.

```bash
git clone https://github.com/ryonakamura/parlai_agents.git
mv parlai_agents ~/ParlAI/parlai/
```

## Simple Agents

bAbI is a pure text-based QA dataset [Weston, 2015]. There are 20 tasks, each corresponding to a particular type of reasoning, such as deduction, induction, or counting.

According to [Sukhbaatar, NIPS 2015], the mean test accuracy of an LSTM on bAbI All 10k is 63.6%.
The following RNNAgent achieves a similar mean test accuracy with each of the three libraries.

bAbI Task 10k comparing PyTorch, Chainer and TensorFlow with RNN (Joint)

Note that the correct labels in Task 19 (path finding) are two words, but the RNN generates only one word.
The correct labels in Task 8 (lists/sets) can also contain two or more words, but the majority are a single word.

The meanings of the arguments are as follows.

I ran a simple benchmark of the speed of 1,000 parleys (iterations) on a single GPU.

in Joint Training

| PyTorch | Chainer | TensorFlow |
| --- | --- | --- |
| 96 sec | 157 sec | 320 sec |

in Single Training (Task 1)

| PyTorch | Chainer | TensorFlow |
| --- | --- | --- |
| 36 sec | 71 sec | 64 sec |

TensorFlow may be slower because this implementation does not use truncated BPTT.

### RNNAgent by PyTorch

PyTorch is an easy and powerful way to implement a ParlAI Agent.
When using a GPU, PyTorch is 1.5 to 2 times faster than Chainer.

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.pytorch_rnn.pytorch_rnn:RNNAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/pytorch_rnn/model_file/babi1' -e 20 -rnn GRU -bs 32 -hs 64 -nl 2 -lr 0.5 -dr 0.2 -ltim 2 -vtim 30
```
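
For intuition, the -rnn GRU -hs 64 -nl 2 -dr 0.2 flags configure a model roughly like the sketch below (an embedding layer, a multi-layer GRU, and a linear layer that predicts a single answer word). The class and argument names are illustrative, not the repo's exact code.

```python
import torch.nn as nn

class RNNModel(nn.Module):
    """Sketch: embed the dialog text, run a GRU, predict one answer word."""

    def __init__(self, vocab_size, hidden_size=64, num_layers=2, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size, padding_idx=0)
        self.gru = nn.GRU(hidden_size, hidden_size, num_layers,
                          dropout=dropout, batch_first=True)
        self.out = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens):        # tokens: (batch, seq_len) word ids
        emb = self.embed(tokens)      # (batch, seq_len, hidden_size)
        _, hidden = self.gru(emb)     # hidden: (num_layers, batch, hidden_size)
        return self.out(hidden[-1])   # logits over the vocabulary
```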

### RNNAgent by Chainer

This implementation uses multiple inheritance from ParlAI's Agent class and Chainer's Chain class; a sketch of the pattern follows the command below.
You can choose the RNN from links.NStepGRU or links.NStepLSTM.

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.chainer_rnn.chainer_rnn:RNNAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/chainer_rnn/model_file/babi1' -e 20 -rnn GRU -bs 32 -hs 64 -nl 2 -lr 0.5 -dr 0.2 -ltim 2 -vtim 30
```
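
The multiple-inheritance pattern looks roughly like this sketch; the option keys (vocab_size, hiddensize, numlayers, dropout), the class name, and the method name are illustrative, not the repo's exact code.

```python
import chainer
import chainer.links as L
from parlai.core.agents import Agent

class ChainerRNNSketch(Agent, chainer.Chain):
    """Sketch: inherit from both ParlAI's Agent and Chainer's Chain."""

    def __init__(self, opt, shared=None):
        Agent.__init__(self, opt, shared)
        chainer.Chain.__init__(self)
        with self.init_scope():
            self.embed = L.EmbedID(opt['vocab_size'], opt['hiddensize'])
            # links.NStepGRU / links.NStepLSTM take a list of variable-length sequences
            self.rnn = L.NStepGRU(opt['numlayers'], opt['hiddensize'],
                                  opt['hiddensize'], opt['dropout'])
            self.out = L.Linear(opt['hiddensize'], opt['vocab_size'])

    def score(self, xs):              # xs: list of int32 arrays, one per example
        exs = [self.embed(x) for x in xs]
        hy, _ = self.rnn(None, exs)   # hy: (numlayers, batch, hiddensize)
        return self.out(hy[-1])       # logits over the vocabulary
```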

### RNNAgent by TensorFlow

TensorFlow can handle variable-length input using tf.nn.dynamic_rnn.
Internally, dynamic_rnn unrolls the time series with loop processing built on control_flow_ops.while_loop (the same as tf.while_loop).
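
As a sketch of how that looks, assuming the TensorFlow 1.x API this repository was written against (function and variable names here are illustrative):

```python
import tensorflow as tf

def encode(tokens, lengths, vocab_size, hidden_size=64, num_layers=2):
    """Sketch: encode variable-length token sequences with tf.nn.dynamic_rnn."""
    embedding = tf.get_variable('embedding', [vocab_size, hidden_size])
    inputs = tf.nn.embedding_lookup(embedding, tokens)   # (batch, time, hidden_size)
    cell = tf.nn.rnn_cell.MultiRNNCell(
        [tf.nn.rnn_cell.GRUCell(hidden_size) for _ in range(num_layers)])
    # dynamic_rnn unrolls the sequence with tf.while_loop and stops at each
    # example's true length, so padding does not affect the final state.
    outputs, state = tf.nn.dynamic_rnn(cell, inputs,
                                       sequence_length=lengths,
                                       dtype=tf.float32)
    return state[-1]   # final state of the top GRU layer
```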

If you want to run on the CPU only on a GPU server, set the environment variable CUDA_VISIBLE_DEVICES="" before the command.

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.tensorflow_rnn.tensorflow_rnn:RNNAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/tensorflow_rnn/model_file/babi1' -e 20 -rnn GRU -bs 32 -hs 64 -nl 2 -lr 0.5 -dr 0.2 -ltim 2 -vtim 30
```

## More Advanced Agents

### AttentionAgent by PyTorch


bAbI Task 10k comparing MemN2N, Attention, seq2seq and RNN (Joint)

bAbI Task 10k comparing Bidirectional Encoder and Dropout with seq2seq (Joint)

bAbI Task 10k comparing 1, 2, 3 and 4 Layers with seq2seq (Joint)

bAbI Task 10k with Attention (Luong's General) - sample 0

The meanings of the additional arguments are as follows.

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.pytorch_attention.pytorch_attention:AttentionAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/pytorch_attention/model_file/babi1' -e 20 -rnn GRU -bi True -atte True -sf general -tf 1. -bs 32 -hs 64 -nl 2 -lr 0.5 -dr 0.2 -ltim 2 -vtim 30
```
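
The -sf general flag selects Luong's "general" score function, score(h_t, h_s) = h_t^T W_a h_s. Below is a minimal PyTorch sketch of that scoring step, not the repo's exact module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeneralAttention(nn.Module):
    """Sketch of Luong's 'general' attention: score(h_t, h_s) = h_t^T W_a h_s."""

    def __init__(self, hidden_size):
        super().__init__()
        self.W_a = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, decoder_state, encoder_outputs):
        # decoder_state: (batch, hidden), encoder_outputs: (batch, src_len, hidden)
        scores = torch.bmm(encoder_outputs,
                           self.W_a(decoder_state).unsqueeze(2)).squeeze(2)
        weights = F.softmax(scores, dim=1)      # attention over source positions
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights                 # context vector and attention weights
```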

### MemN2NAgent by Chainer


bAbI Task 10k comparing Position Encoding and Temporal Encoding with MemN2N (Joint)

bAbI Task 10k comparing Adjacent, Layer-wise and No Weight Tying with MemN2N (Joint)

bAbI Task 10k comparing Linear Start and Random Noise with MemN2N (Joint)

bAbI Task 10k comparing 1, 2, 3, 4, 5 and 6 Hops with MemN2N (Joint)

bAbI Task 10k with MemN2N

bAbI Task 10k with End-To-End Memory Network - sample 0

The meanings of the additional arguments are as follows.

Chainer version

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.chainer_memn2n.chainer_memn2n:MemN2NAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/chainer_memn2n/model_file/babi1' -e 100 -bs 32 -hs 20 -ms 50 -nl 3 -wt Adjacent -pe True -te True -rn True -ls False -opt Adam -lr 0.05 -ltim 2 -vtim 60 -vp -1
```

PyTorch version

```bash
cd ~/ParlAI
python examples/train_model.py -m parlai.parlai_agents.pytorch_memn2n.pytorch_memn2n:MemN2NAgent -t babi:Task1k:1 -mf './parlai/parlai_agents/pytorch_memn2n/model_file/babi1' -e 100 -bs 32 -hs 20 -ms 50 -nl 3 -wt Adjacent -pe True -te True -rn True -ls False -opt Adam -lr 0.05 -ltim 2 -vtim 60 -vp -1
```
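
For reference, one memory hop of an End-To-End Memory Network computes attention over the input memory, reads the output memory, and adds the result to the query state; with Adjacent weight tying, the output embedding C of hop k is reused as the input embedding A of hop k+1. The function below is a minimal PyTorch sketch of a single hop, not the repo's exact code.

```python
import torch
import torch.nn.functional as F

def memn2n_hop(u, memory_in, memory_out):
    """Sketch of one MemN2N hop.

    u:          (batch, dim) internal state (the embedded question on hop 1)
    memory_in:  (batch, n_memories, dim) sentences embedded with A
    memory_out: (batch, n_memories, dim) sentences embedded with C
    """
    scores = torch.bmm(memory_in, u.unsqueeze(2)).squeeze(2)   # (batch, n_memories)
    p = F.softmax(scores, dim=1)                               # attention over memories
    o = torch.bmm(p.unsqueeze(1), memory_out).squeeze(1)       # weighted sum of output memory
    return u + o                                               # next internal state u^{k+1}
```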

### Outperform the Results of the Paper!

Benchmark results comparing this repository's implementation, the authors' original Matlab code, and the numbers reported in the paper on the bAbI tasks.

Default Configuration: 3 Hops, Position Encoding (PE), Temporal Encoding (TE), Linear Start (LS), Random Noise (RN) and Adjacent Weight Tying.

bAbI Task 10k comparing This Repo, Author's Matlab and Paper with MemN2N (Joint)

The bAbI task paper considers a task successfully passed if ≥ 95% accuracy is obtained.

In the best results of the paper, 14/20 tasks are passed.
The best settings in this repo pass 15/20 tasks!

### Visualize Position Encoding

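The Position Encoding being visualized is the weighting l from the MemN2N paper, l_kj = (1 - j/J) - (k/d)(1 - 2j/J), where J is the sentence length, d the embedding dimension, j the word position, and k the embedding dimension index. A small NumPy sketch:

```python
import numpy as np

def position_encoding(sentence_len, embed_dim):
    """PE matrix from the MemN2N paper: l_kj = (1 - j/J) - (k/d) * (1 - 2j/J)."""
    J, d = sentence_len, embed_dim
    j = np.arange(1, J + 1)[:, None]   # word position 1..J
    k = np.arange(1, d + 1)[None, :]   # embedding dimension 1..d
    return (1.0 - j / J) - (k / d) * (1.0 - 2.0 * j / J)   # shape (J, d)

# Each memory sentence is embedded as sum_j l_j * A x_j instead of a plain bag of words.
```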

## Other Agents

### SaveAgent

Save losses and attention weights.

The meanings of the arguments are as follows.

## Contact

If you have any questions, please do not hesitate to contact me or post on the GitHub Issues page.

## License

All code in this repository is BSD-licensed.