Closed: 0b01 closed this issue 7 years ago
Looks like the issue is that training is unstable and the loss hits NaN. It probably needs different hyperparameter settings. I'll investigate and get back to you, but in the meantime, feel free to fiddle with the learning rate and other training settings.
You can override individual hparam settings by flag: --hparams='learning_rate=0.1,another_hparam=blah'
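For example (a sketch only: the flag values here are illustrative, not recommendations), overriding the learning rate on top of the settings used in the run.sh below would look like:

```shell
# Illustrative only: --hparams overrides individual values from --hparams_set.
# Problem/model/hparams names match the run.sh in this thread; the learning
# rate value itself is an arbitrary example, not a recommendation.
t2t-trainer \
  --data_dir=./t2t_data \
  --problems=algorithmic_reverse_decimal40 \
  --model=transformer \
  --hparams_set=transformer_tiny \
  --hparams='learning_rate=0.05' \
  --output_dir=./t2t_train/algorithmic_reverse_decimal40/transformer-transformer_tiny
```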
Could you give an example of the reverse task using transformer?
Here is my run.sh. The loss goes down to 0.00001, but the decode output is []
[g@pc:/home/g/Desktop/tensor2tensor/reverse]$ cat run.sh
PROBLEM=algorithmic_reverse_decimal40
MODEL=transformer
HPARAMS=transformer_tiny
DATA_DIR=./t2t_data
TMP_DIR=./t2t_datagen
TRAIN_DIR=./t2t_train/$PROBLEM/$MODEL-$HPARAMS
mkdir -p $DATA_DIR $TMP_DIR $TRAIN_DIR
# Generate data
t2t-datagen \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR \
--problem=$PROBLEM
mv $TMP_DIR/tokens.vocab.32768 $DATA_DIR
# Train
t2t-trainer \
--data_dir=$DATA_DIR \
--problems=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR
# Decode
DECODE_FILE=$DATA_DIR/decode_this.txt
echo "8 7 2 6 8 5 2 10 5 1 9 1 8 2 6 10 1 9 10 1 8 7 10 3 9 9 2" > $DECODE_FILE
BEAM_SIZE=4
ALPHA=0.6
t2t-trainer \
--data_dir=$DATA_DIR \
--problems=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--train_steps=0 \
--eval_steps=10 \
--beam_size=$BEAM_SIZE \
--alpha=$ALPHA \
--decode_from_file=$DECODE_FILE
cat $DECODE_FILE.$MODEL.$HPARAMS.beam$BEAM_SIZE.alpha$ALPHA.decodes
I tried it and believe it's a decoding problem -- we use 1 to mean "end of sequence" in decoding, but the algorithmic generator only avoids 0s (padding). Will try to prepare a fix soon, thanks for reporting the problem!
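The collision can be sketched as follows (a minimal illustration, not the actual tensor2tensor decode path): decoding cuts the output at the first EOS token (ID 1), so when a generator that only avoids 0 (padding) emits 1 as ordinary data, any sequence starting with 1 decodes to the empty output seen above.

```shell
# Sketch of the ID collision: decoding stops at the first EOS token.
EOS_ID=1

# decode TOKENS... : print tokens up to (not including) the first EOS
decode() {
  local t out=""
  for t in "$@"; do
    [ "$t" -eq "$EOS_ID" ] && break
    out="$out$t "
  done
  printf '%s' "${out% }"
}

decode 1 9 1 8; echo       # empty: truncated at the leading 1
decode 8 7 2 6 1 9; echo   # prints "8 7 2 6": everything after the 1 is lost
```

The fix is to keep IDs 0 (PAD) and 1 (EOS) reserved, i.e. shift the task's data symbols so they never collide with the decoder's control tokens.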
@rickyhan -- the most recent 1.0.4 version should include all the corrections needed to make the above instructions work well. I tried it and found that the transformer still has some trouble determining the end of the input, since it isn't marked in the algorithmic tasks. So it sometimes reverses a bit too much, but otherwise it seems to work. I'm closing this, but could you please test and let me know if it works for you? And if it doesn't, please re-open. Thanks!