My params are:

```json
{
    "cell_type": "lstm",
    "depth": 2,
    "attention_type": "Luong",
    "bidirectional": true,
    "use_residual": true,
    "use_dropout": false,
    "time_major": true,
    "hidden_units": 1024,
    "optimizer": "adam",
    "learning_rate": 0.001
}
```
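For context, my understanding is that these settings build the recurrent cells roughly like this (a minimal TensorFlow 1.x sketch of my own, not the repo's actual code; `build_cell` and the placeholder shapes are my assumptions):

```python
import tensorflow as tf

HIDDEN_UNITS = 1024  # "hidden_units"
DEPTH = 2            # "depth"

def build_cell(hidden_units, use_residual=True, use_dropout=False, keep_prob=0.8):
    """One LSTM layer; the residual/dropout wrappers mirror the config flags."""
    cell = tf.nn.rnn_cell.LSTMCell(hidden_units)
    if use_dropout:   # false in my config
        cell = tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob=keep_prob)
    if use_residual:  # real code may skip this on the first layer (dim mismatch)
        cell = tf.nn.rnn_cell.ResidualWrapper(cell)
    return cell

# Placeholder encoder output, batch-major here for simplicity
# (my config uses time_major=true, so the actual code would transpose).
encoder_outputs = tf.placeholder(tf.float32, [None, None, HIDDEN_UNITS])

decoder_cell = tf.nn.rnn_cell.MultiRNNCell(
    [build_cell(HIDDEN_UNITS) for _ in range(DEPTH)])
attention = tf.contrib.seq2seq.LuongAttention(
    num_units=HIDDEN_UNITS, memory=encoder_outputs)  # "attention_type": "Luong"
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    decoder_cell, attention, attention_layer_size=HIDDEN_UNITS)
```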
I am also using pre-trained fastText word vectors (distributed in word2vec text format).
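For reference, I load them roughly like this (a sketch assuming gensim; the file path and vocabulary below are placeholders):

```python
import numpy as np
from gensim.models import KeyedVectors

# fastText's pre-trained .vec files use the word2vec text format
vectors = KeyedVectors.load_word2vec_format('wiki.en.vec', binary=False)  # placeholder path

vocab = ['<pad>', '<unk>', 'hello', 'world']  # placeholder vocabulary
embedding_matrix = np.zeros((len(vocab), vectors.vector_size), dtype=np.float32)
for i, word in enumerate(vocab):
    if word in vectors:  # out-of-vocabulary rows stay zero
        embedding_matrix[i] = vectors[word]
```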
The batch size is 128, which I haven't changed.
But the loss keeps fluctuating and sometimes jumps to a huge value (around 200), even though it usually starts at 10-20. Is this because this version of seq2seq adds a reinforcement learning component? Please tell me how to fix it.