We need to create benchmarks to demonstrate the performance gains of QRNN. @sebastianruder do you mind helping out with a guideline on what the best measurements are and which variables to control for?
I am thinking of fixing the vocabulary size to 15k or 30k and comparing the speed of QRNN vs. LSTM on the two setups below (rough timing sketch after the list):
1) Language model (1K training sentences of length bptt)
2) Unfrozen classifier (1K training examples)
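For the LM case, something like this is what I have in mind (a rough PyTorch sketch; the sizes are placeholders and the QRNN import is whichever implementation we end up benchmarking, not a confirmed API):

```python
# Rough timing sketch: compare forward+backward throughput of LSTM vs. QRNN
# on random token batches. All sizes below are assumptions for illustration.
import time
import torch
import torch.nn as nn

vocab_sz, emb_sz, hid_sz = 30000, 400, 1150   # vocab fixed as proposed above
bptt, bs, n_batches = 70, 64, 100             # ~1K sequences of length bptt

def time_rnn(rnn, device="cuda" if torch.cuda.is_available() else "cpu"):
    emb = nn.Embedding(vocab_sz, emb_sz).to(device)
    rnn = rnn.to(device)
    x = torch.randint(0, vocab_sz, (n_batches, bs, bptt), device=device)
    start = time.time()
    for i in range(n_batches):
        out, _ = rnn(emb(x[i]))      # (bs, bptt, hid_sz)
        out.sum().backward()         # include the backward pass in the timing
    if device == "cuda":
        torch.cuda.synchronize()
    return time.time() - start

lstm = nn.LSTM(emb_sz, hid_sz, batch_first=True)
print(f"LSTM: {time_rnn(lstm):.2f}s")
# qrnn = QRNN(emb_sz, hid_sz)       # swap in the QRNN layer here for comparison
# print(f"QRNN: {time_rnn(qrnn):.2f}s")
```

The same harness could wrap the unfrozen classifier once we agree on the architecture.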
Is that what you had in mind?