n-waves / multifit

The code to reproduce results from paper "MultiFiT: Efficient Multi-lingual Language Model Fine-tuning" https://arxiv.org/abs/1909.04761
MIT License

Compare QRNN Performance Metrics #35

Closed eisenjulian closed 5 years ago

eisenjulian commented 5 years ago

We need to create benchmarks to demonstrate the performance gains of QRNN. @sebastianruder, do you mind helping out with guidelines on the best measurements to take and variables to control for?

I am thinking of fixing the vocabulary size to 15k or 30k and comparing the speed of QRNN and LSTM in two settings: 1) language model (1K training sentences of length bptt), 2) unfrozen classifier (1K training examples).

Is that what you had in mind?
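For the speed comparison, a minimal timing harness along these lines could work (this is a sketch, not code from the repo; `train_step`, `qrnn_model`, `lstm_model`, and `batch` are hypothetical placeholders for whatever training loop we end up benchmarking):

```python
import time
import statistics

def benchmark(fn, n_warmup=3, n_runs=10):
    """Time a callable: run a few warm-up iterations first (to amortize
    lazy initialization / caching), then return the median wall-clock
    time over n_runs timed iterations."""
    for _ in range(n_warmup):
        fn()
    times = []
    for _ in range(n_runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical usage: compare per-batch step time of the two encoders
# under identical data, vocabulary size, and hyperparameters.
# qrnn_time = benchmark(lambda: train_step(qrnn_model, batch))
# lstm_time = benchmark(lambda: train_step(lstm_model, batch))
# print(f"QRNN speedup over LSTM: {lstm_time / qrnn_time:.2f}x")
```

Using the median rather than the mean keeps one slow outlier run (e.g. from GC or CUDA kernel compilation) from skewing the comparison.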

sebastianruder commented 5 years ago

Yep, that's what I had in mind. I think a vocabulary size of 30k is good.

eisenjulian commented 5 years ago

Created a pull request here: #37.