JonathanRaiman / theano_lstm

:microscope: Nano size Theano LSTM module

Speed Benchmark #1

Closed: davidBelanger closed this issue 9 years ago

davidBelanger commented 9 years ago

Thanks for making this code available. LSTMs are so hot right now.

I'm wondering if you know how your code compares to the rnnlm package in terms of speed (using yours with or without a GPU)? I've found rnnlm impossibly slow unless you use its class-based LM trick.
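For reference, the class-based trick factors the softmax over the vocabulary into a class prediction followed by a within-class word prediction, so each step touches roughly C + V/C output units instead of V. A toy NumPy sketch of the idea (the shapes, the block assignment of words to classes, and all parameter names are illustrative, not rnnlm's or this repo's API):

```python
import numpy as np

# Factor P(w | h) = P(class(w) | h) * P(w | class(w), h).
V, C, H = 10000, 100, 128                 # vocab size, number of classes, hidden size
words_per_class = V // C
word_to_class = np.arange(V) // words_per_class   # hypothetical: contiguous blocks of words per class

W_class = 0.01 * np.random.randn(H, C)                    # hidden -> class logits
W_word  = 0.01 * np.random.randn(C, H, words_per_class)   # per-class hidden -> word logits

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def log_prob(h, w):
    """log P(w | h) under the factored model: ~C + V/C output units instead of V."""
    c = word_to_class[w]
    p_class = softmax(np.dot(h, W_class))       # distribution over the C classes
    p_word  = softmax(np.dot(h, W_word[c]))     # distribution over words inside class c
    return np.log(p_class[c]) + np.log(p_word[w % words_per_class])

print(log_prob(np.random.randn(H), w=1234))
```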

JonathanRaiman commented 9 years ago

David,

Class-based LM tricks are probably the way to go; I'm trying to obtain good binary codes to do this myself. I've tried with and without a GPU and I'm getting almost no edge running on the GPU (I also don't have the beefiest GPU). I have to admit I find LSTMs a bit too slow to train any good large-scale models on a laptop right now. There might be a trick to speed these up, but depending on your parameters, training a 4-level LSTM recurrent net takes me ~2.5 seconds per minibatch (200 examples). At that speed I can only get small models (e.g. not billion-word corpora) to optimize in 12 hours. So patience is a virtue.
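A bit of back-of-the-envelope arithmetic behind the "12 hours" remark (the sentence length used to scale up to a billion-word corpus is an assumption, not a number from this thread):

```python
# Throughput implied by ~2.5 s per minibatch of 200 examples.
sec_per_batch, batch_size = 2.5, 200
examples_per_hour = 3600.0 / sec_per_batch * batch_size      # 288,000 examples/hour
examples_in_12h = 12 * examples_per_hour                     # ~3.5 million examples

# Scaling to a billion-word corpus, assuming (hypothetically) ~25-word sentences:
sentences_in_1b_words = 1e9 / 25                             # ~40 million sequences
hours_per_epoch = sentences_in_1b_words / examples_per_hour  # ~139 hours, nearly 6 days per pass

print(examples_in_12h, hours_per_epoch)
```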

I also recommend looking at https://github.com/wojciechz/learning_to_execute, which also uses LSTMs and runs on LuaJIT and Torch, so it's probably a bit faster than Theano depending on the size of your minibatches. It's a bit trickier to customize your error function there (specifically for different-sized sequences), but so far I've only heard good things about Torch.
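On the variable-length point: the usual trick in Theano is to pad every sequence in the minibatch to the same length and multiply the per-token losses by a 0/1 mask before averaging. A minimal sketch in plain Theano (variable names are illustrative; this is not presented as this repo's own loss code):

```python
import theano
import theano.tensor as T

# Masked cross-entropy over padded, variable-length sequences.
probs   = T.tensor3("probs")    # (time, batch, vocab) softmax outputs
targets = T.imatrix("targets")  # (time, batch) target word indices, padded
mask    = T.matrix("mask")      # (time, batch) 1.0 at real tokens, 0.0 at padding

time_steps, batch_size = targets.shape[0], targets.shape[1]
flat_probs   = probs.reshape((time_steps * batch_size, probs.shape[2]))
flat_targets = targets.flatten()

# negative log-likelihood per token, zeroed out at padded positions
nll = -T.log(flat_probs[T.arange(flat_targets.shape[0]), flat_targets])
nll = nll.reshape((time_steps, batch_size)) * mask

# average over the real tokens only
loss = nll.sum() / mask.sum()
masked_nll = theano.function([probs, targets, mask], loss)
```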

davidBelanger commented 9 years ago

Thanks for the info. According to Google's NIPS paper on LSTMs for translation, they had to do some serious multi-GPU programming, and it still took 10 days to train. And thanks for making your stuff publicly available. Time to fool around with 'small data.'

JonathanRaiman commented 9 years ago

Glad it's useful. I think machine translation (production-level machine translation, that is) is still an industrial job, but I'm sure small data will do wonders.