harvardnlp / seq2seq-attn

Sequence-to-sequence model with LSTM encoder/decoders and attention
http://nlp.seas.harvard.edu/code
MIT License

Cleanup lua code #38

Closed · cyril-tiquet closed this 8 years ago

cyril-tiquet commented 8 years ago

Hi Yoon,

I did a little cleanup of your lua scripts. We plan to add new features to seq2seq-attn and use it in our translation engines, and it's important for us to start with a clean code base. The current inconsistencies make the code hard to work with in a team environment, so it would be great if you could integrate this; otherwise, we won't be able to merge future upstream changes easily.

The patch is really big because basically all lines have been changed so let me know if there is anything unclear.

Patch details:

cyril-tiquet commented 8 years ago

A clarification regarding the requires cleanup:

I just came across issue #10, where require 'util' didn't work with torch. My first iteration to make it work with lua broke the fix you made in that issue.

I fixed it in my second commit; it now works with both the th and lua interpreters. I use 'dofile' instead of 'require' to load your scripts, which is safer. See the commit message for details.
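A minimal sketch of the difference in question (this is an illustration of the general `dofile` vs `require` behavior, not the actual patch): `require` searches `package.path` and caches modules by name, which can resolve differently under `th` and plain `lua`, while `dofile` loads an explicit file path the same way under both interpreters.

```lua
-- Illustration only: 'demo_util.lua' is a hypothetical stand-in for a
-- project script such as util.lua; the real patch loads the repo's files.

-- Create a small script to load.
local f = io.open('demo_util.lua', 'w')
f:write('local M = {}\nfunction M.double(x) return 2 * x end\nreturn M\n')
f:close()

-- dofile takes an explicit path and returns the chunk's return value.
-- It does not consult package.path or the package.loaded cache, so it
-- behaves identically under the th and lua interpreters.
local util = dofile('demo_util.lua')
print(util.double(21))  -- 42

os.remove('demo_util.lua')
```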

Cyril

yoonkim commented 8 years ago

thanks! merged

srush commented 8 years ago

Hey Cyril,

Thanks for using seq2seq-attn. Let us know if there is anything else we can do to make it usable for your purposes. It is certainly research code, but it is getting better, and we are very willing to take pull requests for more features.

Cheers, Sasha

cyril-tiquet commented 8 years ago

Hi Sasha,

Thank you both for your quick replies; amazing work you've done here. Actually, yes, I think I will have at least one more pull request to submit in the next few days (making the code a little more flexible so it can be called more easily from the command line as well as from C/C++ libraries). I'll keep you posted!

Cyril