Hi @D0nPiano and thanks for your interest!
Since this is still a work in progress, we haven't enabled the attention mechanism yet, but we definitely plan to evaluate it. Thanks for pointing it out, though.
Cheers!
Solved in PR #50.
Hello, I really like your work on using seq2seq models to generate SPARQL queries. Just one question: was there a specific reason not to include attention during training? As far as I understood the TensorFlow NMT guide, you would have to add something like
--attention=scaled_luong
to the options in your train.sh. Did you evaluate whether the model works better with or without attention? Greetings!
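For reference, a minimal sketch of what such an invocation could look like, following the TensorFlow NMT tutorial. The data paths, vocabulary prefixes, and hyperparameters below are placeholders and are not taken from this repo's actual train.sh; only the `--attention` flag is the point being illustrated.

```sh
# Hypothetical training call in the style of the TensorFlow NMT tutorial.
# All paths and hyperparameter values here are assumed placeholders;
# the relevant addition is the --attention flag.
python -m nmt.nmt \
    --attention=scaled_luong \
    --src=en --tgt=sparql \
    --vocab_prefix=data/vocab \
    --train_prefix=data/train \
    --dev_prefix=data/dev \
    --test_prefix=data/test \
    --out_dir=model_attention \
    --num_train_steps=12000 \
    --num_units=128 \
    --metrics=bleu
```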