Open: rottik opened this issue 7 years ago
I'm trying to use seq2seq for a summarization task. In more detail, I have 60k pairs of abstracts and titles, and I'm using modified code from the NMT tutorial. I want to improve my results with word2vec embeddings. How can I use pre-trained embeddings?
Some samples from training (the first line is the predicted title and the second is the reference):
I'm looking for the same thing! @rottik, did you find out how to do that?
Good issue, I'm running into the same problem. I also want to use pre-trained embeddings; can somebody help?
See https://github.com/google/seq2seq/issues/111. Does that mean this isn't supported yet?
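For anyone hitting this: one way to do it (a minimal sketch, not an official NMT-tutorial or seq2seq feature; the vocabulary, file path, and variable names below are assumptions for illustration) is to load the word2vec vectors with gensim, build an embedding matrix aligned with the model's vocabulary, and use it to initialize the embedding variable in place of the default random initializer:

```python
import numpy as np
import tensorflow as tf
from gensim.models import KeyedVectors

def build_embedding_matrix(w2v_path, vocab, dim=300):
    """Build a [len(vocab), dim] matrix; tokens missing from word2vec
    (e.g. <s>, </s>, <unk>) keep small random vectors."""
    w2v = KeyedVectors.load_word2vec_format(w2v_path, binary=True)
    matrix = np.random.uniform(-0.05, 0.05, (len(vocab), dim)).astype(np.float32)
    for token, idx in vocab.items():
        if token in w2v:
            matrix[idx] = w2v[token]
    return matrix

# Hypothetical vocab; in the NMT tutorial this comes from the vocab file.
vocab = {"<unk>": 0, "<s>": 1, "</s>": 2, "the": 3, "model": 4}
matrix = build_embedding_matrix("GoogleNews-vectors-negative300.bin", vocab)

# Initialize the embedding variable from the pre-trained matrix
# (TF 1.x style, matching the NMT-tutorial era) instead of letting
# it start from a random initializer.
embedding_encoder = tf.get_variable(
    "embedding_encoder",
    shape=matrix.shape,
    initializer=tf.constant_initializer(matrix),
    trainable=True)  # set False to freeze the embeddings
```

Setting trainable=False keeps the word2vec vectors frozen, while trainable=True lets the model fine-tune them during training; with a corpus as small as 60k pairs, initializing from word2vec and fine-tuning is a reasonable default to try.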