google / seq2seq

A general-purpose encoder-decoder framework for Tensorflow
https://google.github.io/seq2seq/
Apache License 2.0
5.6k stars 1.3k forks source link

Using embedding #221

Open rottik opened 7 years ago

rottik commented 7 years ago

I'm trying to use seq2seq for a summarization task. In more detail, I have 60k pairs of abstracts and titles, and I'm using modified code from the NMT tutorial. I want to improve my results with word2vec embeddings. How can I use a pre-trained embedding?

Some samples from training (the first line is the predicted title and the second is the reference):

a model features for audio sounds recordings signals SEQUENCE_END autoregressive acoustical modelling of free field cough sound SEQUENCE_END

printer classification using the evaluation biomimetic pattern recognition SEQUENCE_END cancer classification using the extended biomimetic pattern recognition SEQUENCE_END

a of the polytonic term historical indian texts SEQUENCE_END hmms SEQUENCE_END recognition of greek polytonic on historical degraded texts using hmms SEQUENCE_END

assessing the intended enthusiasm of singing voice using spectral spectrum SEQUENCE_END assessing the intended enthusiasm of singing voice using energy variance SEQUENCE_END

micheletufano commented 6 years ago

I'm looking for the same thing! @rottik, did you find out how to do that?

stevenkwong commented 6 years ago

Good issue, I'm running into the same problem. I also want to use pre-trained embeddings. Can somebody help?

stevenkwong commented 6 years ago

https://github.com/google/seq2seq/issues/111 Does that mean it isn't supported yet?
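The framework does not appear to expose a flag for this, but a common workaround in TF1-era code is to build the embedding matrix yourself from the pre-trained vectors and assign it to the model's embedding variable after graph construction. A minimal sketch of the alignment step, assuming word2vec text format and a model vocabulary list (the function names `load_word2vec_text` and `build_embedding_matrix` are illustrative, not part of seq2seq):

```python
import numpy as np

def load_word2vec_text(path):
    """Parse a word2vec text-format file: '<word> <v1> <v2> ...' per line."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 2:
                continue  # skip the optional header line or blank lines
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def build_embedding_matrix(vectors, vocab, dim, seed=0):
    """Align pre-trained vectors to the model vocabulary.

    Rows for in-vocabulary words are copied from `vectors`; words without a
    pre-trained vector keep a small uniform random initialization, matching
    what the model would use on its own.
    """
    rng = np.random.RandomState(seed)
    matrix = rng.uniform(-0.1, 0.1, size=(len(vocab), dim)).astype(np.float32)
    for i, word in enumerate(vocab):
        if word in vectors:
            matrix[i] = vectors[word]
    return matrix
```

The resulting matrix can then be pushed into the graph's embedding variable before training starts, e.g. via `sess.run(embedding_var.assign(matrix))` in TensorFlow 1.x, assuming you can locate the variable by name in the checkpointed graph.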