chaitjo opened this issue 7 years ago
Hey @chaitjo Were you able to figure it out? What did you end up doing?
I do not remember if I did, but I think there are better ways to obtain sentence embeddings from language models now. Look at the BERT, ULMFiT, or OpenAI Transformer code and open-source repos.
Thanks. What would you recommend for training character embeddings that can be plugged into other algorithms?
Can you give me some pointers on how to modify the code to use the final hidden state of the LSTM as an embedding/representation of a sequence of words?
What I want to achieve is to train the language model on my data and then obtain sentence representations by passing the sentence into the trained model.
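The idea described above can be sketched outside the repo's TensorFlow code. This is a minimal numpy illustration (not the actual tf-lstm-char-cnn model, and the function name and weight layout are assumptions): run a trained LSTM's recurrence over a sentence's word vectors and keep the final hidden state as a fixed-size sentence representation.

```python
import numpy as np

def lstm_sentence_embedding(inputs, Wx, Wh, b):
    """Run a single-layer LSTM over a sequence of word vectors and
    return the final hidden state as the sentence representation.

    inputs: (seq_len, input_dim) array of word vectors
    Wx:     (input_dim, 4 * hidden_dim) input-to-gates weights
    Wh:     (hidden_dim, 4 * hidden_dim) hidden-to-gates weights
    b:      (4 * hidden_dim,) gate biases
    Gates are stacked in [input, forget, cell-candidate, output] order
    (this layout is an assumption for the sketch).
    """
    hidden_dim = Wh.shape[0]
    h = np.zeros(hidden_dim)  # hidden state
    c = np.zeros(hidden_dim)  # cell state
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for x in inputs:
        z = Wx.T @ x + Wh.T @ h + b          # all four gates at once
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g                    # update cell state
        h = o * np.tanh(c)                   # update hidden state
    return h  # final hidden state = fixed-size embedding of the sentence
```

In the actual repo you would instead pull the `state` tensor returned by the LSTM at the last time step (after training) and run the sentence through the graph with a `session.run` on that tensor; mean-pooling all hidden states instead of taking only the last one is a common alternative.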