Open HaimianYu opened 6 years ago
Dear HaimianYu,
Thanks for your email. word_length_tensor should be built while you are preprocessing the data. You need to label the word data set against the corresponding character data set; word_length_tensor is then a tensor whose elements are the lengths of the corresponding matched words. For more detail, you can see the examples in the Python class. lexicon_word_embedding_inputs is just the embedding tensor for the word data input.
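To make the preprocessing step concrete, here is a minimal sketch of how the matched words and their lengths can be collected for each character position by scanning a lexicon. The function name, the toy lexicon, and the `max_word_len` parameter are illustrative assumptions, not the repository's actual API:

```python
# Hypothetical sketch: for each character index in a sentence, find the
# lexicon words that START at that index and record their lengths.
# These per-position length lists are the raw material for
# word_length_tensor; looking the matched words up in a word-embedding
# table would yield lexicon_word_embedding_inputs.

def match_lexicon(chars, lexicon, max_word_len=4):
    """Return, for each character index, the lengths and surface forms
    of lexicon words beginning at that index (multi-character only)."""
    word_lengths, matched_words = [], []
    for i in range(len(chars)):
        lengths, words = [], []
        for l in range(2, max_word_len + 1):
            cand = "".join(chars[i:i + l])
            # Skip truncated candidates at the end of the sentence.
            if len(cand) == l and cand in lexicon:
                lengths.append(l)
                words.append(cand)
        word_lengths.append(lengths)
        matched_words.append(words)
    return word_lengths, matched_words

# Toy example (classic Lattice-LSTM illustration sentence):
chars = list("南京市长江大桥")
lexicon = {"南京", "南京市", "市长", "长江", "长江大桥", "大桥"}
lengths, words = match_lexicon(chars, lexicon)
# lengths[0] is [2, 3] because both 南京 and 南京市 start at index 0.
```

Padding the per-position length lists to a fixed size and stacking them gives the final tensor; the exact padding scheme depends on the repository's batching code.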
On September 4, 2018, at 10:18 AM, HaimianYu notifications@github.com wrote:
Hello, thank you very much for sharing the source code of Lattice-LSTM. I have some questions about Lattice-LSTM, namely, how do we get word_length_tensor and lexicon_word_embedding_inputs? In other words, how do we get the words corresponding to each character in a sentence?