brightmart / text_classification

all kinds of text classification models and more with deep learning
MIT License

Question about tensor dimensions in p1_HierarchicalAttention_model.py #90

Closed searchlink closed 5 years ago

searchlink commented 5 years ago

In inference(), for `input_x = tf.split(self.input_x, self.num_sentences, axis=1)`, the result is a list of num_sentences tensors, each of shape [None, self.sequence_length/num_sentences]. Then `input_x = tf.stack(input_x, axis=1)` has shape [None, num_sentences, self.sequence_length/num_sentences]. So shouldn't `self.embedded_words = tf.nn.embedding_lookup(self.Embedding, input_x)` have shape [None, self.num_sentences, self.sequence_length/num_sentences, self.embed_size]? Why does the comment say [None, num_sentences, sentence_length, embed_size]?
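The shape arithmetic above can be checked with a small NumPy analogue of the split/stack/lookup sequence. This is just a sketch with made-up sizes (batch 2, sequence length 6, 3 sentences, embedding dim 4), not the model's actual code; `np.split`/`np.stack` mirror `tf.split`/`tf.stack`, and fancy indexing into the embedding matrix mirrors `tf.nn.embedding_lookup`:

```python
import numpy as np

# Hypothetical sizes for illustration only
batch_size, sequence_length, num_sentences = 2, 6, 3
vocab_size, embed_size = 10, 4
sentence_length = sequence_length // num_sentences  # 2

input_x = np.random.randint(0, vocab_size, size=(batch_size, sequence_length))

# split -> num_sentences arrays, each [batch, sequence_length/num_sentences]
parts = np.split(input_x, num_sentences, axis=1)
assert parts[0].shape == (batch_size, sentence_length)

# stack -> [batch, num_sentences, sequence_length/num_sentences]
stacked = np.stack(parts, axis=1)
assert stacked.shape == (batch_size, num_sentences, sentence_length)

# embedding lookup -> appends embed_size as the last axis
embedding = np.random.randn(vocab_size, embed_size)
embedded_words = embedding[stacked]
print(embedded_words.shape)  # (2, 3, 2, 4)
```

The final shape is [batch, num_sentences, sequence_length/num_sentences, embed_size], which matches the questioner's derivation; the code comment's `sentence_length` only agrees if it is understood to mean sequence_length/num_sentences.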

brightmart commented 5 years ago

You are right; the comment may not be correct.

searchlink commented 5 years ago

Thanks!