graykode / nlp-tutorial

Natural Language Processing Tutorial for Deep Learning Researchers
https://www.reddit.com/r/MachineLearning/comments/amfinl/project_nlptutoral_repository_who_is_studying/
MIT License
14.29k stars · 3.95k forks

TextCNN_Torch has a wrong comment #44

Open jnakor opened 4 years ago

jnakor commented 4 years ago

```python
def forward(self, X):
    embedded_chars = self.W[X]  # [batch_size, sequence_length, sequence_length]
```

I think the shape is [batch_size, sequence_length, embedding_size].
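This is easy to check in isolation. Below is a minimal sketch (with made-up dimensions, not the repository's actual hyperparameters) showing that indexing an embedding matrix `W` of shape `[vocab_size, embedding_size]` with an id tensor `X` of shape `[batch_size, sequence_length]` yields `[batch_size, sequence_length, embedding_size]`:

```python
import torch

# Illustrative dimensions; names mirror the snippet above, values are assumptions.
batch_size, sequence_length, vocab_size, embedding_size = 2, 5, 10, 4

W = torch.randn(vocab_size, embedding_size)                      # embedding matrix
X = torch.randint(0, vocab_size, (batch_size, sequence_length))  # token ids

embedded_chars = W[X]  # each id selects a row of W
print(embedded_chars.shape)  # torch.Size([2, 5, 4]) = [batch_size, sequence_length, embedding_size]
```

So the trailing dimension is `embedding_size`, confirming the comment should read `[batch_size, sequence_length, embedding_size]`.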

endeavor11 commented 4 years ago

Yes, I think so.

Yuhuishishishi commented 4 years ago

Filed PR #49

AgaigetS commented 4 years ago

Can somebody tell me why we need three conv layers to convolve the word embedding matrix? I don't understand.