galsang / BiDAF-pytorch

Re-implementation of BiDAF (Bidirectional Attention Flow for Machine Comprehension, Minjoon Seo et al., ICLR 2017) in PyTorch.

Why change the char_dim and word_len dimensions and then use Conv2d? #6

Closed: VectorChanger0 closed this issue 5 years ago

VectorChanger0 commented 5 years ago

Around lines 82–88 in model.py:

```python
# (batch * seq_len, 1, char_dim, word_len)
x = x.view(-1, self.args.char_dim, x.size(2)).unsqueeze(1)
# (batch * seq_len, char_channel_size, 1, conv_len) -> (batch * seq_len, char_channel_size, conv_len)
x = self.char_conv(x).squeeze()
```

Why do the dimensions need to be reshaped first? Why not use Conv1d directly?

galsang commented 5 years ago

There's no particular reason for using Conv2d instead of Conv1d... You can, of course, use Conv1d if you want.
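
For anyone landing here later, here is a minimal sketch of the Conv1d variant (the sizes are made up for illustration, not taken from the repo's config). Because the Conv2d kernel height equals char_dim, it spans the whole embedding axis and the output height collapses to 1, so it computes exactly what a Conv1d with char_dim input channels does over word_len. Sharing the weights makes the equivalence checkable:

```python
import torch
import torch.nn as nn

# Illustrative sizes only -- not the repo's actual config.
batch_seq, char_dim, word_len = 32, 8, 16      # rows of (batch * seq_len) char embeddings
channel_size, channel_width = 100, 5           # like char_channel_size / width in the repo

x = torch.randn(batch_seq, char_dim, word_len)

# Repo-style Conv2d: kernel height equals char_dim, so the output height is 1.
conv2d = nn.Conv2d(1, channel_size, (char_dim, channel_width))
y2d = conv2d(x.unsqueeze(1)).squeeze(2)        # (batch_seq, channel_size, conv_len)

# Equivalent Conv1d: treat char_dim as input channels, slide over word_len.
conv1d = nn.Conv1d(char_dim, channel_size, channel_width)
with torch.no_grad():                          # share weights to show the two match
    conv1d.weight.copy_(conv2d.weight.squeeze(1))  # (C, 1, H, W) -> (C, H, W)
    conv1d.bias.copy_(conv2d.bias)
y1d = conv1d(x)                                # (batch_seq, channel_size, conv_len)

assert y2d.shape == y1d.shape == (32, 100, 12)
assert torch.allclose(y2d, y1d, atol=1e-6)
```

So the view/unsqueeze dance in model.py exists only to fit Conv2d's (N, C, H, W) input layout; with Conv1d the reshape to (batch * seq_len, char_dim, word_len) would suffice on its own.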