shyamupa / snli-entailment

attention model for entailment on SNLI corpus implemented in TensorFlow and Keras

doubt in output_shape of lambda layer #3

Closed shwetgarg closed 8 years ago

shwetgarg commented 8 years ago

I am a bit confused about the dimensions. As per my understanding from here, you get an output of dimension (nb_samples, timesteps, 2 * opts.lstm_units). Then in this line you choose the last element along the time dimension, which should result in an output of dimension (nb_samples, 2 * opts.lstm_units), but you specified output_shape=(k,).

It would be really nice if you could help me understand this. Thanks

shyamupa commented 8 years ago

Copying my reply from email: I think in Keras, whenever you specify the output shape for a layer, you omit the "batch" dimension. So a softmax layer declared with an output over 300 classes actually produces tensors of shape (NB, 300). Hope that makes things easier to understand.

Personally, I feel this makes things a little hard to follow, but that is how Keras is designed (as far as I know).
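The shape arithmetic above can be sketched without Keras at all. This is a minimal NumPy illustration (the sizes nb_samples, timesteps, and lstm_units are made up for the example): selecting the last timestep from a (nb_samples, timesteps, 2 * lstm_units) tensor yields (nb_samples, 2 * lstm_units), and Keras's output_shape=(k,) describes exactly that result with the batch dimension left implicit.

```python
import numpy as np

# Hypothetical sizes for illustration: batch, time steps, LSTM units.
nb_samples, timesteps, lstm_units = 4, 10, 150
k = 2 * lstm_units  # concatenated forward + backward states

# Stand-in for the bidirectional LSTM output with return_sequences=True:
# shape (nb_samples, timesteps, 2 * lstm_units).
lstm_out = np.zeros((nb_samples, timesteps, k))

# What the Lambda layer does: take the last element on the time axis.
last_step = lstm_out[:, -1, :]

# The per-sample result has shape (k,); Keras's output_shape=(k,)
# omits the leading batch dimension, which is always implicit.
print(last_step.shape)  # (4, 300)
```

So output_shape=(k,) and a full tensor shape of (nb_samples, k) are describing the same layer, just with and without the batch axis.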