hunkim / word-rnn-tensorflow

Multi-layer Recurrent Neural Networks (LSTM, RNN) for word-level language models in Python using TensorFlow.
MIT License

Why is args.vocab_size passed in sequence_loss_by_example? #52

Open chiphuyen opened 7 years ago

chiphuyen commented 7 years ago

I looked up the documentation for sequence_loss_by_example and it doesn't seem to take vocab_size as an argument. I'd really appreciate it if you could help me understand what this argument is doing. Thanks a lot!

```python
loss = seq2seq.sequence_loss_by_example(
    [self.logits],
    [tf.reshape(self.targets, [-1])],
    [tf.ones([args.batch_size * args.seq_length])],
    args.vocab_size)
```

hksonngan commented 7 years ago

TensorFlow versions before 1.0 had an extra argument on this function, num_decoder_symbols: Integer, number of decoder symbols (output classes). That is what args.vocab_size is being passed as; later versions dropped the argument.
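For readers unsure what the function computes regardless of that extra argument: it returns a weighted cross-entropy per batch example, averaged over timesteps. Below is a plain-NumPy sketch of that computation (not TensorFlow's actual implementation — the function name and argument layout here just mirror the legacy API for illustration; note the vocabulary size is already implied by the last dimension of the logits):

```python
import numpy as np

def sequence_loss_by_example(logits, targets, weights):
    """Sketch of the legacy seq2seq.sequence_loss_by_example semantics.

    logits:  list (one per timestep) of [batch, vocab] float arrays
    targets: list (one per timestep) of [batch] int arrays
    weights: list (one per timestep) of [batch] float arrays
    Returns a [batch] array: weight-averaged cross-entropy per example.
    """
    batch = logits[0].shape[0]
    log_perps = np.zeros(batch)
    total_weight = np.zeros(batch)
    for logit, target, weight in zip(logits, targets, weights):
        # Numerically stable log-softmax over the vocabulary axis.
        shifted = logit - logit.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        # Cross-entropy of the true target word at this timestep.
        xent = -log_probs[np.arange(batch), target]
        log_perps += xent * weight
        total_weight += weight
    # Average across timesteps, guarding against all-zero weights.
    return log_perps / np.maximum(total_weight, 1e-12)
```

With `[tf.ones([batch_size * seq_length])]` as the weights, as in the question, every timestep counts equally, so this reduces to the mean cross-entropy of each example's word predictions.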