abaheti95 / Deep-Learning

I'll put all the valuable tutorials and starter codes of deep learning here.

Why G.vocab_size+3 but not G.vocab_size #4

Open lmd1993 opened 5 years ago

lmd1993 commented 5 years ago

Hi, as far as I can see, the embedding layer's input_dim should be vocab_size. But in cbow_model.py (Keras) it is:

shared_embedding_layer = Embedding(input_dim=(G.vocab_size+3), output_dim=G.embedding_dimension, weights=[embedding])

Where does the 3 come from?

lmd1993 commented 5 years ago

I think +1 is enough. I tried it and it worked. I believe this is because the vocabulary indices run from 1 to vocab_size, plus one extra label, so input_dim = vocab_size + 1 covers them.
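To see why input_dim must exceed the largest token index, an embedding layer can be viewed as a lookup table of shape (input_dim, output_dim), where row i is the vector for token index i. The sketch below uses plain NumPy with made-up sizes (not values from the repo) to show that indices 1..vocab_size require vocab_size + 1 rows:

```python
import numpy as np

# Hypothetical sizes for illustration only (not from cbow_model.py).
vocab_size = 5            # token indices are assigned from 1 to vocab_size
embedding_dimension = 4

# If index 0 is reserved (e.g. for padding) and real tokens use 1..vocab_size,
# the table needs vocab_size + 1 rows so that index vocab_size is in range.
embedding = np.random.rand(vocab_size + 1, embedding_dimension)

tokens = np.array([1, 3, vocab_size])   # highest valid index is vocab_size
vectors = embedding[tokens]             # row lookup, shape (3, 4)
print(vectors.shape)

# With only vocab_size rows, index vocab_size would be out of range:
too_small = np.random.rand(vocab_size, embedding_dimension)
try:
    too_small[tokens]
except IndexError:
    print("IndexError: index", vocab_size, "is out of bounds")
```

Keras enforces the same constraint: Embedding(input_dim=N, ...) accepts only integer inputs in [0, N), so input_dim must be at least max_index + 1.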