Hi,

I am reading your book, and it's great so far!

In the movie_embedding_model function, the embedding layer uses input_dim set to len(top_links) (and len(movie_to_idx) for the movie embedding). How is this input dimension used? What does it do?

In each batch we send a tuple of ({linkids, movieids}, label). In the book you mention that the embedding layer "allocates a vector of embedding_size for each possible input." I understand this to mean that a vector of length embedding_size is allocated for each possible ID that can appear in a batch. Where does input_dim come into this?

Follow-up: I now think input_dim is len(top_links), i.e. one entry per possible ID in a batch, while the vector allocated for each ID has length embedding_size (the output dimension). Can embedding_size be the same as input_dim?

Do correct me if I am thinking wrong.

Thanks, Anne.
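To make sure I am describing my mental model clearly, here is a tiny plain-Python sketch of what I understand the embedding layer to do. This is not the book's Keras code; the names make_embedding and lookup are mine, and the random initialization is just a stand-in for learned weights:

```python
import random

def make_embedding(input_dim, embedding_size, seed=0):
    # The "layer" is a lookup table of shape (input_dim, embedding_size):
    # input_dim rows (one per possible ID, e.g. len(movie_to_idx)),
    # each row a vector of length embedding_size.
    rng = random.Random(seed)
    return [[rng.uniform(-0.05, 0.05) for _ in range(embedding_size)]
            for _ in range(input_dim)]

def lookup(table, ids):
    # Each incoming ID selects one row, so a batch of IDs
    # becomes a batch of embedding_size vectors.
    return [table[i] for i in ids]

table = make_embedding(input_dim=10000, embedding_size=50)
batch = lookup(table, [3, 17, 3])       # e.g. movie IDs from one batch
print(len(table), len(table[0]))        # 10000 50
print(len(batch), len(batch[0]))        # 3 50
```

If this picture is right, then input_dim only determines how many rows the table has (how many distinct IDs it can represent), and nothing would stop embedding_size from numerically equaling input_dim, even if that is not useful in practice.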