ajamjoom / Image-Captions

BERT + Image Captioning

Get word embedding #4

Open arcobaleno1996 opened 4 years ago

arcobaleno1996 commented 4 years ago

I found that when getting the word embeddings, the embedding matrix's size becomes (batch_size, max_length+1, embedding_dim): the [CLS] token's embedding is included in the matrix. Can I drop it by changing the stacking of token embeddings to `cap_embedding = torch.stack(tokens_embedding[1:])`?
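For context, here is a minimal sketch of what slicing off the [CLS] embedding might look like. This is an assumption about the repo's pattern of collecting per-token BERT embeddings into a list before stacking; the names `tokens_embedding` and `cap_embedding` come from the question, everything else (HuggingFace `transformers` usage, `bert-base-uncased`) is illustrative:

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative setup, not the repo's actual loading code
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

caption = "a dog runs across the field"
inputs = tokenizer(caption, return_tensors="pt")  # adds [CLS] ... [SEP]

with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state  # (1, seq_len, 768)

# Per-token embeddings as a list, mirroring the issue's tokens_embedding
tokens_embedding = list(hidden[0])

# torch.stack(tokens_embedding) keeps [CLS] at position 0, which is what
# produces the extra max_length+1 row. Slicing from index 1 drops it:
cap_embedding = torch.stack(tokens_embedding[1:])  # (seq_len - 1, 768)
```

Note that if the sequence also ends with a [SEP] token, slicing `tokens_embedding[1:-1]` would drop both special tokens and leave only the caption words.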

IshanX111 commented 1 year ago

Did you find anything inside the checkpoint folder? If you did, please help me out.