google-research / xmcgan_image_generation


Question about the sentence embedding. #3

Closed (SUNJIMENG closed this issue 2 years ago)

SUNJIMENG commented 3 years ago

Excuse me, in your code the sentence embedding is computed by averaging the word embeddings over the `word_num` dimension. However, in BERT the output corresponding to the `[CLS]` token can represent the whole sentence. Why not use that as the sentence embedding? Which one performs better? A minimal sketch of the two pooling strategies is shown below.
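For reference, here is a minimal sketch of the two strategies being compared, written with the HuggingFace `transformers` library for illustration only; it is not this repo's encoder code (which is implemented in JAX), and the `bert-base-uncased` checkpoint is an assumption:

```python
import torch
from transformers import BertModel, BertTokenizer

# Illustrative only: this is not the repo's encoder code.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

inputs = tokenizer("A man riding a bicycle down a street.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq_len, 768)

# Strategy 1: mean-pool the token embeddings, masking out padding so the
# average stays correct for batched, padded input.
mask = inputs["attention_mask"].unsqueeze(-1).float()  # (batch, seq_len, 1)
mean_emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Strategy 2: take the hidden state of the [CLS] token at position 0.
cls_emb = hidden[:, 0, :]
```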

kohjingyu commented 2 years ago

Sorry for the late reply. In our experiments we found no difference: using either one produces similar results.