kundtx opened this issue 1 year ago
G1 Haizhou Liu: Very good job! May I ask, though, have you found any physical interpretation as to why lower dimensions and un-pretrained embeddings lead to higher accuracy scores?
G29 Yuyan Wang: Excellent project! May I ask what the difference in the embedding layer is when using pretrained versus un-pretrained word embeddings?
G25 Citong Que: @Prof-Greatfellow From my experience, un-pretrained embeddings can perform better when the corpus (vocabulary size) is not very large. As for why a lower embedding dimension could be better, I think it is also related to the size of the training set and the length of each sentence. Honestly, I don't have a fully convincing interpretation yet.
G25 Citong Que: @yuyan12138 When using un-pretrained word embeddings, the embedding layer is part of the model and its weights are updated during training. When using pretrained word embeddings, the embedding layer only holds fixed word representations and is excluded from training.
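To make the distinction concrete, here is a minimal sketch in PyTorch (the framework, the vocabulary size of 10,000, and the random stand-in vectors are illustrative assumptions, not details taken from the project):

```python
import torch
import torch.nn as nn

# Un-pretrained: the embedding weights are randomly initialized and
# updated by the optimizer together with the rest of the model.
trainable_emb = nn.Embedding(num_embeddings=10_000, embedding_dim=100)

# Pretrained: the weights are loaded from an external matrix and frozen,
# so the layer only serves fixed lookups and is excluded from training.
# (This random tensor is a stand-in for real GloVe/word2vec vectors.)
pretrained_vectors = torch.randn(10_000, 100)
frozen_emb = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)

print(trainable_emb.weight.requires_grad)  # True  -> learned with the model
print(frozen_emb.weight.requires_grad)     # False -> held fixed
```

Passing freeze=False instead would let the pretrained vectors be fine-tuned during training, a middle ground between the two setups described above.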
G1 Zhisen Jiang: Have you tested the performance of different models? I think some more complicated models may perform better.
http://8.129.175.102/lfd2022fall-poster-session/25.html