kundtx / lfd2022-comments


Learning from Data (Fall 2022) #37

Open kundtx opened 1 year ago

kundtx commented 1 year ago

http://8.129.175.102/lfd2022fall-poster-session/25.html

Prof-Greatfellow commented 1 year ago

G1 Haizhou Liu: A very good job! May I ask, though, have you found any physical interpretation as to why lower dimensions and un-pretrained embeddings lead to higher accuracy scores?

yuyan12138 commented 1 year ago

G29 Yuyan Wang: Excellent project! May I ask what the difference is in the embedding layer when using pretrained versus un-pretrained word embeddings?

min108 commented 1 year ago

@Prof-Greatfellow G1 Haizhou Liu: A very good job! May I ask, though, have you found any physical interpretation as to why lower dimensions and un-pretrained embeddings lead to higher accuracy scores?

G25 Citong Que: From my experience, un-pretrained embeddings can perform better when the corpus (the size of the vocabulary) is not very large. As for why a lower embedding dimension could be better, I think it is also related to the size of the training set or the length of each sentence. To be honest, I don't have a very convincing interpretation for it.

min108 commented 1 year ago

@yuyan12138 G29 Yuyan Wang: Excellent project! May I ask what the difference is in the embedding layer when using pretrained versus un-pretrained word embeddings?

G25 Citong Que: When using un-pretrained word embeddings, the embedding layer is part of the model and its weights are updated during training. When using pretrained word embeddings, the embedding layer only holds the fixed word representations and is not updated during training.
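
A minimal sketch of the two setups in PyTorch, just to illustrate the distinction (the vocabulary size, dimension, and random stand-in vectors below are hypothetical, not from the poster's actual code):

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only
vocab_size, embed_dim = 10_000, 100

# Un-pretrained: weights are randomly initialized and updated by the optimizer
trainable_emb = nn.Embedding(vocab_size, embed_dim)
print(trainable_emb.weight.requires_grad)  # True -> trained with the model

# Pretrained: load fixed vectors (random here as a stand-in for e.g. GloVe)
# and freeze them so they are excluded from gradient updates
pretrained_vectors = torch.randn(vocab_size, embed_dim)
frozen_emb = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
print(frozen_emb.weight.requires_grad)  # False -> held fixed during training
```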

TimberJ99 commented 1 year ago

G1 Zhisen Jiang: Have you tested different models' performance? I think some more complicated models may perform better.