dinghanshen / SWEM

The Tensorflow code for this ACL 2018 paper: "Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms"

Cannot find SWEM-hier #2

chenghuige opened this issue 6 years ago (Open)

chenghuige commented 6 years ago

Hi, I can't seem to find the hierarchical pooling encoder mentioned in the paper. Very interested to see it :)

dinghanshen commented 6 years ago

Sure, I will merge the hierarchical pooling encoder into the model.py file soon.

cuteapi commented 6 years ago

+1 Very interested to see it :)

ariwaranosai commented 6 years ago

Is there any progress on this issue?

hanhao0125 commented 6 years ago

Any progress? Thanks @dinghanshen

OliverKehl commented 6 years ago

still looking forward to this. thanks @dinghanshen

pemywei commented 6 years ago

still looking forward to this. thanks @dinghanshen

ericxsun commented 6 years ago

Still looking forward to this. thanks @dinghanshen

qichaotang commented 6 years ago

Still looking forward to this. thanks @dinghanshen

LittleSummer114 commented 6 years ago

Still looking forward to this. thanks @dinghanshen

hanxiao commented 6 years ago

Please refer to the level-mean-max pooling for hierarchical pooling: https://github.com/hanxiao/tf-nlp-blocks/blob/8f14a864a66f976857adc04a5f3f0797dd877731/nlp/pool_blocks.py#L26

It's part of a bigger project called tf-nlp-blocks.
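A minimal TensorFlow 1.x sketch of the hierarchical pooling idea (average over a local window of word vectors, then an element-wise max over the window representations). This is not the authors' implementation; the window size of 5 and stride of 1 are assumptions for illustration:

```python
import tensorflow as tf

def swem_hier(embeddings, window_size=5, stride=1):
    """Hierarchical pooling sketch (not the authors' code).

    embeddings: float tensor of shape [batch, seq_len, emb_dim].
    Averages each local window of `window_size` consecutive word vectors,
    then takes an element-wise max over the resulting window vectors.
    """
    # Local average pooling over sliding windows along the sequence axis.
    local_avg = tf.layers.average_pooling1d(
        embeddings, pool_size=window_size, strides=stride, padding='valid')
    # Global max pooling over the window representations.
    return tf.reduce_max(local_avg, axis=1)  # [batch, emb_dim]
```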

windpls commented 5 years ago

Still looking forward to this, thanks @dinghanshen. Also, could you tell us what stride is used when the local window size is set to 5?

beyondguo commented 5 years ago

Reading through the paper, I couldn't find which pretrained word embeddings the other models (such as LSTM and CNN) use. It is surprising that SWEM-ave can achieve better results than an LSTM or CNN on some tasks, which frankly I don't believe. I have done a lot of NLP tasks, and simply averaging the word embeddings of a text usually performs very poorly. I don't think the comparisons with the other models are fair: they don't even use the same pretrained word vectors. Maybe the GloVe embeddings you used are simply better than the embeddings the other models used.

LittleSummer114 commented 5 years ago

Hi, the author sent me the SWEM-hier implementation, but I have not re-run it. I am also confused about how such a simple operation can achieve such good performance. However, our group recently finished some experiments, and simple operations can indeed achieve comparable performance. If you don't believe the result, you can either disregard this paper or re-run it yourself to see whether you are right. Best regards,
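For readers unsure what the "simple operations" under discussion are: SWEM-ave and SWEM-max are just an element-wise mean and max over a sentence's word-embedding matrix. A minimal TensorFlow sketch, not taken from this repo; the padding-mask convention is an assumption for illustration:

```python
import tensorflow as tf

def swem_ave(embeddings, mask):
    """Mean of the word vectors, ignoring padded positions.

    embeddings: [batch, seq_len, emb_dim]; mask: [batch, seq_len] with
    1.0 for real tokens and 0.0 for padding.
    """
    mask = tf.expand_dims(mask, -1)
    summed = tf.reduce_sum(embeddings * mask, axis=1)
    counts = tf.maximum(tf.reduce_sum(mask, axis=1), 1.0)
    return summed / counts  # [batch, emb_dim]

def swem_max(embeddings, mask):
    """Element-wise max over word vectors, with padded positions suppressed."""
    mask = tf.expand_dims(mask, -1)
    return tf.reduce_max(embeddings * mask + (1.0 - mask) * -1e9, axis=1)
```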

JayYip commented 4 years ago

Hi @LittleSummer114, could you share the code with me? Thanks.

LLIKKE commented 9 months ago

Still looking forward to this. thanks

LittleSummer114 commented 9 months ago

Thank you for your email.