Open chenghuige opened 6 years ago
Sure, I will merge the hierarchical pooling encoder into the model.py file soon.
+1 Very interested to see it :)
is there any progress on this issue?
Any progress? Thanks @dinghanshen
still looking forward to this. thanks @dinghanshen
Please refer to level-mean-max for hierarchical pooling: https://github.com/hanxiao/tf-nlp-blocks/blob/8f14a864a66f976857adc04a5f3f0797dd877731/nlp/pool_blocks.py#L26
It's part of a bigger project called tf-nlp-blocks.
Still looking forward to this, thanks @dinghanshen. Alternatively, could you tell us what stride is used when the local window size is 5?
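While we wait for the official code, here is a minimal NumPy sketch of hierarchical pooling as the paper describes it (local average pooling over a sliding window, followed by global max pooling over the window averages). The function name, the stride of 1, and the fallback for short sequences are my assumptions, not the authors' implementation:

```python
import numpy as np

def swem_hier(embeddings, window=5):
    """Hierarchical pooling sketch (assumed stride 1).

    embeddings: (seq_len, emb_dim) array of word vectors.
    Returns a single (emb_dim,) sentence vector.
    """
    seq_len, _ = embeddings.shape
    if seq_len <= window:
        # Assumed fallback: plain averaging when the text is
        # shorter than one window.
        return embeddings.mean(axis=0)
    # Local average pooling: one averaged vector per window position.
    windows = np.stack([
        embeddings[i:i + window].mean(axis=0)
        for i in range(seq_len - window + 1)
    ])
    # Global max pooling over the window averages, per dimension.
    return windows.max(axis=0)
```

With window=5 and stride 1, an 8-word text yields 4 overlapping window averages, and the element-wise max over those 4 vectors is the sentence representation.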
Reading through the paper, I couldn't find which word2vec embeddings the other models (such as LSTM, CNN) use. It is surprising that SWEM-ave achieves better results than an LSTM or CNN on some tasks, which frankly I don't believe. I have worked on many NLP tasks and know that simply averaging the word embeddings of a text usually performs poorly. I don't think the comparisons with the other models are fair: they don't even use the same pretrained word2vec, so maybe the GloVe embeddings you used are just better than the embeddings the other models used.
Hi, the author gave me the SWEM-hier embedding, but I have not re-run it; I am also puzzled that such a simple operation can achieve such good performance. However, our group recently finished some experiments, and such simple operations can indeed achieve comparable performance. If you don't believe this result, you can either disregard the paper or re-run it yourself to see whether you are right. Best regards,
Hi, could you share the code with me? Thanks.
Still looking forward to this. thanks
Thank you for your email.
Hi, it seems the hier encoder mentioned in the paper is not in the repo. Very interested to see it :)