huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Results are not good when I use BERT for a seq2seq model in keyphrase generation #59

Closed whqwill closed 5 years ago

whqwill commented 5 years ago

Hi,

Recently, I have been researching keyphrase generation. Usually, people use a seq2seq model with attention to deal with this problem. Specifically, I use the framework https://github.com/memray/seq2seq-keyphrase-pytorch, which is an implementation of http://memray.me/uploads/acl17-keyphrase-generation.pdf .

Now I have just changed its encoder to BERT, but the results are not good. An experimental comparison of the two models is in the attachment.

Can you give me some advice on whether what I did is reasonable and whether BERT is suitable for this task?

Thanks.

Attachment: RNN vs BERT in Keyphrase generation.pdf
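(For concreteness, here is a minimal sketch of the kind of encoder swap being discussed, written against this repo's pytorch-pretrained-BERT API. The names `BertSeq2SeqEncoder` and `decoder_hidden_size` are hypothetical, and this is not the code behind the attached experiments.)

```python
# Minimal sketch, not the actual experiment code: use BERT as the encoder of a
# seq2seq keyphrase model and turn its output into an initial decoder state.
import torch
import torch.nn as nn
from pytorch_pretrained_bert import BertModel

class BertSeq2SeqEncoder(nn.Module):
    def __init__(self, decoder_hidden_size=300):  # hypothetical decoder size
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        # bridge from BERT's hidden size (768) to the decoder's hidden size
        self.bridge = nn.Linear(self.bert.config.hidden_size, decoder_hidden_size)

    def forward(self, input_ids, attention_mask):
        # last encoder layer only: (batch, seq_len, 768)
        encoded, _ = self.bert(input_ids,
                               attention_mask=attention_mask,
                               output_all_encoded_layers=False)
        # take the last non-padded token of each sequence, as in the setup
        # discussed in this thread (see later comments on mean pooling instead)
        last_idx = attention_mask.long().sum(dim=1) - 1
        last_token = encoded[torch.arange(input_ids.size(0)), last_idx]
        # (1, batch, decoder_hidden_size) initial hidden state for an RNN decoder
        init_hidden = torch.tanh(self.bridge(last_token)).unsqueeze(0)
        return encoded, init_hidden
```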

waynedane commented 5 years ago

Have you tried a Transformer decoder instead of an RNN decoder?

whqwill commented 5 years ago

Not yet, I will try. But I think the RNN decoder should not be that bad.

waynedane commented 5 years ago

Not yet, I will try. But I think the RNN decoder should not be that bad.

Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer. I am also very curious about the results of using a Transformer decoder. If you finish, can you tell me? Thank you.
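(A rough sketch of that suggestion, assuming the per-token BERT outputs and attention mask from the encoder sketch above; this is illustrative, not the commenter's code.)

```python
# Masked mean over BERT's last layer, to use as the decoder's initial state
# instead of a single token vector.
import torch

def masked_mean(encoded, attention_mask):
    # encoded: (batch, seq_len, hidden); attention_mask: (batch, seq_len), 1 = real token
    mask = attention_mask.unsqueeze(-1).float()       # (batch, seq_len, 1)
    summed = (encoded * mask).sum(dim=1)              # padding contributes nothing
    counts = torch.clamp(mask.sum(dim=1), min=1e-9)   # number of real tokens
    return summed / counts                            # (batch, hidden)
```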

waynedane commented 5 years ago

I think the batch size of the RNN with BERT is too small. Please see:

https://github.com/memray/seq2seq-keyphrase-pytorch/blob/master/pykp/dataloader.py, lines 377-378

whqwill commented 5 years ago

I don't know what you mean by giving me this link. I set it to 10 precisely because of the memory problem. Actually, when the sentence length is 512, the max batch size is only 5; if it is 6 or larger there is an out-of-memory error on my GPU.

whqwill commented 5 years ago

Not yet, I will try. But I think the RNN decoder should not be that bad.

Hmm, maybe you should use the mean of the last layer to initialize the decoder, not the last token's representation from the last layer. I am also very curious about the results of using a Transformer decoder. If you finish, can you tell me? Thank you.

You are right. Maybe the mean is better; I will try that as well. Thanks.

waynedane commented 5 years ago

May I ask a question? Are you Chinese? 23333

waynedane commented 5 years ago

Because one example has N targets, and we want to put all of its targets in the same batch. A batch size of 10 is so small that the targets of one example will probably end up in different batches.
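(An illustrative sketch of that batching constraint, not the actual dataloader at the link above: all (source, keyphrase) pairs coming from the same document are kept in one batch, which a too-small batch size cannot guarantee.)

```python
# Group pre-sorted (doc_id, src, trg) pairs so that all targets of one
# document land in the same batch. A document with more keyphrases than
# batch_size will still overflow -- which is exactly the problem with a
# batch size of 10.
from itertools import groupby

def batches_keeping_documents_together(pairs, batch_size):
    batch = []
    for _, group in groupby(pairs, key=lambda p: p[0]):
        group = list(group)
        if batch and len(batch) + len(group) > batch_size:
            yield batch
            batch = []
        batch.extend(group)
    if batch:
        yield batch
```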

whqwill commented 5 years ago

I know, but ... the same problem ... my memory is limited .. so ...

PS. I am Chinese

waynedane commented 5 years ago

I know, but ... the same problem ... my memory is limited .. so ...

PS. I am Chinese

I am as well, hahaha.

waynedane commented 5 years ago

Could it be a corpus issue? BERT was trained on Wikipedia. I trained a mini BERT on kp20k; its accuracy on the test set is currently 80%. Do you want to try using mine as the encoder?

whqwill commented 5 years ago

What exactly is this 80% figure that is so high? F1 score? Could you send me your encoder so I can take a look? Thanks.

waynedane notifications@github.com wrote on Wednesday, November 28, 2018, at 11:14 PM:

Could it be a corpus issue? BERT was trained on Wikipedia. I trained a mini BERT on kp20k; its accuracy on the test set is currently 80%. Do you want to try using mine as the encoder?


waynedane commented 5 years ago

The accuracy is for the masked LM and next-sentence prediction tasks, not keyphrase generation; I didn't make that clear, sorry. My compute is limited, two P100s; it has been almost a month and training still hasn't finished. 80% is the current performance.

whqwill commented 5 years ago

What do you mean by the mini BERT you mentioned?

whqwill commented 5 years ago

I roughly understand what you mean now: you essentially re-pretrained a BERT on kp20k. But doing it that way... it does feel rather cumbersome.

waynedane commented 5 years ago

I roughly understand what you mean now: you essentially re-pretrained a BERT on kp20k. But doing it that way... it does feel rather cumbersome.

Yes. I used Junseong Kim's code: https://github.com/codertimo/BERT-pytorch . The model is much smaller than Google's BERT-Base Uncased; this one is L-8 H-256 A-8. I'll send you the current training checkpoint and the vocab file.
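(For reference, a configuration of roughly that size expressed with this repo's BertConfig. The vocab size and intermediate size below are placeholders, and this is not the codertimo/BERT-pytorch code actually used for the checkpoint.)

```python
# An L-8, H-256, A-8 model in pytorch-pretrained-BERT terms.
from pytorch_pretrained_bert import BertConfig, BertModel

mini_config = BertConfig(
    vocab_size_or_config_json_file=30522,  # placeholder; use the kp20k vocab size
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=1024,                # assumed 4 * hidden_size
)
mini_bert = BertModel(mini_config)
```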

whqwill commented 5 years ago

But can this version of mine use your checkpoint directly, or do I have to install your version of the code?

whqwill commented 5 years ago

You can send it to my email whqwill@126.com, thanks.

waynedane commented 5 years ago

But can this version of mine use your checkpoint directly, or do I have to install your version of the code?

You can build a BERT model based on Junseong Kim's code and then load the parameters; you don't necessarily have to install it.

whqwill commented 5 years ago

OK then. Send me the checkpoint and I'll try it.

thomwolf commented 5 years ago

Hi guys, I would like to keep the issues of this repository focused on the package itself. I also think it's better to keep the conversation in English so everybody can participate. Please move this conversation to your repository, https://github.com/memray/seq2seq-keyphrase-pytorch, or to email. Thanks, I am closing this discussion. Best,

InsaneLife commented 5 years ago

The accuracy is for the masked LM and next-sentence prediction tasks, not keyphrase generation; I didn't make that clear, sorry. My compute is limited, two P100s; it has been almost a month and training still hasn't finished. 80% is the current performance.

Hello, could you send me the mini model as well? 993001803@qq.com. Thanks a lot.

Accagain2014 commented 5 years ago

Hi @whqwill, I have some doubts about the way BERT is used with the RNN. In the BERT-with-RNN method, I see you only take the last token's representation (I mean TN's) as the input to the RNN decoder. Why not use the other tokens' representations, like T1 to TN-1? I think the last token carries too little information to represent the whole context.
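(One standard way to use T1..TN rather than only TN, for anyone reading along: let the decoder attend over all of BERT's token representations at every step. The sketch below is generic dot-product attention with hypothetical sizes, not code from this thread.)

```python
# Attention of a decoder state over all encoder token representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EncoderAttention(nn.Module):
    def __init__(self, enc_size=768, dec_size=300):   # dec_size is hypothetical
        super().__init__()
        self.proj = nn.Linear(dec_size, enc_size, bias=False)

    def forward(self, dec_state, enc_outputs, attention_mask):
        # dec_state: (batch, dec_size); enc_outputs: (batch, seq_len, enc_size)
        scores = torch.bmm(enc_outputs, self.proj(dec_state).unsqueeze(2)).squeeze(2)
        scores = scores.masked_fill(attention_mask == 0, float('-inf'))
        weights = F.softmax(scores, dim=1)             # (batch, seq_len)
        # context vector summarizing T1..TN, weighted by relevance to the decoder state
        context = torch.bmm(weights.unsqueeze(1), enc_outputs).squeeze(1)
        return context, weights
```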