Open Heart-beatsss opened 2 years ago
Hi, thanks for your comment. All previous baselines used a BERT-based model for a single task, such as aspect term extraction or polarity prediction. We are the first to reformulate this task as language generation, and we achieved very good few-shot performance results, along with multi-tasking. There is extensive research on how to get the best few-shot performance from unidirectional and bidirectional models. I hope this answers your question.
Thanks for your reply, but I'm sorry, I disagree with you.
Sorry, I don't see your contribution. Is it just reformulating the tasks and proving that GPT-2 is powerful?