shenwzh3 / RGAT-ABSA


There could be a severe error in your code #15

Closed · coffee-white closed this issue 3 years ago

coffee-white commented 3 years ago

I have carefully read through the part of the code you provide and compared it with the model design described in your paper, and I have found a relatively serious problem. I noticed that at line 77 of model.py you define a mask named dmask, but this mask is never used in the forward() function of the model Aspect_Text_GAT_ours. After comparing the argument names of the RelationAttention class, I suspect that at line 99 you wrote fmask where dmask was intended. If the masks really are mixed up, I believe the model's accuracy would drop substantially, and relation attention might even become noise during training. I would recommend fixing this error and re-testing your model.
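For readers following along, the two masks in question behave roughly like the sketch below. The tensors, id conventions, and shapes here are assumptions for illustration, not code copied from model.py:

```python
import torch

# Hypothetical illustration of the two masks under discussion; the ids
# and shapes are assumed, not taken from the actual model.py.
text = torch.tensor([[5, 8, 3, 0, 0]])      # token ids, 0 = padding
dep_tags = torch.tensor([[2, 0, 1, 0, 0]])  # dependency-relation ids, 0 = no relation

fmask = (torch.zeros_like(text) != text).float()          # 1 for real tokens
dmask = (torch.zeros_like(dep_tags) != dep_tags).float()  # 1 where a relation exists

# The report: forward() defines dmask (around line 77) but then passes
# fmask to RelationAttention (around line 99), leaving dmask unused.
print(fmask)  # tensor([[1., 1., 1., 0., 0.]])
print(dmask)  # tensor([[1., 0., 1., 0., 0.]])
```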

shenwzh3 commented 3 years ago

Sorry for causing a misunderstanding here. dmask denotes the adjacency matrix of the original parsed tree. In the reshaped tree, however, every node in the sentence is directly connected to the aspect term, so we only need fmask to mask the tokens. Relation attention still works as intended because dep_features is passed to it. You can simply ignore dmask here. Thank you for the reminder.
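The reply above amounts to the following attention pattern: scores are computed from the dependency-relation embeddings (dep_features), and fmask only removes non-token positions, so no adjacency mask is needed once every node connects directly to the aspect. The sketch below is a minimal reconstruction under those assumptions, not the repository's exact RelationAttention:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAttention(nn.Module):
    """Minimal sketch of relation-based attention; layer sizes are assumed."""

    def __init__(self, in_dim=300, hidden_dim=64):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, 1)

    def forward(self, feature, dep_feature, fmask):
        # feature:     (N, L, D) token features
        # dep_feature: (N, L, D) embedding of the relation each token bears
        #              toward the aspect in the reshaped tree
        # fmask:       (N, L) with 1 for real tokens, 0 elsewhere
        score = self.fc2(F.relu(self.fc1(dep_feature))).squeeze(-1)  # (N, L)
        score = score.masked_fill(fmask == 0, -1e9)  # fmask alone suffices here
        alpha = F.softmax(score, dim=-1)             # attention over tokens
        return (alpha.unsqueeze(-1) * feature).sum(dim=1)  # (N, D)

# Toy usage with small dimensions:
att = RelationAttention(in_dim=4, hidden_dim=8)
feature = torch.randn(1, 5, 4)
dep_feature = torch.randn(1, 5, 4)
fmask = torch.tensor([[1., 1., 1., 0., 0.]])
out = att(feature, dep_feature, fmask)  # shape (1, 4)
```

Because the attention weights are driven entirely by dep_feature, the dependency structure enters the model through those relation embeddings rather than through an adjacency mask, which is why leaving dmask unused does not silence relation attention.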