its-mayank / SqueezeAttention-PyTorch

My implementation of Squeeze-and-Attention Networks in PyTorch.

About the paper #1

Open doudou123456 opened 4 years ago

doudou123456 commented 4 years ago

Personally, I don't think there is anything wrong with your implementation, but I feel the logic of the paper itself is problematic: many details are glossed over.

its-mayank commented 4 years ago

@doudou123456 I am really sorry, I do not understand Chinese. I did run your comment through Google Translate, and from the translation I understand that you have concerns about the explanation in the paper. I am not sure how much I agree with you on that, as I have run experiments myself and the method seems to work; in fact, we are using these blocks in one of our papers, which we have submitted.

zyxu1996 commented 3 years ago

It would definitely work, since it adopts a structure similar to the SE block. The main key is the multiple loss functions. As for the SA block, I don't find it convincing.
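To make the structural similarity mentioned above concrete, here is a minimal sketch (not this repository's actual code) contrasting an SE block with an SA-style block: both re-weight a feature branch with a cheaper "squeezed" branch, the main difference being that the attention branch is only partially downsampled rather than globally pooled, and is also added back to the output. Module names, channel sizes, and the pooling stride are assumptions made for illustration only.

```python
# Illustrative sketch of SE vs. SA-style blocks; not the repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pool -> small MLP -> per-channel gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # global squeeze
        return x * w                                      # channel re-weighting


class SABlockSketch(nn.Module):
    """SA-style block: a downsampled attention branch that is upsampled,
    then both scales and adds to the main convolutional branch."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(
            nn.AvgPool2d(2),                    # partial (not global) squeeze
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        feat = self.conv(x)
        attn = F.interpolate(self.attn(x), size=feat.shape[2:],
                             mode="bilinear", align_corners=False)
        return feat * attn + attn               # re-weight plus additive term


if __name__ == "__main__":
    x = torch.randn(1, 16, 32, 32)
    print(SEBlock(16)(x).shape)            # torch.Size([1, 16, 32, 32])
    print(SABlockSketch(16, 32)(x).shape)  # torch.Size([1, 32, 32, 32])
```

Under this reading, the SA block is essentially an SE-like gating mechanism with a spatially coarse attention map instead of a single per-channel scalar; the multi-loss training setup discussed in the paper is separate from this block structure.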