Diego999 / pyGAT

PyTorch implementation of the Graph Attention Network model by Veličković et al. (2017, https://arxiv.org/abs/1710.10903)
MIT License

Minus sign in front of leaky ReLU #16

Open sangho-vision opened 5 years ago

sangho-vision commented 5 years ago

Hello. In your implementation of SpGAT,

there is this line: edge_e = torch.exp(-self.leakyrelu(self.a.mm(edge_h).squeeze()))

However, I cannot understand why you added the minus sign in front of the leaky ReLU operation.

Is that right?

SongBaiHust commented 5 years ago

I think it is wrong. No need to add this.

What do you think?

Diego999 commented 5 years ago

@sh0416

sh0416 commented 5 years ago

Yes, it is wrong.

I wanted to avoid numerical instability.

Instead of the minus sign, you should subtract the maximum value of the tensor, as in the log-sum-exp trick.
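
For context: softmax is invariant to subtracting a constant from all of its logits, so shifting the scores by their maximum keeps every exponent at or below zero and prevents exp() from overflowing, while leaving the later per-node normalization unchanged. Below is a minimal sketch of what this could look like for the line in question; the shapes and the LeakyReLU slope are hypothetical stand-ins, not the repository's actual values:

    import torch
    import torch.nn as nn

    leakyrelu = nn.LeakyReLU(0.2)   # hypothetical slope, for illustration only
    a = torch.randn(1, 16)          # hypothetical attention vector, shape (1, 2F')
    edge_h = torch.randn(16, 100)   # hypothetical per-edge features, shape (2F', E)

    # Raw attention logits; exp() of these can overflow to inf when they are large.
    scores = leakyrelu(a.mm(edge_h).squeeze())

    # Max-subtraction (log-sum-exp) trick: softmax(x) == softmax(x - c) for any
    # constant c, so subtracting the maximum leaves the normalized attention
    # unchanged while keeping every exponent <= 0.
    edge_e = torch.exp(scores - scores.max())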

sgdantas commented 5 years ago

Hey @sh0416, sorry, but what do you mean by "subtract the maximum value of the tensor, as in the log-sum-exp trick"? Thanks!

iamlockelightning commented 4 years ago

Hi @sh0416 , could you give a modification of the code?

sh0416 commented 4 years ago

I read the code and concluded that simply removing the minus sign will work.

gyxzhao commented 9 months ago

Hi @sh0416! I am new to deep learning. I find that there will be inf values during torch.exp(self.leakyrelu(self.a.mm(edge_h).squeeze())) (without the minus sign). Do you think the minus sign is necessary for numerical stability? Or should we switch to another function?
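
The overflow is easy to reproduce: in float32, exp() overflows to inf once its argument exceeds roughly 88, and the positive side of LeakyReLU is unbounded, so large logits can reach that threshold. A small self-contained demonstration of the failure and of the max-subtraction fix suggested earlier in the thread:

    import torch

    x = torch.tensor([10.0, 50.0, 90.0])

    # Plain exp overflows in float32 once the argument exceeds ~88.
    print(torch.exp(x))            # tensor([2.2026e+04, 5.1847e+21, inf])

    # Subtracting the maximum first keeps every exponent <= 0; a softmax over
    # the shifted values equals a softmax over the originals.
    print(torch.exp(x - x.max()))  # tensor([1.8049e-35, 4.2484e-18, 1.0000e+00])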