shionhonda / gae-dgl

Reimplementation of Graph Autoencoder by Kipf & Welling with DGL.
https://arxiv.org/abs/1611.07308
MIT License

loss should add independent graph mask #2

Open lkfo415579 opened 3 years ago

lkfo415579 commented 3 years ago

Because the final adjacency matrix (label) of a batched graph is one big block-diagonal matrix spanning all samples in the batch, every node pair across two different graphs becomes a spurious negative label. If you just compute BCE loss over the full matrix, these extra negatives dominate the loss.
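To illustrate the point, here is a minimal sketch (NumPy, not DGL) of a block-diagonal mask that keeps only intra-graph entries of the batched adjacency matrix. In DGL the per-graph node counts would come from `g.batch_num_nodes()`; here they are passed in as a plain list.

```python
import numpy as np

def block_diag_mask(num_nodes_per_graph):
    """Boolean mask that is True only on the diagonal blocks of the
    batched adjacency matrix, i.e. node pairs within the same graph.
    Everything off-block is a cross-graph pair and should be excluded
    from the reconstruction loss."""
    n = sum(num_nodes_per_graph)
    mask = np.zeros((n, n), dtype=bool)
    start = 0
    for k in num_nodes_per_graph:
        mask[start:start + k, start:start + k] = True
        start += k
    return mask

# Two graphs with 2 and 3 nodes: only 2*2 + 3*3 = 13 of the 25 entries
# in the 5x5 batched adjacency matrix are valid loss terms.
mask = block_diag_mask([2, 3])
print(mask.sum())  # 13
```

The ratio `mask.sum() / mask.size` shrinks as the batch grows, which is exactly why the unmasked loss is flooded with extra negatives.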

singh0777 commented 2 years ago

Could you please elaborate? Loss on negative labels shouldn't be a problem, right?

singh0777 commented 2 years ago

Oh, I see what you mean. All graphs are placed along the diagonal of the batched graph, so the individual graphs stay independent and disjoint. I tried masking the loss matrix by setting the loss of the cross-graph (negative) entries to 0. However, my loss goes out of bounds and is thrown to a huge negative value.
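One common cause of the loss blowing up after masking is computing `log(p)` on probabilities that have saturated at 0 or 1, or averaging over the wrong denominator. A sketch of a masked BCE that works in logit space (the same numerically stable form PyTorch's `BCEWithLogitsLoss` uses) and averages only over the valid intra-graph entries might look like this. The function names and the NumPy setting are illustrative, not the repo's actual code:

```python
import numpy as np

def masked_bce(logits, labels, mask):
    """BCE over the batched adjacency matrix, restricted to intra-graph
    entries. Computing the loss from logits via the identity
        max(x, 0) - x*y + log(1 + exp(-|x|))
    avoids log(0), which is one way a naively masked loss can blow up."""
    per_entry = (np.maximum(logits, 0.0)
                 - logits * labels
                 + np.logaddexp(0.0, -np.abs(logits)))
    # Average over the valid entries only, not the full N x N matrix.
    return (per_entry * mask).sum() / mask.sum()
```

Note the denominator: dividing by `mask.sum()` instead of `N * N` keeps the loss scale comparable across different batch compositions.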