lucidrains / egnn-pytorch

Implementation of E(n)-Equivariant Graph Neural Networks, in PyTorch
MIT License

Pytorch-Geometric Version Attention code is buggy #39

Open ItamarChinn opened 1 year ago

ItamarChinn commented 1 year ago

First of all - this is a great repo, and thank you for it. The PyG version, however, has some bugs in the attention code.

Just a few that I have encountered:

  1. In the `forward` method, the attention layer is at index -1, not 0, and the EGNN layer is at index 0, not -1 (the opposite of the other implementation).
  2. The `self.global_tokens` initialization references an undefined variable `dim`.
  3. It uses `GlobalLinearAttention` from the other implementation, although `GlobalLinearAttention_Sparse` is defined in the file (not sure if this is a bug or intentional?).
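A minimal sketch of bug (1), using stand-in callables rather than the real modules (the actual class and attribute names in egnn-pytorch may differ): if each entry of `self.layers` stores the pair as `[egnn_layer, attention_layer]`, then a `forward` that reads index 0 as the attention layer silently applies the two modules in the wrong order.

```python
# Hypothetical illustration of the index-0 vs index-1 mixup described above.
# Stand-in callables record the call order; all names here are illustrative.

calls = []

def egnn_layer(x):
    calls.append("egnn")
    return x

def attention_layer(x):
    calls.append("attn")
    return x

# Suppose the layer pair is stored with EGNN at index 0, attention at index -1.
layer = [egnn_layer, attention_layer]

# Buggy unpacking would be: attn, egnn = layer[0], layer[-1]
# Fixed unpacking matches the actual storage order:
attn, egnn = layer[-1], layer[0]

x = attn(egnn(0.0))  # apply the EGNN layer first, then attention

assert calls == ["egnn", "attn"]
```

With the buggy unpacking, `calls` would come out as `["attn", "egnn"]` instead, so attention would run on pre-EGNN features.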

I have refactored a lot of the code, and can try to open a PR in a few days.

lucidrains commented 1 year ago

@ItamarChinn yea sure, a PR would be greatly appreciated! unless @hypnopump gets to it first