lucidrains / invariant-point-attention

Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of AlphaFold2, as a standalone PyTorch module
MIT License

Computing point dist - use cartesian dimension instead of hidden dimension #4

Closed. aced125 closed this issue 3 years ago.

aced125 commented 3 years ago

https://github.com/lucidrains/invariant-point-attention/blob/2f1fb7ca003d9c94d4144d1f281f8cbc914c01c2/invariant_point_attention/invariant_point_attention.py#L130

I think it should be dim=-1, thus using the cartesian (xyz) axis, rather than dim=-2, which uses the hidden dimension.
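
For illustration, here's a minimal sketch of the difference, assuming a point tensor layout of `(..., num_points, 3)` with the xyz coordinates in the last axis (shapes and names here are assumed, not taken verbatim from the repo):

```python
import torch

# assumed layout: (batch, seq, num_points, 3), with xyz in the last axis
q_point = torch.randn(2, 8, 4, 3)
k_point = torch.randn(2, 8, 4, 3)

# pairwise differences: (batch, i, j, num_points, 3)
diff = q_point[:, :, None] - k_point[:, None]

# dim=-1 reduces the cartesian (xyz) axis -> true squared distances, one per point
sq_dist = (diff ** 2).sum(dim=-1)   # (batch, i, j, num_points)

# dim=-2 reduces the point axis instead -> per-coordinate sums, not distances
not_dist = (diff ** 2).sum(dim=-2)  # (batch, i, j, 3)
```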

hypnopump commented 3 years ago

Yeah, it seems to me as well that it should be the way you're saying.

lucidrains commented 3 years ago

@aced125 hey Aced - so I think that, while you are right, it doesn't really affect anything, since I reduce out the cartesian dimension a few lines later: https://github.com/lucidrains/invariant-point-attention/blob/2f1fb7ca003d9c94d4144d1f281f8cbc914c01c2/invariant_point_attention/invariant_point_attention.py#L135
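
A small sanity check of that point: since the squared differences eventually get summed over both the point axis and the cartesian axis, the order of the two reductions doesn't change the final total (shapes assumed, per-head weighting omitted for simplicity):

```python
import torch

# (batch, i, j, num_points, 3) pairwise squared differences, shapes assumed
sq_diff = torch.randn(2, 8, 8, 4, 3) ** 2

# reduce the cartesian axis first, then the point axis ...
total_a = sq_diff.sum(dim=-1).sum(dim=-1)
# ... or the point axis first, then the cartesian axis: same result
total_b = sq_diff.sum(dim=-2).sum(dim=-1)

assert torch.allclose(total_a, total_b)
```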

do you want to see if https://github.com/lucidrains/invariant-point-attention/commit/de337568959eb7611ba56eace2f642ca41e26216 checks out?

hypnopump commented 3 years ago

Yupp, I think that's mainly it ;)