lucidrains / invariant-point-attention

Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone Pytorch module
MIT License

Subtle mistake in the implementation #7

Closed pengzhangzhi closed 2 years ago

pengzhangzhi commented 2 years ago

Hi. Thanks for your implementation. It is very helpful. However, I noticed that the dropout is missing in the IPAModule.

https://github.com/lucidrains/invariant-point-attention/blob/de337568959eb7611ba56eace2f642ca41e26216/invariant_point_attention/invariant_point_attention.py#L239

In the AlphaFold2 supplementary, the dropout is applied inside the residual that the layer norm wraps, and the same holds for the layer norm after the transition layer (line 9 in the figure below).

[figure: pseudocode excerpt from the AlphaFold2 supplementary]
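To illustrate what I mean, here is a minimal sketch of that dropout placement. The class and names (`IPABlockSketch`, the `nn.Identity` stand-in for the attention layer, the transition widths) are purely illustrative and not your repo's API; the point is only that the dropout sits inside the residual that each layer norm wraps, both after the IPA step and after the transition.

```python
import torch
from torch import nn

class IPABlockSketch(nn.Module):
    """Illustrative block: s = LayerNorm(s + Dropout(IPA(s))), then the same pattern for the transition."""
    def __init__(self, dim, dropout = 0.1):
        super().__init__()
        self.ipa = nn.Identity()              # stand-in for the Invariant Point Attention layer
        self.ipa_dropout = nn.Dropout(dropout)
        self.ipa_norm = nn.LayerNorm(dim)

        self.transition = nn.Sequential(      # simple MLP transition, widths are arbitrary here
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
            nn.Linear(dim, dim),
        )
        self.transition_dropout = nn.Dropout(dropout)
        self.transition_norm = nn.LayerNorm(dim)

    def forward(self, s):
        # dropout is applied to the sublayer output *before* the residual is layer-normed
        s = self.ipa_norm(s + self.ipa_dropout(self.ipa(s)))
        s = self.transition_norm(s + self.transition_dropout(self.transition(s)))
        return s

if __name__ == "__main__":
    block = IPABlockSketch(dim = 64)
    out = block(torch.randn(1, 16, 64))       # (batch, residues, dim)
    print(out.shape)                          # torch.Size([1, 16, 64])
```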

If you think this is a problem, please let me know and I will submit a PR to fix it. Thanks again for sharing such an amazing repo.

Best, Zhangzhi Peng

lucidrains commented 2 years ago

@pengzhangzhi Hi Zhangzhi, thank you for pointing this out! https://github.com/lucidrains/invariant-point-attention/commit/70e685cc5b4123133e04d2bf14f97df843160682