lucidrains / invariant-point-attention

Implementation of Invariant Point Attention, used for coordinate refinement in the structure module of Alphafold2, as a standalone PyTorch module
MIT License

#126 maybe omit the 'self.point_attn_logits_scale'? #1

Closed · CiaoHe closed this issue 3 years ago

CiaoHe commented 3 years ago

Hi luci:

I read the original paper and compared it to your implementation, and I found one place that might be a mistake:

Line 126: `attn_logits_points = -0.5 * (point_dist * point_weights).sum(dim = -1)`

I think it should be `attn_logits_points = -0.5 * (point_dist * point_weights * self.point_attn_logits_scale).sum(dim = -1)`

Thanks for sharing your work!

lucidrains commented 3 years ago

@CiaoHe yes, you are right, fixed in 0.0.4! :pray:
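For context, here is a minimal sketch of the corrected point-attention logits term, following Algorithm 22 of the AlphaFold2 supplement. The names and shapes used here (`point_dist`, `point_weights`, `num_point_qk`) are illustrative assumptions, not the repository's exact code:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch, heads, sequence length, number of query points
b, h, n, num_point_qk = 1, 8, 16, 4

# Squared distances between frame-transformed query and key points,
# already summed over xyz: shape (b, h, n, n, num_point_qk)
point_dist = torch.rand(b, h, n, n, num_point_qk)

# Learned per-head weight gamma^h (softplus-parameterized in the paper),
# broadcast over the pairwise dimensions
point_weights = F.softplus(torch.randn(h)).view(1, h, 1, 1, 1)

# w_L * w_C from the supplement: with three logit terms (scalar, point, pairwise),
# w_L = 1/sqrt(3) and w_C = sqrt(2 / (9 * num_point_qk)), so the combined scale is
# (3 * num_point_qk * 9/2) ** -0.5 -- plausibly what self.point_attn_logits_scale
# corresponds to in this repository
num_attn_logits = 3
point_attn_logits_scale = (num_attn_logits * num_point_qk * (9 / 2)) ** -0.5

# The fix from this issue: the scale multiplies the point term before summation
attn_logits_points = -0.5 * (point_dist * point_weights * point_attn_logits_scale).sum(dim=-1)
print(attn_logits_points.shape)  # torch.Size([1, 8, 16, 16])
```

Without the scale, the variance of the point term grows with the number of query points, so it can dominate the scalar and pairwise terms when the three logits are combined.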