Closed: aced125 closed this issue 3 years ago
Yeah, it seems to me as well that it'd be the way you're saying.
@aced125 hey Aced - so I think that while you're right, it doesn't really affect anything, since I reduce out the cartesian dimension a few lines later (see the sketch after this message): https://github.com/lucidrains/invariant-point-attention/blob/2f1fb7ca003d9c94d4144d1f281f8cbc914c01c2/invariant_point_attention/invariant_point_attention.py#L135
Do you want to see if https://github.com/lucidrains/invariant-point-attention/commit/de337568959eb7611ba56eace2f642ca41e26216 checks out?
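A minimal sketch (not the repository's actual code, and the tensor shape is a hypothetical stand-in) of why the dim mix-up washes out: when both trailing axes are eventually summed, the order of the two reductions doesn't change the result.

```python
import torch

# Hypothetical shape: (batch, seq, points, xyz)
x = torch.randn(2, 8, 4, 3)

# Reduce the points axis first, then the cartesian (xyz) axis
points_then_xyz = x.pow(2).sum(dim=-2).sum(dim=-1)

# Reduce the cartesian (xyz) axis first, then the points axis
xyz_then_points = x.pow(2).sum(dim=-1).sum(dim=-1)

# Summation commutes across axes, so both orders agree
assert torch.allclose(points_then_xyz, xyz_then_points)
```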
Yup, I think that's mainly it ;)
https://github.com/lucidrains/invariant-point-attention/blob/2f1fb7ca003d9c94d4144d1f281f8cbc914c01c2/invariant_point_attention/invariant_point_attention.py#L130
I think it should be dim=-1, so the reduction runs over the cartesian (xyz) axis, rather than dim=-2, which reduces over the hidden dimension.
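A hedged sketch of the intended reduction (the shapes and variable names are illustrative assumptions, not the repository's actual code): squared distances between query and key points should be summed over the cartesian (xyz) axis, i.e. dim=-1, not over the preceding point/hidden dimension.

```python
import torch

# Hypothetical point tensors, broadcast against each other:
# (batch, heads, seq_q, 1, 3) vs (batch, heads, 1, seq_k, 3)
q_points = torch.randn(1, 4, 16, 1, 3)
k_points = torch.randn(1, 4, 1, 16, 3)

# Reducing dim=-1 collapses the xyz axis, yielding pairwise squared
# distances of shape (batch, heads, seq_q, seq_k)
sq_dist = (q_points - k_points).pow(2).sum(dim=-1)

# Reducing dim=-2 instead would collapse the wrong (point) axis and
# leave a dangling xyz dimension
```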