Qualcomm-AI-research / geometric-algebra-transformer


Cross attention is not implemented with positional encoding #7

Closed binxi0629 closed 5 months ago

binxi0629 commented 5 months ago

Hi,

Thanks for your great GATr framework. While using GATr, I noticed that cross attention is implemented without positional encoding. May I ask why? Does this mean GATr only supports cross attention without positional encoding?

Below is what I found in cross_attention.py:

[screenshot of the cross-attention implementation in cross_attention.py]

pimdh commented 5 months ago

Yes, this is not implemented because we didn't need it ourselves. Note that this concerns the positional encoding of a token's position in a sequence, NOT an encoding of its position in 3D space. In GATr, the latter is handled by the input encoding.
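
To make the distinction concrete, here is a minimal sketch of the input-encoding route, following the pattern in the repository's README example; the tensor shapes are illustrative assumptions, not a prescribed interface.

```python
import torch
from gatr.interface import embed_point

# 3D positions enter GATr through the inputs: embed_point maps (..., 3)
# coordinates to (..., 16) PGA multivectors, so no extra positional
# encoding is needed for spatial position.
points = torch.randn(8, 100, 1, 3)    # (batch, items, channels, xyz) - illustrative shapes
multivectors = embed_point(points)    # (batch, items, channels, 16)
```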

It should be easy to implement sequence position encoding for cross attention, taking inspiration from SelfAttention.
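
If it helps, here is a minimal plain-PyTorch sketch of the idea. This is not GATr's actual API: the class name `CrossAttentionWithSeqPos` and its constructor arguments are made up for illustration. It simply adds learned sequence-position embeddings to the query and key/value streams before a standard cross-attention call; in GATr itself you would instead mirror how SelfAttention applies its positional encoding to the appropriate channels.

```python
import torch
import torch.nn as nn


class CrossAttentionWithSeqPos(nn.Module):
    """Cross attention with learned sequence-position embeddings (illustrative only)."""

    def __init__(self, embed_dim, num_heads, max_q_len, max_kv_len):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        # Separate position tables for the query sequence and the key/value sequence.
        self.q_pos = nn.Embedding(max_q_len, embed_dim)
        self.kv_pos = nn.Embedding(max_kv_len, embed_dim)

    def forward(self, query, key_value):
        # query: (batch, q_len, embed_dim); key_value: (batch, kv_len, embed_dim)
        q_idx = torch.arange(query.shape[1], device=query.device)
        kv_idx = torch.arange(key_value.shape[1], device=key_value.device)
        q = query + self.q_pos(q_idx)         # add positions to the queries
        kv = key_value + self.kv_pos(kv_idx)  # add positions to the keys/values
        out, _ = self.attn(q, kv, kv)
        return out


# Usage (shapes are illustrative):
layer = CrossAttentionWithSeqPos(embed_dim=64, num_heads=4, max_q_len=128, max_kv_len=256)
out = layer(torch.randn(2, 10, 64), torch.randn(2, 20, 64))  # -> (2, 10, 64)
```

A rotary-style encoding would slot into the same places; the main point is that the query sequence and the key/value sequence each need their own position indices.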

binxi0629 commented 5 months ago

I see, thanks for the quick reply.