Closed: binxi0629 closed this issue 5 months ago
Yes, this is not implemented, as we didn't have a need for it ourselves. Note that this pertains to position encoding of tokens in a sequence, NOT an encoding of the position of a token in 3D space. In GATr, the latter is handled by the input encoding.
It should be easy to implement sequence position encoding for cross attention, taking inspiration from SelfAttention; a rough sketch is below.
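To make that pointer concrete, here is a minimal, hypothetical sketch of rotary-style sequence position encoding applied to queries and keys before cross attention. This is not GATr's actual API: the names `rotary_embedding` and `cross_attend`, and the plain scaled dot-product attention, are illustrative assumptions only; in GATr you would apply the encoding to the appropriate scalar channels inside the cross-attention layer instead.

```python
# Hypothetical sketch, not code from the GATr repository.
import math
import torch


def rotary_embedding(x: torch.Tensor, positions: torch.Tensor, base: float = 10_000.0) -> torch.Tensor:
    """Apply rotary position encoding to x of shape (..., seq, dim), with dim even."""
    half = x.shape[-1] // 2
    freqs = base ** (-torch.arange(half, dtype=x.dtype, device=x.device) / half)
    angles = positions.to(dtype=x.dtype, device=x.device)[..., None] * freqs  # (..., seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


def cross_attend(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Plain scaled dot-product cross attention over position-encoded q and k."""
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1])
    return scores.softmax(dim=-1) @ v


if __name__ == "__main__":
    # Queries come from one sequence, keys/values from another;
    # each side gets rotary encoding based on its own token positions.
    q = torch.randn(2, 5, 16)   # (batch, target_len, dim)
    k = torch.randn(2, 7, 16)   # (batch, source_len, dim)
    v = torch.randn(2, 7, 16)
    q = rotary_embedding(q, torch.arange(5))
    k = rotary_embedding(k, torch.arange(7))
    out = cross_attend(q, k, v)
    print(out.shape)  # torch.Size([2, 5, 16])
```

The key point is that the query and key sequences are encoded independently with their own position indices, which is the only change needed compared to the self-attention case.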
I see, thanks for the quick reply.
Hi,
Thanks for the great GATr framework. While using GATr, I noticed that there is no positional encoding in cross attention. May I ask why? Does that mean GATr only supports cross attention without positional encoding?
Below is what I found in cross_attention.py: