vdvchen opened 2 years ago
Hi, thanks for releasing the code for this great work!
While reading the code, I found the GAT block a little confusing. The attention matrix in GAT is generated as shown here (https://github.com/zju3dv/OnePose/blob/59b88385f3477d0142290f23ca7ce2e1c7534a8e/src/models/GATsSPG_architectures/GATs.py#L43), which seems to be neither linear attention nor vanilla attention. Could you give a brief explanation of this part?
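For context, here is what I mean by the two standard formulations I was comparing against (a minimal NumPy sketch of my own, not code from this repo):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elu_plus_one(x):
    # Feature map phi(x) = elu(x) + 1, commonly used in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def vanilla_attention(q, k, v):
    # Vanilla (scaled dot-product) attention: softmax(Q K^T / sqrt(d)) V
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)  # (b, n, m)
    return softmax(scores) @ v                      # (b, n, d_v)

def linear_attention(q, k, v):
    # Linear attention: phi(Q) (phi(K)^T V) / (phi(Q) sum_m phi(K)_m)
    phi_q, phi_k = elu_plus_one(q), elu_plus_one(k)
    kv = np.einsum('bmd,bme->bde', phi_k, v)            # (b, d, d_v)
    z = np.einsum('bnd,bd->bn', phi_q, phi_k.sum(1))    # normalizer
    return np.einsum('bnd,bde->bne', phi_q, kv) / z[..., None]
```

The attention weights at GATs.py#L43 don't appear to match either of these shapes of computation, which is what prompted my question.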
Thanks!