matyasbohacek / spoter

Repository accompanying the "Sign Pose-based Transformer for Word-level Sign Language Recognition" paper
https://spoter.signlanguagerecognition.com
Apache License 2.0
73 stars 24 forks

about position embedding #12

Open gcbanana opened 1 year ago

gcbanana commented 1 year ago
```python
self.pos = nn.Parameter(torch.cat([self.row_embed[0].unsqueeze(0).repeat(1, 1, 1)], dim=-1).flatten(0, 1).unsqueeze(0))
```

The resulting position embedding has shape (1, 1, 108). Does that mean the positional embedding is the same for every frame's skeleton embedding?
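
To trace the shapes, here is a minimal, self-contained sketch. It assumes `hidden_dim = 108` (54 landmarks × 2 coordinates) and a `row_embed` of shape `(50, hidden_dim)`, which is my reading of the model definition; under those assumptions the construction keeps only `row_embed[0]` and broadcasts it identically to every frame:

```python
import torch
import torch.nn as nn

# Assumed dimensions: hidden_dim = 108 (54 landmarks x 2 coordinates),
# row_embed of shape (50, hidden_dim) as in the model definition.
hidden_dim = 108
row_embed = nn.Parameter(torch.rand(50, hidden_dim))

# Shape trace:
#   row_embed[0]              -> (108,)        only the first row is used
#   .unsqueeze(0)             -> (1, 108)
#   .repeat(1, 1, 1)          -> (1, 1, 108)   adds a leading singleton dim
#   torch.cat([...], dim=-1)  -> (1, 1, 108)   cat of a single tensor is a no-op
#   .flatten(0, 1)            -> (1, 108)
#   .unsqueeze(0)             -> (1, 1, 108)
pos = nn.Parameter(
    torch.cat([row_embed[0].unsqueeze(0).repeat(1, 1, 1)], dim=-1)
    .flatten(0, 1)
    .unsqueeze(0)
)
print(pos.shape)  # torch.Size([1, 1, 108])

# Adding pos to a clip of frame embeddings, e.g. (num_frames, 1, 108):
frames = torch.rand(100, 1, hidden_dim)
out = frames + pos  # pos broadcasts over the frame dimension

# Every frame received the identical offset row_embed[0]:
assert torch.allclose(out - frames, pos.expand_as(frames))
```

If this reading is correct, only `row_embed[0]` is ever used, and each frame gets the same positional vector, so the embedding carries no per-frame position information.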