Open gcbanana opened 1 year ago
```python
self.pos = nn.Parameter(torch.cat([self.row_embed[0].unsqueeze(0).repeat(1, 1, 1)], dim=-1).flatten(0, 1).unsqueeze(0))
```
The position embedding has shape (1, 1, 108). Does that mean the positional embedding is the same for every frame's skeleton embedding?
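For what it's worth, here is a minimal sketch of why a (1, 1, 108) parameter cannot distinguish frames: when it is broadcast-added to a (T, B, 108) feature tensor, every frame and every batch element receives the identical 108-dim offset. The shapes T, B and the use of numpy are assumptions for illustration; torch broadcasting behaves the same way.

```python
import numpy as np

# Assumed shapes for illustration: T frames, B batch, D = 108 embedding dim.
T, B, D = 5, 2, 108

# A position embedding of shape (1, 1, D), like self.pos above.
pos = np.random.randn(1, 1, D)

# Per-frame skeleton features of shape (T, B, D).
x = np.random.randn(T, B, D)

# Broadcasting adds the SAME D-dim vector at every (frame, batch) position,
# so no frame gets a distinct positional signal.
out = x + pos

# The offset added to frame 0 equals the offset added to frame 3:
assert np.allclose(out[0] - x[0], out[3] - x[3])
```

So if the intent is to give each frame its own position, the embedding would need a non-singleton frame dimension, e.g. shape (T, 1, 108), rather than (1, 1, 108).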