In opencood/models/sub_modules/att_bev_backbone.py, the attention block is created as `fuse_network = AttFusion(num_filters[idx])`, and `AttFusion` is:
```python
import torch
import torch.nn as nn

# ScaledDotProductAttention is defined elsewhere in the repo
class AttFusion(nn.Module):
    def __init__(self, feature_dim):
        super(AttFusion, self).__init__()
        self.att = ScaledDotProductAttention(feature_dim)

    def forward(self, x, record_len):
        # split the batched features back into one tensor per sample
        split_x = self.regroup(x, record_len)
        C, W, H = split_x[0].shape[1:]
        out = []
        for xx in split_x:                                 # xx: (cav_num, C, W, H)
            cav_num = xx.shape[0]
            xx = xx.view(cav_num, C, -1).permute(2, 0, 1)  # (W*H, cav_num, C)
            h = self.att(xx, xx, xx)
            h = h.permute(1, 2, 0).view(cav_num, C, W, H)[0, ...]  # keep the first CAV
            out.append(h)
        return torch.stack(out)
```
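For reference, `ScaledDotProductAttention` here takes (query, key, value) and is assumed to be the standard softmax(QK^T / sqrt(d)) V operator over the last two dimensions; a minimal sketch of that assumption (not the repo's exact code) would be:

```python
import math
import torch
import torch.nn as nn

class ScaledDotProductAttention(nn.Module):
    """Minimal sketch: softmax(Q K^T / sqrt(d)) V, batched over the first dimension."""
    def __init__(self, dim):
        super().__init__()
        self.sqrt_dim = math.sqrt(dim)

    def forward(self, query, key, value):
        # query / key / value: (batch, seq_len, dim)
        score = torch.bmm(query, key.transpose(1, 2)) / self.sqrt_dim  # (batch, seq, seq)
        attn = torch.softmax(score, dim=-1)
        return torch.bmm(attn, value)                                  # (batch, seq, dim)
```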
However, xx is just an (H*W, cav_num, C) feature; won't doing self-attention on it fail to fuse the features from different agents? It looks like the attention is only computed within each agent itself.
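To make the shapes concrete, here is a toy trace of the reshape in `forward`, with hypothetical sizes (cav_num=3, C=64, W=H=4):

```python
import torch

# Toy shape trace for the reshape in AttFusion.forward
cav_num, C, W, H = 3, 64, 4, 4
xx = torch.randn(cav_num, C, W, H)

xx = xx.view(cav_num, C, -1)   # (3, 64, 16)  -> flatten the spatial dims
xx = xx.permute(2, 0, 1)       # (16, 3, 64)  -> (W*H, cav_num, C)
print(xx.shape)                # torch.Size([16, 3, 64])

# self.att(xx, xx, xx) keeps the shape (W*H, cav_num, C);
# permute(1, 2, 0).view(...)[0, ...] then keeps only the first CAV's map.
h = xx.permute(1, 2, 0).view(cav_num, C, W, H)[0, ...]
print(h.shape)                 # torch.Size([64, 4, 4])
```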