DerrickXuNu / OpenCOOD

[ICRA 2022] An open-source framework for cooperative detection. Official implementation of OPV2V.
https://mobility-lab.seas.ucla.edu/opv2v/

Question about attnfusion #138

Closed sidiangongyuan closed 4 months ago

sidiangongyuan commented 4 months ago

In opencood/models/sub_modules/att_bev_backbone.py, the attention block is created as:

fuse_network = AttFusion(num_filters[idx])

And AttFusion is:

class AttFusion(nn.Module):
    def __init__(self, feature_dim):
        super(AttFusion, self).__init__()
        self.att = ScaledDotProductAttention(feature_dim)

    def forward(self, x, record_len):
        split_x = self.regroup(x, record_len)
        C, W, H = split_x[0].shape[1:]
        out = []
        for xx in split_x:
            cav_num = xx.shape[0]
            xx = xx.view(cav_num, C, -1).permute(2, 0, 1)
            h = self.att(xx, xx, xx)
            h = h.permute(1, 2, 0).view(cav_num, C, W, H)[0, ...]
            out.append(h)
        return torch.stack(out)

However, xx is just an (H*W, cav_num, C) feature, so won't a self-attention over it fail to fuse the features of different agents? It looks like the attention is only done within each agent itself.
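To make the question concrete, here is a shape walk-through with toy values (a sketch, assuming ScaledDotProductAttention computes the usual softmax(QK^T / sqrt(d)) V over the last two dimensions):

import torch
import torch.nn.functional as F

# Toy shapes for illustration only (hypothetical values).
cav_num, C, H, W = 3, 64, 100, 252
xx = torch.randn(cav_num, C, H, W)

# Same reshaping as in AttFusion.forward:
# (cav_num, C, H, W) -> (cav_num, C, H*W) -> (H*W, cav_num, C)
xx = xx.view(cav_num, C, -1).permute(2, 0, 1)

# Assumed behaviour of a standard scaled dot-product attention on this layout:
# every spatial location is a batch element, the cav_num agents are the tokens.
scores = torch.matmul(xx, xx.transpose(1, 2)) / (C ** 0.5)  # (H*W, cav_num, cav_num)
weights = F.softmax(scores, dim=-1)
fused = torch.matmul(weights, xx)                           # (H*W, cav_num, C)

print(weights.shape)  # torch.Size([25200, 3, 3]): each pixel attends over all agents

So with this layout the attention weights are per-pixel, agent-to-agent weights rather than weights within a single agent's feature map.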

sidiangongyuan commented 4 months ago

Sorry, I see now that it is just a normal attention.
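For anyone who finds this later: the "normal attention" here is a standard scaled dot-product attention applied with the agents as the token dimension, which is exactly what mixes features across CAVs. A minimal sketch of such a module (assuming the usual softmax(QK^T / sqrt(d)) V formulation; the actual ScaledDotProductAttention in opencood/models/sub_modules may differ in details):

import math
import torch
import torch.nn as nn

class ScaledDotProductAttention(nn.Module):
    # Illustrative sketch only; the real OpenCOOD sub-module may differ in details.
    def __init__(self, dim):
        super(ScaledDotProductAttention, self).__init__()
        self.sqrt_dim = math.sqrt(dim)

    def forward(self, query, key, value):
        # In AttFusion, query/key/value are the same (H*W, cav_num, C) tensor,
        # so the token dimension is the agent dimension.
        score = torch.bmm(query, key.transpose(1, 2)) / self.sqrt_dim  # (H*W, cav_num, cav_num)
        attn = torch.softmax(score, dim=-1)
        # Each output row is a weighted sum of all agents' features at that pixel.
        return torch.bmm(attn, value)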