Yjhan2 opened this issue 1 year ago
Hi,
I think the dimensions are fine, but you did not include the shared-classifier or two-stream strategy. The forward should be:
if not self.training:
    return x
old_x = x
bt, c, h, w = x.size()
x = x.view(bt, c, -1).permute(2, 0, 1)
x = self.bf(x).permute(1, 2, 0)
x = x.view(bt, c, h, w)
# share the modules between the streams with and without BatchFormer
x = torch.cat([old_x, x], dim=0)
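For context, here is a minimal, self-contained sketch of how the snippet above could be wrapped as a module. The class name `BatchFormerV2Neck` and the choice of `self.bf` as a one-layer `nn.TransformerEncoder` are my assumptions, not the authors' exact code; with `batch_first=True` and input shaped `(H*W, N, C)`, attention runs over the batch dimension `N` at each spatial location. It also includes the target duplication that the two-stream strategy requires:

```python
import torch
import torch.nn as nn


class BatchFormerV2Neck(nn.Module):
    """Hypothetical wrapper: batch transformer over (N, C, H, W)
    features plus the two-stream concatenation from the snippet above."""

    def __init__(self, channels, nhead=4):
        super().__init__()
        # batch_first=True: with input shaped (H*W, N, C), the "sequence"
        # dimension attended over is N, i.e. attention across the batch
        # at each spatial location (assumption about self.bf).
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=nhead,
            dim_feedforward=channels * 2, batch_first=True)
        self.bf = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x, y=None):
        if not self.training:
            return x, y                            # identity at inference
        old_x = x
        bt, c, h, w = x.size()
        x = x.view(bt, c, -1).permute(2, 0, 1)     # (H*W, N, C)
        x = self.bf(x).permute(1, 2, 0)            # back to (N, C, H*W)
        x = x.view(bt, c, h, w)
        # two-stream: the head after this module is shared between the
        # plain stream and the BatchFormer stream
        x = torch.cat([old_x, x], dim=0)
        if y is not None:
            y = torch.cat([y, y], dim=0)           # duplicate the targets
        return x, y
```

At training time the batch (and target) size doubles and the downstream parsing head is shared between both streams; at test time the module is a no-op, so inference cost is unchanged.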
Hi author, I want to use BatchFormer in my human-parsing task, mainly applying a BatchFormer operation to the features extracted by the backbone. The input to BatchFormer is a feature map of shape (N, C, H, W). Is there any problem with implementing it this way?