Closed · kuangqi93 closed this issue 3 years ago
Sorry for this mistake; that line should be uncommented in bts.py. I guess those results were evaluated during training, and results evaluated that way tend to be higher than results from a test run after training finishes.
I have fixed this error.
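For reference, here is a minimal sketch of what the corrected `forward` should look like, assuming the fix is simply re-enabling the `AttentionGraphCondKernel` (AGD) call that was commented out below (this is the method on the model class in bts.py, not a standalone script):

```python
def forward(self, x, focal, rank=0):
    # Extract multi-scale skip features from the encoder backbone
    skip_feat = self.encoder(x)
    # Re-enabled: refine the deepest skip feature with the AGD module,
    # conditioning it on the three preceding feature scales
    skip_feat[5] = self.AttentionGraphCondKernel(
        skip_feat[2], skip_feat[3], skip_feat[4], skip_feat[5], rank
    )
    return self.decoder(skip_feat, focal)
```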
Excuse me, the current code does not seem to use AGD, because the call is commented out in bts.py:
def forward(self, x, focal, rank=0):
    skip_feat = self.encoder(x)
    # for i in range(len(skip_feat)):
    #     print(skip_feat[i].shape)
    # skip_feat[5] = self.AttentionGraphCondKernel(skip_feat[2], skip_feat[3], skip_feat[4], skip_feat[5], rank)
    return self.decoder(skip_feat, focal)
But I found that, during training, it performs as well as the version that uses AGD. Why is that?