ygjwd12345 / TransDepth

Code for Transformers Solve Limited Receptive Field for Monocular Depth Prediction
MIT License

About AGD #9

Closed kuangqi93 closed 3 years ago

kuangqi93 commented 3 years ago

Excuse me, the current code does not seem to use AGD, because the relevant line is commented out in bts.py:

```python
def forward(self, x, focal, rank=0):
    skip_feat = self.encoder(x)
    # for i in range(len(skip_feat)):
    #     print(skip_feat[i].shape)
    # skip_feat[5] = self.AttentionGraphCondKernel(skip_feat[2], skip_feat[3], skip_feat[4], skip_feat[5], rank)
    return self.decoder(skip_feat, focal)
```

But I found that it works as well as using AGD during training. Why is that?

[image: training results screenshot]

ygjwd12345 commented 3 years ago

Sorry for this mistake. That line should be uncommented in bts.py. I guess your results were evaluated during training, and results measured that way tend to be higher than the test results obtained after training.
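For reference, a minimal sketch of what the corrected forward would look like with the AGD call restored (based on the commented-out line quoted above; the module name AttentionGraphCondKernel is taken from that snippet):

```python
def forward(self, x, focal, rank=0):
    skip_feat = self.encoder(x)
    # Refine the deepest skip feature with the attention graph conditional kernel (AGD)
    # before decoding, instead of leaving this step commented out.
    skip_feat[5] = self.AttentionGraphCondKernel(
        skip_feat[2], skip_feat[3], skip_feat[4], skip_feat[5], rank
    )
    return self.decoder(skip_feat, focal)
```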

ygjwd12345 commented 3 years ago

I have fixed this error.