Open kuwt opened 5 years ago
I ran into the same question. Have you changed the code to be consistent with the paper? Thank you~
Not yet. I'm currently trying to reproduce the results stated in the paper using the original code and the COCO dataset, to see if the network is still valid.
shuang0112 notifications@github.com wrote on Thursday, September 12, 2019 at 10:26 AM:
I found a statement from the author about this before. He said the structure of the TUM was indeed changed because he wanted to try this new variant, but it did not turn out better than the original. You can give the new one a try; I would also like to know the result.
In the reference image, the first deconvolution feature (128, 1, 1) is not used to produce the next deconvolution feature.
But in the code, the first deconvolution feature is used to produce the next one.
What I want to point out is that there is no deconvolution pipeline in the code at all. The code simply upsamples the last convolution layer of the TUM again and again and adds it to the convolution of one of the layers in the convolution pipeline.
I think what the code implements is quite different from the usual deconvolution concept, or from the concept shown in the reference image and the paper.
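To make the difference concrete, here is a minimal PyTorch-style sketch of the two wirings being discussed. This is not the actual M2Det/TUM code: the class names, channel counts, and feature shapes are assumptions chosen only for illustration. One decoder repeatedly interpolation-upsamples the previous decoder output and adds a lateral conv of an encoder feature (how the released code is described above); the other produces each level from its own deconvolution of the deepest feature, without reusing the previous deconvolved feature (how the paper's figure reads).

```python
# Minimal sketch, NOT the actual M2Det TUM code: layer names, channel sizes,
# and feature shapes are assumptions made to illustrate the two wirings.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChainedUpsampleDecoder(nn.Module):
    """Reading of what the released code does: the previous decoder output is
    upsampled by interpolation and added to a lateral conv of the encoder
    feature, so each decoder feature feeds the next one (no ConvTranspose2d)."""
    def __init__(self, channels=128, steps=3):
        super().__init__()
        self.lateral = nn.ModuleList(
            nn.Conv2d(channels, channels, 1) for _ in range(steps))

    def forward(self, enc_feats):  # enc_feats: deepest (smallest) first
        out = enc_feats[0]
        decoded = [out]
        for enc, lat in zip(enc_feats[1:], self.lateral):
            out = F.interpolate(out, size=enc.shape[-2:], mode='nearest') + lat(enc)
            decoded.append(out)
        return decoded


class DeconvDecoder(nn.Module):
    """Reading of the paper's figure: each decoder level comes from a learned
    deconvolution of the deepest feature, and the first deconvolved feature is
    NOT reused for the next one."""
    def __init__(self, channels=128, steps=3):
        super().__init__()
        # i-th branch upsamples the deepest feature by a factor of 2**(i+1)
        self.deconvs = nn.ModuleList(
            nn.Sequential(*[nn.ConvTranspose2d(channels, channels, 2, stride=2)
                            for _ in range(i + 1)])
            for i in range(steps))

    def forward(self, enc_feats):  # enc_feats: deepest (smallest) first
        top = enc_feats[0]
        decoded = [top]
        for enc, deconv in zip(enc_feats[1:], self.deconvs):
            decoded.append(deconv(top) + enc)  # no chaining between levels
        return decoded


if __name__ == "__main__":
    # Dummy encoder features, deepest first: 2x2, 4x4, 8x8, 16x16.
    feats = [torch.randn(1, 128, 2 ** k, 2 ** k) for k in range(1, 5)]
    print([f.shape[-1] for f in ChainedUpsampleDecoder()(feats)])
    print([f.shape[-1] for f in DeconvDecoder()(feats)])
```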