DiligentPanda opened this issue 5 years ago
I used the Faster R-CNN provided by Bottom-Up to extract another set of visual features on Visual Genome (VG), and then re-trained Motifs on these features to obtain a relation classifier. You can find the code at https://github.com/zjuchenlong/faster-rcnn.pytorch, which is a PyTorch version of the bottom-up feature extractor.
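For anyone reproducing this pipeline, a minimal sketch of reading the extracted features may help. It assumes the extractor writes features in the base64-encoded TSV style used by the original bottom-up-attention release (one row per image with `num_boxes` and a base64 `features` field of float32 values); `decode_features` is a hypothetical helper, not part of either repo:

```python
import base64
import numpy as np

def decode_features(b64_feats, num_boxes, dim=2048):
    """Decode a base64-encoded float32 feature blob into (num_boxes, dim)."""
    buf = base64.b64decode(b64_feats)
    return np.frombuffer(buf, dtype=np.float32).reshape(num_boxes, dim)

# Round-trip demo with synthetic data: 2 boxes, 2048-d features,
# mimicking one TSV row from the extractor.
feats = np.random.rand(2, 2048).astype(np.float32)
row = {"num_boxes": 2, "features": base64.b64encode(feats.tobytes())}
decoded = decode_features(row["features"], row["num_boxes"])
assert np.allclose(decoded, feats)
```

Once decoded, the per-box features can be fed to the Motifs training code in place of its original detector features, provided the feature dimension matches.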
Hi, @yangxuntu, thanks for your code. While reading your paper, I noticed that the object detection results used by the captioning model (e.g., Bottom-Up and Top-Down) may differ from those produced by the scene graph generation model (e.g., Neural Motifs). May I ask how this mismatch is addressed in your work?