yuweihao / KERN

Code for Knowledge-Embedded Routing Network for Scene Graph Generation (CVPR 2019)
MIT License

Question regarding training time #2

Closed · dorarad closed this issue 5 years ago

dorarad commented 5 years ago

Thank you so much for releasing this repository, it looks awesome! Quick question: how much time does it take you to train the graph classification/detection, say, per epoch?

Thanks!

yuweihao commented 5 years ago

Hi, @dorarad , thanks for noticing this repository.

To be honest, the training speed is slow. We use a TITAN X (Pascal) GPU to train it. The training time is about 200 minutes, 240 minutes, and 400 minutes per epoch for the predcls, sgcls, and sgdet tasks, respectively.

The contribution of our work is that we explicitly embed statistical knowledge into the deep architecture for this task, offering one way to relieve the problem of the unbalanced distribution of relationships. However, we also find that the model is slow, and this should be improved in the future. For example, the model classifies relationships for every object pair, but according to [1], "each image has a scene graph of around 11.5 objects and 6.2 relationships". Thus, there is no need to use the GGNN to reason about relationships for every object pair. As in [2], a low-cost classifier could be added to filter out "no relationship" pairs first, so the GGNN only needs to reason about the pairs that actually have a relationship.
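
For illustration, here is a minimal sketch of that filtering idea in PyTorch. It is not part of the KERN codebase; the names (`PairFilter`, `select_pairs`, `keep`, the 512-d features) are hypothetical, and how the kept pairs would be passed to the GGNN depends on the actual model.

```python
# Sketch only: score every ordered object pair with a cheap binary
# "relatedness" classifier, keep the top-scoring pairs, and run the
# expensive GGNN reasoning on those pairs alone.
import torch
import torch.nn as nn


class PairFilter(nn.Module):
    """Low-cost binary classifier: does this object pair have a relationship?"""

    def __init__(self, obj_feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * obj_feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, subj_feats: torch.Tensor, obj_feats: torch.Tensor) -> torch.Tensor:
        # subj_feats, obj_feats: (num_pairs, obj_feat_dim) -> (num_pairs,) logits
        return self.mlp(torch.cat([subj_feats, obj_feats], dim=-1)).squeeze(-1)


def select_pairs(obj_feats: torch.Tensor, pair_filter: PairFilter, keep: int = 64):
    """Score all ordered object pairs and return indices of the `keep` most likely ones."""
    n = obj_feats.size(0)
    idx = torch.arange(n)
    subj_idx = idx.repeat_interleave(n)         # (n*n,)
    obj_idx = idx.repeat(n)                     # (n*n,)
    valid = subj_idx != obj_idx                 # drop self-pairs
    subj_idx, obj_idx = subj_idx[valid], obj_idx[valid]
    scores = pair_filter(obj_feats[subj_idx], obj_feats[obj_idx])
    keep = min(keep, scores.numel())
    top = scores.topk(keep).indices             # only these pairs go to the GGNN
    return subj_idx[top], obj_idx[top]


if __name__ == "__main__":
    feats = torch.randn(12, 512)                # e.g. 12 detected objects
    filt = PairFilter(obj_feat_dim=512)
    subj, obj = select_pairs(feats, filt, keep=32)
    print(subj.shape, obj.shape)                # 132 candidate pairs -> 32 kept
```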

Our work introduces a method to relieve the problem of the unbalanced distribution of relationships, but it is not perfect. We look forward to seeing better methods. (^_^)

[1] Danfei Xu, Yuke Zhu, Christopher B. Choy, and Li Fei-Fei. "Scene graph generation by iterative message passing." CVPR 2017.
[2] Bo Dai, Yuqi Zhang, and Dahua Lin. "Detecting visual relationships with deep relational networks." CVPR 2017.

dorarad commented 5 years ago

Thank you so much for the quick and detailed response! :)

yuweihao commented 5 years ago

(^_^)

dorarad commented 5 years ago

*One more question actually: if I use the multi-GPU setting, does it reduce the training time significantly (roughly 1/num_gpus, or does it not help much)? Are there things like the learning rate that I have to tune to make performance comparable?

yuweihao commented 5 years ago

> One more question actually: if I use the multi-GPU setting, does it reduce the training time significantly (roughly 1/num_gpus, or does it not help much)? Are there things like the learning rate that I have to tune to make performance comparable?

Hi @dorarad, we have tried training it with 2 TITAN X (Pascal) GPUs, but it didn't reduce the time per epoch, and I am confused about that. I guess the data communication between GPUs is not efficient on our server, or some code needs to be optimized. Since we don't have enough GPUs, I didn't spend more time investigating this and didn't try it again, so I'm sorry I can't give you a clear answer to your first question. As for the second question, reducing the learning rate does help and can make the performance comparable. : )
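
For reference, here is a minimal sketch of a standard PyTorch multi-GPU setup. This is not KERN's actual training script; `SimpleRelModel` and `base_lr` are hypothetical names, and the exact learning-rate adjustment is something to tune as suggested above.

```python
# Sketch only: wrap the model in nn.DataParallel and adjust the learning
# rate when the effective batch size changes with more GPUs.
import torch
import torch.nn as nn


class SimpleRelModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 51)  # e.g. 50 predicate classes + background

    def forward(self, x):
        return self.fc(x)


num_gpus = torch.cuda.device_count()
model = SimpleRelModel()
if num_gpus > 1:
    # Splits each batch across GPUs; the actual speedup depends on inter-GPU
    # bandwidth and on how much of the forward pass is parallelizable.
    model = nn.DataParallel(model)
if torch.cuda.is_available():
    model = model.cuda()

base_lr = 1e-3  # hypothetical single-GPU learning rate
# Following the suggestion above, start from a reduced learning rate for the
# multi-GPU run and tune it until results match the single-GPU baseline.
lr = base_lr / max(num_gpus, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
```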

dorarad commented 5 years ago

alright thanks a lot! :)

yuweihao commented 5 years ago

(^_^)