KaihuaTang / Scene-Graph-Benchmark.pytorch

A new codebase for popular Scene Graph Generation methods (2020). Visualization & Scene Graph Extraction on custom images/datasets are provided. It's also a PyTorch implementation of the paper "Unbiased Scene Graph Generation from Biased Training" (CVPR 2020).

A question about iteration #157

Closed AllenM97 closed 2 years ago

AllenM97 commented 2 years ago

❓ Questions and Help

Hi! When I train an SGG model, why does the code always stop before reaching MAX_ITER?

Ljy-an commented 2 years ago

Hi! I ran into the same error. Can I ask how you fixed it? Thanks a lot.

qiaomu-miao commented 1 year ago

I think it is because of the lr scheduler: https://github.com/KaihuaTang/Scene-Graph-Benchmark.pytorch/blob/45cd54f7465b81d3154e94fcab2b554a09637f6f/tools/relation_train_net.py#L210-L212
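In case it helps later readers, here is a minimal, self-contained sketch of that pattern: a plateau-style LR scheduler counts how many times the learning rate has been decayed, and the training loop breaks once that count hits a cap, so training ends before MAX_ITER. The names `stage_count` and `max_decay_step` below are illustrative assumptions, not verified identifiers from the linked file.

```python
# Toy reduce-on-plateau scheduler: decays the LR when the validation metric
# stops improving, and counts how many decays have happened so far.
class PlateauScheduler:
    def __init__(self, patience=2, max_decay_step=3):
        self.best = float("-inf")
        self.bad_evals = 0
        self.stage_count = 0              # number of LR decays so far (assumed name)
        self.patience = patience
        self.max_decay_step = max_decay_step

    def step(self, val_metric):
        if val_metric > self.best:
            self.best, self.bad_evals = val_metric, 0
        else:
            self.bad_evals += 1
            if self.bad_evals > self.patience:
                self.stage_count += 1     # the LR would be decayed here
                self.bad_evals = 0

scheduler = PlateauScheduler()
max_iter, val_period = 50000, 2000

for iteration in range(max_iter):
    # ... forward / backward / optimizer.step() would happen here ...
    if iteration > 0 and iteration % val_period == 0:
        val_metric = 0.1                  # placeholder: validation mean recall
        scheduler.step(val_metric)
        # Once the LR has been decayed the maximum number of times, the loop
        # breaks -- this is why training can stop well before MAX_ITER.
        if scheduler.stage_count >= scheduler.max_decay_step:
            print(f"Max decay steps reached at iteration {iteration}; stopping early.")
            break
```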

zhanwenchen commented 1 year ago

This iteration-based training is a setup inherited from the original maskrcnn_benchmark codebase. Refer to this issue for context: https://github.com/facebookresearch/maskrcnn-benchmark/issues/1012.

To put it plainly, if you are training on VG-stanford-filtered, there are 108,073 images, as seen in the assertion statement. If your global batch size (imgs_per_batch) is 32, then it takes 108,073 / 32 ≈ 3377 iterations to go through the dataset once (that is, one epoch ≈ 3377 iterations for VG). How many epochs you run is up to you: if you want 15 epochs, you need a max_iter of 3377 * 15 = 50655.
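As a quick sanity check, the arithmetic can be written out in a couple of lines (the dataset size and batch size are the ones quoted above):

```python
num_images = 108073       # VG-stanford-filtered images, per the assertion in the code
ims_per_batch = 32        # global batch size
epochs = 15               # however many full passes over the data you want

iters_per_epoch = num_images // ims_per_batch   # 3377 (plus one partial batch per epoch)
max_iter = iters_per_epoch * epochs             # 50655
print(iters_per_epoch, max_iter)
```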

Edit: not relevant to this issue.