hasanirtiza / Pedestron

[Pedestron] Generalizable Pedestrian Detection: The Elephant In The Room. @ CVPR2021
https://openaccess.thecvf.com/content/CVPR2021/papers/Hasan_Generalizable_Pedestrian_Detection_The_Elephant_in_the_Room_CVPR_2021_paper.pdf
Apache License 2.0

Image_scale of Caltech while training #155

Closed: wzczc closed this issue 1 year ago

wzczc commented 1 year ago

Hi, I have a question about the image_scale used for Caltech training. In your code the image_scale parameters are [(416, 320), (960, 720)]. While training, I get a CUDA out-of-memory error (I have 2 GeForce RTX 2080 GPUs and imgs_per_gpu is 1). I would like to know how these scale parameters are set and whether they can be changed, and how much changing them affects the results. Thanks!
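
For context, the relevant block of my config looks roughly like the sketch below (mmdetection-1.x style, as Pedestron uses; the dataset type and paths are placeholders and most fields are omitted):

```python
# Trimmed sketch of the training data settings (mmdetection-1.x style config).
# Dataset type and paths are placeholders; only the fields discussed here are shown.
data = dict(
    imgs_per_gpu=1,        # images per GPU; total batch size = num_gpus * imgs_per_gpu
    workers_per_gpu=2,     # data-loading workers per GPU
    train=dict(
        type='CocoDataset',                        # placeholder dataset type
        ann_file='path/to/caltech/train.json',     # placeholder annotation path
        img_prefix='path/to/caltech/images/',      # placeholder image root
        # Multi-scale training: each image is resized to a (w, h) sampled between
        # these two bounds. The larger bound dominates peak GPU memory use.
        img_scale=[(416, 320), (960, 720)],
        size_divisor=32,
        flip_ratio=0.5,
        with_mask=False,
        with_crowd=False,
        with_label=True),
)
```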

hasanirtiza commented 1 year ago

Just use the smaller image scale (416, 320) for an initial try and see the results; later on you can play with the scale.

wzczc commented 1 year ago

Hi, I have trained with image_scale (448, 336), imgs_per_gpu 2, workers_per_gpu 1, and the other configs the same as yours, but I only got 12.79% MR on the R set (tested at epoch 14). I noticed that from epoch 8 to 14 there was no downward trend in the loss. Should I change the lr or do something else during training?
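
For the lr, the adjustment I had in mind is the usual linear scaling rule (from the mmdetection docs, not anything Pedestron-specific): scale the base lr with the total batch size. A minimal sketch with placeholder numbers:

```python
# Linear scaling rule: scale the base learning rate in proportion to the total
# batch size (num_gpus * imgs_per_gpu) relative to the setup the base lr was
# tuned for. All numbers below are placeholders, not Pedestron's actual values.
def scaled_lr(base_lr, base_total_batch, num_gpus, imgs_per_gpu):
    """Return the learning rate scaled linearly with the total batch size."""
    return base_lr * (num_gpus * imgs_per_gpu) / base_total_batch

# Example: if a base lr of 0.01 was tuned for a total batch size of 8 (assumed),
# then with 2 GPUs x 2 imgs/GPU the linearly scaled lr would be 0.005.
print(scaled_lr(base_lr=0.01, base_total_batch=8, num_gpus=2, imgs_per_gpu=2))
```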

hasanirtiza commented 1 year ago

Did you evaluate every checkpoint?
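
For example, by looping over every saved epoch and running the test step on each one; a rough sketch (the test script name, its arguments, and the paths below are placeholders, not the exact Pedestron CLI):

```python
# Rough sketch: evaluate every saved checkpoint in a work directory by invoking
# an external test script once per epoch. Script name, config path, and argument
# layout are placeholders; substitute the actual test command you use.
import glob
import re
import subprocess

CONFIG = 'configs/elephant/caltech/my_config.py'   # placeholder config path
WORK_DIR = 'work_dirs/caltech_run/'                # directory containing epoch_*.pth

# Sort checkpoints numerically by epoch (a plain lexicographic sort would put
# epoch_10.pth before epoch_2.pth).
ckpts = sorted(glob.glob(WORK_DIR + 'epoch_*.pth'),
               key=lambda p: int(re.search(r'epoch_(\d+)', p).group(1)))

for ckpt in ckpts:
    print(f'Evaluating {ckpt}')
    subprocess.run(['python', 'tools/test.py', CONFIG, ckpt,
                    '--out', ckpt.replace('.pth', '_results.pkl')],
                   check=True)
```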

wzczc commented 1 year ago

Yes, and my best MR is 10.8%. How should I train to get an MR close to yours? By the way, I have tried testing on Caltech with the weights you provide: your reported result is 1.7% MR, but I get 1.48% MR. Is that normal?

hasanirtiza commented 1 year ago

If you are only training on Caltech, our best MR was around 6.2 (Table 4 of our paper), but we used 7 GPUs (V100s). I suggest you also try increasing the resolution (train with, say, (960, 720)) if you have not done so already. Secondly, as for the better MR: after our paper was accepted we managed to further improve some models, and the number you see is likely a reflection of that; we overlooked updating the table on GitHub.

wzczc commented 1 year ago

Ok, I get it. Thank you very much!