Pongpisit-Thanasutives / Variations-of-SFANet-for-Crowd-Counting

The official implementation of "Encoder-Decoder Based Convolutional Neural Networks with Multi-Scale-Aware Modules for Crowd Counting"
https://ieeexplore.ieee.org/document/9413286
GNU General Public License v3.0

Train detail? #17

Open niepei opened 3 years ago

niepei commented 3 years ago

I have trained 1000 epochs on ShanghaiTech Part A using M_SFANet, but the test MAE is 68.08. The learning rate was 5e-4, the batch size was 8, and the samples were cropped to 400*400. Can you tell me the training details?

Pongpisit-Thanasutives commented 3 years ago

Do you use the training preprocessing (for SHA) as described in the Training details section of the SFANet paper (https://arxiv.org/pdf/1902.01115.pdf)? Also, in my experiments I used Adam with Lookahead (https://arxiv.org/abs/1907.08610; see https://github.com/rwightman/pytorch-image-models/blob/master/timm/optim/lookahead.py for the implementation). You can also play with the learning rate (5e-4, 6e-4, ...).
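For readers unfamiliar with Lookahead, here is a minimal, self-contained sketch of its update rule (slow weights interpolated toward fast weights every k steps), wrapped around plain gradient descent on a toy quadratic. This is only an illustration of the idea; for the actual optimizer used here, see the timm implementation linked above.

```python
# Minimal sketch of the Lookahead update rule (Zhang et al., 2019),
# wrapped around plain SGD on the toy objective f(w) = (w - 3)^2.
# alpha=0.5 and k=5 follow the paper's defaults; this is NOT the timm code.

def grad(w):
    return 2.0 * (w - 3.0)           # df/dw for f(w) = (w - 3)^2

def lookahead_train(steps=100, lr=0.1, alpha=0.5, k=5):
    slow = fast = 0.0                # slow ("lookahead") and fast weights
    for t in range(1, steps + 1):
        fast -= lr * grad(fast)      # inner (fast) optimizer step, here SGD
        if t % k == 0:               # every k steps, interpolate slow weights
            slow += alpha * (fast - slow)
            fast = slow              # reset fast weights to the slow weights
    return slow

w = lookahead_train()
print(round(w, 4))                   # close to the minimum at w = 3
```

In the real training loop, the fast optimizer would be Adam over the network parameters; the timm `Lookahead` class wraps any base optimizer the same way.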

knightyxp commented 3 years ago

How could you set the crop size to 400x400? When I process the SHHA data according to the Bayesian code, an error like this happens:

    File "/home/../.jupyter/Variations-of-SFANet-for-Crowd-Counting-master/datasets/crowd.py", line 91, in train_transform
        assert st_size >= self.c_size
    AssertionError

Only when I set the crop size to 256 (other settings like yours) does the code run, but the test MAE is only 73.6, far from the paper's result. I am also stuck on min_size and max_size in bayesian_preprocess_sh.py (the SHHA setting is min_size = 256, max_size = 5096).

Pongpisit-Thanasutives commented 3 years ago

Just to clarify some points: (1) The purpose of "bayesian_preprocess_sh.py" is fine-tuning models that are pretrained on UCF_QNRF. So the crop size should be 256x256 rather than 400x400 if you are fine-tuning with "bayesian_preprocess_sh.py", following their paper (https://openaccess.thecvf.com/content_ICCV_2019/papers/Ma_Bayesian_Loss_for_Crowd_Count_Estimation_With_Point_Supervision_ICCV_2019_paper.pdf).

(2) If you are training from scratch, the crop size is 400x400 and please refer to the SHA preprocessing code here => https://github.com/pxq0312/SFANet-crowd-counting/blob/master/transforms.py.
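A rough sketch of the from-scratch training crop: take a random 400x400 patch from an image / density-map pair. The zero-padding fallback for images smaller than the crop is an assumption made only to keep this sketch self-contained; the linked transforms.py handles small images differently, so refer to it for the actual logic.

```python
import numpy as np

CROP = 400  # from-scratch crop size, per the maintainer's answer above

def random_crop(img, den, size=CROP):
    """Randomly crop a (H, W, C) image and its (H, W) density map together."""
    h, w = img.shape[:2]
    pad_h, pad_w = max(size - h, 0), max(size - w, 0)
    if pad_h or pad_w:
        # Assumed fallback: zero-pad short sides (the real code resizes instead)
        img = np.pad(img, ((0, pad_h), (0, pad_w), (0, 0)))
        den = np.pad(den, ((0, pad_h), (0, pad_w)))
        h, w = img.shape[:2]
    top = np.random.randint(0, h - size + 1)
    left = np.random.randint(0, w - size + 1)
    return (img[top:top + size, left:left + size],
            den[top:top + size, left:left + size])

img = np.zeros((512, 680, 3), dtype=np.float32)
den = np.zeros((512, 680), dtype=np.float32)
patch, den_patch = random_crop(img, den)
print(patch.shape, den_patch.shape)  # (400, 400, 3) (400, 400)
```

Cropping the image and density map with the same offsets keeps the annotation aligned with the pixels, which is why the two are cropped jointly.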

knightyxp commented 3 years ago

Thank you for the clarification. Another question: how do you get the average prediction of the two models (M-SegNet and M-SFANet)? Do you just train the two models on the same dataset and average their predictions to get the final results?

Pongpisit-Thanasutives commented 3 years ago

Yes! We used simple averaging of the model predictions.
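Concretely, the ensemble step is just an element-wise mean of the two predicted density maps, with the final count taken as the sum over the averaged map. The arrays below are stand-ins for real network outputs, used only to show the arithmetic.

```python
import numpy as np

# Stand-in density maps (hypothetical values, not real network predictions):
pred_sfanet = np.full((100, 100), 0.02)   # would come from M-SFANet
pred_segnet = np.full((100, 100), 0.04)   # would come from M-SegNet

# Simple averaging of the two models' predicted density maps.
avg_map = (pred_sfanet + pred_segnet) / 2.0

# The crowd count estimate is the integral (sum) over the density map.
count = avg_map.sum()
print(round(float(count), 2))  # 300.0
```

Since both models are trained on the same data, no weighting or calibration is applied; the two maps are averaged as-is.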