cvlab-stonybrook / DM-Count

Code for NeurIPS 2020 paper: Distribution Matching for Crowd Counting.
MIT License

How to reduce the big experiment randomness? #25

Open jingliang95 opened 3 years ago

jingliang95 commented 3 years ago

Using the code, I observe very large run-to-run variance. For example, on the QNRF test set I obtain the following results (MAE, MSE):

- run 1: 87.621, 149.75
- run 2: 92.988, 168.47
- run 3: 96.175, 167.79

In the paper, 85.6 and 148.3 are reported. Do the authors have any ideas for reducing this large experimental randomness? With variance this large, how can we draw conclusions about which model performs well and which doesn't?

Thanks a lot.
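A common first step (a sketch, not taken from this repo's code; the cuDNN flags below trade training speed for determinism) is to fix every RNG seed and force deterministic cuDNN kernels:

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Fix Python, NumPy, and PyTorch RNGs for (mostly) repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)           # CPU RNG
    torch.cuda.manual_seed_all(seed)  # all GPU RNGs
    # cuDNN: disable autotuning and select deterministic kernels (slower).
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True
```

Note that some CUDA ops (e.g. atomic-add-based kernels) can stay non-deterministic even with these flags, and `DataLoader` workers need their own seeding via `worker_init_fn` when `num_workers > 0`, so multiple seeds may still give different results.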

Boyu-Wang commented 3 years ago

Could you also share which versions of PyTorch and CUDA you are using, and which GPU?

jingliang95 commented 3 years ago

Sure: PyTorch 1.7.1, CUDA 11.1, GPU: V100.

Did you repeat your experiments on QNRF? If so, could you report the results for the different runs? Thanks a lot.
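With variance this large, one standard practice (a sketch, not from the authors; it reuses the three MAE numbers quoted above) is to report mean ± standard deviation over several seeds rather than a single run:

```python
import numpy as np

# MAE values from the three QNRF runs quoted above
maes = np.array([87.621, 92.988, 96.175])

# Sample standard deviation (ddof=1), since these are a few sampled runs
mean, std = maes.mean(), maes.std(ddof=1)
print(f"MAE: {mean:.2f} +/- {std:.2f} over {len(maes)} runs")
# -> MAE: 92.26 +/- 4.32 over 3 runs
```

Comparing mean ± std (or confidence intervals) across models makes it clearer whether a gap between two methods is larger than the noise between seeds.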

midasklr commented 3 years ago

I ran into the same problem on the SHA (ShanghaiTech Part A) dataset. I even got a better MAE (57.72) and MSE (93.81) than the paper (59.7 and 95.7) when I changed some parameters...